A methodology for the development and the support of massively parallel programs

ORLANDO, Salvatore;
1992-01-01

Abstract

The most important features a parallel programming language should provide are portability, modularity, and ease of use, together with performance and efficiency. Current parallel languages offer only some of these features. For instance, most of them let programmers exploit the massively parallel target machine efficiently, but the performance of each application must usually be estimated by the programmer without the support of any tool, and the resulting programs are neither portable nor easily modifiable. Here we present a methodology for writing efficient, high-performance, and portable massively parallel programs. It is based on the definition of a new explicitly parallel programming language, P3L, and of a set of compiling tools that automatically adapt the program to the hardware of the target architecture. The target architectures considered here are general-purpose, distributed-memory MIMD machines, which provide the scalability and low cost needed to pursue massively parallel computing. Following the P3L methodology, the programmer only specifies the kind of parallelism to be exploited in the application (pipeline, farm, data, etc.). The P3L programming tools then automatically generate the process network that implements and optimizes, for the given target architecture, the kind of parallelism the programmer indicated as the most suitable for the application. © 1992.
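To make the skeleton idea concrete, the following is a minimal sketch in plain Python, not actual P3L syntax: it imitates how a programmer might only name the form of parallelism (a farm whose worker is a two-stage pipeline) and supply ordinary sequential code, leaving the mapping onto processors to the support tools. The function names, parameters, and the thread-pool emulation are illustrative assumptions, not part of P3L.

# Minimal sketch (assumed names, NOT P3L syntax): skeletons as plain functions.
from concurrent.futures import ThreadPoolExecutor


def farm(worker, items, nworkers=4):
    """'Farm' skeleton: apply an independent worker to every input item.
    Emulated here with a thread pool; a skeleton compiler would instead emit
    an emitter/worker/collector process network sized to the target machine."""
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(worker, items))


def pipeline(*stages):
    """'Pipeline' skeleton: compose stages so each item flows through them in
    order. Emulated as function composition; a real implementation would run
    the stages as communicating processes."""
    def run(item):
        for stage in stages:
            item = stage(item)
        return item
    return run


# Ordinary sequential code written by the programmer.
def square(x):
    return x * x


def increment(x):
    return x + 1


if __name__ == "__main__":
    data = list(range(8))
    # Only the parallel structure is stated: a farm whose worker is a
    # two-stage pipeline; how it is mapped onto processors is left to the tools.
    print(farm(pipeline(square, increment), data))
    # -> [1, 2, 5, 10, 17, 26, 37, 50]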
Files in this product:
There are no files associated with this product.

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/13944
Citations
  • Scopus 31