Efficient message passing interface (MPI) for parallel computing on clusters of workstations

Jehoshua Bruck*, Danny Dolev, Ching Tien Ho, Marcel Catalin Rosu, Ray Strong

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

18 Scopus citations

Abstract

An efficient design and implementation of the collective communication part of a Message Passing Interface (MPI), optimized for clusters of workstations, is described. The system, which consists of two main components, the MPI-CCL layer and a User-level Reliable Transport Protocol (URTP), is integrated with the operating system via an efficient kernel extension mechanism. The system is implemented on a collection of IBM RS/6000 workstations connected via a 10 Mbit/s Ethernet LAN. Results indicate that the performance of the MPI Broadcast (on top of Ethernet) is about twice as fast as a recently published software implementation of broadcast on top of ATM.

Original language: English
Pages: 64-73
Number of pages: 10
DOIs
State: Published - 1995
Externally published: Yes
Event: Proceedings of the 7th Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA'95 - Santa Barbara, CA, USA
Duration: 17 Jul 1995 - 19 Jul 1995

Conference

Conference: Proceedings of the 7th Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA'95
City: Santa Barbara, CA, USA
Period: 17/07/95 - 19/07/95

