How to Share Memory in a Distributed System

Eli Upfal*, Avi Wigderson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

The authors study the power of shared memory in models of parallel computation. They describe a novel distributed data structure that eliminates the need for shared memory without significantly increasing the run time of the parallel computation. More specifically, they show how a complete network of processors can deterministically simulate one PRAM step in O(log n (log log n)^2) time, when both models use n processors and the size of the PRAM's shared memory is polynomial in n. (The best previously known upper bound was the trivial O(n).) They also establish that this upper bound is nearly optimal: they prove that an on-line simulation of T PRAM steps by a complete network of processors requires Ω(T log n / log log n) time. A simple consequence of the upper bound is that an Ultracomputer (the only currently feasible general-purpose parallel machine) can simulate one step of a PRAM (the most convenient parallel model to program) in O((log n log log n)^2) steps.
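The abstract states the bounds but not the construction itself. As a hedged illustration of the general replication idea behind such shared-memory simulations — a toy sketch only, not the authors' data structure, and using a global timestamp clock that the actual deterministic scheme does not rely on — each logical PRAM cell can be stored as 2c−1 copies spread over the processors: a write updates any majority (c copies) with a fresh timestamp, and a read polls any majority and returns the freshest copy. Since any two majorities intersect, every read sees the latest completed write even though no single processor holds the cell.

```python
import random

class ReplicatedMemory:
    """Toy majority-replicated memory over n processors (illustrative only;
    NOT the Upfal-Wigderson construction, whose details are beyond this
    abstract). Cell placement, majority choice, and the global clock are
    simplifying assumptions."""

    def __init__(self, n_processors, n_cells, c=2):
        self.copies = 2 * c - 1  # copies per logical cell
        self.c = c               # majority size
        # local stores: one dict per processor, cell -> (timestamp, value)
        self.local = [dict() for _ in range(n_processors)]
        rng = random.Random(0)
        # fixed placement: which processors hold copies of each cell
        self.home = {cell: rng.sample(range(n_processors), self.copies)
                     for cell in range(n_cells)}
        self.clock = 0

    def write(self, cell, value):
        self.clock += 1
        # update any majority of the cell's copies
        for p in self.home[cell][:self.c]:
            self.local[p][cell] = (self.clock, value)

    def read(self, cell):
        # poll any majority; majorities intersect, so the freshest
        # timestamp seen belongs to the latest completed write
        polled = [self.local[p].get(cell, (0, None))
                  for p in self.home[cell][-self.c:]]
        return max(polled)[1]
```

With 2c−1 copies, a write can complete after reaching only c processors, so the simulation tolerates some copies being stale or slow to reach — the intersection property, not full replication, is what carries the correctness argument.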

Original language: English
Title of host publication: Annual Symposium on Foundations of Computer Science (Proceedings)
Publisher: IEEE
Pages: 171-180
Number of pages: 10
ISBN (Print): 081860591X, 9780818605918
State: Published - 1984
Externally published: Yes

Publication series

Name: Annual Symposium on Foundations of Computer Science (Proceedings)
ISSN (Print): 0272-5428
