Hardness vs. randomness - A survey

Noam Nisan*, Avi Wigderson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Summary form only given, as follows. Probabilistic algorithms are considered to be as practical as deterministic ones for solving computational problems. However, obvious practical and theoretical considerations have led to the questions of when, and at what cost, one can get rid of the randomness in these algorithms. A natural direction was to follow the lead of real computers and use deterministic functions to generate, from a few random bits, many pseudorandom bits that will be random enough for the algorithm. One of the remarkable consequences of this line of research is that obtaining such upper bounds (on simulating probabilistic algorithms by deterministic ones) is intimately related to obtaining lower bounds (on the functions used to generate the pseudorandom bits). The authors survey the development of key ideas leading to an understanding of the connection between hardness and randomness, and its complexity-theoretic implications.
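The derandomization paradigm the abstract describes can be sketched in a few lines. The following is an illustrative toy, not the authors' construction: a randomized algorithm is simulated deterministically by enumerating every short seed of a generator, stretching each seed to the number of random bits the algorithm needs, and taking a majority vote over the runs. The function `expand` here is a trivial placeholder stand-in; the survey's point is that a generator whose output "fools" the algorithm can be built from any sufficiently hard function.

```python
from itertools import product

def expand(seed, m):
    # Placeholder "generator": stretches a short seed (list of bits) to m bits.
    # A real hardness-based generator would evaluate a hard function here.
    out = []
    i = 0
    while len(out) < m:
        out.append(seed[i % len(seed)] ^ ((i // len(seed)) & 1))
        i += 1
    return out

def derandomize(randomized_alg, x, seed_len, m):
    # Run the algorithm on the generator's output for every possible seed
    # and take the majority vote. Cost: 2^seed_len deterministic runs in
    # place of one run with m truly random bits.
    votes = sum(randomized_alg(x, expand(list(seed), m))
                for seed in product([0, 1], repeat=seed_len))
    return votes * 2 > 2 ** seed_len
```

The trade-off is visible in the exponent: a generator that stretches O(log n) seed bits to the m bits the algorithm consumes yields a polynomial-time deterministic simulation, which is exactly why the quality of the generator (and hence the hardness of the underlying function) governs the cost of removing randomness.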

Original language: American English
Title of host publication: Proc Struct Complexity Theor Fourth Ann Conf
Editors: Anon
Publisher: Publ by IEEE
Number of pages: 1
ISBN (Print): 0818619589
State: Published - 1989
Externally published: Yes
Event: Proceedings: Structure in Complexity Theory - Fourth Annual Conference - Eugene, OR, USA
Duration: 19 Jun 1989 - 22 Jun 1989

Publication series

Name: Proc Struct Complexity Theor Fourth Ann Conf


Conference: Proceedings: Structure in Complexity Theory - Fourth Annual Conference
City: Eugene, OR, USA


