TY - JOUR
T1 - Lower bounds for the noisy broadcast problem
AU - Goyal, Navin
AU - Kindler, Guy
AU - Saks, Michael
PY - 2007
Y1 - 2007
N2 - We prove the first nontrivial (superlinear) lower bound in the noisy broadcast model defined by El Gamal in [Open problems presented at the 1984 workshop on Specific Problems in Communication and Computation sponsored by Bell Communication Research, in Open Problems in Communication and Computation, T. M. Cover and B. Gopinath, eds., Springer-Verlag, New York, 1987, pp. 60-62]. In this model there are n + 1 processors P0, P1, ..., Pn, each of which is initially given a private input bit xi. The goal is for P0 to learn the value of f(x1, ..., xn), for some specified function f, using a series of noisy broadcasts. At each step a designated processor broadcasts one bit to all of the other processors, and the bit received by each processor is flipped with fixed probability (independently for each recipient). In 1988, Gallager [IEEE Trans. Inform. Theory, 34 (1988), pp. 176-180] gave a noise-resistant protocol that allows P0 to learn the entire input with constant probability in O(n log log n) broadcasts. We prove that Gallager's protocol is optimal, up to a constant factor. Our lower bound follows by reduction from a lower bound for generalized noisy decision trees, a new model which may be of independent interest. For this new model we show a lower bound of Ω(n log n) on the depth of a tree that learns the entire input. While the above lower bound is for an n-bit function, we also show an Ω(n log log n) lower bound for the number of broadcasts required to compute certain explicit Boolean-valued functions, when the correct output must be attained with probability at least 1 - n^{-α} for a constant parameter α > 0 (this bound applies to all threshold functions as well as any other Boolean-valued function with linear sensitivity). This bound also follows by reduction from a lower bound of Ω(n log n) on the depth of generalized noisy decision trees that compute the same functions with the same error. We also show a (nontrivial) Ω(n) lower bound on the depth of generalized noisy decision trees that compute such functions with small constant error. Finally, we show the first protocol in the noisy broadcast model that computes the Hamming weight of the input using a linear number of broadcasts.
KW - Computation in presence of noise
KW - Distributed computation
UR - http://www.scopus.com/inward/record.url?scp=55249086758&partnerID=8YFLogxK
U2 - 10.1137/060654864
DO - 10.1137/060654864
M3 - Article
AN - SCOPUS:55249086758
SN - 0097-5397
VL - 37
SP - 1806
EP - 1841
JO - SIAM Journal on Computing
JF - SIAM Journal on Computing
IS - 6
ER -