## Abstract

Let X and Y be dependent random variables. This paper considers the problem of designing a scalar quantizer for Y to maximize the mutual information between the quantizer's output and X, and develops fundamental properties and bounds for this form of quantization, which is connected to the log-loss distortion criterion. The main focus is the regime of low I(X;Y), where it is shown that, if X is binary, a constant fraction of the mutual information can always be preserved using O(log(1/I(X;Y))) quantization levels, and there exist distributions for which this many quantization levels are necessary. Furthermore, for larger finite alphabets 2 < |X| < ∞, it is established that an η-fraction of the mutual information can be preserved using roughly (log(|X|/I(X;Y)))^{η·(|X|−1)} quantization levels.
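The abstract's objective — choosing a quantizer Q of Y to maximize I(X; Q(Y)) — can be illustrated with a minimal brute-force sketch for binary X and a finite Y alphabet. This is not the paper's construction; it only exploits the known structural fact that for binary X an optimal K-level quantizer has contiguous cells once Y's outcomes are sorted by the posterior P(X=1 | Y=y), so it suffices to search over ordered partitions. All function names below are illustrative.

```python
import itertools
import math

def mutual_information(joint):
    """I(X;Z) in bits for a joint pmf given as joint[x][z]."""
    px = [sum(row) for row in joint]
    pz = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for x, row in enumerate(joint):
        for z, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[x] * pz[z]))
    return mi

def best_quantizer(p_xy, k):
    """Exhaustively find a K-level quantizer of Y maximizing I(X; Q(Y)).

    p_xy[x][y] is the joint pmf of binary X (x in {0,1}) and Y.
    Returns (best mutual information, cell boundaries in posterior order).
    """
    n = len(p_xy[0])
    py = [p_xy[0][y] + p_xy[1][y] for y in range(n)]
    # Sort Y's outcomes by posterior P(X=1 | Y=y); optimal cells are
    # contiguous in this order when X is binary.
    order = sorted(range(n), key=lambda y: p_xy[1][y] / py[y])
    best_mi, best_bounds = -1.0, None
    # Choose k-1 cut points among the n-1 gaps in the sorted order.
    for cuts in itertools.combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        joint = [[0.0] * k for _ in range(2)]
        for cell in range(k):
            for i in range(bounds[cell], bounds[cell + 1]):
                y = order[i]
                joint[0][cell] += p_xy[0][y]
                joint[1][cell] += p_xy[1][y]
        mi = mutual_information(joint)
        if mi > best_mi:
            best_mi, best_bounds = mi, bounds
    return best_mi, best_bounds
```

With k equal to |Y| the quantizer is lossless, so the search recovers the full I(X;Y); with fewer levels it returns the best achievable fraction, which is the quantity the paper's bounds control.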

| Original language | American English |
|---|---|
| Pages (from-to) | 2472-2487 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 67 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 2021 |

### Bibliographical note

Funding Information: Manuscript received December 7, 2018; revised September 21, 2020; accepted January 30, 2021. Date of publication February 12, 2021; date of current version March 18, 2021. This work was supported in part by the Israel Science Foundation (ISF) under Grant 1791/17; in part by the NSF CAREER Award under Grant CCF-12-53205; in part by the Center for Science of Information (CSoI); in part by an NSF Science and Technology Center under Grant CCF-09-39370; and in part by NSF under Grant CCF-1618800, Grant CCF-17-17842, and Grant ECCS-1808692. This article was presented in part at the 2017 International Symposium on Information Theory. (Corresponding author: Or Ordentlich.) Alankrita Bhatt is with the Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093 USA (e-mail: a2bhatt@eng.ucsd.edu).

Publisher Copyright:

© 1963-2012 IEEE.

## Keywords

- Quantization
- information bottleneck
- logarithmic loss