Non- and semiparametric statistics: Compared and contrasted

P. J. Bickel*, Y. Ritov

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Current non- and semiparametric statistics are conventionally viewed as one field. In this paper we shall argue that the people working in these two areas constitute two cultures arising from distinct historical roots and operating under two different paradigms. Roughly speaking, modern nonparametric statistics arose from descriptive statistics and is concerned with the estimation of "ill-conditioned" objects. Modern semiparametric statistics is concerned with more classical inference about the behavior of well-conditioned infinite-dimensional objects, or of finite-dimensional objects in the presence of infinite-dimensional nuisance parameters. We present a framework for thinking about the goals of non- and semiparametric statistics. We consider parameters that are robust with respect to certain loss functions as ideal objects for statistical inference. We discuss regularization as a general principle for estimating non-robust parameters, with examples ranging from kernel density estimation to Vapnik's statistical learning theory, the method of sieves, and the m out of n bootstrap. We then discuss the plug-in principle, and we close with a radical proposal for inference for non-robust parameters.
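To make the regularization and plug-in ideas mentioned in the abstract concrete, the following is a minimal sketch, not taken from the paper: a Gaussian kernel density estimate plays the role of the regularized estimator of an "ill-conditioned" object (the density), and the functional T(f) = ∫ f² is then estimated by plugging that estimate in. The Gaussian kernel, the bandwidth values, and the choice of functional are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(size=500)   # draws from an unknown density f (here N(0, 1))

def kde(points, data, h):
    """Gaussian-kernel density estimate at `points` with bandwidth h.

    The density f is an ill-conditioned object: the empirical distribution
    itself has no density.  The bandwidth h is the regularization parameter
    trading bias against variance.
    """
    u = (points[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

grid = np.linspace(-4.0, 4.0, 161)
dx = grid[1] - grid[0]

for h in (0.05, 0.3, 1.0):              # under-, moderately, and over-smoothed
    f_hat = kde(grid, sample, h)
    # Plug-in principle: estimate the functional T(f) = integral of f(x)^2 dx
    # by evaluating T at the regularized estimate f_hat.
    t_hat = float((f_hat**2).sum() * dx)
    print(f"h = {h:4.2f}   plug-in estimate of integral f^2 = {t_hat:.3f}")

# For the standard normal used above, the true value is 1/(2*sqrt(pi)) ~ 0.282.
```

As the bandwidth shrinks toward zero the plug-in estimate of ∫ f² blows up, which illustrates why inference for such non-robust parameters requires an explicit regularization step rather than naive plug-in.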

Original language: English
Pages (from-to): 209-228
Number of pages: 20
Journal: Journal of Statistical Planning and Inference
Volume: 91
Issue number: 2
DOIs
State: Published - 1 Dec 2000

Keywords

  • 62G20
  • 62G30
  • 62G35
  • Bootstrap
  • Nonparametric statistics
  • Plug-in
  • Primary 62G07
  • Robustness
  • Secondary 62G09
  • Semiparametric statistics
