A Regularization Corrected Score Method for Nonlinear Regression Models with Covariate Error

David M. Zucker*, Malka Gorfine, Yi Li, Mahlet G. Tadesse, Donna Spiegelman

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer.
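
To make the abstract's description concrete, the following sketch (Python, not the authors' implementation) illustrates the regularization idea on a single covariate: with additive normal measurement error W = X + U, a corrected function psi*(w) satisfying E[psi*(W) | X = x] ≈ psi(x) is obtained by discretizing the integral equation and applying Tikhonov regularization. The grids, the error standard deviation, the regularization parameter, and the choice of the logistic mean function as the target are all illustrative assumptions.

# Illustrative sketch of a regularized solution of the corrected-score
# integral equation E[psi*(W) | X = x] = psi(x), with W = X + U, U ~ N(0, sigma_u^2).
# All numerical settings below are assumptions for demonstration only.
import numpy as np
from scipy.special import expit
from scipy.stats import norm

sigma_u = 0.5          # assumed measurement-error standard deviation
beta = 1.0             # assumed regression coefficient
lam = 1e-3             # assumed Tikhonov regularization parameter

# Discretize the true covariate x and the observed covariate w on grids.
x_grid = np.linspace(-4, 4, 200)
w_grid = np.linspace(-6, 6, 300)
dw = w_grid[1] - w_grid[0]

# Discretized integral operator: (A c)_i approximates
# the integral of psi*(w) * f_U(w - x_i) dw.
A = norm.pdf(w_grid[None, :] - x_grid[:, None], scale=sigma_u) * dw

# Target function on the x grid: the logistic mean function expit(beta * x),
# standing in for one component of the likelihood score.
g = expit(beta * x_grid)

# Tikhonov-regularized least-squares solution of the ill-posed system A c = g.
c = np.linalg.solve(A.T @ A + lam * np.eye(len(w_grid)), A.T @ g)

# c approximates psi*(w) on w_grid; check how well E[psi*(X + U) | X = x]
# reproduces the target on the x grid.
print("max approximation error on x grid:", np.max(np.abs(A @ c - g)))

In practice the regularization parameter would be chosen by a data-driven criterion rather than fixed in advance; the value above is set only to show the mechanics of the regularized inversion.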

Original language: English
Pages (from-to): 80-90
Number of pages: 11
Journal: Biometrics
Volume: 69
Issue number: 1
DOIs
State: Published - Mar 2013

Keywords

  • Errors in variables
  • Integral equations
  • Logistic regression
  • Nonlinear models
