Abstract
The probabilities of the two types of classification errors in competency testing are not equally manipulable. Whereas testers can successfully minimize the probability of Type II error (misidentification of true “nonmasters” as “masters”), they are much less able to minimize the probability of Type I error (misidentification of true “masters” as “nonmasters”). Consequently, the proportion of identified nonmasters is likely to be artifactually high. The method currently used to cope with this problem is the arbitrary setting of observed-score standards below 100%. This paper offers an alternative solution that does not involve arbitrary decisions, namely, self-correction of wrong answers. The paper presents the rationale underlying this solution and discusses its application.
| Original language | English |
|---|---|
| Pages (from-to) | 7-13 |
| Number of pages | 7 |
| Journal | Educational and Psychological Measurement |
| Volume | 50 |
| Issue number | 1 |
| DOIs | |
| State | Published - Mar 1990 |
Title: Self-Correction of Wrong Answers as an Alternative to the Arbitrary Setting of Observed-Score Standards in Competency Testing