Type I and Type II Errors

… a core concept used in Quantitative Methods and Atlas104


Concept description

Wikipedia (reference below) defines a type I error as the incorrect rejection of a true null hypothesis (a “false positive”) and a type II error as the incorrect retention of a false null hypothesis (a “false negative”).

Wikipedia goes on to say:

“More simply stated, a type I error is the (false) detection of an effect that is not present, while a type II error is the failure to detect an effect that is present.”
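These definitions can be summarized in a table of the four possible outcomes of a significance test:

                          Null hypothesis is true    Null hypothesis is false
  Reject null hypothesis  Type I error               Correct decision
  Retain null hypothesis  Correct decision           Type II error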

OnlineStatBook (reference below) defines a null hypothesis as “the hypothesis that a parameter is zero or that a difference between parameters is zero” and says:

“… a Type I error occurs when a significance test results in the rejection of a true null hypothesis. By one common convention, if the probability value is below 0.05, then the null hypothesis is rejected. Another convention, although slightly less common, is to reject the null hypothesis if the probability value is below 0.01. The threshold for rejecting the null hypothesis is called the α (alpha) level or simply α. It is also called the significance level. As discussed in the section on significance testing, it is better to interpret the probability value as an indication of the weight of evidence against the null hypothesis than as part of a decision rule for making a reject or do-not-reject decision. Therefore, keep in mind that rejecting the null hypothesis is not an all-or-nothing decision.

“The Type I error rate is affected by the α level: the lower the α level, the lower the Type I error rate. It might seem that α is the probability of a Type I error. However, this is not correct. Instead, α is the probability of a Type I error given that the null hypothesis is true. If the null hypothesis is false, then it is impossible to make a Type I error.

“The second type of error that can be made in significance testing is failing to reject a false null hypothesis. This kind of error is called a Type II error. Unlike a Type I error, a Type II error is not really an error. When a statistical test is not significant, it means that the data do not provide strong evidence that the null hypothesis is false. Lack of significance does not support the conclusion that the null hypothesis is true. Therefore, a researcher should not make the mistake of incorrectly concluding that the null hypothesis is true when a statistical test was not significant. Instead, the researcher should consider the test inconclusive. Contrast this with a Type I error in which the researcher erroneously concludes that the null hypothesis is false when, in fact, it is true.

“A Type II error can only occur if the null hypothesis is false. If the null hypothesis is false, then the probability of a Type II error is called β (beta). The probability of correctly rejecting a false null hypothesis equals 1 − β and is called power.”
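As a concrete illustration of the decision rule described above, the following sketch (not drawn from the sources; the data, sample size, and effect size are hypothetical) runs a one-sample t-test in Python using scipy and compares the probability value to α = 0.05:

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  sample = rng.normal(loc=0.3, scale=1.0, size=30)  # hypothetical data

  # Test H0: population mean = 0 against a two-sided alternative
  t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

  alpha = 0.05  # one common convention for the significance level
  print(f"p = {p_value:.3f}")
  if p_value < alpha:
      print("Reject H0 at the 0.05 level")
  else:
      print("Not significant; treat the result as inconclusive")

The conditional nature of α and the meaning of power can likewise be checked by simulation. In the sketch below (again an illustration under assumed conditions, with a hypothetical true mean of 0.5 when the null is false), repeating the test on data generated under a true null should reject in roughly 5% of runs, matching α, while the rejection rate under a false null estimates the power, 1 − β:

  import numpy as np
  from scipy import stats

  def rejection_rate(true_mean, alpha=0.05, n=30, n_sims=10_000, seed=1):
      """Fraction of simulated t-tests of H0: mean = 0 that reject at level alpha."""
      rng = np.random.default_rng(seed)
      rejections = 0
      for _ in range(n_sims):
          sample = rng.normal(loc=true_mean, scale=1.0, size=n)
          p = stats.ttest_1samp(sample, popmean=0.0).pvalue
          if p < alpha:
              rejections += 1
      return rejections / n_sims

  # H0 true: the rejection rate is the Type I error rate, close to alpha = 0.05
  print(f"Type I error rate: {rejection_rate(true_mean=0.0):.3f}")

  # H0 false (hypothetical effect of 0.5): the rejection rate estimates the power
  power = rejection_rate(true_mean=0.5)
  print(f"power = {power:.3f}, beta = {1 - power:.3f}")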

Atlas topic, subject, and course

Estimation and Hypothesis Testing (core topic) in Quantitative Methods and Atlas104 Quantitative Methods.

Sources

Wikipedia, Type I and type II errors, at https://en.wikipedia.org/wiki/Type_I_and_type_II_errors, accessed 12 June 2017.

David Lane, OnlineStatBook, at http://onlinestatbook.com/2/logic_of_hypothesis_testing/errors.html, and http://onlinestatbook.com/2/glossary/index.html, accessed 12 June 2017.

Page created by: Ian Clark, last modified 12 June 2017.

Image: OnlineStatBook, at http://onlinestatbook.com/2/logic_of_hypothesis_testing/errorsM.html, accessed 12 June 2017.