# Type I and Type II Errors

## Chapter: Biostatistics for the Health Sciences: Tests of Hypotheses


In Section 9.1, we defined the type I error α as the probability of rejecting the null hypothesis when the null hypothesis is true. We saw that in the Neyman–Pearson formulation of hypothesis testing, the type I error rate is fixed at a certain low level. In practice, the choice is usually 0.05 or 0.01. In Sections 9.3 through 9.5, we saw examples of how critical regions were defined based on the distribution of the test statistic under the null hypothesis.
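The fixed type I error rate can be checked empirically. The sketch below (an illustration, not from the text) simulates a two-sided z-test with known σ = 1 and hypothetical values μ0 = 0, n = 30: when the null hypothesis is true, the fraction of rejections should be close to the chosen α = 0.05.

```python
import random
from statistics import NormalDist

def simulate_type_I_rate(n=30, alpha=0.05, trials=20000, seed=1):
    """Estimate the type I error rate of a two-sided z-test by simulation.

    Data are drawn under H0: mu = 0 with known sigma = 1, so every
    rejection is, by construction, a type I error.
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, e.g. 1.96
    rejections = 0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        # z statistic: (xbar - mu0) / (sigma / sqrt(n)) with mu0 = 0, sigma = 1
        z = (sum(xs) / n) * n ** 0.5
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

rate = simulate_type_I_rate()
print(rate)  # close to the nominal alpha = 0.05
```

The Monte Carlo estimate fluctuates around 0.05 with a standard error of roughly 0.0015 at 20,000 trials, so agreement with the nominal level is easy to see.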

Also in Section 9.1, we defined the type II error β as the probability of not rejecting the null hypothesis when the null hypothesis is false. It depends on the “true” value of the parameter under the alternative hypothesis.

For example, suppose we are testing a null hypothesis that the population mean μ = μ0. The type II error depends on the value of μ = μ1 ≠ μ0 under the alternative hypothesis. In the next section, we see that the power of a test is defined as 1 – β. The term “power” refers to the probability of correctly rejecting the null hypothesis when it is in fact false. Given that β depends on the value of μ1 in the context of testing for a population mean, the power is a function of μ1; hence, we refer to a power function rather than a single number.
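For the two-sided z-test with known σ, the power function has a closed form, and a short sketch makes the “function, not a number” point concrete. The parameter values below (μ0 = 0, σ = 1, n = 25) are hypothetical choices for illustration.

```python
from statistics import NormalDist

def z_test_power(mu1, mu0=0.0, sigma=1.0, n=25, alpha=0.05):
    """Power 1 - beta of a two-sided z-test of H0: mu = mu0 at true mean mu1.

    Power(mu1) = Phi(-z_{alpha/2} + shift) + Phi(-z_{alpha/2} - shift),
    where shift = (mu1 - mu0) * sqrt(n) / sigma.
    """
    N = NormalDist()
    z_crit = N.inv_cdf(1 - alpha / 2)
    shift = (mu1 - mu0) * n ** 0.5 / sigma
    return N.cdf(-z_crit + shift) + N.cdf(-z_crit - shift)

# Power at mu1 = mu0 reduces to alpha; it rises as mu1 moves away from mu0.
for mu1 in (0.0, 0.2, 0.5, 0.8):
    print(mu1, round(z_test_power(mu1), 3))
```

Evaluating the function at several values of μ1 traces out the power curve; β at any particular μ1 is just 1 minus the value printed there.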

In sample size determination (Section 9.13), we will see that analogous to choosing a width d for a confidence interval, we will select a distance δ for |μ1 − μ0| such that we achieve a specific high value for the power at that δ. Usually, the value for 1 – β is chosen to be 0.80, 0.90, or 0.95.
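As a preview of that calculation, the standard formula for the two-sided z-test with known σ is n = ((z_{α/2} + z_{β}) σ / δ)², rounded up. The sketch below assumes that setting; σ = 1 and δ = 0.5 are hypothetical values chosen for illustration.

```python
import math
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Smallest n giving the requested power to detect |mu1 - mu0| = delta
    with a two-sided z-test (sigma known)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for power = 0.80
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

print(sample_size(0.5, 1.0))               # n = 32 for 80% power
print(sample_size(0.5, 1.0, power=0.90))   # larger n for 90% power
```

Note how the required n grows both as δ shrinks and as the target power rises, which is exactly the trade-off Section 9.13 develops.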