The **power of a study**, usually presented as a percentage, is a measure of the probability that the null hypothesis is rejected when it is false.

When interpreting study results it is important to understand what is referred to as power, as well as the types of errors that may be encountered.

The **power of a study** is the probability that one will find an association between the dependent and independent variables when a true relationship exists between them.

If the measures of association (odds ratios, relative risks, etc.) in a study reach significance, then the power of the study is irrelevant and the findings stand as significant, assuming the study methods are valid.

However, results may fail to reach significance either because there is no true relationship between the dependent and independent variables, or because the study lacked sufficient power (usually an issue of sample size) to demonstrate the relationship.
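As a rough illustration of how power depends on sample size, the following Monte Carlo sketch (not part of the original text) estimates power for a two-group comparison, assuming a two-sided z-test on means with a known standard deviation of 1; the function name and parameters are hypothetical choices for this example.

```python
import math
import random

def simulated_power(effect_size, n_per_group, trials=2000, seed=1):
    """Estimate power by simulation: the fraction of repeated studies
    in which a two-sided z-test (known SD = 1) rejects the null
    hypothesis at alpha = 0.05 when the true effect is effect_size."""
    rng = random.Random(seed)
    z_crit = 1.96                       # two-sided critical value, alpha = 0.05
    se = math.sqrt(2.0 / n_per_group)   # standard error of the mean difference
    rejections = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        z = (sum(b) / n_per_group - sum(a) / n_per_group) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

# The same true effect, detected far more reliably with a larger sample:
p_small = simulated_power(0.5, 20)    # n = 20 per group
p_large = simulated_power(0.5, 100)   # n = 100 per group
```

With 20 participants per group the simulated power is modest, while 100 per group pushes it above the conventional 80% target, illustrating why an underpowered study can miss a real relationship.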

A Type I error occurs when the null hypothesis is rejected even though it is actually true, that is, when one finds a significant result by chance alone and there is no true underlying relationship between the dependent and independent variables. The Type I error rate is the rate of false alarms, or false positives.

The Type I error rate is equivalent to the significance level reported in studies (e.g., p values <.05). A Type II error occurs when one fails to reject the null hypothesis when it is false. The Type II error rate is the complement of power (Type II error rate = 1 – power).

A Type II error is the chance that one will miss an effect when it is really there. In other words, the Type II error rate is the rate of missed alarms, or false negatives.
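The two error rates can be checked with the same kind of simulation (again a sketch not taken from the original text, assuming a two-sided z-test with known standard deviation): when the null is true, the rejection rate should sit near the chosen alpha; when a real effect exists, one minus the rejection rate is the Type II error rate.

```python
import math
import random

def rejection_rate(effect_size, n_per_group, trials=4000, seed=2):
    """Fraction of simulated studies in which a two-sided z-test
    (known SD = 1, alpha = 0.05) rejects the null hypothesis.
    With effect_size = 0 this estimates the Type I error rate;
    with effect_size > 0, (1 - rejection rate) estimates the
    Type II error rate."""
    rng = random.Random(seed)
    z_crit = 1.96
    se = math.sqrt(2.0 / n_per_group)
    rejections = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        z = (sum(b) - sum(a)) / n_per_group / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

type_i = rejection_rate(0.0, 50)        # null true: false-positive rate, near 0.05
type_ii = 1 - rejection_rate(0.5, 50)   # null false: false-negative rate
```

Running this shows the false-positive rate hovering around the nominal 5% level regardless of sample size, while the false-negative rate shrinks as power increases.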

Suggested Readings

Centre for Health Evidence. Users' Guides to Evidence-Based Practice.

Hennekens CH, Buring JE, Mayrent SL. Epidemiology in medicine. Boston: Little, Brown, 1987.

Rothman KJ, Greenland S. Modern epidemiology, 2nd ed. Philadelphia: Lippincott-Raven, 1998.

Swinscow TDV, Campbell MJ. Statistics at square one, 10th ed. London: BMJ Books, 2002.