Consistency (statistics)

From Wikipedia, the free encyclopedia

In statistics, consistency of procedures, such as computing confidence intervals or conducting hypothesis tests, is a desired property of their behaviour as the number of items in the data set to which they are applied increases indefinitely. In particular, consistency requires that as the dataset size increases, the outcome of the procedure approaches the correct outcome.[1] Use of the term in statistics derives from Sir Ronald Fisher in 1922.[2]

Use of the terms consistency and consistent in statistics is restricted to cases where essentially the same procedure can be applied to any number of data items. In complicated applications of statistics, there may be several ways in which the number of data items may grow. For example, records for rainfall within an area might increase in three ways: records for additional time periods; records for additional sites within a fixed area; records for extra sites obtained by extending the size of the area. In such cases, the property of consistency may be limited to one or more of the possible ways a sample size can grow.

Estimators


A consistent estimator is one for which, when the estimate is considered as a random variable indexed by the number n of items in the data set, the estimates converge in probability, as n increases, to the value that the estimator is designed to estimate.
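As an illustration (not part of the article), the following Python sketch simulates convergence in probability for the sample mean of Bernoulli data; the parameter value, tolerance, sample sizes, and seed are all arbitrary choices:

```python
# Sketch: the sample mean as a consistent estimator of a Bernoulli
# parameter p = 0.3.  Convergence in probability means that for any
# tolerance eps, P(|estimate - p| > eps) tends to 0 as n grows.
import random

random.seed(0)
p = 0.3        # true parameter the estimator is designed to estimate
eps = 0.05     # tolerance in the convergence-in-probability statement
trials = 500   # repeated samples used to approximate the probability

for n in [10, 100, 1000, 5000]:
    misses = 0
    for _ in range(trials):
        estimate = sum(random.random() < p for _ in range(n)) / n
        if abs(estimate - p) > eps:
            misses += 1
    print(n, misses / trials)  # P(|estimate - p| > eps), shrinking with n
```

The printed miss rate falls toward zero as n grows, which is exactly the convergence-in-probability statement for a fixed tolerance.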

An estimator that has Fisher consistency is one for which, if the estimator were applied to the entire population rather than a sample, the true value of the estimated parameter would be obtained.
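A hedged sketch of the idea (a hypothetical example, not drawn from the article): the plug-in variance with divisor n is biased on samples, yet Fisher consistent, because applying the same formula to an entire finite population returns the population variance exactly:

```python
# Hypothetical sketch: the variance estimator with divisor n is biased
# on samples, yet Fisher consistent -- applied to the entire (finite)
# population it returns the population variance exactly.
import random

random.seed(1)
population = [random.gauss(0.0, 2.0) for _ in range(50_000)]

def variance_n(data):
    """Plug-in variance: sum of squared deviations divided by len(data)."""
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / len(data)

pop_var = variance_n(population)   # the parameter itself, by definition
sample = random.sample(population, 20)
print(variance_n(sample))          # an estimate; biased low on average
print(variance_n(population) == pop_var)  # True: Fisher consistency
```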

Tests


A consistent test is one for which the power of the test for a fixed untrue hypothesis increases to one as the number of data items increases.[1]
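The following Python sketch estimates the power of a simple one-sided test of H0: p = 0.5 when the data actually come from p = 0.55 (the specific test, significance level, and sample sizes are illustrative assumptions, not taken from the article):

```python
# Sketch: power against the fixed untrue hypothesis H0: p = 0.5 when the
# true p is 0.55.  Reject when the sample proportion exceeds the normal-
# approximation cutoff 0.5 + 1.645 * sqrt(0.25/n) (a ~5% level test).
import math
import random

random.seed(2)
p_true, p0, trials = 0.55, 0.5, 1000

for n in [50, 200, 800, 3200]:
    cutoff = p0 + 1.645 * math.sqrt(p0 * (1 - p0) / n)
    rejections = 0
    for _ in range(trials):
        phat = sum(random.random() < p_true for _ in range(n)) / n
        if phat > cutoff:
            rejections += 1
    print(n, rejections / trials)  # empirical power, increasing toward 1
```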

Classification


In statistical classification, a consistent classifier is one for which, as the size of the training set increases, the probability of correct classification, given the training set, approaches the best probability theoretically achievable if the population distributions were fully known.
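As a hypothetical illustration (the class distributions, priors, and sample sizes are arbitrary assumptions), a plug-in threshold classifier for two equally likely unit-variance Gaussian classes converges to the Bayes-optimal rule as the training set grows:

```python
# Hypothetical sketch: classes N(-1, 1) and N(+1, 1), equal priors.  The
# Bayes-optimal rule thresholds at 0; the trained rule thresholds at the
# midpoint of the two sample means, which converges to 0 as n grows.
import math
import random

random.seed(3)

def accuracy(threshold):
    """Exact probability of correct classification for a given threshold."""
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF
    # class -1 correct if x < t, class +1 correct if x >= t
    return 0.5 * phi(threshold + 1) + 0.5 * (1 - phi(threshold - 1))

bayes = accuracy(0.0)   # best probability with the distributions known
for n in [10, 100, 10000]:
    neg = [random.gauss(-1, 1) for _ in range(n)]
    pos = [random.gauss(+1, 1) for _ in range(n)]
    t = (sum(neg) / n + sum(pos) / n) / 2   # estimated threshold
    print(n, bayes - accuracy(t))           # excess error shrinks toward 0
```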

Relationship to unbiasedness


An estimator or test may be consistent without being unbiased.[3] A classic example is the sample standard deviation, which is a biased estimator of the population standard deviation but converges to it almost surely by the law of large numbers. In other words, unbiasedness is not a requirement for consistency, so biased estimators and tests may be used in practice with the expectation that their outcomes are reliable, especially when the sample size is large (recall the definition of consistency). In contrast, an estimator or test that is not consistent may be difficult to justify in practice, since gathering additional data carries no asymptotic guarantee of improving the quality of the outcome.
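A minimal sketch of this point, using the divisor-n standard deviation (the value of sigma and the sample sizes are arbitrary assumptions): its average over many small samples falls short of sigma, showing the bias, while a single large-sample estimate lands close to sigma, showing consistency:

```python
# Sketch: the sample standard deviation with divisor n is a biased
# estimator of sigma, yet consistent.
import math
import random

random.seed(4)
sigma = 3.0

def sd_n(data):
    """Standard deviation with divisor len(data) (the biased version)."""
    m = sum(data) / len(data)
    return math.sqrt(sum((x - m) ** 2 for x in data) / len(data))

# Bias at small n: the average of many estimates falls short of sigma.
small = [sd_n([random.gauss(0, sigma) for _ in range(5)]) for _ in range(5000)]
print(sum(small) / len(small))   # noticeably below 3.0

# Consistency: a single estimate from a large sample lands near sigma.
big = sd_n([random.gauss(0, sigma) for _ in range(200_000)])
print(big)
```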


References

  1. ^ a b Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9 (entries for consistency, consistent estimator, consistent test)
  2. ^ Upton, G.; Cook, I. (2006). Oxford Dictionary of Statistics, 2nd Edition. OUP. ISBN 978-0-19-954145-4
  3. ^ van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press. ISBN 978-0-511-80225-6