Internal consistency
In statistics and research, internal consistency is typically a measure based on the correlations between different items on the same test (or the same subscale on a larger test). It measures whether several items that propose to measure the same general construct produce similar scores. For example, if a respondent expressed agreement with the statements "I like to ride bicycles" and "I've enjoyed riding bicycles in the past", and disagreement with the statement "I hate bicycles", this would be indicative of good internal consistency of the test.
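The inter-item correlations underlying this definition can be inspected directly. The following is a minimal sketch using numpy with hypothetical agreement ratings for the three bicycle statements above; the respondent data and the reverse-scoring of the "hate" item are illustrative assumptions, not from the source.

```python
import numpy as np

# Hypothetical agreement ratings (1-5) from four respondents for the three
# bicycle statements; "I hate bicycles" is reverse-scored so that a high
# value always indicates liking bicycles.
like = np.array([5, 4, 2, 1])
enjoyed = np.array([5, 5, 1, 2])
hate_reversed = np.array([4, 5, 2, 1])

# Pairwise correlations between items; values near 1 suggest the items
# produce similar scores, i.e. good internal consistency.
r = np.corrcoef(np.column_stack([like, enjoyed, hate_reversed]), rowvar=False)
```

Here every off-diagonal correlation is large and positive, which is the pattern expected of an internally consistent set of items.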
Internal consistency is usually measured with Cronbach's alpha, a statistic calculated from the pairwise correlations between items. Internal consistency ranges between zero and one. A commonly accepted rule of thumb for describing internal consistency is as follows:
Cronbach's alpha | Internal consistency |
---|---|
α ≥ .9 | Excellent |
.9 > α ≥ .8 | Good |
.8 > α ≥ .7 | Acceptable |
.7 > α ≥ .6 | Questionable |
.6 > α ≥ .5 | Poor |
.5 > α | Unacceptable |
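Alpha can also be computed directly from the standard variance-based formula, α = (k / (k − 1)) · (1 − Σ var(itemᵢ) / var(total)). The sketch below assumes numpy and an invented matrix of 5-point ratings for three related items (the kind of data the bicycle example describes); the scores are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings from six respondents on three related items.
scores = np.array([
    [5, 5, 4],
    [4, 5, 5],
    [2, 1, 2],
    [3, 3, 3],
    [5, 4, 5],
    [1, 2, 1],
])
alpha = cronbach_alpha(scores)  # about 0.95, "Excellent" by the table above
```

Because the three items move together across respondents, alpha lands in the top band of the rule-of-thumb table.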
Very high reliabilities (0.95 or higher) are not necessarily desirable, as this indicates that the items may be entirely redundant. The goal in designing a reliable instrument is for scores on similar items to be related (internally consistent), but for each to contribute some unique information as well.
An alternative way of thinking about internal consistency, however, is that it is the extent to which all of the items of a test measure the same latent variable. The advantage of this perspective over the notion of a high average correlation among the items of a test (the perspective underlying Cronbach's alpha) is that the average item correlation is affected by skewness in the distribution of item correlations, just as any other average is. When the items of a test measure several unrelated latent variables, the modal item correlation is zero, yet the average item correlation in such cases will be greater than zero. Thus, although the ideal of measurement is for all items of a test to measure the same latent variable, alpha has been demonstrated many times to attain quite high values even when the set of items measures several unrelated latent variables. The hierarchical coefficient omega may be a more appropriate index of the extent to which all of the items in a test measure the same latent variable. Several different measures of internal consistency are reviewed by Revelle & Zinbarg (2009).
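The point that alpha can be high even when items measure unrelated latent variables can be demonstrated by simulation. The sketch below is an illustration under assumed conditions (two independent latent factors, three items each, modest noise, made-up loadings), not a result from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two independent (i.e. unrelated) latent variables.
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)

def noisy(factor):
    # An item loading on one factor, plus modest measurement noise.
    return factor + 0.5 * rng.normal(size=n)

# Three items load on each factor; no item relates to both factors.
items = np.column_stack([
    noisy(f1), noisy(f1), noisy(f1),
    noisy(f2), noisy(f2), noisy(f2),
])

k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))
```

Despite the six items measuring two completely unrelated latent variables, alpha comes out around 0.74 ("Acceptable" by the table above), illustrating why a high alpha alone does not establish unidimensionality.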
External links
- http://www.wilderdom.com/personality/L3-2EssentialsGoodPsychologicalTest.html