Reliability is simply the trustworthiness or dependability of a thing. Just as the findings of a research study are expected to be dependable enough for implementation, the instrument used to collect the data behind those findings must itself be trustworthy or dependable. This means the instrument must be consistent in measuring what it purports to measure. Reliability is therefore the degree to which a research measure, test, scale, or instrument consistently measures whatever it is measuring. There are different types of reliability, and a researcher's choice among them depends on whether the researcher is interested in establishing individual agreement or item agreement about the scores of a research instrument.
The different types, forms, or methods of reliability in research and statistics are stability or test-retest reliability, equivalence or alternate-form reliability, the equivalence-and-stability form of reliability, the internal-consistency form of reliability, and scorer or rater reliability. Internal-consistency reliability can be measured in several ways, including split-half reliability, Kuder-Richardson reliability (KR-20 and KR-21), Cronbach's alpha reliability, McDonald's omega reliability, the Greatest Lower Bound (GLB) reliability, Revelle's beta reliability, and others. Scorer or rater reliability has two types: inter-rater reliability and intra-rater reliability.
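To illustrate how an internal-consistency coefficient is actually obtained, here is a minimal sketch of Cronbach's alpha computed from a hypothetical respondents-by-items score matrix. The data and the function name are invented for the example; alpha is the number of items times the ratio of average inter-item covariance to total-score variance, conventionally written as k/(k-1) times (1 minus the sum of item variances over the variance of the total scores).

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 4 Likert-type items
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 3))  # → 0.933
```

A coefficient of about 0.93 for this toy data would conventionally be read as high internal consistency, since the items vary together across respondents.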
Since the reliability of a research instrument is expressed numerically as a reliability coefficient, it is computed with a statistic. The different reliability statistics include Pearson's product-moment correlation (PPMC), Spearman's rho, the Spearman-Brown correction formula, Cronbach's alpha, Kuder-Richardson (KR-20 and KR-21), Revelle's beta, McDonald's omega, Cohen's kappa, Krippendorff's alpha, and so on.
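Two of these statistics can be shown working together: the split-half method correlates the two halves of a test (here, a conventional odd-even split) with Pearson's product-moment correlation, and the Spearman-Brown correction formula then steps the half-test correlation up to an estimate for the full-length test. This is a sketch on invented data, not output from any particular study.

```python
import numpy as np

def split_half_reliability(scores: np.ndarray) -> float:
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    odd_half = scores[:, 0::2].sum(axis=1)    # totals over odd-numbered items
    even_half = scores[:, 1::2].sum(axis=1)   # totals over even-numbered items
    r_half = np.corrcoef(odd_half, even_half)[0, 1]  # Pearson r between halves
    # Spearman-Brown step-up: reliability of the full-length test
    return 2 * r_half / (1 + r_half)

# Same hypothetical 5-respondent, 4-item data as above
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(split_half_reliability(scores), 3))
```

Note that the corrected coefficient is always at least as large as the raw half-test correlation, because halving a test shortens it and shorter tests are generally less reliable.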