Reliability: Measuring Internal Consistency Using Cronbach's α



Clinical Simulation in Nursing (2013) 9, e179-e180

www.elsevier.com/locate/ecsn

Making Sense of Methods and Measurement


Katie Anne Adamson, PhD (University of Washington Tacoma, Tacoma, WA 98402-3100, USA)
Susan Prion, EdD (University of San Francisco, San Francisco, CA 94117-1080, USA)

Corresponding author: [email protected] (K. A. Adamson)

http://dx.doi.org/10.1016/j.ecns.2012.12.001

In previous articles we have explored the concepts of reliability, validity, and the importance of psychometrically sound measures for simulation research. This article will focus on how to measure the internal consistency among items on an instrument. A statistic commonly used to measure internal consistency is Cronbach's alpha (α). Cronbach's α can range from 0.0 to 1.0, and it quantifies the degree to which items on an instrument are correlated with one another (Connelly, 2011). In order to discuss Cronbach's α in more detail, we will look at an example of a simulation evaluation instrument from the literature: the Lasater Clinical Judgment Rubric (LCJR; Lasater, 2007). The LCJR is frequently used in simulation research to measure students' demonstration of clinical judgment. Although most would agree that clinical judgment is a necessary and observable trait, there is no graduated medicine cup or nomogram that can be used to accurately quantify it. Therefore, a scale was developed to measure the construct of clinical judgment, and it is based on the Tanner Clinical Judgment Model (Tanner, 2006). The LCJR includes 11 items, and ratings (beginning, developing, accomplished, and exemplary) from these items are combined to reflect a composite clinical judgment score. If each of the items on the LCJR measures the same construct (clinical judgment), the ratings on each should be correlated with one another. A perfect correlation would result in α = 1.0, and the absence of any correlation would result in α = 0.0.
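As a point of reference for the values discussed below, coefficient alpha is conventionally computed from the item variances and the variance of the composite scores (Cronbach, 1951):

    α = [k / (k − 1)] × [1 − (sum of the k item variances) / (variance of the total scores)],

where k is the number of items (11 for the full LCJR). When the items are highly intercorrelated, the summed item variances make up only a small share of the total-score variance and α approaches 1.0; when the items are uncorrelated, the two quantities are essentially equal and α approaches 0.0.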

Similarly, if the items within the subscales on the LCJR each measure their respective construct, they should be correlated with the other items within that subscale. The subscales on the LCJR include noticing, interpreting, responding, and reflecting. Each of the 11 items on the LCJR falls under one of these subscales. Recently, Mariani, Cantrell, Meakim, Prieto, and Dreifuerst (in press) estimated Cronbach's α for the items on the LCJR at two different time points to be 0.927 and 0.942, respectively, and the α for items under each of the various subscales to be between 0.800 and 0.909.
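Estimates like those reported by Mariani et al. can be reproduced from raw rating data with only a few lines of code. The sketch below, which uses NumPy, is a minimal illustration rather than the procedure used in that study: it assumes the four LCJR ratings are coded numerically (e.g., beginning = 1 through exemplary = 4) and arranged with one row per student and one column per item, and the function name and toy data are hypothetical.

    import numpy as np

    def cronbach_alpha(ratings):
        """Coefficient alpha for a (students x items) matrix of numeric ratings."""
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]                         # number of items
        item_vars = ratings.var(axis=0, ddof=1)      # sample variance of each item
        total_var = ratings.sum(axis=1).var(ddof=1)  # sample variance of the composite scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical data: 6 students rated on 3 items, coded 1 (beginning) to 4 (exemplary)
    example = [[3, 3, 4],
               [2, 2, 2],
               [4, 4, 4],
               [3, 2, 3],
               [1, 2, 1],
               [4, 3, 4]]
    print(round(cronbach_alpha(example), 3))

A subscale estimate is obtained the same way by passing only the columns for the items under that subscale; as with any sample statistic, larger samples yield more stable estimates.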

Cronbach's α, like most statistical analyses, has several weaknesses and special cases. First, a high correlation among items reflects good internal consistency but tells us little about the validity of the measure. All of the items could be consistently measuring the wrong thing. For this reason, we need to remember that validity and reliability go hand in hand. A measure may be reliable but invalid. Next, Cronbach's α reflects the degree to which items on the scale are interrelated but does not necessarily tell us anything about the unidimensionality of the construct or measure (Schmitt, 1996). Said another way, high correlations between items on the LCJR may mean that they all measure highly related constructs, but not necessarily a single construct: clinical judgment (Segars, 1997). Finally, Cronbach's α is the appropriate choice for measuring internal consistency in scales where items have more than two response options. However, for scales with dichotomous items, the Kuder-Richardson formula 20 (KR-20) is the appropriate choice (Cronbach, 1951).
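For completeness, KR-20 has the same structure as coefficient alpha, with each dichotomous item's variance written as the product of the proportions of its two responses (Cronbach, 1951):

    KR-20 = [k / (k − 1)] × [1 − (sum of pᵢ(1 − pᵢ) across the k items) / (variance of the total scores)],

where pᵢ is the proportion of respondents endorsing (or passing) item i. The pᵢ notation is introduced here only for illustration; it does not apply to the LCJR, whose items have four response options.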

The question remains: How internally consistent should a scale be? According to Bland and Altman (1997), scales used in the clinical setting should have a minimum α = 0.90; however, scales such as the LCJR that are used to compare groups may be acceptable with an α as low as 0.70. By either standard, the findings of Mariani et al. (in press) indicate a high reliability of the LCJR. Each of these estimates of internal consistency is specific to the sample in which it was obtained and should be recalculated with additional samples in future studies.

References

Bland, J. M., & Altman, D. G. (1997). Statistical notes: Cronbach's alpha. British Medical Journal, 314, 572.

Connelly, L. M. (2011). Research roundtable. Cronbach's alpha. Medsurg Nursing, 20, 1.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334.

Mariani, B., Cantrell, M. A., Meakim, C., Prieto, P., & Dreifuerst, K. T. (in press). Structured debriefing and students' clinical judgment abilities in simulation. Clinical Simulation in Nursing.

Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4), 350-353.

Segars, A. H. (1997). Assessing the unidimensionality of measurement: A paradigm and illustration within the context of information systems research. Omega, 25(1), 107-122.

Tanner, C. A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45(6), 204-211.
