Show simple item record

dc.contributor.author: Nevill, Alan M.
dc.contributor.author: Lane, Andrew M.
dc.contributor.author: Kilgour, Lindsey J.
dc.contributor.author: Bowes, Neal
dc.contributor.author: Whyte, Gregory P.
dc.date.accessioned: 2007-11-19T18:08:21Z
dc.date.available: 2007-11-19T18:08:21Z
dc.date.issued: 2001
dc.identifier.citation: Journal of Sports Sciences 2001, 19(4): 273-278
dc.identifier.issn: 0264-0414, 1466-447X
dc.identifier.doi: 10.1080/026404101750158358
dc.identifier.uri: http://hdl.handle.net/2436/14655
dc.description: Metadata only
dc.description.abstract: In 1999, Wilson and Batterham proposed a new approach to assessing the test-retest stability of psychometric questionnaires. They recommended assessing the proportion of agreement - that is, the proportion of participants that record the same response to an item - using a test-retest design. They went on to use a bootstrapping technique to estimate the uncertainty of the proportion of agreement. The aims of this short communication are (1) to demonstrate that the sampling distribution of the proportion of agreement is well known (the binomial distribution), making the technique of 'bootstrapping' redundant, and (2) to suggest a much simpler, more sensitive method of assessing the stability of a psychometric questionnaire, based on the test-retest differences (within-individuals) for each item. Adopting methods similar to Wilson and Batterham, 97 sport students completed the Social Physique Anxiety Scale on two occasions. Test-retest differences were calculated for each item. Our results show that the proportion of agreement ignores the nature of disagreement. Items 4 and 11 showed similar agreement (44.3% and 43.3% respectively), but 89 of the participants (91.8%) differed by just ±1 point when responding to item 4, indicating a relatively stable item. In contrast, only 78 of the participants (80.4%) recorded a difference within ±1 point when responding to item 11, suggesting quite contrasting stability for the two items. We recommend that, when assessing the stability of self-report questionnaires using a 5-point scale, most participants (90%) should record test-retest differences within a reference value of ±1.
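The abstract's two points can be illustrated with a minimal Python sketch (synthetic 5-point-scale responses, not the study's actual data): because each participant either agrees exactly or does not, the count of agreements is Binomial(n, p), so a standard normal-approximation confidence interval replaces bootstrapping; and tallying test-retest differences within ±1 point captures the stability that exact agreement alone misses.

```python
import math

# Hypothetical test-retest responses on a 5-point scale for one item
# (synthetic illustration only; not the study's actual responses).
test1 = [3, 4, 2, 5, 3, 3, 4, 1, 2, 4]
test2 = [3, 4, 3, 5, 2, 3, 4, 1, 4, 4]

n = len(test1)
diffs = [b - a for a, b in zip(test1, test2)]

# Proportion of exact agreement (test-retest difference of 0).
agree = sum(d == 0 for d in diffs)
p_hat = agree / n

# The number of agreements follows Binomial(n, p), so a 95% CI for p
# comes straight from the normal approximation - no bootstrap needed.
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# The more sensitive criterion: proportion of differences within +/-1.
within_one = sum(abs(d) <= 1 for d in diffs) / n

print(f"exact agreement = {p_hat:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"within +/-1 point = {within_one:.0%}")
```

Here 7 of 10 synthetic pairs agree exactly (0.70), but 9 of 10 fall within ±1 point (90%), mirroring the paper's point that exact agreement alone conceals how large the disagreements are.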
dc.format.extent: -1 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Routledge
dc.relation.url: https://www.tandfonline.com/doi/abs/10.1080/026404101750158358
dc.subject: Psychometrics
dc.subject: Sports psychology
dc.subject: Bootstrapping
dc.subject: Consistency
dc.subject: Measurement
dc.subject: Test-retest
dc.subject: Reliability
dc.subject: Validity
dc.title: Stability of psychometric questionnaires
dc.type: Journal article
dc.format.dig: YES

