Hdl Handle:
http://hdl.handle.net/2436/14655
Title:
Stability of psychometric questionnaires
Authors:
Nevill, Alan M.; Lane, Andrew M.; Kilgour, Lindsey J.; Bowes, Neal; Whyte, Gregory P.
Abstract:
In 1999, Wilson and Batterham proposed a new approach to assessing the test-retest stability of psychometric questionnaires. They recommended assessing the proportion of agreement - that is, the proportion of participants that record the same response to an item - using a test-retest design. They went on to use a bootstrapping technique to estimate the uncertainty of the proportion of agreement. The aims of this short communication are (1) to demonstrate that the sampling distribution of the proportion of agreement is well known (the binomial distribution), making the technique of 'bootstrapping' redundant, and (2) to suggest a much simpler, more sensitive method of assessing the stability of a psychometric questionnaire, based on the test-retest differences (within-individuals) for each item. Adopting methods similar to Wilson and Batterham, 97 sport students completed the Social Physique Anxiety Scale on two occasions. Test-retest differences were calculated for each item. Our results show that the proportion of agreement ignores the nature of disagreement. Items 4 and 11 showed similar agreement (44.3% and 43.3% respectively), but 89 of the participants (91.8%) differed by just ±1 point when responding to item 4, indicating a relatively stable item. In contrast, only 78 of the participants (80.4%) recorded a difference within ±1 point when responding to item 11, suggesting quite contrasting stability for the two items. We recommend that, when assessing the stability of self-report questionnaires using a 5-point scale, most participants (90%) should record test-retest differences within a reference value of ±1.
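As a rough illustration of the two stability indices compared in the abstract, the Python sketch below computes (1) the proportion of agreement together with a binomial, normal-approximation 95% confidence interval (which is why bootstrapping is described as redundant) and (2) the proportion of test-retest differences falling within ±1 on a 5-point scale. The response data, the difference-generating rule, and all variable names are invented for illustration only and are not taken from the paper.

```python
# Illustrative sketch of the two stability indices discussed in the abstract.
# The test/retest responses below are simulated, not the study data.

import math
import random

random.seed(1)

n = 97  # sample size reported in the study
test = [random.randint(1, 5) for _ in range(n)]  # hypothetical time-1 responses
retest = [min(5, max(1, t + random.choice([-1, 0, 0, 1]))) for t in test]  # hypothetical time-2 responses

# (1) Proportion of agreement: participants giving the identical response twice.
agree = sum(t == r for t, r in zip(test, retest))
p_agree = agree / n

# The number of agreements follows a binomial distribution, so an approximate
# 95% confidence interval can be written down directly instead of bootstrapping.
se = math.sqrt(p_agree * (1 - p_agree) / n)
ci_low, ci_high = p_agree - 1.96 * se, p_agree + 1.96 * se

# (2) Proportion of test-retest differences within the +/-1 reference value.
within_1 = sum(abs(t - r) <= 1 for t, r in zip(test, retest))
p_within_1 = within_1 / n

print(f"Agreement: {p_agree:.1%} (approx. 95% CI {ci_low:.1%} to {ci_high:.1%})")
print(f"Differences within ±1: {p_within_1:.1%} (suggested criterion: at least 90%)")
```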
Citation:
Journal of Sports Sciences 2001, 19(4): 273-278
Publisher:
Routledge
Issue Date:
2001
URI:
http://hdl.handle.net/2436/14655
DOI:
10.1080/026404101750158358
Additional Links:
http://www.informaworld.com/smpp/content?content=10.1080/026404101750158358; http://www.informaworld.com/smpp/title~content=t713721847
Type:
Article
Language:
en
Description:
Metadata only
ISSN:
0264-0414; 1466-447X
Appears in Collections:
Sport, Exercise and Health Research Group; Sport Performance; Learning and Teaching in Sport, Exercise and Performance

Full metadata record

DC Field | Value | Language
dc.contributor.author | Nevill, Alan M. | -
dc.contributor.author | Lane, Andrew M. | -
dc.contributor.author | Kilgour, Lindsey J. | -
dc.contributor.author | Bowes, Neal | -
dc.contributor.author | Whyte, Gregory P. | -
dc.date.accessioned | 2007-11-19T18:08:21Z | -
dc.date.available | 2007-11-19T18:08:21Z | -
dc.date.issued | 2001 | -
dc.identifier.citation | Journal of Sports Sciences 2001, 19(4): 273-278 | en
dc.identifier.issn | 0264-0414; 1466-447X | -
dc.identifier.doi | 10.1080/026404101750158358 | -
dc.identifier.uri | http://hdl.handle.net/2436/14655 | -
dc.description | Metadata only | en
dc.description.abstract | In 1999, Wilson and Batterham proposed a new approach to assessing the test-retest stability of psychometric questionnaires. They recommended assessing the proportion of agreement - that is, the proportion of participants that record the same response to an item - using a test-retest design. They went on to use a bootstrapping technique to estimate the uncertainty of the proportion of agreement. The aims of this short communication are (1) to demonstrate that the sampling distribution of the proportion of agreement is well known (the binomial distribution), making the technique of 'bootstrapping' redundant, and (2) to suggest a much simpler, more sensitive method of assessing the stability of a psychometric questionnaire, based on the test-retest differences (within-individuals) for each item. Adopting methods similar to Wilson and Batterham, 97 sport students completed the Social Physique Anxiety Scale on two occasions. Test-retest differences were calculated for each item. Our results show that the proportion of agreement ignores the nature of disagreement. Items 4 and 11 showed similar agreement (44.3% and 43.3% respectively), but 89 of the participants (91.8%) differed by just ±1 point when responding to item 4, indicating a relatively stable item. In contrast, only 78 of the participants (80.4%) recorded a difference within ±1 point when responding to item 11, suggesting quite contrasting stability for the two items. We recommend that, when assessing the stability of self-report questionnaires using a 5-point scale, most participants (90%) should record test-retest differences within a reference value of ±1. | en
dc.format.extent | -1 bytes | -
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en
dc.publisher | Routledge | en
dc.relation.url | http://www.informaworld.com/smpp/content?content=10.1080/026404101750158358 | en
dc.relation.url | http://www.informaworld.com/smpp/title~content=t713721847 | en
dc.subject | Bootstrapping | en
dc.subject | Consistency | en
dc.subject | Measurement | en
dc.subject | Test-retest | en
dc.subject | Reliability | en
dc.subject | Validity | en
dc.subject | Psychometrics | -
dc.subject | Sports psychology | -
dc.title | Stability of psychometric questionnaires | en
dc.type | Article | en
dc.format.dig | YES | -