
dc.contributor.author: Cross, Vinette
dc.contributor.author: Hicks, Carolyn
dc.contributor.author: Barwell, Fred
dc.date.accessioned: 2008-06-10T14:28:13Z
dc.date.available: 2008-06-10T14:28:13Z
dc.date.issued: 2001
dc.identifier.citation: Assessment & Evaluation in Higher Education, 26 (3): 189-212
dc.identifier.issn: 02602938
dc.identifier.issn: 1469297X
dc.identifier.doi: 10.1080/02602930120052369
dc.identifier.uri: http://hdl.handle.net/2436/29798
dc.description.abstract: The range of conditions necessary for valid and reliable assessment of clinical competence may result in a gap between the amount of evidence available from performance and that required for safe inferences of competence. Two assessment forms were compared for validity and reliability, using six video vignettes of undergraduates on placement. Form A was currently in use on the programme. Form B was developed from a Delphi study involving 108 physiotherapy practitioners' perceptions of competence. Effects of training on assessment decisions were also investigated. Results indicated wide differences in individual ability to assess students. Good students tended to be rated less positively than they deserved, and poor students better than they deserved. Judgements were more valid and reliable on Form B than on Form A (A: ω = 0.496, rho = 0.61; B: ω = 0.647, rho = 0.71). Judgements on both forms were more reliable after training than before (before: ω = 0.524, rho = 0.62; after: ω = 0.620, rho = 0.70). Factor analysis of assessment data from both forms indicated that Form B had greater validity amongst clinical assessors. It is concluded that video vignettes are effective in monitoring assessors' judgements and in helping to identify the amount of evidence that can reasonably and reliably be collected by clinicians assessing undergraduates in the clinical environment.
dc.language.iso: en
dc.publisher: Taylor & Francis
dc.relation.url: http://www.informaworld.com/smpp/content~db=all?content=10.1080/02602930120052369
dc.title: Exploring the Gap Between Evidence and Judgement: using video vignettes for practice-based assessment of physiotherapy undergraduates
dc.type: Journal article
dc.identifier.journal: Assessment & Evaluation in Higher Education

