dc.contributor.author: Cross, Vinette
dc.contributor.author: Hicks, Carolyn
dc.contributor.author: Barwell, Fred
dc.date.accessioned: 2008-06-10T14:31:15Z
dc.date.available: 2008-06-10T14:31:15Z
dc.date.issued: 2001
dc.identifier.citation: Physiotherapy, 87 (7): 351-367
dc.identifier.doi: 10.1016/S0031-9406(05)60867-X
dc.identifier.uri: http://hdl.handle.net/2436/29799
dc.description.abstract: Quality measurement in healthcare and higher education indicates the need for a systematic approach to developing undergraduate clinical competence assessment. Validity and reliability may be undermined by differences in assessors' interpretation of what is important. Differing contexts of undergraduates' clinical experience could result in activities being rated as less important, omitted, or rendered meaningless by assessors. This study investigated the level of agreement across and within five clinical specialties in physiotherapy on the relative importance of 89 activities associated with clinical competence. One-way analysis of variance for each activity revealed 12 items that were rated differently across specialties (p < 0.05, 0.01 and 0.001). Kendall's coefficient of concordance demonstrated within-group agreement (p < 0.001). Factor analysis of the items on which there was maximum agreement across specialties, combined with split-half reliability analysis (Cronbach's alpha), yielded eight reliable factors, covering both task-specific and generic transferable skills. It was concluded that the factors provide a basis for discussion about clinicians' and academics' contributions to assessment, and a starting point for developing a clinical assessment instrument that could optimise the validity and reliability of clinical assessment decisions.
dc.language.iso: en
dc.publisher: Elsevier
dc.relation.url: http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B7CVK-4H9YR37-3&_user=1644469&_coverDate=07%2F31%2F2001&_rdoc=3&_fmt=high&_orig=browse&_srch=doc-info(%23toc%2318081%232001%23999129992%23608016%23FLA%23display%23Volume)&_cdi=18081&_sort=d&_docanchor=&_ct=19&_acct=C000054077&_version=1&_urlVersion=0&_userid=1644469&md5=c759d4f0e2c60b2a095f06fd86efd755
dc.subject: Clinical Assessment
dc.subject: Competence
dc.subject: Physiotherapy
dc.title: Comparing the importance of clinical competence criteria across specialties: impact on undergraduate assessment
dc.type: Journal article
dc.identifier.journal: Physiotherapy
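
The abstract above describes the analysis only in outline. The following is a minimal, hypothetical sketch in Python (pandas/SciPy) of how a per-activity one-way ANOVA and Kendall's coefficient of concordance could be computed on an assessor-by-activity ratings matrix. The data, specialty labels, and column names are invented for illustration and are not the study's data, instrument, or results.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Illustrative data only: 30 hypothetical assessors rating the importance of
# 89 activities on a 1-5 scale, six assessors from each of five specialties.
rng = np.random.default_rng(0)
activities = [f"activity_{i + 1}" for i in range(89)]
ratings = pd.DataFrame(rng.integers(1, 6, size=(30, 89)), columns=activities)
ratings["specialty"] = np.repeat(
    ["neurology", "musculoskeletal", "respiratory", "elderly_care", "paediatrics"], 6)

# One-way ANOVA per activity: does rated importance differ across specialties?
groups = [g for _, g in ratings.groupby("specialty")]
anova_p = {a: stats.f_oneway(*[g[a] for g in groups]).pvalue for a in activities}
print(sum(p < 0.05 for p in anova_p.values()), "activities differ at p < 0.05")

# Kendall's coefficient of concordance (W), without tie correction:
# agreement among assessors within one specialty on the ranking of activities.
def kendalls_w(matrix):
    ranks = np.apply_along_axis(stats.rankdata, 1, matrix)  # rank activities per assessor
    m, n = ranks.shape
    col_sums = ranks.sum(axis=0)
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

neuro = ratings.loc[ratings["specialty"] == "neurology", activities].to_numpy()
print("Kendall's W within the neurology group:", round(kendalls_w(neuro), 3))
```

The subsequent steps reported in the abstract (factor analysis of the items with maximum cross-specialty agreement and split-half reliability via Cronbach's alpha) would operate on the same ratings matrix, but are omitted here to keep the sketch short.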