• A comparison of the nature of pre-entry assessment in FE feeder colleges with those of the first year degree programme

      Buckley, Kevan; Davies, Jenny; Bentley, Hilary (University of Wolverhampton, 2005)
      Discusses differences in the style and content of assessment of students in Further Education colleges compared with assessment during their first-year undergraduate programme in the School of Computing and Information Technology at the University of Wolverhampton. Differences are analysed to identify strengths and potential areas of difficulty experienced by students.
    • A computer-aided environment for construction of multiple-choice tests

      Mitkov, Ruslan; Ha, Le An; Bernardes, Jon (University of Wolverhampton, 2005)
      Multiple-choice tests have proved to be an efficient tool for measuring students' achievement and are used on a daily basis both for assessment and diagnostics worldwide. The objective of this project was to provide an alternative to the lengthy and demanding activity of developing multiple-choice tests and to propose a new Natural Language Processing (NLP) based approach to generating tests from instructional texts (textbooks, encyclopaedias). Work on the pilot project has shown that the semi-automatic procedure is up to 3.8 times quicker than a completely manual one.
    • An investigation into the reasons why students do not collect marked assignments and the accompanying feedback

      Winter, Christopher; Dye, Vanessa L. (University of Wolverhampton, 2004)
      The major role played by assessment and feedback in any programme cannot be overestimated. It is through the process of assessment design that course/module learning outcomes are met and, as a consequence, student learning may be measured. Alongside the importance of assessment runs the value and effectiveness of feedback. In this study feedback is defined as commentaries made in respect of written assignment work. Rowntree’s (1987) seminal text on assessment makes the dramatic, yet highly pertinent, claim that feedback “is the life blood of learning”. The importance of assessment and feedback as a research focus continues to dominate the thinking behind designing appropriate and effective solutions to measure and support learning (Higgins, 2001; Mutch, 2003; Black and Wiliam, 2003; Rust et al., 2003). So, why is it that some students do not collect assignment work and therefore cannot benefit from this supposed ‘transfusion’ for learning? Anecdotal evidence from within the School of Education would suggest that there is a small, but persistent, percentage of uncollected assignment work every year. The authors believed that such stories and figures would be echoed across the University. This potential problem prompted the study to establish the extent of the actual problem. The issue of uncollected work and feedback may have consequences for student learning because students are unable to capitalise on any feedback or commentary provided by the tutor. In addition, the issue has particular implications for tutorial time, in terms of time spent writing feedback. This can be frustrating for tutors, who may have taken a great deal of time and thought in providing feedback, which is likely to be tailored to the individual needs of that particular student.
The literature discussed with the findings tends to focus on the somewhat narrower dimensions of assessment and feedback, circumventing the larger picture of assessment processes within the wider arena of Higher Education. The report accepts as a given that within the University of Wolverhampton the outcomes-based curriculum model is the prevalent design approach, and that alternative curriculum models may be used in other H.E. institutions. The authors are cognisant that the lack of discussion of how current curriculum models influence H.E. programmes and modules, and consequently their impact on assessment and feedback, may pose a significant deficit in the scope of the background reading and discussion. However, as with any curriculum model, the process stands or falls on all the component parts working in synchronisation. If students are not involved or engaged in curriculum design and operation, including assessment processes, a few may feel disenfranchised. This may be a key reason why students neglect to collect assignment work.
    • Approach to learning undertaken by undergraduate distance learning students in law

      Mitchell, Brian; Williams, Stuart; Evans, Judith; Halstead, Peter (University of Wolverhampton, 2001)
    • Approaches to communication assessment with children and adults with profound intellectual and multiple disabilities

      Goldbart, Juliet; Buell, Susan; Chadwick, Darren (Wiley, 2018-11-14)
      Communication assessment of people with profound intellectual and multiple disabilities (PIMD) has seldom been investigated. Here we explore approaches and decision making in undertaking communication assessments in this group of people. A questionnaire was sent to UK practitioners. The questionnaire elicited information about assessment approaches used and rationales for assessment choices. Fifty-five speech and language therapists (SLTs) responded. Findings revealed that the Preverbal Communication Schedule, the Affective Communication Assessment and the Checklist of Communication Competence were the most frequently used published assessments. Both published and unpublished assessments were often used. Rationales for assessment choice related to assessment utility, sensitivity to detail and change, and applicability to people with PIMD. Underpinning evidence for assessments was seldom mentioned, demonstrating the need for more empirical support for the assessments used. Variability in practice and the eclectic use of a range of assessments were evident, underpinned by practice-focused evidence based on tacit knowledge.
    • Assessing by viva voce

      Callery, Dymphna; Hale, Kate (University of Wolverhampton, 2002)
      The idea of introducing viva voce assessments emerged during a review of the assessment profile of the Drama Department. Despite the practical orientation of the programme, assessments were dominated by 60% Practical Project/40% Essay weightings. Good practical marks were frequently undermined by weaker grades for written work, despite students’ evident development of understanding through practice, and written evaluations were generally of poor quality. In addition, staff had reported an unhealthy split in the focus of practical modules where written coursework was a requirement. In the drama professions it is more necessary to be able to explain ideas and creative concepts orally and pursue them somatically: the process of making work is physically and vocally based; critical reflection comes orally too, in the form of direction, post-show discussions and de-briefings. Teaching strategies for practical work embrace this, applying theoretical concepts in concrete praxis. Students’ development on such courses requires them to invest in sensory and experiential learning and a progressively intensive approach to practice. Presenting work to tutors and peers for critical feedback is the major teaching and learning mode. Having to change tack and focus on conceptualising theory, rather than exploring through creativity, and on essay writing rather than practical skills, constrained tutors and students. The introduction of an oral examination – a viva voce – to assess students’ ability to critically reflect on and evaluate their practice could provide a viable alternative. Viva voces would both acknowledge and play to the strengths of students’ oral communication skills and offer them the chance to develop more formal interview techniques, as well as acknowledging the vocal and oral nature of the discipline.
The aim of the project was to introduce viva voce exams as a method of assessing critical reflection on practical work, primarily in order to improve the range of assessments, but in addition to give students an opportunity to sustain their achievement on practical modules. The focus was on finding and implementing strategies that would promote good practice in assessment.
    • Assessment criteria: reflections on current practices

      Woolf, Harvey (Routledge (Taylor & Francis), 2004)
      This article reviews the findings of a small-scale investigation into the criteria used by a number of SACWG departments for assessing final-year project modules in business and history and other written history assignments. The findings provide the basis for a broader discussion of the issues relating to the formulation and use of assessment criteria. Assessment entails academics making professional judgements about the standards and quality of students' work. However, for the educational value of the work entailed in developing assessment criteria to be fully realized, there needs to be a higher level of shared understanding than currently exists (among students, tutors and other stakeholders) of the language in which criteria are couched and the ways in which criteria are applied.
    • Automatic Generation of Factual Questions from Video Documentaries

      Mitkov, Ruslan; Specia, Lucia; Ha, Le An; Skalban, Yvonne (University of Wolverhampton, 2013-10)
      Questioning sessions are an essential part of teachers’ daily instructional activities. Questions are used to assess students’ knowledge and comprehension and to promote learning. The manual creation of such learning material is a laborious and time-consuming task. Research in Natural Language Processing (NLP) has shown that Question Generation (QG) systems can be used to efficiently create high-quality learning materials to support teachers in their work and students in their learning process. A number of successful QG applications for education and training have been developed, but these focus mainly on supporting reading materials. However, digital technology is always evolving; there is an ever-growing amount of multimedia content available, and more and more delivery methods for audio-visual content are emerging and easily accessible. At the same time, research provides empirical evidence that multimedia use in the classroom has beneficial effects on student learning. Thus, there is a need to investigate whether QG systems can be used to assist teachers in creating assessment materials from these different types of media that are being employed in classrooms. This thesis serves to explore how NLP tools and techniques can be harnessed to generate questions from non-traditional learning materials, in particular videos. A QG framework which allows the generation of factual questions from video documentaries has been developed and a number of evaluations to analyse the quality of the produced questions have been performed. The developed framework uses several readily available NLP tools to generate questions from the subtitles accompanying a video documentary. The reason for choosing video documentaries is two-fold: firstly, they are frequently used by teachers and secondly, their factual nature lends itself well to question generation, as will be explained within the thesis.
The questions generated by the framework can be used as a quick way of testing students’ comprehension of what they have learned from the documentary. As part of this research project, the characteristics of documentary videos and their subtitles were analysed and the methodology was adapted to exploit these characteristics. An evaluation of the system output by domain experts showed promising results but also revealed that generating even shallow questions is a task which is far from trivial. To this end, the evaluation and subsequent error analysis contribute to the literature by highlighting the challenges QG from documentary videos can face. In a user study, it was investigated whether questions generated automatically by the system developed as part of this thesis and by a state-of-the-art system can successfully be used to assist multimedia-based learning. Using a novel evaluation methodology, the feasibility of using a QG system’s output as ‘pre-questions’ was examined, with different types of pre-questions (text-based and with images) used. The psychometric parameters of the questions generated automatically by the two systems and of those generated manually were compared. The results indicate that the presence of pre-questions (preferably with images) improves the performance of test-takers, and they highlight that the psychometric parameters of the questions generated by the system are comparable if not better than those of the state-of-the-art system. In another experiment, the productivity of questions in terms of time taken to generate questions manually vs. time taken to post-edit system-generated questions was analysed. A post-editing tool which allows for the tracking of several statistics, such as edit distance measures and editing time, was used. The quality of questions before and after post-editing was also analysed.
Not only did the experiments provide quantitative data about automatically and manually generated questions, but qualitative data in the form of user feedback, which provides an insight into how users perceived the quality of questions, was also gathered.
    • Automatically marked summative assessment using internet tools

      Penfold, Brian (University of Wolverhampton, 2001)
      With very large groups, individual assessment is becoming increasingly difficult. We are constantly aware of the cost of the time taken in traditional forms of assessment and the effect of marking fatigue on quality. The system described here is a ‘home-grown’ system to present summative multiple-choice question (MCQ) papers in an efficient, cost effective and simple way. The system directly replaces manually marked MCQ tests and because of its nature opens up new more sophisticated multimedia assessment formats.
    • Cross modular tracking, academic counselling and retention of students on traditional delivery, technology supported learning, flexible access and other awards

      Oliver, Ken; Musgrove, Nick; Smith, John (University of Wolverhampton, 2002)
      The increasing emphasis in recruitment of ‘non-traditional’ student cohorts (Year 0, part-time evening only, Flexible Access, additional needs etc.) combined with multi-staffed modules and technology supported learning (TSL) delivery is militating against the traditional tutor overview of cross-modular student performance and may be hiding student problems until a point of no return, when formal summative evidence of failure is validated. In addition, the trend towards minimising formal assessment loading can be seen as reducing the number of performance benchmarks available to establish learner profiles. The project aims to implement a continuous cross-modular tracking and assessment structure, initially for first-year Environmental Science (ES) students, in order to provide such ‘early warning’ of student difficulties as will permit viable counselling and remedial support. It is anticipated that such a strategy will reduce the incidence of ‘under-performance’, ‘drop-outs’ and ‘resits’ by making support available at the point problems arise and not when formal failure is established.
    • Ed-blogs: the use of weblogs in learning, teaching and assessment

      Jones, Mark; Magill, Kevin (University of Wolverhampton, 2003)
    • Feed-forward: improving students' use of tutor's comments.

      Duncan, Neil; Prowse, Steve; Wakeman, Chris; Harrison, Ruth (University of Wolverhampton, 2004)
      Anecdotal evidence, considerable practitioner experience, and research within this University (Winter and Dye, 2004) indicate that many students do not collect their work once it has been assessed. Many others show little interest in the written or oral advice offered to them by the markers (Wojtas, 1998). This means that tutors become used to repeating important advice to some students, with no evidence that they have read, understood, or learned from the points raised. There are many reasons for students not using tutor feedback. For some students, only the numerical grade is of interest – simple, unambiguous and meaningful in terms of achievement and progression (Ecclestone, 1998). Some students will only read the qualitative comments if the quantitative mark is outside their expectations – perhaps to complain if it is surprisingly low, or to bask in the praise of an unexpected A grade. Some students may not read/heed the advice due to a combination of not fully understanding the comments (Chanock, 2000) and not realising their potential value; it is those students that this intervention hoped to target. This study developed from the frustration of tutors who were reduced to pleading that students should engage with their assignment feedback in order to avoid having the same negative remarks appearing on their work in future. One of the student responses to these pleas was that the summative assignments for modules were conclusive and self-contained, and it was difficult to see how comments about raising the grade for a completed module on, say, Dyslexia, could help improve grades on the next essay on, say, Autism. Indeed, this example uses cognate topic areas, whereas the modular system allows for much more disparate choices of topic, especially in a joint subject degree. Clearly, some students found it difficult to unpick the subject-specific, or topic-content, advice from the generic advice to improve future achievement.
Developing a solution to this problem required some means of using individual students’ academic histories and applying them to current assessment tasks.
    • First year law students: the impact of assessment type on attainment

      Jones, Dawn; Ellison, Lynn (Taylor and Francis, 2018-11-02)
      This article describes an action research project that was undertaken to address a poor progression rate at the end of the first year of a single honours law degree. An attainment gap due to gender, age and ethnicity was also noted. The students were predominantly assessed by examinations; therefore, a change of assessment to coursework and portfolio in some areas was proposed and actioned as a potential way to increase attainment and consequently progression. Data on pass rates for two years prior to the change of assessment and two years after the change were analysed. The change of assessment from examination to coursework raised attainment levels overall, but the gender, age and ethnicity gap remained.
    • Improving the attention students pay to, and the extent to which they act upon feedback.

      Davies, Jenny; Wrighton, Naomi (University of Wolverhampton, 2004)
      That learning is a cyclical process and that assessment drives learning are established facts. It is essential that an assessment regime considers not only what a student should know but also their approach to their learning. If students are required to evaluate, for instance, the ethical implications of IT, then it is not appropriate to use an assessment instrument that simply asks for regurgitation of information. In order to improve future performances, feedback on work presented by a knowledgeable other person, whether tutor, placement supervisor or peer, is essential. Staff perceive that feedback prompts student discussion of their work, enables understanding and improves learning. The aims of this project were to improve the efficacy of the feedback process and the quality of assessment feedback in the School of Computing and Information Technology (SCIT). This was through the implementation of a range of steps, based on those proposed by Gibbs during the University of Wolverhampton Campaign on Assessment (2002/03).
    • Making assessment transparent: the use of grade criteria combined with peer marking for assessment tasks in practical exercises in biochemistry

      Bartlett, Terry; Sutton, Raul; Bellamy, Matthew; Fincham, Daron A.; Perry, Christopher (University of Wolverhampton, 2003)
    • Self-help in research and development relating to assessment: a case study

      Yorke, Mantz; Barnett, Greg; Evanson, Peter; Haines, Chris; Jenkins, Don; Knight, Peter; Scurry, David; Stowell, Marie; Woolf, Harvey (Routledge, 2004)
      This article briefly chronicles nearly a decade of research and development activity undertaken in the area of assessment by a group of committed volunteers, 'The Student Assessment and Classification Working Group' (SACWG). However, greater attention is given to demonstrating what a self-help approach can achieve in respect of research and development in higher education, and to identifying the factors that contribute to success in this respect. It is suggested that the approach has transfer value, provided that certain conditions are met.
    • The management and assessment of groupwork

      Luft, Sarah (University of Wolverhampton, 2001)
      Within the School of Health Sciences it had become evident that in many modules students were expected to work together and produce work that would be formally assessed. Each module leader devised their own way to manage and assess that group work activity, which meant that there was little, if any, consistency. In general, the group work presentation would be awarded a grade which was the same for all the students in that group. Sometimes that would not be fair, as it was possible for a student to contribute very little to the group work and yet be awarded the same grade as their peers. On several occasions students grumbled about this lack of equity. Being able to work effectively with other people is a key skill, and group work offers the opportunity to assess many other key skills, including communication, gathering information, organising both material and time, working independently and interdependently, and problem solving. In order to assess these skills, there is a need to manage and assess the group working processes as well as the final piece of work that the group presents. The purpose of this project therefore was to devise a model of managing and assessing group work and pilot it for levels 1, 2 and 3 over the academic year.
    • The nature of the student cohort and factors influencing first time pass rates

      Vallely, Christine (University of Wolverhampton, 2001)