People within the educational community—policymakers, schools, administrators, teachers and students—use assessments for different purposes. This issue of CDTL Brief presents some discussions on the issues surrounding assessment.

March 2003, Vol. 6, No. 3
Student Assessment in Problem-based Learning: A Challenge Beyond Reliability and Validity
 
Associate Professor Tan Chay Hoon
Department of Pharmacology & Medical Education Unit
Professor Matthew C.E. Gwee
Department of Pharmacology & Medical Education Unit/
Associate Director, CDTL
 

“Elaborating an assessment plan that respects PBL principles, is reliable and valid, and has no negative steering effect remains a challenging task” (Nendaz & Tekian, 1999).

Introduction

Problem-based learning (PBL) is an innovative pathway to learning in medical education. The PBL curriculum provides students with a more holistic approach to medical education that goes beyond just content knowledge acquisition. Value-added outcomes in PBL include the acquisition of educational process/life skills such as:

  • Critical thinking;
  • Problem-solving and clinical reasoning skills;
  • Self-directed learning;
  • Interpersonal/social skills; and
  • Group/team-work skills.

Student assessment in PBL should therefore ensure “a match between assessment procedures used and the curricular tenets of PBL” (Nendaz & Tekian, 1999). This article provides an overview of some of the more useful strategies and test instruments currently in use.

Multiple-choice Questions (MCQs): Need for Testing Knowledge Acquisition

“I do not believe that there is any real evidence to support the claim for problem-solving skills independent of knowledge. [The] evidence from the last two decades of research overwhelmingly argues against this premise” (Norman, 1997).

One of the major aims of PBL is to improve the problem-solving skills of students so that these can be applied to a wide variety of clinical situations. As problem solving is dependent on sound knowledge, it is therefore imperative that assessment strategies also test for knowledge. From a psychometric perspective, then, how best can we assess knowledge gained from a PBL environment? Norman (1997) clearly states that he is “unequivocally on the side of multiple-choice questions”, on the presumption that the design and construction of the MCQ test instrument includes “a rich clinical stem, [that]...involve higher order skills, and hence… more discriminating”. MCQs offer the advantage of high consistency and reliability, as they allow broad content areas to be sampled, as well as high validity if appropriately constructed.

The Progress Test (PT)

“[Progress test]…reflects the end objectives of the curriculum and samples knowledge across all disciplines and content areas in medicine” (Nendaz & Tekian, 1999).

The primary objective of the PT is to overcome the potential negative steering effect often associated with summative examinations. The PT consists of items drawn from all areas of medicine and is administered several times a year.

There is convincing evidence that the PT does not have a negative steering effect on student learning “either at the level of individual learning approaches, learning style (memorisation vs. concept learning) or tutorial function. [PT]…is a reliable measure of student knowledge acquisition, with test re-test reliability…of the order of 0.6–0.7. [Besides, PT]…also has predictive validity, demonstrated by a correlation of about 0.6…” (Blake et al., 1996; Norman, 1997). The PT also has high validity, with a high correlation (r = 0.93) in respect of testing clinical reasoning skills (Boshuizen et al., 1997). Thus, the PT provides a valuable test instrument to monitor the learning progress of students and can be used for both formative and summative assessments.
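The reliability and validity figures quoted above are correlation coefficients. As a minimal sketch of how such a statistic is computed (the scores below are invented for illustration, not data from the studies cited), a test-retest reliability coefficient is simply the Pearson correlation between two sittings of the same test:

```python
# Illustrative sketch: test-retest reliability as a Pearson correlation
# between two administrations of the same test.
# The student scores are invented, not taken from the cited studies.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Scores of six students on two sittings of a progress test (invented data)
sitting_1 = [55, 62, 70, 48, 66, 74]
sitting_2 = [58, 60, 73, 50, 64, 78]

print(round(pearson_r(sitting_1, sitting_2), 2))
```

A coefficient near 1 indicates that students are ranked consistently across sittings; the 0.6–0.7 reported for the PT reflects moderate-to-good stability.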

Process-oriented Test Strategies (POTS)

The acquisition of process skills by students is also an important educational objective in PBL. Several strategies have been used to assess one or more process skills. Thus, process-oriented test strategies are an essential component in the overall PBL assessment as they can “provide a positive steering effect on learning and useful education” (Nendaz & Tekian, 1999). However, on psychometric grounds, POTS are generally considered less rigorous than the more outcome-oriented tests and are not recommended for use in isolation in summative decision-making.

  • Tutor, Peer and Self-assessment

    “Tutor, peer and self-assessment develop the ability to give and receive feedback and to appraise one’s own needs, which are required in the daily activities of the physician. They also allow detection of potential interpersonal problems that would have remained unnoticed otherwise” (Nendaz & Tekian, 1999).

    Tutor, peer and self-assessments are aimed primarily at assessing students’ ability to give and receive appropriate feedback that can be reflected through the following qualities:
    • Self-awareness and internal motivation;
    • Professional attitudes and behaviours (e.g. mutual trust and respect for and responsibility to each other);
    • Critical thinking and self-directed learning skills; and
    • Interpersonal, communication and team skills.

    The Tutotest, developed at the Université de Sherbrooke to assess the skills and attitudes of medical students working in tutorials in a problem-based curriculum, was found to have high reliability (Cronbach’s coefficient α = 0.98); a correlation coefficient of 0.64 was obtained with tutor global assessments and 0.39 with students’ written examinations (Hebert & Bravo, 1996).

  • Four Step Assessment Test (4 SAT)

    The 4 SAT, recently implemented by the University of Queensland, consists of:
    • Solving a case scenario individually and in writing through identifying key features, generating hypotheses, explaining symptoms, defining hypotheses from requested additional data, and formulating learning issues;
    • Repeating the above processes at the group level with presentation of new information (with observers assessing the tutorial process);
    • Undertaking a period of self-directed learning; and
    • Taking a written exam testing content knowledge and based on the “top 10” learning issues identified by all groups.

    The 4 SAT is aimed at assessing individual knowledge, clinical reasoning and group process skills. Inter-rater agreement was found to be greater than 80% with good correlation (r = 0.49) between 4 SAT scores and those from other objective test instruments (Zimitat & Alexander, 1998).

  • Triple-Jump Exercise (TJE)

    The TJE is organised as a structured three-part oral assessment that reflects the learning process in PBL, but under more controlled and standardised conditions. The TJE can be used for formative and summative assessments, but its reliability and validity are generally considered to be low. Inter-rater correlations can be low, and errors of measurement resulting from the use of a single case pose serious concerns about its use.
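Several of the coefficients reported in this section, such as the α = 0.98 for the Tutotest, measure internal consistency. As a minimal illustrative sketch (using invented ratings, not data from Hebert & Bravo, 1996), Cronbach’s α can be computed from per-item scores as the ratio of shared to total score variance:

```python
# Illustrative sketch of Cronbach's alpha, the internal-consistency
# coefficient reported for the Tutotest. The ratings are invented data.

def cronbach_alpha(items):
    """items: one list of scores per item, aligned across the same students.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)            # number of items
    n = len(items[0])         # number of students

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Four rating-scale items scored for five students (invented data)
ratings = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [3, 5, 2, 4, 4],
    [4, 5, 3, 5, 5],
]
print(round(cronbach_alpha(ratings), 2))
```

When items rank students consistently, the total-score variance dominates the summed item variances and α approaches 1, which is why a value of 0.98 indicates a highly consistent instrument.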

Formative Assessments

The use of formative assessments to provide regular, informative and detailed feedback to students on their progress and performance at various stages during a given course is an essential component of the PBL educational strategy. Such assessments enable students, whenever necessary, to undertake effective and timely remedial action that is either self-initiated or taken upon the advice of the tutor.

Conclusion

“Despite the large range of assessment methodologies used in PBL settings, no single choice emerges, and the triangulation of diverse instruments is required to obtain a fair judgment about students” (Nendaz & Tekian, 1999).

The intended educational outcomes of PBL go beyond just the acquisition of content knowledge. The acquisition of educational process skills that contribute to the development of clinical competence and desired professional attitudes and behaviours in medical practice is also an important educational objective of PBL. Student assessment in PBL, from a psychometric standpoint, will continue to pose a challenge to medical educators in the selection of test instruments that can ensure high consistency, reliability and validity to meet the educational demands and curricular tenets of the overall PBL curriculum.

References

Blake, J.M.; Norman, G.R.; Keane, G.R.; Mueller, C.B.; Cunnington, J.P.W. & Didyk, N. (1996). ‘Introducing Progress Testing in McMaster University’s Problem-based Curriculum: Psychometric Properties and Effect on Learning’. Academic Medicine, Vol. 71, pp. 1002–1007.

Boshuizen, H.P.; van der Vleuten, C.P.; Schmidt, H.G. & Machiels-Bongaerts, M. (1997). ‘Measuring Knowledge and Clinical Reasoning Skills in a Problem-based Curriculum’. Medical Education, Vol. 31, pp. 115–121.

Hebert, R. & Bravo, G. (1996). ‘Development and Validation of an Evaluation Instrument for Medical Students in Tutorials’. Academic Medicine, Vol. 71, pp. 488–494.

Nendaz, M.R. & Tekian, A. (1999). ‘Assessment in Problem-Based Learning Medical Schools: A Literature Review’. Teaching and Learning in Medicine, Vol. 11, pp. 232–243.

Norman, G.R. (1997). ‘Assessment in Problem-Based Learning’. In The Challenge of Problem-Based Learning, Boud, D. & Feletti, G.I. (Eds.). London: Kogan Page Ltd. pp. 263–268.

Zimitat, C. & Alexander, H. (1998). ‘A New Assessment Instrument for Large Classes in PBL Curricula’. Abstract from 8th Ottawa International Conference on Medical Education. Philadelphia, PA: National Board of Medical Examiners.

 
 