Jan 1999 Vol. 3   No. 1

Peer Review:
A Method of Evaluating Teaching
By Associate Professors
Lai Yee Hing & Lee Hian Kee
Department of Chemistry
Faculty of Science

Current Methods of Teaching Evaluation

Evaluation of teaching staff by students, as has been practised by the National University of Singapore for the past decade or so, provides useful student feedback on teaching performance. When used judiciously, evaluation statistics can help staff to improve their teaching and overcome, or at least minimise, their deficiencies in helping students learn. However, for ease of analysis, such feedback is mainly numerical and consequently may not be the most reliable gauge of teaching performance, as it is subject to personal bias.

In science and engineering, one critical parameter in the analysis of experimental errors (the so-called determinate errors) is personal bias. Unless experiments are carefully designed and carried out, they may yield invalid results if personal bias creeps into the collection of data and/or the interpretation of the results. In student evaluation, which rests even more heavily on subjective opinion, the potential for personal prejudice to skew the outcome cannot be overemphasised, especially if it is taken as the sole, exclusive measure of teaching performance. After all, it is, at present, a one-off exercise held towards the end of each semester.

Ideally, evaluation should be a continuous process. Since the implementation of the modular system, we have been placing greater emphasis on continuous assessment of students. This, it is argued, gives a better gauge of students' ability than a single major examination at the end of the semester. Yet a single evaluation exercise is deemed sufficient for student evaluation of staff teaching performance! This raises the question: why not have something akin to continuous assessment for the evaluation of teaching performance? Obviously, there are logistical difficulties, and implementing such a scheme to the same degree as continuous assessment for students would be practically infeasible. But what if we combined the current student evaluation exercise with peer review, in which staff members are reviewed by colleagues who sit in on their lectures at least twice over a semester?

In the academic world, peer review is already an established practice. Reviews of manuscripts submitted to journals, grant applications, promotion exercises and the like are part and parcel of academic life. By extending peer review to teaching, we might yet arrive at a fairer and more objective assessment of teaching performance.

Peer-Review in the Department of Chemistry

Beginning in session 1998/99, the Department of Chemistry has instituted a new format for peer-review of its staff members' teaching. Openness and transparency in the review process are the hallmarks of this scheme. The reviewer need not be a senior staff member (this is, after all, a peer-review exercise), and we would like to think of the process as a review rather than an assessment or evaluation in the strictest sense of the word. He/she sits in on a lecture or tutorial and subsequently provides brief written comments on his/her impressions of the lecturer/tutor during the class, based on the following points:

  • Lecture was well-organised and covered the topic adequately
  • Lecturer’s speech was audible and clear
  • Lecturer’s explanations were clear, and seemingly understood by students
  • Lecturer’s enthusiasm
  • Students’ response to lecturer
  • Other general comments

The term “lecture” is used in a general sense and can also include a small (about 10 students) to medium-sized (more than 10 students) tutorial.

Rating is based on “can be improved”, “satisfactory” and “very good”. Reviewers are encouraged to be as constructive as possible in their critiques. No numerical scores are given. Both lecturers and reviewers are told that reports will be returned to the teaching staff being reviewed; there is absolute transparency in the scheme. Reviewers are asked to contact the staff to be reviewed to arrange a suitable time to attend the lecture or tutorial. No one is assigned reviewing duties by impersonal third-party directives. It is important for everyone concerned that the review is not for judgemental purposes. It is meant to be an objective and honest appraisal, and the results of the review should be received in this spirit.

Student Response as an Important Assessment Criterion

The first four parameters for reviewers to comment upon are standard ones in evaluation forms; the only difference is that they are considered from the reviewer's, not the student's, perspective. The fifth parameter we consider important for a meaningful review: students' response to the lecturer as perceived and observed by the reviewer. We believe teaching is unquestionably a two-way process, especially so at the tertiary level of education. Most student evaluation exercises are based only on the students' perspective, in which the lecturer is judged on how well he/she imparts knowledge. But is that all there is to the learning process? Shouldn't the student's role be a critical factor in the learning process too? We say yes. Thus, the staff member sitting in on a colleague's lecture is also asked to note any student reaction (if any) to the lecturer's teaching, including his/her prompting and encouragement.

How many of us teaching staff have experienced this frustration: despite our encouragement and prompting, student response is generally minimal or non-existent? This parameter is not a component of the current teaching evaluation form. However, we contend that it is important to gauge the level of student response (again, if any), so that a reticent class provides food for thought for the lecturer concerned. He/she should begin to ask why this is the case. Is he/she not providing encouragement, or not seen to be inviting such interaction? Is the lecture being conducted at too fast a pace, leaving no time for students to respond? Is sufficient time being given to students to assimilate and digest the information, and are the explanations clear enough for students to understand during the lecture? Is too much material being presented, so that students have difficulty coping with the flow of information?

With the honesty and objectivity implicit in the peer-review system, the reviewer can bring some of these possibilities to the attention of the lecturer. It can be argued that the current teaching evaluation system also allows for students' comments. This is true, and this component should remain. It is, however, difficult to judge the honesty and objectivity with which these comments are made since they are made anonymously and, to put it succinctly, anything goes. How much credence can we put on anonymous assessments? Even in a scientific reviewing process that does provide for anonymity, collegiate responsibility, objectivity and credibility militate against partial, mendacious and mischievous assessments. In an assessment exercise like student evaluation, no such honour system is in place. In fact, even the numerical scores that lecturers and tutors receive from students may not have been given honestly and fairly, unless they are completely consistent across the entire spectrum of modules taught by the staff concerned.

The feedback on students' response as seen by a third party is therefore, we feel, an important component of our peer-review scheme. Since we make no distinction as to whether the review takes place at a lecture or a tutorial, the level of student response is contingent on which type of teaching is being reviewed. For a big class of several hundred students, no one expects student interaction of any appreciable extent; for a tutorial, we might expect more (although this expectation is not often realised). The fact is that our peer-review system allows such interaction to be recorded by the reviewer, and acted upon by the lecturer or tutor concerned in order to improve his/her teaching.

Improving Our Peer-Review System

This is only the first iteration of the peer-review system in our Department. Although it is by no means a definitive approach to the problem of obtaining a fair and useful review of teaching performance, we feel it offers a useful non-student perspective that should be taken seriously. The best judges of teachers are probably teachers themselves; their opinions should therefore be valued more highly than has hitherto been the case. We anticipate changes in the questionnaire: for example, by January 1999 we will have incorporated a section in which reviewers are asked to list two to five good points about the lecturer or tutor, and two to five areas in which there may be room for improvement. We will also seek staff members' own opinions on what other parameters ought to be reviewed during a lecture or tutorial.






© 2012 CDTLink is published by the Centre for Development of Teaching and Learning. Reproduction in whole or in part of any material in this publication without the written permission of CDTL is expressly prohibited. The views expressed or implied in CDTLink do not necessarily reflect the views of CDTL.