
This issue of CDTL Brief is published to inform our colleagues of the discussion concerning the recommended changes to Student Feedback and Peer Review, and to invite further feedback to help fine-tune the proposals before they are implemented.

In August 1999, CDTL conducted an online discussion on its proposal to improve Student Feedback and Peer Review. Additional comments and suggestions were presented at the Dialogue Session on Saturday 28 August 1999 led by Deputy Vice-Chancellor Professor Chong Chi Tat. This CDTL Brief highlights the main issues raised in the discussions and presents in summary form the recommendations made by CDTL.

January 2000, Vol. 3 No. 1
Student Feedback & Peer Review
 
 

Dialogue Session

DVC Professor Chong Chi Tat opened with some remarks about events leading to the dialogue session. As issues of institutional self-evaluation and improvement are of perennial interest at NUS, the Student Feedback and Peer Review exercises came up again for discussion during a meeting of the Vice-Chancellor and Deputy Vice-Chancellors some months ago. CDTL was subsequently tasked to review and submit recommendations.

A preliminary report was submitted in July and mounted on the Intranet on 1 August with an invitation for a university-wide discussion among all staff. This was prompted by the conviction that in such important matters—and ones that will affect everyone when implemented—it is logical and essential to have the input of all those involved. The online discussion, however, did not draw as much response as was expected and this face-to-face dialogue session was therefore arranged.

DVC Prof Chong reiterated the genuine concern with obtaining as much feedback as possible, as this would contribute to the efforts at improving the existing processes. All views would be taken into account and, wherever possible, incorporated into the final version for implementation. He also reminded colleagues that these issues should be seriously addressed and that close attention should be paid to the recommendations offered by CDTL. The dialogue session offered an opportunity for a second round of responses, and he invited all colleagues to participate freely in the discussion.

Summary of Suggestions and Feedback from the Dialogue Session

This section presents a summary of the suggestions and concerns shared at the dialogue session at CDTL on 28 August.

On Student Feedback:

Suggestions

  1. Student Feedback can be more objective if the learning objectives for each course are stated, providing students with a reference point for their evaluation.

  2. It is important to include a question on the overall impression of the course, because the other questions may not always capture that.

  3. CDTL should carry out a survey to find out students’ perceptions of good teaching to help bridge the gap between our perception and theirs.

  4. There should be provision for distinguishing between a teacher who merely states consultation hours but does not welcome actual interaction with students, and one who really invests time and effort in interacting with students (through email, IVLE discussions, office consultations, etc.).

Concerns and Other Comments

  1. De-linking Student Feedback from examination registration and using a non-electronic means would be backtracking where technology is concerned.

  2. There is a need for Student Feedback. Students need some kind of empowerment, as teachers do, and there is no hard evidence to suggest that Student Feedback is totally biased.

  3. The proposed 5-point scale is an improvement. The previous 10-point scale could unfairly empower a minority of students to distort the average score.

On both Student Feedback and Peer Review:

Suggestions

  1. Student Feedback and Peer Review should not be the only components of staff evaluation. Other components should include preparation, setting of examinations, and other indications of staff’s interest in teaching such as the publishing of textbooks and the writing of articles on teaching (e.g. contributions to CDTL’s publications).

  2. It is important to develop a culture where feedback is given/taken seriously. It should not be treated as a routine exercise but something that has important implications for the person being evaluated. Awareness of this would make for more responsible and constructive feedback.

  3. The qualitative aspect of evaluation should also be emphasised. The use of cumulative quantitative scores in the evaluation for promotion and contract renewal may convey the impression that the qualitative aspect is undervalued.

Summary of the Online Discussion

This section presents some recurrent points that were raised in the online discussion in August.

On Student Feedback:

  1. Students may not be reliable judges of the quality of teaching because (a) they are not experts in the field, and (b) what students expect from education, as well as the criteria they use for evaluating the quality of teaching, may not be the same as those of the academic staff.

  2. It is important to distinguish the feedback from “intelligent, committed” students from that of the “dumb, unmotivated” ones.

  3. Students who are “delinquent” should not participate in feedback.

  4. Students should not be asked to rank teachers. Ranking may artificially distort minute differences.

  5. The Student Feedback questions are too detailed.

  6. The Student Feedback questions are not detailed enough.

  7. The feedback questions should have greater emphasis on innovation in teaching.

  8. The feedback questions should factor in differences between disciplines.

  9. There should be more discriminating probes about use of IT.

  10. There should be a complementary provision for staff training/support, particularly for those who receive poor feedback.

  11. Teacher appraisals should not be based on the feedback from a single year, but on the feedback from several years indicating the “history” of the individual’s performance.

  12. There are two opposing positions on whether Student Feedback should be compulsory or optional. There was also a discussion on the choice between online and in-class feedback, which was equally controversial.

On Peer Review:

  1. Greater transparency is desirable in Peer Review.

  2. Peer Review militates against collegiality.

  3. The reviewers may not be wholly objective.

  4. Peer Review should be used carefully and selectively.

  5. Peer Review should be used to double-check adverse Student Feedback.

  6. Peer Review should be used at critical career points.

  7. Yearly Peer Review is necessary only for new staff.


CDTL’s Recommendations

The initial recommendations have been revised according to feedback received. This section presents the highlights of the current recommendations: 1) Student Feedback and Peer Review—Summary of Recommendations, 2) The Proposed Student Feedback Questionnaire, and 3) The Proposed Peer Review Checklist.

Student Feedback and Peer Review—Summary of Recommendations

A. Student Feedback

  1. The results of the feedback should be thought of as useful data or information rather than as a direct overall assessment of the teacher. This means we should avoid asking questions that require students to rank teachers.

  2. To make Student Feedback more effective, students should be sensitised to the University’s conception of excellence in teaching. There should be a provision for us to prepare Year 1 students early enough to start reflecting on the parameters of good teaching implicit in the feedback.

  3. The proposed questionnaires are meant as the University level template. Individual Faculties or Departments may find it necessary to add further questions to these questionnaires. For courses which are completely clinical or laboratory-based, it might be necessary to drop a few questions as well.

  4. We should distinguish between feedback on the course and feedback on the classroom teaching of the teacher (teacher = lecturer, tutor, seminar facilitator, teaching assistant). Since many courses are team-taught, we recommend that feedback on the course as a whole not be used for teacher appraisal. In line with this suggestion, we have made feedback on the course completely qualitative. To distinguish between the two, feedback on the classroom teaching is put in Sections A and B of the questionnaire, while feedback on the course is put in Section C, along with other qualitative comments.

  5. Section A is a set of questions on the specific aspects of teaching, while Section B is a single question about the holistic perception of the quality of teaching. We suggest that the scores from these two parts should not be added up. If kept distinct, the cumulative average score of Section A can be compared with the score for Section B to check whether they match. If the two do not match, the qualitative comments in Section C should be scrutinised carefully to find out which is more reliable.

  6. The results of the feedback should be made available online to the respective teachers as soon as the examination results are announced.

  7. Having seen the back-and-forth arguments on compulsory online feedback, we are inclined to suggest continuing with the current scheme, unless someone proposes a strikingly more efficient solution.

B. Peer Review

  1. The Peer Review of a course should cover the evaluation of the course as a whole. Evaluation of the classroom teaching (lectures, tutorials and seminars) is only one of the components of Peer Review.
  2. The evaluation of lectures, tutorials and seminars may be conducted more frequently than curriculum evaluation. Curriculum evaluations are necessary only for critical administrative decisions (e.g. promotion, tenure and teaching awards).
  3. The Peer Review contains both a quantitative part and a qualitative part. The cumulative average score of the quantitative part of the Peer Review is one of the components of teacher appraisal, to be compared with the cumulative average score from Student Feedback. If the rating from the Student Feedback does not match the rating from the Peer Review, the qualitative comments in both should be scrutinised carefully to see which one is more reliable.
  4. Copies of Peer Review reports should be given to the teachers under evaluation. These reports should be included in the teaching portfolios submitted for promotion and contract renewal. If teachers feel that the report disadvantages them, they can attach their response to the review.
  5. The responsibility of reviewing peers should be distributed to all members of staff with a minimum teaching experience of five years.
  6. As far as possible, at least two reviewers should be assigned for each teacher for a given Peer Review exercise, and the reviewers assigned for subsequent years should keep changing.
  7. Wherever possible, the reviewer should have sufficient familiarity with the subject matter of the courses being reviewed.
  8. The reviewer and the candidate being reviewed should not be in competition for the same position.

Other Recommendations

Portfolio of Teaching

Student Feedback and Peer Review may be supplemented by the Portfolio of Teaching submitted by the teacher as an additional source of information.

Consistency

For significant administrative decisions based on teacher appraisal (e.g. contract non-renewal because of poor teaching, promotion based primarily on excellence in teaching, and teaching excellence awards), the results of Student Feedback, Peer Review and Portfolio of Teaching must be consistent across courses, across evaluators and with one another.

For instance, we can take the results to be reliable if the Peer Reviews by different reviewers report poor quality, and furthermore, the same conclusion is indicated in Student Feedback and Portfolio of Teaching as well. However, if the results are mixed, we must look carefully for probable sources of contamination before making an overall appraisal.

None of the different sources of information we appeal to in teacher appraisal (the cumulative average score in Peer Review, the cumulative average score of Section A in Student Feedback, and the score for the overall perception of teaching in Section B of Student Feedback) is totally reliable in isolation. However, each of them contributes useful information. The essence of the strategy that we propose is the convergence of evidence: if the scores from these different sources converge, then we can be reasonably confident that we are making a meaningful and reliable measurement. On the other hand, if the different scores do not match, at least one of the scores involves a distortion due to some factor.

If there is a difference of more than one point along the five-point scale, we suggest closer scrutiny. We cannot tell on a priori grounds which score is problematic. We have to look at the totality of the information and make an informed guess about which of the scores involves distortion. In other words, we have to do it on a case-by-case basis. As a general procedure for dealing with such conflicts, we suggest that the Head makes a recommendation on the basis of a careful examination of all the information available, and the establishment committee reviews the recommendation.
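To make the convergence check concrete, the short sketch below (in Python, purely illustrative; the function name, sample scores and the handling of the one-point threshold are our own assumptions, not part of any existing system) flags pairs of scores on the five-point scale that differ by more than one point and therefore call for closer scrutiny of the qualitative comments.

    # Illustrative sketch only: flag appraisal scores that fail to converge.
    # All scores are cumulative averages on the 5-point scale; names are hypothetical.
    from itertools import combinations

    def needs_scrutiny(scores, threshold=1.0):
        """Return pairs of sources whose scores differ by more than `threshold` points."""
        return [
            (a, b)
            for a, b in combinations(scores, 2)
            if abs(scores[a] - scores[b]) > threshold
        ]

    scores = {
        "peer_review": 4.2,          # quantitative part of the Peer Review
        "feedback_section_a": 4.0,   # Section A: specific aspects of teaching
        "feedback_section_b": 2.8,   # Section B: holistic perception of teaching
    }

    mismatches = needs_scrutiny(scores)
    if mismatches:
        # Scores do not converge: examine the qualitative comments case by case.
        print("Closer scrutiny needed for:", mismatches)
    else:
        print("Scores converge; the measurement can be taken as reasonably reliable.")

In this hypothetical example, Section B differs from the other two sources by more than one point, so the qualitative comments would be examined before any overall appraisal is made.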

Ongoing Revision

The design and implementation of Student Feedback, Peer Review and Portfolio of Teaching should be reviewed every three years, and revised if necessary.

 
 