DVC Professor Chong Chi Tat opened with some remarks about
events leading to the dialogue session. As issues of institutional
self-evaluation and improvement are of perennial interest
at NUS, the Student Feedback and Peer Review exercises came
up again for discussion during a meeting of the Vice-Chancellor
and Deputy Vice-Chancellors some months ago. CDTL was subsequently
tasked to review and submit recommendations.
A preliminary report was submitted in July and mounted on
the Intranet on 1 August with an invitation for a university-wide
discussion among all staff. This was prompted by the conviction
that in such important matters—and ones that will affect
everyone when implemented—it is logical and essential
to have the input of all those involved. The online discussion,
however, did not draw as much response as was expected and
this face-to-face dialogue session was therefore arranged.
DVC Prof Chong reiterated the genuine concern with getting
as much feedback as possible, as this would contribute to the
efforts at improving the existing processes. All views would
be taken into account and, wherever possible, incorporated
into the final version for implementation. He also reminded
colleagues that these issues should be seriously addressed, and that
close attention should be paid to the recommendations offered by CDTL.
The dialogue session offered an opportunity for a second round
of response and he invited all colleagues to participate freely
in the discussion.
Summary of Suggestions and Feedback from the Dialogue
This section presents a summary of the suggestions and concerns
shared at the dialogue session at CDTL on 28 August.
On Student Feedback:
- Student Feedback can be more objective if the learning
objectives for each course are stated, providing students
with a reference point for their evaluation.
- It is important to include a question on the overall impression
of the course, because the other questions may not always capture
every relevant aspect of the teaching.
- CDTL should carry out a survey to find out students’
perceptions of good teaching to help bridge the gap between
our perception and theirs.
- There should be provision for distinguishing between
one who merely states one’s consultation hours but
does not welcome actual interaction with students, and one
who really invests time and effort interacting with students
(through email, IVLE discussions, office consultations, etc.).
Concerns and Other Comments
- De-linking Student Feedback from examination registration
and reverting to non-electronic means would be a step backwards
where technology is concerned.
- There is a need for Student Feedback. Students need some
kind of empowerment, as teachers do, and there is no hard
evidence to suggest that Student Feedback is totally biased.
- The proposed 5-point scale is an improvement. The previous
10-point scale could unfairly empower a minority of students
to distort the average score.
On both Student Feedback and Peer Review:
- Student Feedback and Peer Review should not be the only
components of staff evaluation. Other components should
include preparation, setting of examinations, and other
indications of staff’s interest in teaching such as
the publishing of textbooks and the writing of articles
on teaching (e.g. contributions to CDTL’s publications).
- It is important to develop a culture where feedback is
given/taken seriously. It should not be treated as a routine
exercise but something that has important implications for
the person being evaluated. Awareness of this would make
for more responsible and constructive feedback.
- The qualitative aspect of evaluation should also be emphasised.
The use of cumulative quantitative scores in the evaluation
for promotion and contract renewal may convey the impression
that the qualitative aspect is undervalued.
Summary of the Online Discussion
This section presents some recurrent points that were raised
in the online discussion in August.
On Student Feedback:
- Students may not be reliable judges of the quality of
teaching because (a) they are not experts in the field, and
(b) what students expect from education, as well as the criteria
they use for evaluating the quality of teaching, may not be
the same as those of the academic staff.
- It is important to distinguish the feedback from “intelligent,
committed” students from that of less committed ones.
- Students who are “delinquent” should not
participate in feedback.
- Students should not be asked to rank teachers. Ranking
may artificially magnify minute differences.
- The Student Feedback questions are too detailed.
- The Student Feedback questions are not detailed enough.
- The feedback questions should have greater emphasis on
innovation in teaching.
- The feedback questions should factor in differences between
- There should be more discriminating probes about use
- There should be a complementary provision for staff training/support,
particularly for those who receive poor feedback.
- Teacher appraisals should not be based on the feedback
from a single year, but on the feedback from several years,
indicating the “history” of the individual’s teaching.
- Opinions were divided on whether Student Feedback should
be compulsory or optional. The choice between online and
in-class feedback was similarly controversial.
On Peer Review:
- Greater transparency is desirable in Peer Review.
- Peer Review militates against collegiality.
- The reviewers may not be wholly objective.
- Peer Review should be used carefully and selectively.
- Peer Review should be used to double-check adverse Student
Feedback.
- Peer Review should be used at critical career points.
- Yearly Peer Review is necessary only for new staff.
The initial recommendations have been revised according
to feedback received. This section presents the highlights
of the current recommendations:
1) Student Feedback and Peer Review—Summary of Recommendations,
2) The Proposed Student Feedback Questionnaire, and 3) The
Proposed Peer Review Checklist.
Student Feedback and Peer Review
—Summary of Recommendations
A. Student Feedback
- The results of the feedback should be thought of as useful
data or information rather than as a direct overall assessment
of the teacher. This means we should avoid asking questions
that require students to rank teachers.
- To make Student Feedback more effective, students should
be sensitised to the University’s conception of excellence
in teaching. There should be a provision for us to prepare
Year 1 students early enough to start reflecting on the
parameters of good teaching implicit in the feedback.
- The proposed questionnaires are meant as the University
level template. Individual Faculties or Departments may
find it necessary to add further questions to these questionnaires.
For courses which are completely clinical or laboratory-based,
it might be necessary to drop a few questions as well.
- We should distinguish between feedback on the course
and feedback on the classroom teaching of the teacher (teacher
= lecturer, tutor, seminar facilitator, teaching assistant).
Since many courses are team-taught, we recommend that feedback
on the course as a whole not be used for teacher appraisal.
In line with this suggestion, we have made feedback on the
course completely qualitative. To distinguish between the
two, feedback on the classroom teaching is put in Sections
A and B of the questionnaire, while feedback on the course
is put in Section C, along with other qualitative comments.
- Section A is a set of questions on the specific aspects
of teaching, while Section B is a single question about
the holistic perception of the quality of teaching. We suggest
that the scores from these two parts should not be added
up. If kept distinct, the cumulative average score of Section
A can be compared with the score for Section B, to check
for match. If the two do not match, the qualitative comments
in Section C should be scrutinized carefully to find out
which one is more reliable.
- The results of the feedback should be made available
online to the respective teachers as soon as the examination
results are announced.
- Having seen the back-and-forth arguments on compulsory
online feedback, we are tempted to suggest that we continue
with the current scheme, unless someone proposes a strikingly
more efficient solution.
B. Peer Review
- The Peer Review of a course should cover the evaluation
of the course as a whole. Evaluation of the classroom teaching
(lectures, tutorials and seminars) is only one of the components
of Peer Review.
- The evaluation of lectures, tutorials, seminars may be
conducted more frequently than curriculum evaluation. Curriculum
evaluations are necessary only for critical administrative
decisions (e.g. promotion, tenure and teaching awards).
- The Peer Review contains both a quantitative part and
a qualitative part. The cumulative average score of the
quantitative part of the Peer Review is one of the components
of teacher appraisal, to be compared with the cumulative
average score from Student Feedback. If the rating from
the Student Feedback does not match the rating from the
Peer Review, the qualitative comments in both should be
scrutinised carefully to see which one is more reliable.
- Copies of Peer Review reports should be given to the
teachers under evaluation. These reports should be included
in the teaching portfolios submitted for promotion and contract
renewal. If teachers feel that the report disadvantages
them, they can attach their response to the review.
- The responsibility of reviewing peers should be distributed
to all members of staff with a minimum teaching experience
of five years.
- As far as possible, at least two reviewers should be
assigned for each teacher for a given Peer Review exercise,
and the reviewers assigned for subsequent years should keep
- Wherever possible, the reviewer should have sufficient
familiarity with the subject matter of the courses being reviewed.
- The reviewer and the candidate being reviewed should
not be in competition for the same position.
Portfolio of Teaching
Student Feedback and Peer Review may be supplemented by
the Portfolio of Teaching submitted by the teacher as an additional
source of information.
For significant administrative decisions based on teacher
appraisal (e.g. contract non-renewal because of poor teaching,
promotion based primarily on excellence in teaching, and teaching
excellence awards), the results of Student Feedback, Peer
Review and Portfolio of Teaching must be consistent across
courses, across evaluators and with one another.
For instance, we can take the results to be reliable if
the Peer Reviews by different reviewers all report poor quality,
and the same conclusion is indicated in the Student Feedback
and the Portfolio of Teaching as well. However, if the
results are mixed, we must look carefully for probable sources
of contamination before making an overall appraisal.
None of the different sources of information we appeal to
in teacher appraisal (cumulative average score in Peer Review,
cumulative average score of Section A in Student Feedback,
score of the overall perception of teaching in Section B in
Student Feedback) is totally reliable in isolation. However,
each of them contributes useful information. The essence of
the strategy that we propose is that of convergence of evidence:
if the scores from these different sources converge, then
we can be reasonably confident that we are making a meaningful
and reliable measurement. On the other hand, if the different
scores do not match, at least one of the scores involves a
distortion due to some factor.
If there is a difference of more than one point along the
five-point scale, we suggest a closer scrutiny. We cannot
tell on a priori grounds which score is problematic. We have
to look at the totality of the information and make an informed
guess about which of the scores involves distortion. In other
words, we have to do it on a case-by-case basis. As a general
procedure for dealing with such conflicts, we suggest that
the Head makes a recommendation on the basis of a careful
examination of all the information available, and the establishment
committee reviews the recommendation.
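The screening step described above (flagging cases where scores on the five-point scale differ by more than one point) can be sketched as a simple check. The function name, score labels and data shape below are illustrative assumptions, not part of the recommendations; only the one-point threshold and the five-point scale come from the text.

```python
# Illustrative sketch of the convergence-of-evidence check.
# Score labels and function names are hypothetical; the one-point
# threshold on the five-point scale follows the recommendations.

def needs_scrutiny(scores: dict[str, float], threshold: float = 1.0) -> bool:
    """Return True if any two scores differ by more than `threshold`,
    signalling that the qualitative comments should be examined
    before an overall appraisal is made."""
    values = list(scores.values())
    return max(values) - min(values) > threshold

# Example: Section A cumulative average, Section B holistic score,
# and the Peer Review quantitative score (all on a 5-point scale).
example = {"section_a_avg": 4.2, "section_b": 4.0, "peer_review": 2.8}
print(needs_scrutiny(example))  # → True (4.2 − 2.8 = 1.4 > 1.0)
```

Cases flagged in this way would then go through the case-by-case examination by the Head and the establishment committee; the check itself decides nothing.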
The design and implementation of Student Feedback, Peer
Review and Portfolio of Teaching should be reviewed every
three years, and revised if necessary.