Peer assessment refers to students’ critical evaluations
of peers’ performance, whether for writing, oral or
visual presentations. Peers may be evaluated in terms of their
contribution to the group, their product, or both. When effectively
implemented, peer assessment promotes critical thinking and
learner autonomy (Race, 2001; Zariski, 1996), both desired
characteristics of life-long learners which Singapore aims
to cultivate in its graduates (Lim & Chan, 2000; Poh,
1999). See Race (2001) and Dochy et al. (1999) for more thorough reviews of the topic.
This article highlights possible difficulties in implementing
peer assessment and suggests practical solutions to facilitate
effective peer assessment.
Setting up Peer Assessment
To promote effective peer assessment, several issues must be addressed:
- Validity & Reliability
Some problems of validity and reliability include:
- Peer over-marking, where peers tend to give higher
marks than would tutors (Falchikov, 2002; MacKenzie,
2000; Roach, 1999);
- Too wide a range of marks such that tutors have to
moderate the marks for the whole class (Bostock, 2000);
- Too narrow a range of marks, making it difficult
to differentiate between good, average and weak performers
(Cheng & Warren, 1999; MacKenzie, 2000; Zariski, 1996).
The problems listed above are attributable to students’
inexperience and lack of confidence in marking (Hanrahan
& Isaacs, 2001). Practising and exposing students
to the peer marking procedure regularly can improve its
validity and reliability. Time-conscious educators could first model the marking process, highlighting their rationale along the way, so that students understand the thought processes behind the marking and can apply them to their own marking.
- Development & Use of Criteria
Related to the issues of validity and reliability,
unsuitable or misused criteria can also invalidate assessment.
These problems are more likely to occur when criteria are simply
given to students beforehand; if students themselves do
not consider what is important in grading a piece of work,
they tend to use the criteria less thoughtfully (Bostock,
2000). Even mere discussion of the criteria does not seem
as effective as getting students to develop them independently.
There are two suggested solutions:
- Get students to first work out suitable criteria
for assessment, and train them to apply the benchmarks
through analysing and discussing their own answers to
sample questions (Stanton, 1999), or the work of their
predecessors (Smith, Cooper, & Lancaster, 2002).
Students are compelled to exercise their critical thinking
skills by generating criteria.
- Clear and objective criteria will enable students
to mark their peers’ work more confidently (Purchase,
2000). Race (2001: Appendix 1) suggests that students reduce the obscurity of assessment criteria by using simple language and formulating them as checklist questions. For instance, students could rephrase the
criterion, ‘Provides clarity in illustrations’,
as ‘How clearly were illustrations given?’,
to increase the precision of peer judgments.
Having students develop their own criteria helps them
analyse and think critically as they methodically assess
and evaluate a piece of work to determine good and bad
qualities. Criteria setting is an essential stage in the
peer assessment process if the benefits of critical thinking
are to be maximised.
- Level of Formality
Level of formality refers to the accountability of the
assessment done by peers: to what extent and how peer-given
marks would be included in students’ final course grades.
- Formative, not summative
Formative assessment focused on suggesting improvements
is preferred to summative assessment that is done solely
for computing final marks. A formative focus also increases student acceptance of the process, as its
direct benefits are concrete (Falchikov, 2002). Not
only would students have a chance to improve on their
work before submission for a final grade, tutors also
benefit from receiving work that has been fine-tuned,
removing some of the tedium in marking.
- Negotiate, not negate
Tutors should not override peer marks but strive to
use peer marking as an opportunity for negotiating differing
peer opinions. There would probably be concerns over
the implications of ‘significant outliers’
in cases where a peer-given mark is arbitrated because
it differed significantly from the average of the marks
given by other groups (Purchase, 2000), or from the
tutor-given mark (Lim & Chan, 1999). In practice, however, peer-given marks are largely consistent with the tutor’s marks. Lim and Chan
(1999) attribute this to the fact that criteria were
discussed and made explicit to students, which again
highlights the importance of students’ active
engagement in generating marking criteria and understanding
how to apply them in critical evaluations. Should significant
outliers appear, students should discuss with the tutor
and the class why they chose to allocate a certain
mark. Such discussions would allow tutors to give useful
feedback on students’ thought processes.
The tutor’s involvement as arbitrator instils
in students a sense of accountability in peer assessment,
with the assurance that the tutor will be there to ensure
fairness of given marks. More importantly, by having
to account for the marks given, students are again engaged
in critical thinking and learning to verbalise their reasoning.
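The arbitration of significant outliers described above can be sketched in code. This is a hypothetical illustration, not a procedure from the cited literature; the group names and the deviation threshold are assumptions:

```python
# Hypothetical sketch of flagging 'significant outliers' among peer-given
# marks: a mark is flagged for tutor-led discussion when it deviates from
# the class average by more than a chosen threshold (the threshold of 10
# marks is an assumption for illustration).

def flag_outliers(peer_marks, threshold=10):
    """Return the assessors whose marks deviate from the mean by more than threshold."""
    mean = sum(peer_marks.values()) / len(peer_marks)
    return {assessor: mark for assessor, mark in peer_marks.items()
            if abs(mark - mean) > threshold}

marks = {"Group A": 72, "Group B": 68, "Group C": 45, "Group D": 70}
print(flag_outliers(marks))  # {'Group C': 45} -- the mean is 63.75, a deviation of 18.75
```

Flagged marks would then be opened for discussion rather than simply overridden, in keeping with the principle of negotiating, not negating.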
- Contribution, not content
Since fellow group members are probably in the best
position to judge individual contributions, peers could
assess each other’s contributions to the group,
leaving the assessment of product content to the tutor.
The individual then gets an overall mark based on some
weighted combination of both marks (Crockett & Peter,
2002; Crowe & Pemberton, 2000). Weighting procedures
can also counter different peer-marker scenarios, for
example, those who are overgenerous, and those who ‘conspire’
to penalise a member (Li, 2001).
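As a minimal sketch of this weighted-combination idea, assuming illustrative weights and function names not taken from the cited sources:

```python
# Hypothetical sketch: an individual's overall mark as a weighted
# combination of the tutor's mark for product content and the average
# of peers' marks for the individual's contribution to the group.
# The 70/30 weighting is an assumption for illustration.

def overall_mark(content_mark, contribution_marks, content_weight=0.7):
    """Combine tutor-assessed content with peer-assessed contribution."""
    avg_contribution = sum(contribution_marks) / len(contribution_marks)
    return content_weight * content_mark + (1 - content_weight) * avg_contribution

# Tutor gives the group's product 80; three peers rate the member's
# contribution as 60, 70 and 80 (average 70).
print(overall_mark(80, [60, 70, 80]))  # 77.0
```

Adjusting the weighting, or normalising each assessor's marks before averaging, is one way to temper over-generous or colluding markers, in the spirit of the refinements Li (2001) discusses.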
To conclude this section, given that the issue of marks
is a sensitive one, it is best for tutors to still be
involved in mark determination (Sher, 2001), while making
students accountable for it. More importantly, both tutors
and students should strive to keep their focus on the
process of critical evaluation and not on the outcome
of the marks.
- Student Attitudes Towards Peer Assessment
Peer assessment can be met with negative initial responses
from students, such as scepticism, lack of confidence, and
fears of being discriminated against by peers (Sher, 2001).
Students are also known to argue that peer assessment “is
too demanding” (Lapham & Webster, 1999), or that
assessment is “the tutor’s job” (Crockett
& Peter, 2002; Crowe & Pemberton, 2000). Such responses
might manifest in attitudes of hostility or even refusal
to participate in the process (Bostock, 2000; Zariski, 1996).
Although students do eventually get over these initial reactions, it helps to inform them early of the rationale
and benefits of peer assessment (Crowe & Pemberton,
2000; Hanrahan & Isaacs, 2001) and its formative aspects
(Sher, 2001). Most students appreciate feedback so that
they can improve on their work before it is actually graded.
Additionally, it is important to make the procedure clear
to students (Smith, Cooper & Lancaster, 2002). Communication
with students also includes discussing assessment criteria
and allowing students to discuss and negotiate peer-given
marks. Giving students feedback on their feedback (Hanrahan
& Isaacs, 2001) helps keep them on track and increases
their confidence in the process. Ultimately, students must engage actively in the thinking processes involved in assessing each other if they are to reap its benefits and carry it out seriously.
Peer assessment is an important tool to develop critical
thinking and autonomous learning—skills that are valued
in today’s society. A possible concern for NUS educators
in implementing peer assessment is discovering the most time-efficient
way of carrying out this procedure without compromising on
its benefits. One suggestion is to use students’ presentations
of course topics as a form of peer assessment, integrating
assessment with course coverage and getting students actively
involved in thinking and learning (Lim & Chan, 1999).
Another alternative is to take the assessment part of the
procedure out of class-time. Students could perform the actual
assessment online, and still be made accountable for it (Bostock, 2000).
Implemented effectively, peer assessment fosters critical
thinking. Having raised some possible problems and solutions in this article, it is hoped that tutors and students can focus on the criteria-setting, marking and negotiating stages of the procedure, and work towards effective peer assessment.
Bostock, S. (2000). Computer-Assisted
Assessments—Experiments in Three Courses. From Learning Technology website, Keele University.
(Last Accessed: 23 December 2002).
Cheng, W. & Warren, M. (1999). ‘Peer and Teacher
Assessment of the Oral and Written Tasks of a Group Project’. Assessment & Evaluation in Higher Education, Vol. 24, No. 3, pp. 301–314.
Crockett, G. & Peter, V. (2002). ‘Peer Assessment
and Team Work as a Professional Skill in a Second Year
Economics Unit’. In Focusing on the Student.
Proceedings of the 11th Annual Teaching Learning Forum,
5–6 February 2002. Perth: Edith Cowan University. http://cea.curtin.edu.au/tlf/tlf2002/crockett.html. (Last Accessed: 21 December 2002).
Crowe, C. & Pemberton, A. (2000). ‘But
That’s Your Job!: Peer Assessment in Collaborative
Learning Projects’. Proceedings of the
3rd Effective Teaching and Learning at University Conference,
9–10 November 2000. Brisbane: University
of Queensland. (Last Accessed: 21 December 2002).
Dochy, F.; Segers, M. & Sluijsmans, D. (1999). ‘The
Use of Self-, Peer and Co-assessment in Higher Education:
a Review’. Studies in Higher Education, Vol.
24, No. 3, pp. 331–350.
Falchikov, N. (2002). ‘“Unpacking” Peer Assessment’.
In P. Schwartz & G. Webb (Eds.). Assessment: Case
Studies, Experience & Practice from Higher Education.
London: Kogan Page.
Gregory, A. & Yeomans, L. (2002). Peer
Assessment and Enhancing Students’ Learning. Leeds Metropolitan University. (Last Accessed: 23 December 2002).
Hanrahan, S.J. & Isaacs, G. (2001). ‘Assessing
Self- and Peer-assessment: the Students’ Views’. Higher Education Research & Development, Vol. 20, No. 1.
Jordan, S. (1999). ‘Self-Assessment & Peer-Assessment’.
In Brown, S. & Glasner, A.(Eds.), Assessment Matters
in Higher Education. Buckingham [England]; Philadelphia,
PA: Society for Research into Higher Education & Open
University Press. pp. 172–182.
Lapham, A. & Webster, R. (1999). ‘Peer Assessment
of Undergraduate Seminar Presentations: Motivations, Reflection
and Future Directions’. In Assessment Matters in
Higher Education. Buckingham [England]; Philadelphia,
PA: Society for Research into Higher Education & Open
University Press. pp. 183–190.
Li, L.K.Y. (2001). ‘Some Refinements on Peer Assessments
of Group Projects’. Assessment & Evaluation
in Higher Education. Vol. 26, No. 1, pp. 5–18.
Lim, L. & Chan, S. (2000). ‘Practical
Ways to Develop Lifelong Learning Skills in SP Students’. Journal of Teaching Practice, Singapore Polytechnic. (Last Accessed: 21 December 2002).
MacKenzie, L. (2000). ‘Occupational Therapy Students
as Assessors in Viva Examinations’. Assessment &
Evaluation in Higher Education, Vol. 25, No. 2, pp. 135–147.
Poh, S.H. (1999). ‘Assessment Issues in Singapore’. Educational Measurement: Issues and Practice, Vol.
18, No. 3, pp. 31–32.
Purchase, H.C. (2000). ‘Learning About Interface Design
Through Peer Assessment’. Assessment & Evaluation
in Higher Education, Vol. 25, No. 4, pp. 341–352.
Race, P. (2001). A
Briefing on Self, Peer and Group Assessment. Assessment Series No. 9. The Generic Centre, Learning
and Teaching Support Network. (Last Accessed: 21 December 2002).
Roach, P. (1999). ‘Using Peer Assessment and Self-Assessment
for the First Time’. In Assessment Matters in Higher
Education. Buckingham [England]; Philadelphia, PA: Society
for Research into Higher Education & Open University Press.
Smith, H.; Cooper, A. & Lancaster, L. (2002). ‘Improving
the Quality of Undergraduate Peer Assessment: A Case for Student
and Staff Development’. Innovations in Education
and Teaching International, Vol. 39, No. 1, pp. 71–81.
Sher, W. (2001). Peer
Assessment in the Design & Construction of a Reinforced
Concrete Lintel. From Assessment Case Studies
in the Centre for Education in the Built Environment
website. Loughborough University. (Last Accessed: 21 December 2002).
Stanton, K. (1999). ‘Involving Students in Setting
Criteria for Assessing Written Work’. In Hinett, K.
& Thomas, J. (Eds.). Staff Guide to Self and Peer
Assessment. Oxford: Oxford Brookes University. pp. 51–53.
Zariski, A. (1996). ‘Student
Peer Assessment in Tertiary Education: Promise, Perils
& Practice’. In Abbott, J. & Willcoxson,
L. (Eds.), Teaching and Learning Within and Across
Disciplines, pp. 189–200. The Proceedings
of the 5th Annual Teaching Learning Forum. Murdoch
University. (Last Accessed: 21 December 2002).