Since the inception of the World Wide Web and its rapid adoption
by the public, business, government and education, research
into its use has been constantly outpaced by its exponential
growth. Just as architects and innovators of earlier technologies
learnt of why and how new technologies worked through trial
and further improvements, ‘cyberphiles’ are discovering
the possibilities and inadequacies of Information and Communication
Technology as new features emerge. Regardless of the unfolding
technical wizardry, while some believe that the underlying
principles of learning and cognition do not change with the
medium of delivery (Wilson & Lowry, 2000), others argue
that in some cases developments in technology have brought
new perspectives on how humans learn (Papert, 1980) and new
possibilities of expressing cognition such as through simulation
and model building (Jonassen, 1995).
To date, much of the teaching and learning through information technology has focused on scripted presentations and pre-programmed responses to limited user input. However, this ‘Online Tutorial’ design approach has never been very effective at supporting activities for critical thinking. Addressing these levels of cognitive processing demands nuance in interpreting user input and a level of sophistication in creating meaningful feedback that is best exemplified by human communication, or even computer-mediated human communication. Computer-mediated communication (CMC) has great potential to promote the relationships necessary to support and expand one’s knowledge; the challenge is to design CMC-enhanced learning activities that support strategies aimed at eliciting reflection and critical thinking.
One approach is to involve students in the revision, evaluation
and feedback process of correcting online assignments, i.e.
online peer assessment. However, some have criticised the use of non-traditional assessment methods such as peer assessment for being: (a) less rigorous than traditional forms of assessment; (b) too demanding, putting unreasonable pressure on some students; (c) unreliable, since people other than the lecturers are involved; and (d) not necessarily fair, owing to possible bias among students.
In response, proponents like Bostock (2000) believe that
“Student assessment of other students’ work, both
formative and summative, has many potential benefits to learning
for the assessor and the assessee”. He points out that
peer assessment encourages student autonomy and higher order
thinking skills, and although he is aware of the weaknesses of peer assessment, he believes they can be mitigated through anonymity, multiple assessors, and tutor moderation. Furthermore, Bostock
points to internet technology and its potential to assist
in the management of large numbers of students.
One example of an online peer review and assessment system
is the Calibrated Peer Review™ (CPR) program developed at
UCLA, USA, by Orville L. Chapman and Michael A. Fiore. This
program, first introduced in 1999, incorporates an integrated
set of ‘digital tools’ that manage the review
process, analyse student input and prepare reports for both
instructor and student (Chapman, 2001). CPR assignments engage
students in correcting short essays on a specific topic. After
electronically submitting their respective essays, students
then read and assign a score to three ‘calibration’
essays: one calibration essay is an exemplar written by an
expert; the other two are documents containing misconceptions,
omissions, and errors. To clarify students’ understanding
of the issues and to correct any misconceptions that they
might have, CPR provides extensive feedback in the assessment
of the calibrations.
After the calibration exercise, CPR assesses each student’s
performance, and if the performance is inadequate, the student
receives further instruction. Students must repeat the calibration until they perform satisfactorily before being allowed to continue. From these
practice exercises, students achieve competency as reviewers
before being assigned to read and score three anonymous peer
essays, as well as their own. Finally, the program generates
a report, showing the reviewer’s comments and scores.
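This calibrate-then-review sequence can be pictured as a small gating workflow. The following Python sketch is illustrative only; the tolerance, the retry limit, and the score_fn and give_feedback helpers are assumptions, since Chapman (2001) does not publish CPR’s internal scoring rules at this level of detail.

```python
import random
from dataclasses import dataclass

# Assumed parameters: CPR's actual rules are not published, so the
# tolerance and retry limit below are illustrative only.
CALIBRATION_TOLERANCE = 1   # maximum allowed deviation from the expert score
MAX_ATTEMPTS = 2            # calibration attempts before remediation

@dataclass
class Student:
    name: str
    essay: str
    competent: bool = False

def give_feedback(student, scores, expert_scores):
    # Stand-in for CPR's 'extensive feedback' on the calibration essays.
    print(f"{student.name}: expert scored {expert_scores}, you scored {scores}")

def run_calibration(student, calibration_essays, expert_scores, score_fn):
    """Gate the student on the calibration exercise: their scores must
    track the expert's before they may review peers."""
    for _ in range(MAX_ATTEMPTS):
        scores = [score_fn(student, e) for e in calibration_essays]
        if all(abs(s - x) <= CALIBRATION_TOLERANCE
               for s, x in zip(scores, expert_scores)):
            student.competent = True
            return True
        give_feedback(student, scores, expert_scores)  # further instruction
    return False

def assign_reviews(students, per_reviewer=3):
    """Give each competent reviewer three anonymous peer essays plus
    their own, mirroring the CPR review stage."""
    assignments = {}
    pool = [s for s in students if s.competent]
    for reviewer in pool:
        peers = [s for s in pool if s is not reviewer]
        chosen = random.sample(peers, min(per_reviewer, len(peers)))
        assignments[reviewer.name] = [p.essay for p in chosen] + [reviewer.essay]
    return assignments
```

The essential control flow is that no student reaches the peer-review stage until their calibration scores track the expert’s; everything else here is placeholder detail.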
Another example of an online peer review and assessment system
is OASYS. Developed at the University of Warwick, UK, by A.
Bhalerao and A. Ward (2001), OASYS not only marks multiple-choice
questions (MCQs) automatically, but also subsequently controls
the anonymous distribution of free response answers amongst
learners for peer assessment. A hybrid system combining MCQ
testing with free response questions, OASYS was designed to
address the inadequacies of current computer-assisted assessment
systems that limit the testing format to MCQ because marking
for free response answers cannot be easily automated.
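A minimal sketch of this hybrid split, assuming a simple answer-key representation (Bhalerao and Ward do not describe OASYS’s internal data model): MCQs are marked automatically against a key, while free-response scripts travel under opaque identifiers to several peer markers, never including their own author.

```python
import random
import uuid

def mark_mcq(responses, key):
    """Automatic MCQ marking: one point per answer that matches the key."""
    return sum(1 for q, ans in responses.items() if key.get(q) == ans)

def distribute_free_responses(scripts, markers_per_script=3):
    """Assign each free-response script to several peer markers,
    anonymously and never to its own author. Returns a marker -> script-id
    queue and a script-id -> text lookup."""
    anon = {author: uuid.uuid4().hex for author in scripts}   # opaque ids
    texts = {anon[author]: text for author, text in scripts.items()}
    queue = {author: [] for author in scripts}
    for author in scripts:
        eligible = [m for m in scripts if m != author]
        for marker in random.sample(eligible, min(markers_per_script, len(eligible))):
            queue[marker].append(anon[author])
    return queue, texts

# Example: mark one student's MCQs, then distribute free responses.
print(mark_mcq({"q1": "b", "q2": "d"}, {"q1": "b", "q2": "c"}))  # -> 1
queue, texts = distribute_free_responses(
    {"alice": "def f(): ...", "bob": "while True: ...", "carol": "print(...)"})
```

Random assignment is only one plausible distribution policy; the point is the separation of concerns, with automatic marking where answers are unique and anonymous human marking where they are not.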
Bhalerao and Ward explain that their Computer Science classes
are “…increasingly using supervised practical
programming sessions rather than seminars to reinforce problem
solving”. As such, with 240 first-year undergraduate students, approximately 1000 scripts need to be marked and commented on before the next lab session, usually a week later. Moreover, some of the questions seldom have unique answers, and providing timely feedback is critical.
Believing that without the human element in the assessment process the quality and validity of the assessment are reduced,
Bhalerao and Ward proposed a “…system which exploits
the efficiency of electronic document handling whilst achieving
the quality of feedback that can only be given by humans”.
Using anonymous electronic distribution, each script is marked
multiple times, increasing the validity of the marks. A monitoring
feature allows tutors to view the variability of the marks given to each script, and if the variance is high, indicating
disagreement between the assessors, the script is highlighted
for moderation by the tutor.
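That moderation rule is straightforward to express. Assuming marks on a numeric scale and an illustrative variance threshold (the paper does not state OASYS’s actual cut-off), the flagging step might look like this:

```python
from statistics import pvariance

# Illustrative threshold; OASYS's actual moderation cut-off is not
# specified in the source, so this value is an assumption.
VARIANCE_THRESHOLD = 4.0

def scripts_for_moderation(marks_by_script):
    """Return the ids of scripts whose peer marks disagree enough
    (high variance) to warrant moderation by the tutor."""
    flagged = []
    for script_id, marks in marks_by_script.items():
        if len(marks) > 1 and pvariance(marks) > VARIANCE_THRESHOLD:
            flagged.append(script_id)
    return flagged

# Example: three peer marks per script on a 0-10 scale.
marks = {"s01": [7, 8, 7], "s02": [2, 9, 5]}
print(scripts_for_moderation(marks))   # -> ['s02']
```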
So what is the value of online student peer review, evaluation
and feedback? Is learning how to assess someone else’s
work a practical skill for future engineers, doctors, lawyers
and so forth? Considering that higher education institutions ultimately favour learning activities that challenge learners to demonstrate the analysis, synthesis and evaluation skills necessary for effective critical thinking, and that these activities involve sharing and communicating the learners’ perspectives, it is imperative that learners engaged in them receive timely formative feedback. It is important that students corroborate or dispute their constructed knowledge before misconceptions take root. However, with the high student-to-teacher ratios in many of today’s tertiary education environments, how well can this be done?
Online student peer review, evaluation, feedback, critique
and debate need to be examined more closely in order to establish
rules and guidelines to maximise their potential.
Bhalerao, A. & Ward, A. (2001). ‘Towards Electronically
Assisted Peer Assessment: A Case Study’. ALT-J,
Vol. 9, No. 1, pp. 26–37.
Bostock, S. (2000). ‘Student Peer Assessment’. Learning Technology, Keele University, UK, [Electronic Citation] (Last accessed: 11 June 2002).
Chapman, O.L. (2001). ‘Calibrated Peer Review™, The White Paper: A Description of CPR’, [Electronic Citation] (Last accessed: 19 June 2002).
Jonassen, D.H. (1995). ‘Computers as Cognitive Tools: Learning with Technology, not from Technology’. Journal of Computing in Higher Education, Vol. 6, No. 2.
Papert, S. (1980). Mindstorms: Children, Computers,
and Powerful Ideas. New York: Basic Books.
Wilson, B. & Lowry, M. (2000). ‘Constructivist Learning on the Web’. In L. Burge (Ed.), Learning Technologies: Reflective and Strategic Thinking. New Directions for Adult and Continuing Education. San Francisco: Jossey-Bass, 2001, [Electronic Citation] (Last accessed: 6 August 2002).