The Asian Journal of the Scholarship of Teaching and Learning (AJSoTL) is an international, peer-reviewed, open-access, online journal. AJSoTL seeks to create and nurture a global network of academics and educators who will discuss ongoing changes and future trends in tertiary education.
Centre for English Language Communication, National University of Singapore
Address for Correspondence: Dr Misty So-Sum Wai-Cook, Centre for English Language Communication, National University of Singapore, 10 Architecture Drive, Singapore 117511
Cook, M. S. S. W. (2016). Explain Everything: What can students gain from online multimodal feedback? Asian Journal of the Scholarship of Teaching and Learning, 6(2), 194-220.
Feedback to students is an essential part of learning, particularly language learning. Traditional feedback on students’ writing is written on drafts and/or conveyed via face-to-face consultations. However, with increased workload and class sizes, problems that educators face when using traditional feedback methods include the timeliness and quality of feedback, time constraints of consultations, and a lack of engagement by students (Crook et al., 2012). This study examines students’ writing performance as well as qualitative and quantitative measurements of students’ experiences with different modalities of feedback. It also highlights the benefits and drawbacks of using online resources to provide feedback to students and demonstrates how educators can best provide quality feedback to language learners in higher education. Consistent with previous research findings (Nicol & Milligan, 2006; Parr & Timperley, 2010), the results of this study suggest that using technology to provide feedback can further enhance learners’ performance by promoting deeper learning and higher order thinking, and by increasing students’ self-regulated learning.
Providing feedback to language learners is an integral part of instruction as it helps improve students’ performance, particularly in writing, and students who receive feedback outperform those who do not receive any feedback on writing tasks (Ahmadi, Maftoon & Mehrdad, 2012; Parr & Timperley, 2010). Traditional feedback on students’ writing is written on drafts and/or conveyed via face-to-face consultations. Instructors pride themselves on providing top quality student feedback by informing learners of where they were at, highlighting key features of the desired performance so students would notice the ‘gap’ in their work, and pointing out what needs to be done to achieve the desired performance (Parr & Timperley, 2010). To date, instructors at the Centre for English Language and Communication (CELC) at the National University of Singapore (NUS) continue to provide written feedback and/or consultations. However, there are serious practical constraints arising from this mode of feedback provision as a result of increased workload and class size (Crook et al., 2012).
Research on the effectiveness of teachers’ feedback on students’ written assignments has consistently revealed that the timeliness and quality of teachers’ feedback are major factors affecting its usefulness. Students reported that they would not read teachers’ qualitative comments and were more concerned about their grades if they received feedback too long after submitting their work, or if the comments were unclear (Weaver, 2006; Wilson, 2012). Rather, students indicated that the best forms of feedback are those delivered promptly, which help them clarify points they do not understand and indicate areas of improvement so that they can make revisions based on the feedback provided (Crook et al., 2012; Gibbs & Simpson, 2004; Hepplestone, 2012; Wilson, 2012).
Two important factors that affect the quality of feedback students receive on their written assignments are time constraints and a heavy workload on the teacher’s part (see, for example, Crook et al., 2012; Orsmond & Merry, 2011). This is especially the case for teachers of large classes. The repetitiveness of giving feedback and the amount of time the process takes mean that the quality of teachers’ feedback may be sacrificed (Chanock, 2000; Crook et al., 2012; Orsmond & Merry, 2011; Poulos & Mahony, 2007).
Ultimately, however, students must take an active role in learning through the feedback provided. Research has shown that students’ engagement in the feedback process can lead to improvements in their writing performance (Nicol & MacFarlane-Dick, 2006). Research suggests that, rather than letting students passively read and accept teachers’ comments, they should be encouraged to actively engage in the feedback process (Evans, 2012). This means teachers should create opportunities for students to be involved in their academic work, such as active participation in the feedback process. It would give students the opportunity to master the knowledge, skills, and craft that enable the teachers to make such corrections (Evans, 2012; Newmann, Wehlage & Lamborn, 1992). For instance, teachers should optimise students’ learning and improvement in writing by providing sufficient time for debriefing and addressing misconceptions, offering suggestions on areas of improvement, and giving students opportunities to seek clarifications on areas that they do not understand (Evans, 2012; Trowler, 2010).
In an attempt to facilitate the teacher-student interaction in the feedback process, researchers have examined the use of technology to provide feedback. In general, research has repeatedly revealed the benefits of using technology to provide feedback to students. Video lectures, in particular, have been used successfully in teaching and learning (Abdous & Yoshimura, 2010), and for giving and receiving peer feedback (Chi, Roy, & Hausmann, 2008).
Teachers have also reported that screen-capture video recordings are an effective way of presenting content. This is because teachers are able to provide demonstrations of the content, and students can literally see what needs to be done to improve on subsequent coursework (Abrahamson, 2010). Teachers are also receptive to using videos for teaching and providing feedback because with video recordings, they are able to articulate the assessment criteria and explain key points clearly while minimising potential misinterpretations of content and feedback due to illegible handwriting (Crook et al., 2012).
In terms of the feedback process, although a small percentage of students reported that they disliked receiving video feedback because the videos were generic, most responded positively because the videos were personalised and watching the recordings made them feel as if they were in face-to-face sessions with their teachers (Chi, Roy, & Hausmann, 2008; Crook et al., 2012). Students were also able to revise and improve on their work more easily because they could gauge their teachers’ reactions to their work and the emphasis placed on key points that had to be improved. Moreover, students preferred watching and listening to their teachers’ feedback over reading written feedback (Crook et al., 2012).
Whether the videos are used for teaching and learning, or for giving and receiving feedback, providing videos online can be beneficial because students are able to keep a permanent record of the presentations which can be stored and replayed at their convenience, and they can access the feedback remotely anytime and anywhere (Abdous & Yoshimura, 2010; Crook et al., 2012).
In addition, from the perspective of using technology to help students improve their writing performance, recent research suggests that using technology to provide feedback can further enhance learners’ performance by promoting deeper learning and higher order thinking (Nicol & Milligan, 2006), and by increasing students’ capacity for self-regulated learning (Parr & Timperley, 2010). For instance, studies of video technology (Crook et al., 2012) and of voice and screen recording software such as Educreations, Screen Capture and Jing (Hacker, 2012) consistently point to the remarkable learning benefits that technology brings. Educators have reported that technology has not only increased students’ engagement in learning, but also enabled timely feedback of good quality.
To date, several types of software have been employed by language instructors to provide feedback. While video technology (e.g. ASSET) and screen recording software such as Educreations, Screen Capture and Jing are useful, they have some disadvantages as well. The software can be hard to access or expensive, may limit the length of recordings (e.g. Jing allows only 5 minutes per video), or may require instructors to write or type comments on the students’ essays in separate software (e.g. Microsoft Word) before exporting them for audio narration and recording.
The most recent application (app), Explain Everything, is a screencasting app for the iPad that is easy to download and access, inexpensive, easy to use, and has no time restrictions on the length of the video recording. Best of all, the app allows users to write or type comments on the work, in Microsoft Word or in the app itself, while simultaneously providing audio narration in the video recordings. This means users save time and avoid the rigmarole of importing and transferring files between software. The video is then exported as a movie and emailed to students, who can replay it as frequently as necessary. In addition, the user can provide a video with written comments and an audio narrative of good and weak essay samples, which can then be uploaded to YouTube. Both files can be accessed anytime and anywhere by students when they are correcting their drafts, and they can review the feedback prior to their consultations, which can then focus on reflection and improving their understanding of the work. The result is a more student-centred session rather than an instructor-centred one.
Because of these features, Explain Everything was selected for the study. In particular, the advantages of Explain Everything include the following in relation to providing online multimodal feedback:
Most importantly, Explain Everything has the following features:
The aim of this study was therefore, to investigate how technology, through the multimodal functionality of Explain Everything, could be used to enhance the quality of feedback provided to a group of students taking the module ES1102 “English for Academic Purposes (EAP)”. More specifically, the study focused on addressing four research questions:
As noted in the introduction, instructors at CELC continue to provide written feedback and/or consultations, despite the serious practical constraints arising from increased workload and class size (Crook et al., 2012). Thus, this study aimed to investigate whether technology could enhance current practices of providing feedback to students by addressing the challenges that instructors face: timeliness and quality of feedback, time constraints of consultations, and a lack of student engagement with the feedback.
The provision of feedback is a highly repetitive process which is a time consuming and often laborious task. Like many faculty colleagues, CELC instructors have a very demanding workload. They teach between 12 and 16 hours per week and have between 60 and 80 students. Instructors allocate on average 25 minutes of consultation time per student for each assignment, totalling 25 to 33 hours over 3 to 5 days. It is highly likely that instructors experience fatigue towards the end of the day or week, thus the quality of feedback is likely to be adversely affected.
A few days prior to the consultations, instructors typically provide students with explicit and/or implicit written feedback on the content, organisation and language of their essay drafts, together with questions that prompt them to think about what the errors are and what needs to be done to improve their individual essays. The problem is that, during the consultation, students often report that they do not understand the brief comments written on the drafts. The instructors then provide explicit explanations of the errors and, if time permits, encourage students to think about correcting them. As such, there is insufficient time for students to ask questions, or for the instructors to go through examples of good and poor writing to raise students’ awareness of the ‘gap’ in their work and highlight what needs to be done to achieve the desired performance.
Due to time constraints, student consultations tend to be more teacher-centred than student-centred. More often than not, students just sit and listen passively to explicit feedback. Students are often overwhelmed by the amount of information given and find it difficult to make all the corrections suggested; many are still unable to make the changes highlighted to them in the next draft. This is not surprising, as second language acquisition researchers (Schmidt, 1990) have established that for any deep learning to occur, learners must receive input that points out their current performance, the required standard of performance, and how to fill the gap between the two.
It is hypothesised that Explain Everything can mitigate the present gap and help enhance students’ capacity for independent learning, higher order thinking and deep learning through the active provision of feedback on their academic writing. It is also predicted that students would be more engaged in thinking about the errors they made.
Students enrolled in the researcher’s 12-week module ES1102 “English for Academic Purposes (EAP)” participated in this study (n=72). All students in the researcher’s classes were required to participate as they all received feedback from the researcher, who was their instructor. The students were divided into two groups: students randomly assigned to the control group (n=36) received online written feedback on their essays prior to the one-to-one consultations, while students randomly assigned to the experimental group (n=36) received online multimodal written and audio feedback on their essays before the one-to-one consultations. To ensure that students in the control group were not disadvantaged, they were given online multimodal feedback after the one-to-one consultations.
Both groups of students were given feedback on the content, organisation and language of their essay drafts based on Parr and Timperley’s (2010) principles of good feedback practice. All feedback informed students of where they were at, key features of the desired performance so that they notice the ‘gap’ in their work, and what needed to be done to achieve the desired performance. Students also completed pre- and post- feedback surveys on technology and the feedback process.
Students taking ES1102 were taught academic writing skills. They were provided with instructions and examples on how to complete the writing tasks, and were required to engage in learning activities such as discussions and presentations; these activities were scaffolded into a well-paced, appropriately ordered sequence to achieve the module’s learning outcomes. Students were then asked to demonstrate their understanding of and ability to complete the writing tasks (e.g. summaries and essays). The following approach was used to provide feedback in this project (see Figure 1).
Figure 1. Process of providing feedback for essays.
ii. Process of using Explain Everything
Here are the steps of how Explain Everything was used in the study:
Step 1: The instructors downloaded students’ written work, which had been submitted online (via Dropbox, for example). When the written work was imported into Explain Everything, the app automatically divided the pages of the document into individual slides (see Figure 2).
Figure 2. An example of students' written work being imported into Explain Everything.
Step 2: The instructors would proceed to write comments using a stylus pen and simultaneously provide an audio recording of each slide (Figure 3). The main advantage of using this software is that instructors could rewind and re-record over certain sections if they made mistakes in their recordings.
Figure 3. An example of written notes on Explain Everything.
Alternatively, the instructors could type comments on students’ work in Microsoft Word, and then record audio explanations of those comments in Explain Everything (Figure 4). This was the option adopted in this study because the typed comments were tidier than handwritten comments on the files.
Figure 4. Example of a Word document with comments and audio recordings in Explain Everything (with reference to the numbering system for the comments).
The instructors then provided written and audio recordings of explanations of common errors found in all the essays, as well as examples of good and weaker essays. The explanations focused on structure, content and language to guide students through the re-writing process, with the objective of making them notice the gap in their work and consequently reflect on their own errors. These files were then uploaded onto the Dropbox for students to access at their convenience. The areas of feedback included:
Feedback was provided based on the criteria listed for essay structure (Figure 5) and content (Figure 6).
Figure 5. Criteria listed for essay structure.
Figure 6. Criteria listed for essay content.
Figure 7 illustrates an example of written feedback (with audio recording) on a sample essay using Explain Everything.
Figure 7. Example of feedback from Steps 1 and 2.
Figure 8 shows the types of language problems commonly found in essays.
Figure 8. Feedback on types of language problems.
Figure 9 illustrates the way in which language problems and weaknesses found amongst the class (in this case, the use of transitional devices) were highlighted to students.
Figure 9. Example of highlighting and working through language problems and weaknesses found amongst the class.
This approach to giving feedback using technology allowed students to become more independent and engaged in their own writing. Because students could play back the multi-sensory feedback at their own convenience, they became more aware of their errors, which enabled them to reflect on their own learning, fostered independence, and promoted deeper learning and understanding of the subject matter as they revised the material to achieve a higher level of performance (see, for example, Anderson, 2008; Bourelle et al., 2016).
Multiple data collection instruments were employed in this study to increase its reliability and validity. Each data collection method has its strengths and weaknesses, and any one source of information can potentially be incomplete or partial (Beebe & Cummings, 1996; Nugraha, 2002; Richards, 2001). Therefore, investigating students’ use of Explain Everything through multiple methods of data collection could provide more reliable findings. The data collection in this study was obtained by gathering information on students’ experience and their performance, as well as their perception of the effectiveness of the feedback given. The data to support the research questions came from:
The quantitative data were analysed using SPSS. Correlation and ANOVA analyses were conducted to ascertain whether significant differences existed between the groups and between individuals, as well as to document extreme cases (Flyvbjerg, 2006).
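To illustrate the kind of between-group comparison described above, the sketch below runs a pooled two-sample t-test in Python. This is not the study's actual SPSS workflow, and all ratings shown are invented for demonstration; only the general technique (comparing two feedback groups' mean ratings) follows the text.

```python
# Illustrative sketch only (the study used SPSS): a pooled two-sample
# t-test comparing hypothetical effectiveness ratings from the two
# feedback conditions. All numbers here are invented for demonstration.
import math
import statistics

# Hypothetical ratings (1-5 scale) for each feedback condition
written_only = [3.2, 3.5, 3.0, 3.8, 3.4, 3.1, 3.6, 3.3]
multimodal = [4.1, 4.4, 3.9, 4.5, 4.2, 4.0, 4.3, 4.6]

def pooled_t_statistic(a, b):
    """Independent-samples t statistic using a pooled variance estimate."""
    na, nb = len(a), len(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (statistics.mean(b) - statistics.mean(a)) / se

t_stat = pooled_t_statistic(written_only, multimodal)
df = len(written_only) + len(multimodal) - 2

# The two-tailed critical value for df = 14 at alpha = .05 is about 2.145,
# so a |t| above that threshold indicates a significant group difference.
print(f"t({df}) = {t_stat:.3f}")
```

With a two-group design, a one-way ANOVA would give an equivalent result (F = t²); libraries such as SciPy (`scipy.stats.ttest_ind`, `scipy.stats.f_oneway`) would also return the exact p-values that the article reports.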
Students’ pre- and post- feedback drafts were contrasted to assess whether students understood the feedback given to make corrections, and a post-essay task was given to assess whether students were able to apply their newly acquired knowledge to correct a problematic paragraph.
This section presents the findings of whether technology can be used to enhance the quality of feedback provided to students in the module ES1102 “English for Academic Purposes (EAP)”. In particular, it reports the findings related to the four research questions raised in Section 1 “Introduction”.
The results of this section present how students rated the effectiveness of online multimodal feedback (which consisted of written comments and oral explanations of the comments), how they used written feedback and online multimodal feedback, and their confidence levels in their essays.
Effectiveness of online multimodal feedback
All the students who participated perceived online multimodal feedback to be more effective than having only written comments on their essays. On a scale with a maximum of 1, where 1 indicated that online multimodal feedback was more effective than written feedback, students who received online multimodal feedback rated it as significantly more effective than receiving only written feedback (x = .734, t = 2.682, p = 0.004). Students who had a stronger command of the language felt that some comments relating to language did not require explanations (see the next section, “Use of feedback”).
Use of feedback
Students reported that they depended more on online multimodal feedback than on written feedback alone to solve content-, organisation- and language-related problems in their essays. Though students depended on online multimodal feedback more for content and organisation issues than for language issues, they rated it as significantly more useful than written feedback across all three areas (x = 4.23, t = 1.99, p = 0.027; x = 4.11, t = 2.38, p = 0.012; x = 3.80, t = 2.333, p = 0.004 respectively).
Students' ratings of the effectiveness of using online multimodal feedback to make changes in their essays
Not surprisingly, students reported that they needed feedback on language only for checking understanding and accuracy after they had listened to the audio feedback with the written annotations. Students reported that they might have been able to make changes without the audio, but the audio explanations increased their understanding of the errors (See Table 1). As one student reported:
“I depended on the audio feedback more when correcting my essay as compared to the summary and reader response as there were more major errors in the second draft of my essay than the summary and reader response. Some of my body paragraphs did not explain my thesis and some ideas were not elaborated on clearly. For example, by reading comment [M2] in draft 2 of my essay per se, I would not [sic] understand why my paragraphs and explanations were not explaining my thesis. With the audio feedback, I gained a better understanding about [sic] the errors. Thus, the audio feedback was a great help in giving me an overall idea of my mistakes and it also prompted me to think about how I should go about to make [sic] the corrections.”
All in all, students found the online multimodal feedback useful as it increased their understanding of the errors, and therefore allowed them to more accurately make the necessary changes to their writing.
The results from this study show that students benefitted from online multimodal feedback in a way that receiving only written feedback could not provide. Traditionally, instructors were able to provide online written feedback by inserting comments related to individual errors. While such feedback can still enable students to make related changes for specific language errors, the explanations of such comments tend to lack depth and it can be difficult to make references across text. This cross-referencing is often required for comments regarding coherence of ideas between different sentences in the same paragraph or across paragraphs in the essay.
For example, Figure 10 shows the draft of a student’s essay containing a problem regarding the development of an idea (content) in the same paragraph. As can be seen from the comments written in the draft, traditionally instructors would be able to comment on individual sentences. However, the explanations tend to lack depth so students are often unclear about what to change and how to correct the errors. Specifically, Figure 10 illustrates how in discussing solutions to income disparity, the student jumped from income disparity to the cost of education and back to income disparity again in this paragraph that aimed to describe an existing solution in a problem-solution essay; instead, the student should have focused on one main idea in the paragraph related to solving income inequality.
Figure 10. Example of an initial draft of a student's essay showing a problem regarding the development of a main idea in the same paragraph.
After receiving online multimodal feedback, the student was able to accurately address the error by deleting the unrelated points and developing one main point in a coherent manner (See Figure 11).
Figure 11. Example of the draft in which the problem of developing a main idea in the same paragraph has been corrected.
In the audio recording, the instructor was able to cross-reference sentences to show the student the illogical jump from one idea to another and explain what should have been done. As can be seen in Figure 11, the student understood the error of jumping illogically from income disparity to the cost of education and back to income disparity, and rectified the problem by explicitly stating and explaining the link between income disparity and the importance of subsidising the cost of education.
The benefits of online multimodal feedback go far beyond increasing accuracy. As the explanations of the students’ errors are provided in greater depth, students can re-play the explanations and think about how best to correct the errors before they submit the next draft.
As can be seen in Table 2, students who received online multimodal feedback were more confident than those who received only written feedback in making changes related to organisation and language (t = 1.909, p = 0.029; t = 2.662, p = 0.004 respectively), although the difference for content did not reach statistical significance (t = 1.725, p = 0.122). This suggests that the depth of the audio explanations enabled students to make changes with confidence, especially for problems related to the organisation of ideas, which are difficult to cross-reference in written comments alone.
Students' ratings on self-perceived confidence in correctly rectifying errors for content, organisation and language
The results of this study suggest that students were able to revise and improve on their work more easily. In doing so, students felt more confident about the accuracy of their revisions because they were able to gauge the instructor’s feedback on their work and the emphasis placed on the key points that had to be improved.
The difference in the depth of students’ understanding of their errors, as well as of the writing and language rules, was evident during the one-to-one consultations. Students who received only written feedback attended the consultations to seek clarification regarding the meaning of the comments made on the previous draft. Each consultation lasted approximately 30 to 40 minutes. The students made very few corrections as they did not know what errors they had made in their drafts, let alone how to correct them. They mainly asked for clarification on the meaning of the comments, and on how to correct organisation- and content-related errors, such as what was wrong with the thesis statement/topic sentences, paragraph development, coherence across the essay, and language errors that required checking across paragraphs, such as tenses. The consultations were very much teacher-centred as students wanted to know what they had to do to make the changes.
In contrast, the consultations for students who received online multimodal feedback lasted approximately 5 to 15 minutes. The nature of the consultations was also vastly different. Students mainly confirmed or clarified changes they made on paragraph development and how they elaborated on the proposed solution, which was the most difficult section of the essay. The consultations became much more student-centric than teacher-centric. As noted by researchers such as Trowler (2010) and Evans (2012), students were also more interested in participating in the feedback process because they were able to understand the comments. Once they could understand the comments that offered suggestions and specific areas of improvement, they became more engaged in the feedback process.
Previous research conducted by Anderson (2008) and Bourelle et al. (2016) demonstrated the convenience of playback functions on multi-sensory feedback in promoting independent learning and enabling the learner to accomplish higher understanding and performance in the subject matter. The results of this study also suggest that online multimodal feedback is able to promote higher order thinking and empower students to feed-forward the skills that they learnt from the essay-writing process. In this case, after students had submitted the final draft of their essay, they were given a paragraph which contained several errors. For example, the paragraph’s topic sentence was too long, it lacked an explanation of the topic, and it contained two examples which were unexplained. The sample paragraph is shown below, followed by examples of how students commented on and corrected these errors based on whether they received written or online multimodal feedback:
Furthermore, minority wealthy corporations that dominate the information production and communication will challenge democracy within a country, drowning out the voice of local media. For example, when General Electric Company (GE) bought National Broadcasting Company (NBC), naturally, political ideas and biases of GE can be seen in NBC. Although GE expels criminal amounts of pollution, pollution is not covered by NBC (Pappas, 2004). Robert McChesney, in a documentary titled “Orwell Rolls in His Grave”, stated that the income for the wealthiest 1% of Americans has risen 141% over the past twenty years, and American middle class has risen 9%, but these statistics are largely unnoticed by the Americans because they are not handed over to us by media (Pappas, 2004). This results in Americans being ignorant of the data collated and information is controlled totally by the multi-national media corporations. (Emphasis in original.)
Example 1 shows the comment made by a student who received written feedback:
The first part of the article is readable and valid but lacks some elaboration on the thesis statement. After presenting the topic sentence, the author didn’t provide [an] explanation about how the media corporation can influence a country’s democracy. In another word, the writer jumped to the example part too eagerly. Besides, the other example shown by him after the first one about GE seems not supporting [sic] his view point. The other example is indicating that many Americans are ignorant of some information because the media is owned by certain company. But the writer didn’t provide any correlation between the ignorance [sic] and democracy.
Example 2 shows the changes made by a student who received written feedback:
Furthermore, minority wealthy corporations dominate information production and communication in an attempt to challenge democracy within a country by imposing censorship on local media. For example, [the] political ideas and biaseness of General Electric Company (GE) can been seen in National Broadcasting Company (NBC) when GE bought NBC. The high amount of pollution that GE emits is not covered by NBC (Pappas, 2004). Robert McChesney, in a documentary titled “Orwell Rolls in His Grave”, stated that the income for the wealthiest 1% of Americans has risen 141% over the past twenty years and American middle class has risen 9%. These statistics are not broadcasted over the media and thus is largely unnoticed by the Americans (Pappas, 2004). As information is being controlled by the multi-national media corporations, Americans are ignorant of the data collated. (Emphasis in original.)
Example 3 shows the changes made by a student who received online multimodal feedback:
Students who received written feedback made comments that demonstrated they recognised the errors in the paragraph (see Example 1). However, these students tended to make fewer, more minor changes (see Example 2) and did not necessarily fix the problem, compared with students who received online multimodal feedback (see Example 3). In the past, the instructor had to provide extensive explicit explanations on various issues related to paragraph development, including why and how changes had to be made to the thesis statements and topic sentences within and across paragraphs in students’ work. This is because the students did not understand what they had to change even after the instructor had provided lengthy written feedback. However, as illustrated in Examples 1 to 3, the students were able to correct content, organisation and language errors after receiving online multimodal feedback, without further explanations or clarifications. The improvements in students’ writing were evident.
The results in this study indicate that the use of online multimodal feedback can encourage students to become independent learners who are able to correct their writing errors by reflecting on their mistakes based on the suggestions given, and without explicit instructions on what to correct. Thus, it is more likely that deep learning will occur.
Overall, providing online multimodal feedback was no quicker than providing written feedback alone. Although providing online multimodal feedback (written and audio) meant that only very brief notes needed to be written on students’ writing errors, the recording itself also took time. Essays that received only written comments took 20 to 35 minutes to mark, whereas assignments that received online multimodal feedback took approximately 25 minutes: 15 minutes for the instructor to complete the written comments and 10 minutes to record the audio. The time taken to complete an audio recording ranged from as short as 6 to 8 minutes to as long as 15 minutes per assignment. In general, therefore, providing written feedback and providing online multimodal feedback took about the same length of time. However, it was observed that students enjoyed greater learning benefits from online multimodal feedback.
The depth of the explanations provided through online multimodal feedback can enhance independent learning and higher order thinking skills and, more importantly, enable students to engage actively in their learning, as they feel a stronger sense of connection to the course and the instructor. This student-centred approach to receiving feedback was a novel experience for students, and they were extremely interested in listening to in-depth, tailor-made feedback on their work. The results of this study also indicated that students would benefit from such feedback through the feed-forward of the skills they learnt. Furthermore, in the researcher’s experience, students have been extremely grateful for the time and effort the researcher put in to help them improve their academic writing skills through this online multimodal feedback mechanism.
The author gratefully acknowledges CDTL for granting the Teaching Enhancement Grant and the Conference Grant in 2013/14.
Abdous, M., & Yoshimura, M. (2010). Learner outcomes and satisfaction: A comparison of live video-streamed instruction, satellite broadcast instruction, and face-to-face instruction. Computers & Education, 55(2), 733–741. http://dx.doi.org/10.1016/j.compedu.2010.03.006
Ahmadi, D., Maftoon, P., & Mehrdad, A. G. (2012). Investigating the effects of two types of feedback on EFL students’ writing. Procedia—Social and Behavioral Sciences, 46, 2590-2595. http://dx.doi.org/10.1016/j.sbspro.2012.05.529
Anderson, D. (2008). The low bridge to high benefits: Entry-level multimedia, literacies, and motivation. Computers and Composition, 25(1), 40–60. http://dx.doi.org/10.1016/j.compcom.2007.09.006
Beebe, L. M. & Cummings, M. C. (1996). Natural speech act data versus written questionnaire data: How data collection method affects speech act performance. In S. M. Gass & J. Neu (Eds.), Speech Acts Across Cultures: Challenges to Communication in a Second Language (pp. 65-86). Berlin: Mouton de Gruyter.
Bourelle, A., Bourelle, T., Knutson, A. V., & Spong, S. (2016). Sites of multimodal literacy: Comparing student learning in online and face-to-face environments. Computers and Composition: An International Journal for Teachers of Writing, 39, 55–70. http://dx.doi.org/10.1016/j.compcom.2015.11.003
Chanock, K. (2000). Comments on essays: Do students understand what tutors write? Teaching in Higher Education, 5(1), 95–105. http://dx.doi.org/10.1080/135625100114984
Chi, M. T. H., Roy, M., & Hausmann, R. G. M. (2008). Observing dialogues collaboratively: Insights about human tutoring effectiveness from vicarious learning. Cognitive Science, 32(2), 301–341. http://dx.doi.org/10.1080/03640210701863396
Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., Orsmond, P., Gomez, S., & Park, J. (2012). The use of video technology for providing feedback to students: Can it enhance the feedback experience for staff and students? Computers and Education, 58(1), 386-396. http://dx.doi.org/10.1016/j.compedu.2011.08.025
Evans, C. (2012). Assessment feedback: We can do better. Reflecting Education, 8(1), 1-9. Retrieved from http://www.reflectingeducation.net/index.php/reflecting/article/view/102/107.
Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219-245. http://dx.doi.org/10.1177/1077800405284363
Gibbs, G. & Simpson, C. (2004). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, 3-31. Retrieved from http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/articles/simpson.pdf.
Hepplestone, S., Holden, G., Irwin, B., Parkin, H. J., & Thorpe, L. (2011). Using technology to encourage student engagement with feedback: A literature review. Research in Learning Technology, 19(2), 117-127. http://dx.doi.org/10.1080/21567069.2011.586677
Newmann, F. M., Wehlage, G. G. & Lamborn, S. D. (1992). The significance and sources of student engagement. In F.M. Newmann (Ed.) Student Engagement and Achievement in American Secondary Schools (pp. 2-11). New York: Teachers College, Columbia University. Retrieved from http://files.eric.ed.gov/fulltext/ED371047.pdf#page=16.
Nicol, D. J. & MacFarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. http://dx.doi.org/10.1080/03075070600572090
Nicol, D. J. & Milligan, C. (2006). Rethinking technology-supported assessment in terms of the seven principles of good feedback practice. In C. Bryan and K. Clegg (Eds.), Innovative Assessment in Higher Education. London: Taylor and Francis Group Ltd. Retrieved from http://ewds.strath.ac.uk/REAP/public/Papers/Nicol_Milligan_150905.pdf.
Nugraha, M. (2002). Triangulation of instrumentation and data source: A stronger method in assessing English language needs. K@ta, 4(2), 148-161. Retrieved from http://puslit2.petra.ac.id/ejournal/index.php/ing/article/view/15491/15483.
Orsmond, P., & Merry, S. (2011). Feedback alignment: Effective and ineffective links between tutors’ and students’ understanding of coursework feedback. Assessment and Evaluation in Higher Education, 36(2), 125–136. http://dx.doi.org/10.1080/02602930903201651
Parr, J.M. & Timperley, H. S. (2010). Feedback to writing, assessment for teaching and learning and student progress. Assessing Writing, 15(2), 68-85. http://dx.doi.org/10.1016/j.asw.2010.05.004
Pompos, L. & Yee, K. (2012, February 20). iPad Screencasting: Educreations and Explain Everything. The Chronicle of Higher Education. Retrieved from http://chronicle.com/blogs/profhacker/ipad-screencasting-educreations-and-explain-everything/38662.
Poulos, A. & Mahony, M. J. (2008). Effectiveness of feedback: The students’ perspective. Assessment and Evaluation in Higher Education, 33(2), 143-154. http://dx.doi.org/10.1080/02602930601127869
Richards, J.C. (2001). Curriculum Development in Language Teaching. Singapore: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511667220
Trowler, V. (2010). Student engagement literature review. York: The Higher Education Academy. Retrieved from https://www.heacademy.ac.uk/system/files/studentengagementliteraturereview_1.pdf.
Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written response. Assessment and Evaluation in Higher Education, 31(3), 379-394. http://dx.doi.org/10.1080/02602930500353061
Wilson, M. (2012). Students’ learning style preferences and teachers’ instructional strategies: Correlations between matched styles and academic achievement. Southeastern Regional Association of Teacher Educators, 22(1), 36-44. Retrieved from http://files.eric.ed.gov/fulltext/EJ995172.pdf.
Wilson, A. (2012). Student engagement and the role of feedback in learning. Journal of Pedagogic Development, 2(1). Retrieved from http://www.beds.ac.uk/jpd/volume-2-issue-1/student-engagement-and-the-role-of-feedback-in-learning.
Misty So-Sum WAI-COOK is a lecturer at the Centre for English Language Communication. Her research interests lie in the areas of intercultural communication (particularly speech acts and study abroad), teacher and peer feedback in language education. In particular, she has strong interest in using technology to strengthen an inside-outside class continuum in students’ learning. She has published in the Australian Review of Applied Linguistics.