Volume 18
Abstract: Providing detailed, constructive and helpful feedback is an important contribution to effective student learning. Quality assurance is also required to ensure consistency across all students and to reduce error rates. However, with increasing workloads and student numbers, these goals are becoming more difficult to achieve. An automated feedback system, referred to as the Automated Feedback Generator (AFG), has therefore been designed and developed with the aim of providing superior quality assurance and efficiency in both assessing student assignments and providing feedback. Unlike existing automated marking and feedback software, AFG aims to allow educators to perform the entire process of student feedback generation for any assessment type. The AFG system is investigated across two introductory ICT courses: general ICT and programming. The aim is to demonstrate that AFG provides a more effective means of providing student feedback than alternative manual and automated approaches. This is achieved by comparing AFG with these alternatives and demonstrating that it offers quality control, efficiency and effectiveness benefits whilst generating consistent feedback from a student perspective. An empirical approach is employed using attitudinal data. T-tests are used to test hypotheses comparing three feedback generation approaches: AFG, manual and a more complex automated approach. The results show that feedback from AFG was perceived to be constructive and helpful, with error levels less than or equal to those for other course feedback approaches; students also found the feedback to be consistent with that produced by the more complex alternatives.

Keywords: Quality assurance, Automated assessment, ICT education

Download this article: JISE - Volume 18 Number 4, Page 491.pdf

Recommended Citation: Debuse, J., Lawley, M., & Shibl, R. (2007). The Implementation of an Automated Assessment Feedback and Quality Assurance System for ICT Courses. Journal of Information Systems Education, 18(4), 491-502.