Journal of Information Systems Education (JISE)

Volume 14 Number 4, Pages 389-400

Winter 2003


How Well Do Multiple Choice Tests Evaluate Student Understanding in Computer Programming Classes?


William L. Kuechler
Mark G. Simkin

University of Nevada
Reno, NV 89557, USA

Abstract: Despite the wide diversity of formats available for constructing class examinations, there are many reasons why both university students and instructors prefer multiple-choice tests over other types of exam questions. The purpose of the present study was to examine the multiple-choice/constructed-response debate within the context of computer programming classes. This paper reports an analysis of over 150 test scores from students who answered both multiple-choice and short-answer coding questions on the same midterm examination. We found that, while student performance on the two question types was statistically correlated, scores on the coding questions explained less than half the variability in scores on the multiple-choice questions. Gender, graduate status, and university major were not significant predictors of performance. The paper also offers some caveats for interpreting our results, suggests extensions to the present work, and, perhaps most importantly in light of the weak statistical relationship we uncovered, addresses the question of whether multiple-choice tests are “good enough.”

Keywords: Multiple-choice versus essay tests, Computer programming education, Test formats, Student test performance

Download this article: JISE - Volume 14 Number 4, Page 389.pdf


Recommended Citation: Kuechler, W. L. & Simkin, M. G. (2003). How Well Do Multiple Choice Tests Evaluate Student Understanding in Computer Programming Classes? Journal of Information Systems Education, 14(4), 389-400.