Abstract: This study explores the potential of large language models (LLMs), specifically GPT-4 and Gemini, in generating teaching cases for information systems (IS) courses. A unique prompt for writing three different types of teaching cases (a descriptive case, a normative case, and a project-based case) on the same IS topic, the introduction of blockchain technology in an insurance company, was developed and submitted to each LLM. The teaching cases generated by each LLM were assessed using subjective content evaluation measures (relevance and accuracy, complexity and depth, structure and coherence, and creativity) as well as objective readability measures (Automated Readability Index, Coleman-Liau Index, Flesch-Kincaid Grade Level, Gunning Fog Index, Linsear Write Index, and SMOG Index). The findings suggest that while both LLMs perform well on the objective measures, GPT-4 outperforms Gemini on the subjective measures, indicating a superior ability to create content that is more relevant, complex, structured, coherent, and creative. This research provides initial empirical evidence of the promise of LLMs in enhancing IS education while also acknowledging the need for careful proofreading and further research to optimize their use.

Keywords: Generative AI, Gemini, GPT-4, Large language model (LLM), Teaching case

Download This Article: JISE2024v35n3pp390-407.pdf

Recommended Citation: Lang, G., Triantoro, T., & Sharp, J. H. (2024). Large Language Models as AI-Powered Educational Assistants: Comparing GPT-4 and Gemini for Writing Teaching Cases. Journal of Information Systems Education, 35(3), 390-407. https://doi.org/10.62273/YCIJ6454
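
Note (illustrative, not part of the article): the six objective readability measures named in the abstract are standard formulas and can be computed with the open-source textstat Python package. The sketch below shows one possible way to score a generated teaching case; the sample text and variable names are hypothetical.

    # Minimal sketch, assuming the generated teaching case is available as plain text.
    import textstat

    # Hypothetical placeholder for the text of one LLM-generated teaching case.
    case_text = (
        "A regional insurance company is evaluating blockchain technology "
        "to streamline claims processing and improve fraud detection."
    )

    # Compute the six readability indices listed in the abstract.
    readability_scores = {
        "Automated Readability Index": textstat.automated_readability_index(case_text),
        "Coleman-Liau Index": textstat.coleman_liau_index(case_text),
        "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(case_text),
        "Gunning Fog Index": textstat.gunning_fog(case_text),
        "Linsear Write Index": textstat.linsear_write_formula(case_text),
        "SMOG Index": textstat.smog_index(case_text),
    }

    for measure, score in readability_scores.items():
        print(f"{measure}: {score:.2f}")

Each index estimates the approximate grade level required to read the text, which is why the abstract groups them as objective measures, in contrast to the subjective content ratings.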