Rising Concerns Over AI Use in Academic Settings
Allegations have recently surfaced of widespread academic dishonesty during a midterm exam at Yonsei University, where students in an online (non-face-to-face) course allegedly used generative AI tools such as ChatGPT to answer exam questions.
The course in question covers the operating and development principles of generative AI. It is a large lecture with more than 600 enrolled students, delivered through online video lectures. The midterm exam was administered online, with each student required to complete an answer sheet individually, and many of the questions asked students to explain basic AI-related concepts.
To deter cheating, Professor A of Yonsei University's Department of Artificial Intelligence required students to record and submit a video showing their computer screen, hands, and face throughout the test. Even so, after reviewing the submitted videos with teaching assistants, Professor A identified multiple cases in which students deliberately adjusted their camera angles or looked away from the screen, raising suspicions that they were searching the internet or consulting AI tools for answers during the exam.
According to Professor A, more than 50 students are believed to have cheated. In a post on the school's online bulletin board, Professor A stated that students who confessed would have their midterm exam scores marked as zero, while those who denied the allegations could face suspension, and emphasized the need to address the issue decisively.
Subsequently, more than 40 students contacted Professor A to admit wrongdoing, many stating that they had used AI to draft their answers.
Evolution of Cheating Techniques
During the COVID-19 pandemic in 2020, several universities, including Yonsei, Sogang, and Inha, uncovered collective cheating on online midterm exams. At the time, cheating mostly meant sharing answers or bringing in crib sheets (known in Korea as "cunning notes"). Recent reports, however, indicate that students are now using AI to look up answers in real time.
Professor Lee Kyung-jeon of Kyung Hee University's Department of Big Data Applications noted that ChatGPT can produce an answer within 2–3 seconds of a question being entered, which has made cheating significantly easier.
A survey by the Korea Research Institute for Vocational Education and Training found that 91.7% of 726 university students surveyed (enrolled in four- to six-year programs) had used AI for assignments or research last year. Yet according to the Korean Council for University Education, 71.1% of 131 universities nationwide have yet to establish guidelines on AI use.
Global Responses to AI-Assisted Cheating
In response to similar concerns, U.S. universities have begun introducing handwritten, essay-style evaluations in which students write their answers in "blue books" (blue-covered lined notebooks) to prevent AI-assisted cheating. Institutions such as UC Berkeley and Carnegie Mellon University prohibit the use of AI for drafting exam outlines or assignments.
This shift highlights a growing awareness of the challenges posed by AI in academic settings. As technology continues to evolve, educational institutions must adapt their policies and practices to maintain academic integrity.
Ongoing Challenges and Future Implications
The use of AI in academic assessments presents both opportunities and challenges. While it can enhance learning and provide immediate feedback, it also raises ethical and practical concerns about fairness and originality.
Educational institutions must develop comprehensive strategies to address these issues. This includes creating clear guidelines for AI use, investing in detection technologies, and fostering a culture of academic honesty among students.
As the landscape of education continues to change, it is crucial for universities to stay ahead of emerging trends and ensure that academic standards remain high. By doing so, they can protect the value of education and promote a fair and equitable learning environment for all students.
