An oral assessment initiative at Cornell University has been well-received as higher education struggles with AI-related integrity issues.
At some point over the past century, a classic academic tradition was abandoned.
Fast forward past the creation of the calculator, personal computers, the internet, and ubiquitous digital learning environments.
At a time when so many college students are relying on the latest version of ChatGPT for help, a Cornell University professor found a solution so old that it was new again: oral exams.
“Academic integrity was always a major concern, but the bar to violating it was just so low,” Christopher Schaffer, who teaches biomedical engineering, told The Epoch Times.
He said a return to more personal engagement in the classroom benefits both students and teachers.
“It’s about fairness and assessment,” he said, “not just cracking down on cheating.”
Schaffer developed the idea over three years, comparing students’ biomedical engineering assignment results with ChatGPT’s.
Between 2022 and 2025, new versions of the generative artificial intelligence (AI) tool improved to the point that it consistently outperformed all students. That caused some alarm, the professor said. He added the oral exam requirement to his five-credit, 300-level course ahead of last semester.
Restoring the Circle of Learning
Schaffer’s class provides instruction and labs in which students learn how to design electronic medical devices that operate with signals. Six take-home assignments are still a course requirement, but instead of just submitting a paper explaining a solution to a problem, students must defend their research, concepts, and applications in a 20-minute discussion.
The assignments allow for some early collaboration with others and the use of AI as a starting point to identify sources of information, but all students must explain what they know and how they know it to a professor or teaching assistant. All the written materials used in their research are submitted as well.
For example, Schaffer explained, students might be assigned to design a circuit for a sensor that detects eyelid spasms, including an explanation of the algorithm that processes the signal. They can provide a diagram to illustrate their code, but the rest of the assessment is explained live, out loud.
None of the problems assigned has just one correct design or answer. The technology the students are learning about has “trade-offs and alternatives,” Schaffer said.
The completed oral assessments are scored on a scale from one (unsatisfactory) to four (excellent).
In addition to ensuring academic integrity, this process of researching, preparing, and rehearsing—a circle of learning—goes a long way in helping students learn more and build confidence. Many students have limited experience in public speaking. Those who struggle because they are nervous are allowed a do-over, Schaffer said.
“Broadly speaking, it went really well,” he said, noting that he was pleased with the class’s performance and positive feedback from students, another professor, and graduate students who helped with this initiative. “The vast majority really appreciate in-the-moment feedback on their understanding.”
These days, oral assessments are mostly limited to learning exercises in some classes or dissertations for doctoral candidates in certain fields, Schaffer said. But he said he believes that interest in this method will grow across higher education, in both STEM (science, technology, engineering, and math) and the humanities. Other faculty members and staff told him they support the concept and may eventually follow suit.