
NYU Professor's 42-Cent AI Oral Exams Expose Cheating Gap

The $0.42 Solution to Academic Cheating


When NYU professors Panos Ipeirotis and Konstantinos Rizakos noticed suspiciously polished assignments in their "AI/ML Product Management" course, they didn't expect a simple oral exam would reveal such glaring knowledge gaps. Students who submitted flawless papers stumbled through basic explanations of their own work.

How AI Became the Ultimate Teaching Assistant

The professors turned this discovery into an innovative assessment method using ElevenLabs' voice AI technology. Their two-part oral exam first asked students to defend their project decisions, then randomly quizzed them on course material. Over nine days, 36 students completed the 25-minute exams at a total cost of just $15, roughly 42 cents per student and cheaper than a pizza delivery.

"At first, students complained the AI sounded like a stern professor," Ipeirotis admits. Early versions sometimes fired multiple questions simultaneously, creating confusion. After tweaking the system, the virtual examiner became more conversational while maintaining rigorous standards.

The Grading Revolution

Scoring presented another challenge. Using Claude, Gemini and ChatGPT to evaluate responses initially produced inconsistent results. "It was like having three teaching assistants who never agreed," Rizakos jokes. By having the AIs cross-check each other's assessments, they achieved remarkably consistent final grades.
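The article doesn't describe how the professors reconciled the three models' scores. One plausible approach, sketched below purely as an illustration (the function name, score scale, and tolerance are assumptions, not the authors' actual setup), is to take the median of the three grades and flag any answer where a grader deviates sharply, so it can be cross-checked again:

```python
from statistics import median

def reconcile(scores: dict[str, float], tolerance: float = 10.0) -> tuple[float, bool]:
    """Return (final_grade, needs_review) for one student answer.

    Illustrative sketch only: final_grade is the median of the graders'
    scores (0-100 assumed); needs_review is True when any grader deviates
    from that median by more than `tolerance`, signalling the assessments
    should be cross-checked again.
    """
    final = median(scores.values())
    disputed = any(abs(s - final) > tolerance for s in scores.values())
    return final, disputed

# Hypothetical scores from the three "teaching assistants":
grades = {"claude": 85, "gemini": 88, "chatgpt": 62}
final, disputed = reconcile(grades)
# The outlier score triggers the review flag, prompting another pass.
```

The median keeps one outlier model from dragging the grade, while the flag reproduces the cross-checking step the professors describe: only disputed answers need a second round of assessment.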

While 70% of students acknowledged the tests effectively measured true understanding, many found them more stressful than written exams. "You can't bluff an AI," one participant noted. "It immediately spots vague answers and asks follow-ups."

Beyond Cheat Detection

The experiment revealed unexpected benefits beyond catching academic dishonesty. "Some students clearly understood concepts but struggled to articulate them," Ipeirotis observes. "Now we know where to focus teaching efforts."

The professors believe AI oral exams could become standard practice, especially for technical courses where practical understanding matters more than polished writing. At 42 cents per test, they're also solving the eternal problem of academic budget constraints.

Key Points:

  • Written vs Verbal Discrepancy: High-scoring assignments often didn't reflect actual comprehension when tested verbally
  • Budget-Friendly Innovation: AI proctoring slashed oral exam costs from hundreds to mere dollars per class
  • Stress with Purpose: While more intense than written tests, most students recognized the method's effectiveness
  • Teaching Insights: The exams identified not just cheating but genuine learning gaps needing attention
