Student Demands Refund After Professor Uses AI for Lectures

A controversy has erupted at Northeastern University after business student Ella Stapleton discovered her professor used artificial intelligence to create lecture materials. The incident has reignited debates about academic integrity and AI's role in education.

While reviewing course materials, Stapleton noticed telltale signs of AI generation: ChatGPT references, unusual spelling errors, and oddly proportioned images. "He repeatedly told us not to use AI," Stapleton told reporters, "but he himself was using it." She filed formal complaints with university administrators and demanded an $8,000 tuition refund, which the school ultimately denied.

Image source note: Image generated by AI, licensed through Midjourney service provider.

The case gained attention after being reported by The New York Times, revealing sharp divisions among educators. Paul Shovlin, an English professor and AI researcher at Ohio University, dismissed student concerns as "absurd," arguing there are no universal standards for AI use in classrooms.

However, new research from Duke University suggests workplace AI users face social stigma. The study found colleagues often judge those who rely on automation tools, despite proven efficiency gains. This social cost appears to be influencing perceptions in academic settings as well.

Professor Rick Arrowood admitted using AI to update his lectures but acknowledged failing to review the output properly. "Looking back," he said, "I wish I had examined the materials more carefully." Arrowood now advocates for greater transparency when educators use AI tools and hopes his experience serves as a cautionary tale.

As universities grapple with evolving technology policies, this incident highlights growing tensions between innovation and traditional academic values. How should institutions balance efficiency with educational integrity? The answer remains unclear as both students and faculty navigate this new terrain.

Key Points

  1. A Northeastern University student identified AI-generated content in her professor's lectures and requested an $8,000 tuition refund
  2. Educators remain divided on whether using AI for teaching materials constitutes misconduct
  3. Research shows workplace AI users often face negative social perceptions despite productivity benefits
  4. The involved professor admitted insufficient review of AI-generated content and now advocates for transparency

