
Rakuten AI Faces Backlash Over License Removal Scandal

Rakuten's AI Model Sparks Open-Source Controversy

Japan's e-commerce giant Rakuten has stumbled into a public relations crisis with its latest artificial intelligence offering. The company's Rakuten AI 3.0 model, initially promoted as a homegrown achievement, is now at the center of an open-source licensing scandal that has developers buzzing.

The Discovery That Started It All

Tech enthusiasts digging through the model's code made an uncomfortable discovery: Rakuten had quietly removed the MIT License file from DeepSeek-V3, the Chinese AI model that served as its foundation. In the world of open-source software, this is equivalent to removing the copyright notice from a book you didn't write.

"It's not that they built on existing work - everyone does that," explains Tokyo-based developer Haruto Tanaka. "But pretending it's entirely yours? That crosses a line in our community."

Public Backlash and Quick Damage Control

The backlash came swiftly across Japanese tech forums and social media. Critics highlighted two major concerns:

  • Legal questions: The MIT License explicitly requires that the original copyright and permission notice be retained in all copies or substantial portions of the software
  • Ethical concerns: Rakuten had accepted substantial government funding while presenting what appeared to be repackaged work

The company moved quickly to contain the fallout, adding a NOTICE file with proper attribution within days of the controversy emerging. Legally speaking, this patch-job brings them into compliance - but many in the open-source community remain unimpressed.
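For context, MIT-licensed derivatives must carry the upstream copyright and permission notice; a minimal NOTICE file satisfying that requirement might look like the following (Rakuten's actual file has not been published, so the wording, copyright holder, and year below are illustrative):

```text
NOTICE

This product includes software derived from DeepSeek-V3
(https://github.com/deepseek-ai/DeepSeek-V3), used under the MIT License.

Copyright (c) DeepSeek

Per the MIT License: "The above copyright notice and this permission
notice shall be included in all copies or substantial portions of
the Software."
```

The quoted sentence is the clause critics say Rakuten initially violated by shipping the model without the upstream notice.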

Bigger Questions Linger

Beyond the immediate licensing issue, this episode raises uncomfortable questions about corporate use of open-source projects:

  • Should companies receiving public funds have stricter disclosure requirements?
  • Where exactly should we draw the line between "building on" and "appropriating" community work?
  • How can open-source principles survive as big money enters the AI space?

Rakuten has declined to explain why the license was removed initially. For now, their repaired GitHub repository tells one story while their initial actions tell another - and tech watchers are paying close attention to which version wins out in public perception.

Key Points:

  • Rakuten AI 3.0 removed required license information from its base model
  • Quick corrections followed public outcry from the developer community
  • Incident highlights tension between corporate and open-source cultures in AI development


Related Articles

News

OpenAI and AWS Forge Defense Deal as Anthropic Exits Pentagon Partnership

In a major shakeup for AI in government, OpenAI has secured a deal to provide its models to the Pentagon through Amazon Web Services. This comes as rival Anthropic withdrew from government contracts over ethical concerns about military applications. The shift signals growing tensions between AI commercialization and ethical boundaries in defense technology.

March 18, 2026
AI ethics, government technology, defense contracts
News

Justice Dept. Fires Back at AI Firm Over Military Use Restrictions

The U.S. Justice Department has escalated its legal battle with AI company Anthropic, arguing the firm's attempts to restrict military use of its Claude AI system justify its 'supply chain risk' designation. Government lawyers predict the lawsuit will fail, while tech industry leaders rally behind Anthropic's ethical stance - creating a high-stakes clash between national security concerns and AI principles.

March 18, 2026
AI ethics, military technology, government contracts
News

Teens Sue Musk's AI Over Disturbing Deepfake Content

Elon Musk's xAI faces a troubling lawsuit as three Tennessee teenagers accuse its Grok chatbot of generating explicit images of minors. Court documents reveal shocking details about how these AI-created depictions circulated online, allegedly serving as 'trading tools' in encrypted groups. The case spotlights growing concerns about generative AI's potential misuse and the tech industry's responsibility to protect vulnerable users.

March 17, 2026
AI ethics, deepfake dangers, child online safety
News

Youzan Sets Record Straight on AI Controversy

Chinese tech firm Youzan has clarified its position following allegations linking its investments to an 'AI poisoning' scandal exposed during CCTV's annual consumer rights show. The company confirmed its invested firms weren't involved in developing the controversial GEO optimization system that manipulated AI search results. Youzan emphasized its commitment to ethical AI marketing practices while distancing itself from the deceptive tactics revealed in the investigation.

March 16, 2026
AI ethics, Youzan, GEO technology
News

Inside San Francisco's Secret Robot Fight Club

An underground scene is electrifying San Francisco's tech circles - humanoid robots battling it out in steel cages while VR pilots control the action remotely. Powered by Chinese hardware and AI brains, these mechanized gladiators showcase a startling fusion of technology and spectacle that's raising eyebrows about where robotics entertainment might be headed.

March 16, 2026
humanoid robots, AI ethics, underground tech
News

Google Bets on AI-Powered Animation to Clean Up Kids' YouTube

Google is taking an unconventional approach to tackling the flood of low-quality AI-generated content on YouTube Kids. The tech giant has invested $1 million in Animaj, a children's animation studio known for its high-quality productions. This marks YouTube's first direct investment in a children's content creator worldwide. The deal includes early access to Google's unreleased AI models, positioning Animaj as part of Google's solution to improve content quality rather than contribute to the problem.

March 16, 2026
YouTube, children's media, AI ethics