Japan's AI Ambitions Clouded by Copying Allegations

Japan's AI Showcase Sparks Transparency Debate

What was meant to be a proud moment for Japan's tech industry has turned into a cautionary tale about AI development ethics. Rakuten Group recently unveiled a 70-billion-parameter language model - developed with government support - but the celebration quickly unraveled when sharp-eyed developers spotted signs of foreign origins.

The Telltale Clues

Within hours of the model's release, open-source investigators found smoking-gun evidence in the technical architecture. The configuration files still bore the original name "DeepseekV3ForCausalLM" - a clear fingerprint from the Chinese-developed model. Rather than building from scratch as claimed, Rakuten appeared to have simply fine-tuned this existing framework with Japanese data.
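The fingerprint described above is easy to check, because models published in the Hugging Face format declare their architecture class in a `config.json` file, and fine-tuning alone does not change that field. The sketch below (file path and contents are illustrative, not taken from Rakuten's actual release) shows how an investigator might read it:

```python
import json

def get_architectures(config_path):
    """Return the architecture class names declared in a model's config.json."""
    with open(config_path) as f:
        config = json.load(f)
    # In the Hugging Face format, "architectures" names the model class
    # the weights were built for; it is inherited from the base model.
    return config.get("architectures", [])

# Illustrative config mimicking what investigators reportedly found:
# a release whose config still names the DeepSeek-V3 model class.
sample_config = {
    "architectures": ["DeepseekV3ForCausalLM"],
    "model_type": "deepseek_v3",
}
with open("config.json", "w") as f:
    json.dump(sample_config, f)

print(get_architectures("config.json"))  # ['DeepseekV3ForCausalLM']
```

A model genuinely built from scratch would declare its own architecture class here, which is why this single field carried so much evidentiary weight.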

"It's like repainting a car and claiming you engineered it," commented one developer on GitHub. "The chassis still shows the original manufacturer's marks."

Disclosure Dilemmas

The controversy centers on two critical issues:

1. Selective transparency: Rakuten's press materials vaguely referenced "integrating open-source community wisdom" without specifically acknowledging the Chinese model's foundational role. This omission struck many as disingenuous for what was marketed as a national achievement.

2. Licensing lapses: Initial releases allegedly omitted required MIT license documentation. While Rakuten later added compliance notices, critics argue this reactive approach demonstrates poor open-source stewardship.
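For context, the MIT license's central redistribution requirement is short: the original copyright notice and permission notice must accompany all copies or substantial portions of the software. A minimal compliant notice (year and holder are placeholders) looks like this:

```text
MIT License

Copyright (c) <year> <original copyright holder>

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
```

Because the obligation is so light, omitting the notice - even temporarily - is typically read less as a legal risk than as a signal of carelessness toward upstream authors.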

Industry Reactions

The AI community remains divided:

  • Purists condemn what they see as intellectual property laundering
  • Pragmatists note that model refinement is common practice globally
  • Legal experts debate whether license terms were technically violated

"This isn't just about Rakuten," observes Tokyo University AI ethics professor Kenji Sato. "It exposes systemic challenges in properly attributing AI lineage as the field moves at breakneck speed."

As of publication, Rakuten maintains its model represents significant original work while declining to address specific allegations about license file removal.

Key Points:

  • 70B parameter model developed with METI funding faces authenticity questions
  • Technical artifacts suggest Chinese Deepseek model foundation
  • Disclosure practices criticized as insufficiently transparent
  • Open-source compliance remains under scrutiny
  • Industry debate continues about ethical standards for derivative AI works

