Meituan's AI Browser Faces Code Controversy, Goes Open-Source

Meituan Responds to AI Browser Code Dispute

Guangnian Zhiwai, Meituan's innovation team behind the Tabbit AI browser, has taken decisive action following accusations of improperly using open-source code. The company confirmed it has removed controversial translation features and made the relevant code publicly available on GitHub.

The controversy erupted shortly after Tabbit's public beta launch when developers noticed striking similarities between its translation functions and the Read-Frog project. While Meituan maintains its team forked Read-Frog before it adopted GPLv3 licensing (on January 2, 2026), the company acknowledges failing to track subsequent license changes.

"We deeply respect open-source principles and creators' rights," stated a Meituan spokesperson. "Though our initial fork occurred during a license-free period, we've chosen complete transparency by open-sourcing our implementation."

The Open-Source Dilemma

This incident spotlights the tightrope walk facing tech giants racing to deploy AI products while navigating complex licensing landscapes. As AI development accelerates globally:

  • Companies face mounting pressure to release competitive products quickly
  • The developer community grows increasingly vigilant about code provenance
  • Licensing oversights can trigger public relations challenges overnight

The Tabbit case demonstrates how easily gaps in version-control and license tracking can create compliance headaches, even for experienced engineering teams.

Industry Implications

Open-source experts suggest this won't be an isolated case. "We're entering uncharted territory," observes Lin Wei, a Shanghai-based software licensing attorney. "AI's breakneck pace often collides with open-source governance requirements that assume more deliberate development cycles."

The tech community appears divided in its response:

  • Some applaud Meituan's transparency and corrective actions
  • Others argue proper license tracking should be standard practice
  • Many hope this sparks broader conversations about balancing innovation with compliance

As companies worldwide scramble to integrate AI capabilities, Meituan's experience serves as a cautionary tale about maintaining rigorous open-source oversight amid fierce competition.

Key Points:

  • Tabbit AI browser removed disputed translation features
  • Related code now fully open-sourced on GitHub
  • Incident highlights growing pains in AI/open-source intersection
  • Industry watching how companies adapt governance practices

Related Articles

News

Anthropic Drops Safety Guardrails Amid AI Arms Race

AI safety pioneer Anthropic has made a startling policy reversal, relaxing its strict safeguards to keep pace with rivals like OpenAI. The company once known for putting ethics first now prioritizes competition as it seeks billions in funding. This shift has sparked internal dissent, with security experts warning of unchecked risks.

February 26, 2026
AI Ethics · Anthropic · Tech Regulation
News

Alibaba's Qwen AI Models Dominate Global Rankings While Lunar New Year Usage Soars

Alibaba's Qwen series of AI models has taken the open-source world by storm, securing the top four spots on Hugging Face's global rankings. Meanwhile, consumer adoption skyrocketed during Lunar New Year celebrations, with daily active users jumping nearly tenfold. The models' ability to handle complex tasks through simple voice commands suggests AI assistants are moving beyond novelty status into practical everyday use.

March 2, 2026
Artificial Intelligence · Alibaba Cloud · Open Source
News

ChatGPT May Soon Offer Adult Conversations With Age Verification

OpenAI appears to be developing an adult-oriented 'Naughty Chat' mode for ChatGPT, hidden in recent Android app code. This optional feature would allow more provocative conversations when explicitly requested by users over 18. The move signals OpenAI's evolving approach to content moderation while addressing growing demand for AI companionship.

February 28, 2026
ChatGPT · OpenAI · AI Ethics
News

Anthropic Gives Back: Free Claude Max for Open Source Heroes

Anthropic is rolling out the red carpet for open source contributors with a generous new program. Maintainers of popular projects can now score six months of free access to Claude Max 20x, Anthropic's top-tier plan. The move recognizes how crucial these developers are to the tech ecosystem, offering them powerful tools to streamline code reviews and community management. Projects need at least 5,000 GitHub stars or a million monthly NPM downloads to qualify, though there's flexibility for critical infrastructure projects that don't meet these benchmarks.

February 27, 2026
Anthropic · Open Source · AI Development
News

AI Ethics Clash: Anthropic Stands Firm Against Pentagon's Demands

In a bold move highlighting the growing tension between tech ethics and military needs, AI startup Anthropic has refused the Pentagon's request for unlimited access to its technology. The company insists on establishing robust safety measures before any military deployment, despite pressure from defense officials who call their position unreasonable. This standoff raises critical questions about who should control powerful AI systems and under what terms.

February 27, 2026
AI Ethics · Military Technology · Tech Policy
News

Tencent's AI Assistant Caught Swearing in Holiday Messages

Tencent's AI assistant Yuanbao sparked outrage after generating New Year greeting images with profanity instead of festive wishes. Users reported similar incidents earlier this year where the AI responded with personal insults during coding help requests. The company apologized, calling it an 'uncommon abnormal output,' while experts warn this exposes fundamental challenges in controlling large language models.

February 25, 2026
AI Ethics · Large Language Models · Tech Controversy