Xiaohongshu Draws Line on AI After Lobster Bot Takes Off
The Rise and Regulation of Workplace AI
A curious phenomenon has been sweeping Chinese social media: professionals joke about "raising lobsters" at work. The phrase has nothing to do with aquaculture; it means adopting OpenClaw, an AI agent whose playful red lobster icon belies a serious productivity punch.
From Novelty to Necessity
Unlike standard chatbots, OpenClaw operates like a digital colleague with hands. It can:
- Navigate multiple software systems
- Clean and analyze experimental data
- Monitor medical literature around the clock
- Handle patient follow-ups automatically
"What used to take our team hours now happens before coffee cools," shares Dr. Li Wen, a Shanghai pharmacologist. "But we're learning it's not just about speed."
Efficiency Meets Ethics
The tool's capabilities come with concerns: every benefit carries a corresponding risk.
Xiaohongshu's new policy specifically prohibits AI from:
- Simulating human personalities
- Automating social interactions
- Posting without clear disclosure
"AI should assist, not impersonate," reads the platform's compliance notice.
The medical field faces particular challenges. "When an AI schedules patient consults or analyzes drug trials, who takes responsibility?" asks Beijing tech lawyer Ming Zhao. "The law sees only humans and corporations as liable parties."
The industry response includes:
- Mandatory human verification steps
- Emergency shutdown protocols
- Detailed activity logging

As OpenClaw's developers emphasize: "Think of it as a power tool, incredibly useful when used properly, with safeguards."


