New York Moves to Ban AI from Giving Medical and Legal Advice


When your chatbot tries to play doctor or lawyer in New York, it might soon find itself in legal trouble. State legislators are advancing a groundbreaking bill that would put strict limits on artificial intelligence's role in professional advice.

The Proposed Restrictions

The legislation, known as S7263, targets what lawmakers call "AI impersonation" of licensed professionals. Under the bill:

  • No more Dr. Chatbot: AI systems would be barred from providing substantive medical diagnoses or treatment recommendations
  • Legal advice off-limits: Similarly, chatbots couldn't offer what constitutes actual legal counsel
  • Clear warnings required: Platforms must disclose when users are talking to AI "in a clear and prominent manner"

What gives the proposal particular teeth is its enforcement provisions. Even with warnings in place, platforms could not escape responsibility for harmful advice: users who receive bad medical or legal guidance could sue directly.

Why Now?

The push comes amid growing concerns about AI's expanding role in sensitive areas. Earlier this year, the generative AI app Character.AI settled lawsuits alleging its platform contributed to teen suicides. These tragic cases appear to have galvanized lawmakers.

"New Yorkers deserve care from real humans," said Senator Kristin Gonzalez, emphasizing public safety concerns. "We can't let innovation come at the cost of people's wellbeing."

Industry Implications

If passed, the law would take effect just three months after signing, lightning speed by legislative standards. Experts say this could force major changes:

  1. Healthcare and legal tech platforms may need to redesign their AI interfaces
  2. Disclosure requirements could alter how users perceive and trust chatbot interactions
  3. The precedent might inspire similar laws elsewhere

The bill represents perhaps the most aggressive U.S. attempt yet to draw boundaries around AI's professional roles. As one Albany insider put it: "The wild west days of chatbot advice may be ending."

Key Points:

  • New York's S7263 would bar AI systems from providing substantive medical or legal advice
  • Platforms must clearly disclose when users interact with artificial intelligence
  • Violations could lead to civil lawsuits against chatbot operators
  • Legislation follows settlements involving alleged harm from AI advice
  • If passed, law would take effect within 90 days
