New York Moves to Ban AI Doctors and Lawyers

When your chatbot starts diagnosing illnesses or drafting legal contracts, regulators take notice. New York State is advancing groundbreaking legislation that would prohibit AI systems from providing substantial medical or legal advice to consumers.

The Proposed Crackdown

The bill, designated S7263, targets what lawmakers call "AI impersonation" of licensed professionals. Sponsored by the Senate Committee on Internet and Technology, it specifically bans:

  • Medical diagnosis or treatment recommendations without human oversight
  • Legal counsel beyond basic informational responses
  • Failure to disclose when users are interacting with artificial intelligence

"People deserve care from actual humans," emphasized Senator Kristen Gonzalez, referencing recent tragic cases involving minors and AI platforms. Earlier this year, Google settled lawsuits alleging that the Character.AI chatbot platform contributed to teen suicides.

What's At Stake?

The legislation introduces strict new requirements:

Mandatory Warnings: Platforms must display "clear and prominent" notices about their AI nature—no fine print allowed.

No Liability Shields: Even with warnings, companies remain responsible for harmful advice their bots provide.

User Recourse: Consumers gain explicit rights to sue over botched AI guidance.

Industry Implications

The bill signals a turning point for generative AI applications moving into regulated professions. If signed into law, companies would have just 90 days to comply, potentially forcing major changes in how chatbots operate.

The debate reflects growing concerns about balancing innovation with public protection. While AI can democratize access to information, lawmakers argue some fields require human judgment and accountability that algorithms can't provide.

Key Points:

  • New York's S7263 bill would ban substantive medical/legal advice from AI systems
  • Requires unmistakable disclosures when users interact with chatbots
  • Maintains company liability regardless of warnings displayed
  • Comes amid heightened scrutiny following tragic cases involving vulnerable users
  • Would take effect 90 days after signing if approved
