New York Moves to Ban AI Doctors and Lawyers

When your chatbot starts diagnosing illnesses or drafting legal contracts, regulators take notice. New York State is advancing groundbreaking legislation that would prohibit AI systems from providing substantive medical or legal advice to consumers.

The Proposed Crackdown

The bill, designated S7263, targets what lawmakers call "AI impersonation" of licensed professionals. Introduced through the Senate Committee on Internet and Technology, it specifically prohibits:

  • Providing medical diagnoses or treatment recommendations without human oversight
  • Offering legal counsel beyond basic informational responses
  • Failing to disclose when users are interacting with artificial intelligence

"People deserve care from actual humans," emphasized Senator Kristen Gonzalez, referencing recent tragic cases involving minors and AI platforms. Earlier this year, Google settled lawsuits alleging that the Character.AI chatbot platform contributed to teen suicides.

What's At Stake?

The legislation introduces strict new requirements:

Mandatory Warnings: Platforms must display "clear and prominent" notices about their AI nature—no fine print allowed.

No Liability Shields: Even with warnings, companies remain responsible for harmful advice their bots provide.

User Recourse: Consumers gain explicit rights to sue over botched AI guidance.

Industry Implications

The bill signals a turning point for generative AI applications moving into regulated professions. If signed into law, companies would have just 90 days to comply, potentially forcing major changes in how chatbots operate.

The debate reflects growing concerns about balancing innovation with public protection. While AI can democratize access to information, lawmakers argue some fields require human judgment and accountability that algorithms can't provide.

Key Points:

  • New York's S7263 bill would ban substantive medical/legal advice from AI systems
  • Requires unmistakable disclosures when users interact with chatbots
  • Maintains company liability regardless of warnings displayed
  • Comes amid heightened scrutiny following tragic cases involving vulnerable users
  • Would take effect 90 days after signing if approved
