AI Startup Secures $13M to Fight Rising Deepfake Threats

The Arms Race Against Digital Deception

In a significant move against the rising tide of synthetic media threats, Resemble AI has closed a $13 million funding round. The round brings the Toronto- and San Francisco-based company's total funding to $25 million as it expands its arsenal against increasingly sophisticated deepfakes.

Cutting-Edge Detection Technology

The startup's secret weapons? Detect-3B Omni - its multilingual detection model, which claims 98% accuracy across 40+ languages - and Intelligence, its contextual analysis platform. Together, these tools scan audio, video, images, and text in real time, hunting for telltale signs of digital manipulation.

"We're not just flagging fakes," explains company materials. "We're helping users understand why content was created and whether it can be trusted."

The Staggering Cost of Digital Deception

The funding comes as deepfake-related fraud reaches alarming levels. Resemble AI estimates that businesses lost $1.56 billion to synthetic media scams this year alone. Without effective countermeasures, projections suggest U.S. losses could balloon to $40 billion by 2027.

The threats span industries: financial institutions face forged voice approvals for wire transfers; corporations battle executive impersonation scams; even government officials contend with fabricated statements.

From Cloning to Cybersecurity

Founded in 2018 as a voice cloning service, Resemble AI pivoted its expertise toward security as synthetic media threats escalated. Its evolution mirrors the broader tech industry's scramble to develop defenses against the very generative AI capabilities it helped create.

The latest investment round includes backing from Google's AI Future Fund and Okta Ventures, among others, signaling strong industry confidence in detection technology's growing importance.

Key Points:

  • Funding milestone: $13M raised ($25M total) fuels global expansion
  • Detection edge: Detect-3B Omni offers multilingual deepfake spotting
  • Fraud forecast: U.S. synthetic media scams may cost $40B annually by 2027


Related Articles


Google's AI Crackdown Leaves Email Automation Users in the Cold

Google has escalated its battle against AI-powered email automation, with users of tools like OpenClaw reporting complete account suspensions. The tech giant isn't just restricting access to Gmail - entire Google accounts are being wiped out, taking years of stored data with them. Security experts warn that AI agents' unnatural behavior patterns and some users' attempts to bypass paid features have crossed Google's red lines. While developers scramble for solutions, affected users face the harsh reality of permanently lost emails, photos, and documents.

February 25, 2026
Google · Email Automation · AI Security

Microsoft Sounds Alarm on OpenClaw AI Security Risks

Microsoft warns enterprises against deploying its OpenClaw AI assistant on standard workstations due to serious security vulnerabilities. The autonomous agent's high-privilege access makes it susceptible to indirect prompt injections and skill-based malware attacks. Recent findings reveal over 42,000 exposed control panels globally, prompting Microsoft to recommend strict isolation protocols.

February 24, 2026
AI Security · Microsoft · Enterprise Technology

Google Gemini Hit by Massive AI Model Hack Attempt

Google revealed its Gemini AI chatbot suffered a sophisticated attack where hackers bombarded it with over 100,000 prompts to extract its core algorithms. Security experts warn this 'model distillation' technique could become widespread, threatening corporate AI secrets. The incident highlights growing vulnerabilities as businesses increasingly rely on customized AI systems.

February 15, 2026
AI Security · Google Gemini · Cyber Threats

OpenAI Bolsters ChatGPT Security Against Sneaky Prompt Attacks

OpenAI has rolled out two new security features for ChatGPT to combat prompt injection attacks that could trick the AI into harmful actions. The first introduces Lockdown Mode, restricting risky external interactions for enterprise users. The second labels high-risk functions with clear warnings. These additions build on existing protections while giving users more control over security trade-offs.

February 14, 2026
AI Security · ChatGPT Updates · Prompt Injection

Claude Plugins Expose Critical Security Flaw Through Calendar Invites

A newly discovered vulnerability in Claude's desktop extensions allows hackers to execute malicious code remotely through seemingly innocent Google Calendar invites. Security researchers warn this 'zero-click' attack could have devastating consequences, scoring a perfect 10/10 on the CVSS risk scale. While Anthropic shifts responsibility to users, experts argue the plugin system fails basic security expectations.

February 11, 2026
AI Security · Claude Vulnerabilities · Zero-Click Attacks

NanoClaw: The Lightweight AI Assistant That Puts Security First

Meet NanoClaw, a sleek new AI assistant built for security-conscious users. Born from OpenClaw's limitations, this innovative tool runs the Claude assistant within Apple containers for ironclad isolation. With just 8 minutes needed to grasp its codebase and unique features like WhatsApp integration, NanoClaw offers simplicity without sacrificing protection. While the tool is macOS-focused, its developers hint at Linux compatibility through Claude.

February 2, 2026
AI Security · Digital Privacy · Apple Technology