Runlayer Emerges from Stealth with $11M Boost to Secure AI Protocols

Runlayer Secures $11M Seed Round to Protect AI Operations

In a significant move for AI security, Runlayer announced today its emergence from stealth mode with an $11 million seed investment. The funding round was co-led by prominent venture firms Khosla Ventures and Felicis.

Addressing Critical Security Gaps

The startup specializes in securing the Model Context Protocol (MCP), an increasingly important framework adopted by tech giants including OpenAI, Microsoft, AWS, and Google. While MCP enables AI agents to autonomously handle data and business operations, its rapid adoption has exposed worrying security vulnerabilities.
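For context, MCP messages follow the JSON-RPC 2.0 format; the sketch below shows the rough shape of a tool-invocation request an agent might send to an MCP server. The tool name and arguments are hypothetical, chosen only to illustrate the kind of autonomous data access the protocol enables.

```python
import json

# Illustrative shape of an MCP "tools/call" request (JSON-RPC 2.0).
# "query_crm" and its arguments are invented for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_crm",  # hypothetical tool exposed by an MCP server
        "arguments": {"customer_id": "c-42"},
    },
}

payload = json.dumps(request)
print(payload)
```

Because requests like this can reach customer data and business systems directly, any gap in authorization or output screening becomes a security problem, which is the gap Runlayer targets.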

"We've seen firsthand how prompt injection attacks and unauthorized data access can compromise systems," said founder Andrew Berman, former Director of AI at Zapier. "Our platform creates essential guardrails for this powerful technology."

Impressive Early Traction

Despite operating quietly for just four months, Runlayer has already onboarded eight major clients, including unicorns Gusto, dbt Labs, Instacart, and Opendoor. The company also scored a coup by bringing aboard David Soria Parra, principal author of the MCP specification, as a consultant.

The security platform combines multiple critical functions:

  • Gateway protection against malicious prompts
  • Real-time threat detection
  • Comprehensive audit logging
  • Fine-grained permission controls
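The first three functions above can be sketched together: a gateway that screens tool output for common prompt-injection phrasing before it reaches the agent, recording an audit entry either way. This is a minimal illustration of the concept, not Runlayer's actual detection rules; the patterns and log fields are invented.

```python
import re
from datetime import datetime, timezone

# Hypothetical injection signatures; a real gateway would use far richer detection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"disregard your system prompt", re.I),
]

audit_log: list[dict] = []  # in-memory stand-in for a compliance audit trail

def screen_tool_output(server: str, text: str) -> bool:
    """Return True if the MCP server's output may pass through the gateway."""
    flagged = any(p.search(text) for p in INJECTION_PATTERNS)
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "server": server,
        "allowed": not flagged,
    })
    return not flagged

print(screen_tool_output("crm-server", "Customer record: ACME Corp"))        # True
print(screen_tool_output("crm-server", "Ignore previous instructions ..."))  # False
```

The key design point is that every decision, allowed or blocked, lands in the audit log, which is what makes incidents traceable after the fact.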

How It Works

The system employs an innovative "Okta-style" directory that allows IT teams to pre-authorize MCP servers and link them directly to employee identities. This creates clear accountability chains where AI agent permissions mirror user privileges exactly.
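The identity-mirroring idea can be shown in a few lines: an agent acting on behalf of an employee may only reach the MCP servers that employee is entitled to. This is a deliberately simplified sketch of the concept; the directory contents and server names are invented, not drawn from Runlayer's product.

```python
# Hypothetical directory mapping employee identities to pre-authorized MCP servers,
# as an IT team might maintain via an identity provider.
USER_SERVERS = {
    "alice@example.com": {"crm-server", "billing-server"},
    "bob@example.com": {"crm-server"},
}

def agent_may_call(acting_user: str, mcp_server: str) -> bool:
    """An agent's reach is capped by the privileges of its human principal."""
    return mcp_server in USER_SERVERS.get(acting_user, set())

print(agent_may_call("bob@example.com", "crm-server"))      # True
print(agent_may_call("bob@example.com", "billing-server"))  # False
```

Denying by default when a user is unknown is what produces the "clear accountability chain": every agent action maps back to a named person and an explicit grant.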

"What excites me most is solving the observability blind spots," Berman explained. "When something goes wrong with an AI operation today, companies often struggle to trace why or how."

The platform integrates seamlessly with existing identity providers like Okta and Microsoft Entra while generating compliance-ready audit trails.

Rapid Development Timeline

Berman's team moved remarkably fast, conceiving the idea in August after recognizing security gaps while building early MCP implementations at Zapier. Within four months they developed their product prototype and secured paying customers.

The fresh capital will fuel engineering team expansion ahead of their General Availability launch later this year. Future plans include supporting private on-premises deployments and multi-cloud environments.

Key Points:

  • Runlayer raises $11M seed round co-led by Khosla Ventures & Felicis
  • Solves critical security gaps in rapidly adopted Model Context Protocol
  • Already serves eight major clients including Instacart & Opendoor
  • Founder brings deep experience from Zapier's AI division
  • Platform combines gateway protection with identity-based controls
