Huawei and Zhejiang University Unveil DeepSeek-R1-Safe AI Model

At the recent Huawei Global Connect Conference, Huawei Technologies and Zhejiang University jointly introduced DeepSeek-R1-Safe, a groundbreaking foundation model built on Huawei's Ascend 1000 computing platform. This collaboration marks a significant step forward in addressing critical challenges at the intersection of AI performance and security.

A New Standard for AI Safety

Professor Ren Kui, Dean of Zhejiang University's School of Computer Science and Technology, detailed the model's innovative framework. "DeepSeek-R1-Safe represents a comprehensive approach to secure AI development," he explained. The model incorporates:

  • A high-quality secure training corpus
  • Balanced optimization techniques for security training
  • Proprietary software/hardware integration

The framework specifically targets fundamental security challenges in large-scale AI training processes.

Unprecedented Security Performance

Test results demonstrate exceptional capabilities:

  • 100% defense rate across 14 categories of harmful content, including toxic speech, politically sensitive material, and incitement to illegal activity
  • Defense success rate above 40% against jailbreak attacks
  • Comprehensive security score of 83%, outperforming comparable models by 8-15%

Notably, these security gains come with minimal performance trade-offs: on standard benchmarks (MMLU, GSM8K, C-Eval), the model loses less than 1% relative to non-secured counterparts.

Industry Implications and Open Access

Zhang Dixuan, President of Huawei's Ascend Computing Business, emphasized the company's commitment to collaborative innovation: "By open-sourcing this technology through ModelZoo, GitCode, GitHub and Gitee, we're enabling broader participation in secure AI development."

The release signals growing industry recognition of security as a foundational requirement rather than an afterthought in AI systems.

Key Points:

  • First domestic foundation model on Ascend 1000 platform
  • Achieves security-performance balance through novel framework
  • Outperforms competitors by significant margins
  • Now available through major open-source platforms

