Majority of Companies Lack AI Risk Management Plans
In recent years, the rapid advancement of artificial intelligence (AI) has opened new opportunities for businesses, but it has also introduced significant challenges. A recent survey from Riskonnect, a risk management software company, found that 80% of companies have no dedicated plan to mitigate the risks of generative AI, including security threats such as AI-driven cyber scams.

Survey Findings
The 2024 Next-Generation Risk Report surveyed 218 risk, compliance, and resilience professionals worldwide. The findings are alarming: 24% of respondents anticipate that AI-driven cybersecurity threats—such as ransomware, phishing, and deepfakes—will significantly affect their operations within the next year. Furthermore, 72% reported that cybersecurity risks have had a significant or severe impact on their organizations, up from 47% the previous year.
Growing Concerns Over AI
As the ethical, privacy, and security implications of AI become more pronounced, the report emphasizes a growing concern among companies about AI risks. However, many organizations struggle to adapt their risk management frameworks accordingly, leaving critical vulnerabilities unaddressed. Notably, 65% of companies lack policies governing the use of generative AI by partners and suppliers, despite recognizing that third-party vendors are often prime targets for cybercriminals.
Internal Threats and Best Practices
Internal threats also pose significant challenges. Marketing expert Anthony Miyazaki points out that while generative AI is proficient at producing written content, the final output requires human oversight to ensure accuracy and persuasiveness. Moreover, using AI-generated content for websites can backfire; Google has explicitly stated that leveraging AI to manipulate search rankings could result in lower visibility, adversely affecting a company's SEO performance.
To combat these issues, businesses must implement comprehensive internal policies to protect sensitive data and comply with relevant regulations. John Scimone, Chief Security Officer at Dell Technologies, notes that the company established guiding principles before the generative AI boom to ensure its AI applications are developed in a fair, transparent, and responsible manner.
At Empathy First Media, a digital marketing agency, Vice President Ryan Doser stresses the importance of strict protocols regarding employee use of AI tools. The company prohibits the entry of sensitive client data into generative AI platforms and mandates human review of all AI-generated content. These measures are designed to enhance transparency and foster customer trust.
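A protocol like the one described—blocking sensitive client data from reaching generative AI platforms—can be enforced programmatically. The sketch below is a minimal, hypothetical illustration (the pattern names and the `submit_to_ai` function are assumptions, not part of any company's actual tooling); a production system would rely on a vetted data-loss-prevention or PII-detection library rather than a few regular expressions.

```python
import re

# Hypothetical patterns for common kinds of sensitive data; a real
# policy would use a dedicated DLP/PII detection tool instead.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_ai(prompt: str) -> str:
    """Refuse to forward a prompt that appears to contain sensitive data."""
    violations = check_prompt(prompt)
    if violations:
        raise ValueError(
            f"Prompt blocked: possible {', '.join(violations)} detected")
    # ...forward the vetted prompt to the approved AI platform here...
    return "submitted"
```

The second half of the policy—mandatory human review of AI-generated content—cannot be automated away; a gate like this only catches outbound data, not quality or accuracy issues in what the model produces.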
Conclusion
With the threat landscape evolving rapidly, it is imperative for businesses to prioritize the development of robust risk management strategies to address the challenges posed by generative AI. By doing so, organizations can better safeguard themselves against both internal and external threats.
Key Points
- 80% of companies have not developed specific plans for generative AI risks, facing potential security vulnerabilities.
- 72% of companies believe cybersecurity risks have had a significant impact, calling for enhanced risk management.
- Companies should take proactive measures to ensure the security and compliance of AI applications, guarding against both internal and external threats.

