ChatGPT and the rapid adoption of generative AI have pushed chief information security officers (CISOs) to the limit, with employees testing these tools in the workplace.
A survey released earlier this year found that few businesses view this threat vector seriously enough to already have a third-party security cyber risk management solution in place. While 94% of CISOs are concerned with third-party cybersecurity threats — including 17% who view it as a top priority — only 3% have already implemented a third-party cyber risk management solution at their organizations, and 33% plan to do so this year.
Security risk management software firm Panorays shed new light on the worsening network security problems workers cause. This internal threat occurs when employees use their organization’s network to experiment with generative AI and other AI tools.
According to the research, 65% of CISOs expect the third-party cyber risk management budget to increase. Of those respondents, 40% said it would increase from 1% to 10% this year. The report also revealed that CISOs at very large enterprises (73%) are more concerned about third-party cybersecurity threats than mid-size enterprises (47%). Only 7% of CISOs said they were not worried at all.
“CISOs understand the threat of third-party cybersecurity vulnerabilities, but a gap exists between this awareness and implementing proactive measures,” said Panorays CEO Matan Or-El.
He warned that empowering CISOs to fortify defenses by analyzing and addressing gaps swiftly is crucial in navigating the current cyber landscape. With the speed of AI development, bad actors will continue to leverage this technology for data breaches, operational disruptions, and more.
Overlooked Challenges Increasing Cybersecurity Risks
The top challenge CISOs see in fixing third-party risk management matters is complying with new regulations for third-party risk management, according to 20% of the CISOs responding.
A majority of CISOs are confident that AI solutions can improve third-party security management. However, other cyber experts not referenced in the Panorays report argue that AI is too nascent to provide that solution reliably.
Other challenges include:
Communicating the business influence of third-party risk management: 19%
Not enough resources to manage risk in the growing supply chain: 18%
AI-based third-party breaches increasing: 17%
No visibility to Shadow IT usage in their company: 16%
Prioritizing the risk assessment efforts based on criticality: 10%
“Confronting regulatory changes and escalating third-party cyber risks is paramount,” continued Or-El. “Despite resource constraints and rising AI-related breaches, increased budget allocation towards cyber risk management is a positive step in the right direction.”
The Importance of Reducing Third-Party Security Risks
Jasson Casey, CEO of cybersecurity firm Beyond Identity, agreed that access to AI tools can expose companies to sophisticated attacks. These tools can be manipulated to reveal proprietary information or serve as entry points for cyberthreats.
“The probabilistic nature of AI models means they can be tricked into bypassing security measures, highlighting the importance of rigorous security practices and the need for AI tools that prioritize privacy and data protection,” he told TechNewsWorld.
Casey added that Shadow IT, particularly the unauthorized use of AI tools, significantly undermines organizational cybersecurity efforts. It increases the risk of data breaches and complicates incident response and compliance efforts.
“To combat the challenges posed by shadow IT, organizations must encourage transparency, provide secure alternatives to popular AI tools, and implement strict yet adaptable policies that guide the use of AI within the enterprise,” he offered.
Organizations can better manage the risks associated with these unauthorized technologies by addressing the root causes of shadow IT, such as the lack of available, approved tools that meet employee needs. CISOs must provide secure, approved AI solutions that mitigate the risk of information leakage.
They can reduce reliance on external, less secure AI applications by offering in-house AI tools that respect privacy and data integrity. Casey noted that fostering a security-conscious culture and ensuring that all AI tool usage aligns with organizational policies are crucial steps in curbing the proliferation of shadow IT.
Balancing Innovation and Security
While that formula for fixing may sound simple, making it happen is one of the biggest obstacles CISOs face today. Among the most formidable challenges CISOs face is the rapid pace of technological advancement and the innovative tactics employed by cyber adversaries.
“Balancing the drive for innovation with the need for comprehensive security measures, especially in the face of evolving AI technologies and the shadow IT phenomenon, requires constant vigilance and adaptability. Moreover, overcoming security fatigue among employees and encouraging a proactive security posture remain significant hurdles,” Casey noted.
The most significant increases in the usage and adoption of gen AI and other AI tools are in sectors that stand to gain from data analysis, automation, and enhanced decision-making processes. These include finance, health care, and technology.
“This uptick necessitates a more nuanced understanding of AI’s benefits and risks, urging organizations to adopt secure and ethical AI practices proactively,” he said.
Mitigating Risks of Shadow IT Exposure
IT leaders must prioritize establishing AI-centric security training, according to Casey. Workers need to recognize that every interaction with AI could potentially train its core models.
By implementing phishing-resistant authentication, organizations can shift from traditional phishing security training to educating employees on the proper use of AI tools. This focus on education will form a robust defense against inadvertent data breaches and provide a good starting point for defending against third-party cyber assaults.
A worthwhile follow-up for CISOs is developing dynamic policies that account for the evolving nature of AI tools and the associated security risks. Policies must limit confidential and proprietary inputs to public AI services, mitigating the risk of exposing these details.
“Additionally, these policies should be adaptive, regularly reviewed, and updated to remain effective against new threats,” Casey pointed out. “By understanding and legislating against the misuse of AI, including potential jailbreaks, CISOs can safeguard their organizations against emerging threats.”