Bringing Your Own AI to Work Increases Risks of Cyberattacks

The rapid adoption of generative AI tools like ChatGPT is putting significant pressure on chief information security officers (CISOs), as employees increasingly test these tools in the workplace.

A recent survey found that while 94% of CISOs are concerned about third-party cybersecurity threats, and 17% rank them a top priority, only 3% have already implemented a third-party cyber risk management solution. Another 33% plan to deploy one within the year.

Research from security risk management software firm Panorays emphasizes that internal threats are exacerbated when employees use their organization’s network to experiment with generative AI and other AI tools.

The study indicates that 65% of CISOs anticipate an increase in their third-party cyber risk management budgets, with 40% expecting an increase of 1% to 10% this year. Additionally, CISOs at very large enterprises (73%) are notably more concerned about third-party cybersecurity threats than their counterparts at mid-sized enterprises (47%). Only 7% of CISOs reported having no concerns at all.

Panorays CEO Matan Or-El noted that while CISOs recognize the threat posed by third-party cybersecurity vulnerabilities, there is often a gap between this awareness and the implementation of proactive measures. He stressed the importance of equipping CISOs with the tools to swiftly analyze and address these risks, especially as malicious actors continue to exploit AI technology for breaches and disruptions.

Overlooked Challenges Escalating Cybersecurity Risks

One of the main challenges CISOs face in addressing third-party risk management is adapting to new regulations, which 20% of surveyed CISOs identified as a significant issue.

While many CISOs are optimistic that AI solutions can enhance third-party security management, some cybersecurity experts argue that AI technology is still evolving and may not yet provide a reliable solution.

Other key challenges include:

  • Communicating the business impact of third-party risk management: 19%
  • Insufficient resources to manage risk in an expanding supply chain: 18%
  • Increasing AI-related third-party breaches: 17%
  • Lack of visibility into Shadow IT usage: 16%
  • Difficulty in prioritizing risk assessment efforts based on criticality: 10%

Or-El emphasized the importance of addressing regulatory changes and rising third-party cyber risks. He noted that despite challenges such as resource constraints and the growing frequency of AI-related breaches, increased budget allocation for cyber risk management is a positive and necessary development.

The Importance of Reducing Third-Party Security Risks

Jasson Casey, CEO of cybersecurity firm Beyond Identity, highlighted that while AI tools can enhance productivity, they also pose significant security risks. These tools can be exploited to access sensitive information or serve as entry points for cyberattacks.

“The probabilistic nature of AI models can sometimes lead them to bypass security measures, underscoring the need for robust security practices and AI tools that emphasize privacy and data protection,” he told TechNewsWorld.

Casey also pointed out that unauthorized use of AI tools, a form of Shadow IT, can seriously undermine cybersecurity efforts. It increases the risk of data breaches and complicates incident response and compliance.
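Visibility is one place where security teams can act on this directly. As a rough illustration only, the sketch below scans a CSV-formatted web-proxy log for requests to a handful of well-known public AI service domains; the log layout, the domain list, and the find_shadow_ai_usage() helper are assumptions made for the example, not a reference to any specific product.

```python
# Hypothetical sketch: surface shadow AI usage by scanning a web-proxy log
# for requests to well-known public generative AI endpoints.
# The log format and domain list are illustrative assumptions.

import csv
from collections import Counter

# Domains commonly associated with public generative AI services.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai_usage(proxy_log_path: str) -> Counter:
    """Count requests per user to known AI domains in a CSV proxy log.

    Assumes rows of the form: timestamp,username,destination_host,url.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 3:
                continue  # skip malformed lines
            user, host = row[1], row[2].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[user] += 1
    return hits

if __name__ == "__main__":
    for user, count in find_shadow_ai_usage("proxy.log").most_common():
        print(f"{user}: {count} requests to public AI services")
```

A report like this does not block anything on its own, but it gives CISOs a starting point for the transparency and policy conversations Casey describes next.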

“To address the challenges of Shadow IT, organizations should promote transparency, offer secure alternatives to popular AI tools, and implement clear policies that regulate the use of AI within the company,” he advised.

By tackling the root causes of Shadow IT, such as the lack of available, approved tools that meet employee needs, organizations can better manage risks. CISOs should ensure that secure, approved AI solutions are provided to minimize information leakage.

Offering internal AI tools that prioritize privacy and data integrity can reduce reliance on external, potentially insecure applications. Casey emphasized that cultivating a security-aware culture and ensuring alignment of AI tool usage with organizational policies are essential steps in managing the risks associated with Shadow IT.

Balancing Innovation and Security

While the concept of balancing innovation with security may seem straightforward, implementing it is a significant challenge for CISOs. One of the major difficulties they face is keeping up with the rapid pace of technological advancements and the evolving tactics of cyber adversaries.

“Striking the right balance between fostering innovation and maintaining robust security measures, particularly with the rise of AI technologies and Shadow IT, demands continuous vigilance and adaptability,” Casey explained. “Additionally, addressing security fatigue among employees and promoting a proactive security stance are crucial challenges.”

Sectors that benefit most from data analysis, automation, and improved decision-making—such as finance, healthcare, and technology—are seeing the highest increases in the use of generative AI and other AI tools.

“This growth underscores the need for a deeper understanding of both the advantages and risks associated with AI. Organizations must proactively adopt secure and ethical AI practices,” he added.

Mitigating Risks of Shadow IT Exposure

Casey emphasized the importance of AI-focused security training for IT leaders. Employees should understand that the information they enter into AI tools may end up influencing the underlying models.

Organizations should implement phishing-resistant authentication methods and shift from traditional phishing training to educating employees on the secure use of AI tools. This approach will help protect against inadvertent data breaches and strengthen defenses against third-party cyber threats.

CISOs should also develop and regularly update dynamic policies that address the evolving risks of AI tools. These policies should restrict the input of confidential and proprietary information into public AI services to reduce the risk of data exposure.
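One hedged way to put such a policy into practice, assuming prompts pass through an internally controlled gateway, is a pre-submission check that blocks obviously sensitive material before it reaches a public AI service. The SENSITIVE_PATTERNS list and the check_prompt() helper below are illustrative assumptions, not part of any vendor's API.

```python
# Hypothetical sketch: block prompts containing sensitive material
# before they are sent to a public AI service.

import re

SENSITIVE_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal marking": re.compile(r"\b(?:CONFIDENTIAL|PROPRIETARY)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_prompt(prompt: str) -> None:
    violations = check_prompt(prompt)
    if violations:
        # Block the request and log it for review instead of sending it out.
        print(f"Blocked: prompt appears to contain {', '.join(violations)}")
        return
    print("Prompt cleared for submission to the approved AI service")

if __name__ == "__main__":
    submit_prompt("Summarize this CONFIDENTIAL roadmap for Q3")
```

Pattern matching of this kind is only a coarse safety net; the point of the sketch is that a written policy becomes far easier to enforce once prompts flow through a control point the organization owns.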

“Policies must be adaptable and frequently reviewed to stay effective against new threats,” Casey noted. “By understanding and addressing potential misuse of AI, including vulnerabilities like jailbreaks, CISOs can better protect their organizations from emerging risks.”
