How enterprises can effectively address the security implications of generative AI

Study Finds Enterprises Struggling with Security Implications of Employee Generative AI Use

In a recent study, cloud-native network detection and response (NDR) firm ExtraHop found that enterprises are struggling to manage the security risks associated with employees’ use of generative AI tools.

The Generative AI Tipping Point Report

ExtraHop’s research report, titled The Generative AI Tipping Point, examines how organizations are handling the growing prevalence of generative AI technology in the workplace. It reveals a significant cognitive dissonance among IT and security leaders when it comes to addressing the technology’s security risks.

Concerns of IT and Security Leaders

  • 73 percent of IT and security leaders admitted that their employees frequently use generative AI tools or large language models (LLMs) at work.
  • The majority of these leaders were uncertain how to effectively address the associated security risks.
  • Respondents were more concerned about inaccurate or nonsensical responses (40 percent) than about critical security issues such as exposure of customer and employee personally identifiable information (PII) (36 percent) or financial loss (25 percent).

Ineffectiveness of Generative AI Bans

The study revealed that while 32 percent of organizations had prohibited the use of generative AI tools, only five percent reported that employees never used them, indicating that bans alone are not enough to prevent usage.

Desire for Government Involvement

  • 90 percent of respondents expressed the need for government involvement in addressing the security risks associated with generative AI use.
  • 60 percent advocated for mandatory regulations, while 30 percent supported government standards for businesses to adopt voluntarily.

Gaps in Basic Security Practices

The study revealed gaps in basic security practices among organizations:

  • Fewer than half had invested in technology to monitor generative AI use (a minimal sketch of what such monitoring can look like appears below).
  • Only 46 percent had established policies governing acceptable use.
  • Just 42 percent provided training on the safe use of generative AI tools.

Although respondents expressed confidence in their current security infrastructure, these findings highlight the need for organizations to strengthen their security measures.
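
Closing the monitoring gap does not have to wait for new tooling. As a minimal sketch, assuming whitespace-delimited proxy logs in a "timestamp user destination-host" format and a hypothetical, deliberately incomplete watchlist of generative AI domains (neither of which comes from the ExtraHop report), a security team could get baseline visibility into who is reaching these services with a short script like the following:

    # Minimal sketch: flag employee traffic to generative AI services.
    # Assumptions (illustrative, not from the ExtraHop report):
    #   - proxy logs hold one "timestamp user destination-host" entry per line
    #   - GENAI_DOMAINS is a hypothetical, incomplete watchlist
    from collections import Counter

    GENAI_DOMAINS = {
        "chat.openai.com",
        "api.openai.com",
        "bard.google.com",
        "claude.ai",
    }

    def flag_genai_usage(log_path: str) -> Counter:
        """Count per-user requests to watchlisted generative AI hosts."""
        hits = Counter()
        with open(log_path) as log:
            for line in log:
                parts = line.split()
                if len(parts) != 3:
                    continue  # skip malformed or differently shaped entries
                _timestamp, user, host = parts
                if host in GENAI_DOMAINS:
                    hits[user] += 1
        return hits

    if __name__ == "__main__":
        for user, count in flag_genai_usage("proxy.log").most_common():
            print(f"{user}: {count} generative AI request(s)")

Even a crude tally like this tells an organization whether a ban is being respected and where acceptable-use training should be targeted; dedicated network detection and response tooling applies the same idea more broadly at the network level.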

Conclusion

The rapid adoption of generative AI tools underscores the importance of understanding how employees use them in order to identify potential security vulnerabilities. Business leaders are encouraged to confront the security risks associated with generative AI use and to implement guidelines and training that ensure these tools are used safely and responsibly.

For more detailed insights and findings, the full report is available from ExtraHop.
