Confidence in GenAI: The Zero Trust Approach

Organizations face four key security challenges when it comes to generative AI (GenAI) solutions. They need to make sure the ways AI gets used meet compliance requirements. They need to ensure sensitive data isn’t accidentally exposed, and that AI models themselves aren’t used to harm the organization. And they need to give network and security teams visibility into AI platforms to manage usage effectively. Trend Vision One – Zero Trust Secure Access (ZTSA) – AI Service Access meets these security needs by bridging the gap between access control and GenAI services to keep enterprises safe.

After the initial wave of generative AI (GenAI) adoption, enterprises are starting to think about deeper questions like, “How can we get the most value out of our AI models?” “How are those models going to change?” and, crucially, “How can we secure our organization’s use of AI?”

In a way, all three are related. AI models run on data—of which there will be about 175 zettabytes worldwide by next year. According to IDC, 80% of that data will be unstructured, a digital heap of screenshots, AI prompts, and collaboration app messages, most of which can’t be adequately identified or protected by legacy tools.

Instead of leaving that “unstructured data” unanalyzed, with no value to the organization, GenAI could ingest it and put it to use. But if no one knows what the data contains, GenAI could run the risk of exposing information that should be kept private.

Meanwhile, analysts at Gartner suggest that by 2027 more than half of the GenAI models used by enterprises will be industry-specific or tailored to a particular business function—up from about one percent in 2023. This will create new opportunities for organizations to optimize operations, boost employee productivity, and reimagine digital customer experiences. Though here comes another “but”: the more GenAI tools engage with confidential or competitively sensitive operational data, the greater the risk protected information could be leaked.

More of the risks enterprises already face

The risks around AI use of unstructured and operational data are similar to those confronting enterprises today. A company training a GenAI model to help boost gross margin, for example, must feed the model various types of corporate data. If that data isn’t classified correctly, sensitive data could be disclosed or misused when the AI generates content.

Basically, businesses adopting GenAI systems face four main security challenges:

  1. Visibility: Network and security operations center (SOC) teams lack visibility into AI platforms, preventing them from monitoring or controlling usage and managing the associated risks. This has a real impact on the organization’s overall security posture.
  2. Compliance: It can be difficult to implement company-wide policies and know who within the organization is using which AI service(s).
  3. Exposure: Sensitive data can be exposed accidentally by employees interacting with GenAI services or by the GenAI itself through an unauthenticated service response that results in improper data being provided to end users.
  4. Manipulation: Bad actors may exploit GenAI models with inputs crafted to trigger unintended actions or achieve a malicious objective (prompt injection attacks). Examples include jailbreaking/model duping, virtualization/role-playing, and sidestepping.

The zero trust approach provides an excellent framework for addressing security concerns while still allowing enterprises to take full advantage of GenAI as it evolves. ZTSA – AI Service Access makes it easy to apply by providing a cloud-native platform that protects any user accessing public or private GenAI services throughout an organization.

Closing the gap between GenAI and secure access

Trend Vision One ZTSA – AI Service Access enables zero trust access control for public and private GenAI services. It can monitor AI usage and inspect GenAI prompts and responses—identifying, filtering and analyzing AI content to avoid potential sensitive data leakage or unsecured outputs in public and private cloud environments.

Read More HERE