GENAI SECURE

Protect GenAI Models

Building GenAI capabilities can introduce data security risk. GenAI Secure by Cloud Storage Security can help minimize it.

Generative artificial intelligence (GenAI) presents an immense opportunity to automate time-consuming tasks, build faster, and scale business. Yet the value of these benefits can overshadow real challenges: ensuring the integrity of the data used by large language models (LLMs) in GenAI applications, and understanding how to take advantage of GenAI technologies to enhance security. Cloud Storage Security built GenAI Secure to address these challenges. Let’s dive deeper below.


Understanding Model Integrity

What are some of the risks? In a nutshell: malware infection and sensitive data loss.

Ensuring the integrity of the data used by LLMs is one of the most common GenAI security challenges. If incoming data contains malicious code, or if the chat or text output generated by a GenAI chatbot or application passes along sensitive data, businesses can unwittingly put themselves at risk simply by adopting new technology.


GenAI Secure by Cloud Storage Security

Find and isolate malicious files. Safeguard sensitive data. Use Amazon Bedrock to enhance security.

GenAI Secure by Cloud Storage Security helps businesses protect custom foundation models (FMs) and filter sensitive data out of text output from GenAI chatbots. The solution leverages Amazon Bedrock functionality to write custom regular expression (RegEx) policies, provide forensic capabilities for malicious or sensitive data, and generate remediation suggestions.

Automate and Enhance Security with Amazon Bedrock

Data inputs can contain malicious code, and data outputs can pass on sensitive information when they shouldn’t. Leverage Amazon Bedrock within GenAI Secure to ensure data integrity.


Protect Output with Custom RegEx

Prevent sensitive data in user inputs and FM responses from being exposed
  • CHALLENGE: You need to write specific policies to prevent data loss, but RegEx policy syntax can be complex and dense, making it difficult to produce.
  • SOLUTION: Leverage the GenAI Secure-Amazon Bedrock integration to write the policy for you. Enter a simple text prompt describing the pattern or text to identify, and the exact rule value you need is created for you. (GenAI Secure also comes with predefined policies for common personally identifiable information items like Social Security numbers and credit card numbers.) A simple sketch of this style of output filtering follows below.
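
For illustration only, here is a minimal sketch of the kind of RegEx-based output filtering a custom or predefined policy performs: two placeholder patterns (roughly corresponding to Social Security and credit card numbers) are applied to a foundation model response, and any matches are redacted. The patterns and function names are assumptions for this example, not GenAI Secure's actual policy engine.

```python
import re

# Placeholder patterns, similar in spirit to the predefined PII policies
# (Social Security numbers, credit card numbers); the product's own
# patterns and policy engine may differ.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_sensitive(text: str) -> str:
    """Redact any matched PII in a foundation model response."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

# Example: a chatbot response that would otherwise leak an SSN.
print(redact_sensitive("Sure! The SSN on file is 123-45-6789."))
# -> Sure! The SSN on file is [REDACTED SSN].
```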

Forensic Analysis and Malware Remediation

Analyze data flagged as suspicious and take action
  • CHALLENGE: Suspicious or malicious files can be cryptic or require additional analysis to understand what the malware does and its level of impact. Using tools like Google search or VirusTotal to get context can slow down an investigation, require data transfers, or increase the potential for manual error.
  • SOLUTION: Analyze files using the GenAI Secure-Bedrock integration to see a content breakdown and remediation suggestions; a sketch of this workflow follows below. There is no need to worry about data transfers over the public internet or data being sent to a third party, because GenAI Secure is built on a Bedrock instance running in your AWS account.
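
As an illustration of the forensics workflow, the sketch below asks a Bedrock text model (via the standard boto3 Converse API) to explain a scan verdict in plain language and suggest remediation steps. The model ID, region, and finding fields are placeholders rather than GenAI Secure's actual implementation; the point is that the call runs against Bedrock in your own AWS account.

```python
import boto3

# The Bedrock runtime client is created in your own AWS account and region,
# so the finding never leaves your environment or goes to a third party.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical scan finding; in practice this comes from the scanning engines.
finding = {
    "file": "s3://training-data/archive.zip",
    "verdict": "Trojan.GenericKD.12345 (heuristic match)",
}

prompt = (
    "A file scan produced the following finding:\n"
    f"File: {finding['file']}\n"
    f"Verdict: {finding['verdict']}\n"
    "Explain in plain language what this type of malware typically does "
    "and suggest remediation steps."
)

# Model ID is a placeholder; use any text model enabled in your account.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)

print(response["output"]["message"]["content"][0]["text"])
```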

Understanding the Threat to AI

We are living in the "wild west" of malware, data loss, and AI: a frontier defined by great opportunity and increasing vulnerabilities.

As enterprises build out GenAI capabilities, one of the biggest security challenges they face is protecting vulnerable AI models and ensuring the integrity of the external datasets used by large language models (LLMs) in their GenAI applications.

Learn more about the threat associated with GenAI in How to Protect GenAI Models.


How It Works

Completely Self-Contained in Your Environment

GenAI Secure provides a quick and easy way to build security into your GenAI application. The solution is built on a modern serverless architecture, and the Bedrock integration runs in your AWS account, ensuring data never leaves your environment.

Rapidly Protects AI Models, Data, and Systems

Deployment is handled via an AWS CloudFormation template or HashiCorp Terraform module, making it a breeze to stand up and begin scanning so you can secure AI data rapidly.
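
For example, if you prefer to script the CloudFormation route, a stack launch boils down to a single create_stack call. The stack name, template URL, and parameters below are placeholders; use the template (or Terraform module) supplied by Cloud Storage Security.

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Stack name, template URL, and parameters are placeholders; substitute the
# values from the template provided by Cloud Storage Security.
cloudformation.create_stack(
    StackName="genai-secure",
    TemplateURL="https://example-bucket.s3.amazonaws.com/genai-secure.template.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Parameters=[
        {"ParameterKey": "AdminEmail", "ParameterValue": "security@example.com"},
    ],
)

# Wait for the serverless stack to finish deploying before starting a scan.
cloudformation.get_waiter("stack_create_complete").wait(StackName="genai-secure")
```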

Uses Industry-leading Scanning Engines

Multiple engines scan datasets in AWS storage. Scan existing data on demand or on a schedule. Efficiently scan new files before they are written or when they are dropped into storage, ensuring AI remains safe as data outputs grow and AI use becomes more widespread across your organization.
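
The "scan on drop" pattern is typically event-driven. The sketch below shows the general shape of an AWS Lambda handler reacting to an S3 ObjectCreated notification; scan_object is a hypothetical stand-in for handing the file to the scanning engines, since GenAI Secure's internal interface is not public.

```python
import urllib.parse

def scan_object(bucket: str, key: str) -> None:
    # Hypothetical stand-in for submitting the object to the scanning engines.
    print(f"Submitting s3://{bucket}/{key} for scanning")

def lambda_handler(event, context):
    """Scan new files as they are dropped into storage via S3 event notifications."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in S3 event notifications are URL-encoded.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        scan_object(bucket, key)
    return {"status": "ok"}
```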

Most File Types; No Size Limits

GenAI Secure scans a wide variety of file types with no file size limits to ensure complete coverage and security of AI and ML-related files, datasets, and models.

Model and Data Protection at the Push of a Button

Automate and manage protection from an easy-to-use console. Tag, delete, or quarantine infected AI/ML files or files that contain sensitive data based on user-defined policies.
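
As a rough illustration, a quarantine action can be expressed with ordinary S3 operations, as in the sketch below. Bucket names and tag values are placeholders, and in GenAI Secure these actions are driven by user-defined policies in the console rather than hand-written code.

```python
import boto3

s3 = boto3.client("s3")

def quarantine_object(bucket: str, key: str, quarantine_bucket: str) -> None:
    """Tag an infected object, copy it to a quarantine bucket, then delete the original."""
    # Tag the object so downstream consumers can see the scan verdict.
    s3.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={"TagSet": [{"Key": "scan-result", "Value": "infected"}]},
    )
    # Move the object out of the working bucket.
    s3.copy_object(
        Bucket=quarantine_bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
    )
    s3.delete_object(Bucket=bucket, Key=key)
```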


Expert Design for AI Security

GenAI Secure allows customers to protect both model data and output from GenAI applications, guarding against risks such as data poisoning and data loss, without disrupting workflows.

Use GenAI Secure to enhance data security for foundation model (FM) services and models like Amazon Bedrock and Amazon Titan, as well as machine learning (ML) platforms like Hugging Face and Amazon SageMaker. As long as your GenAI application uses AWS storage services such as Amazon S3 to deliver or serve GenAI application data, you can use GenAI Secure.

Frequently Asked Questions


How can I prevent sensitive information from leaking through my GenAI application?

Leveraging GenAI Secure by Cloud Storage Security, you can easily scan and secure all data related to AI and ML models within your AWS storage. Scanning prevents malicious code in AI data from escaping to downstream uses and creating security breaches, and it helps ensure sensitive data is not leveraged for malicious purposes.

Do AI models pose a risk to downstream data use?

Yes. If left unscanned and unchecked, AI inputs such as external data can harbor malicious code and files that threaten downstream use. AI models themselves can leak data beyond the organization or create outputs that jeopardize the business across departmental silos.

What are the dangers of AI models and their outputs?

While seemingly harmless, AI models can serve as vectors for malicious actors to infiltrate business processes and expose customer or private business data. Models can learn from the data around them, intentionally or not, and push content to unauthorized viewers. Outputs can also turn out to be malicious, creating risk for downstream users across the business and for any interlinked applications. Securing GenAI outputs is critical to keeping businesses secure and data safe from intrusion and abuse. Chat with our team today to learn how to secure your AI applications and outputs and protect your business.

What storage services does CSS support?

CSS automates data security for Amazon S3, Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), and Amazon FSx.

Data Security for GenAI Made Easy

Safeguard your GenAI and Machine Learning systems with effortless precision: seamless virus and malware scanning plus data loss prevention tailored for simplicity and security across models, datasets, and more.
