
What You Need To Know About AI-Generated Malware (And How to Stop It)

In recent months, the release of applications such as OpenAI’s ChatGPT has caused a global spike in the popularity of generative artificial intelligence (AI) technologies. As the value AI provides continues to be demonstrated, IT industry leaders such as Amazon Web Services (AWS) have invested heavily in developing generative AI and integrating it into their internal processes and offerings.

While generative AI creates opportunities to optimize business operations such as cloud computing, content creation, and even software development, its general availability also places a tool with almost unlimited skills and knowledge in the hands of anyone with an internet connection, including bad actors.

 

What is Generative AI?

Generative AI, in its simplest form, is an artificial intelligence system such as ChatGPT or Bard that can be used to generate content such as text, video, code, and images. Generative AI relies on a “foundation model” (FM), a machine learning (ML) algorithm trained on huge amounts of data, to produce content at the request of the user. Unlike traditional ML models, which are designed for specific tasks such as analyzing text, FMs are larger and more general-purpose, which is what allows them to perform so many functions effectively.
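To make that request/response relationship concrete, here is a minimal sketch of how a user (or a program) asks a hosted foundation model to generate content. It uses the OpenAI Python client for illustration; the model name and prompt are assumptions, and any comparable generative AI service follows the same basic pattern.

```python
# A minimal, illustrative request to a hosted foundation model.
# The model name and prompt are assumptions, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[
        {
            "role": "user",
            "content": "Summarize this quarter's sales trends in three bullet points.",
        }
    ],
)

# The FM returns generated text in the response payload.
print(response.choices[0].message.content)
```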

Among countless other functions, generative AI can quickly forecast trends, solve problems, and write complex programs that would otherwise take a human anywhere from a few minutes to a few months. Many generative AI applications are publicly available and free to use, letting users produce content quickly and effectively even when they have little knowledge of the subject.

 

Why is Generative AI a Cybersecurity Concern?

In the past, building malicious software required a high level of skill, education, and experience. In an experiment performed by Forcepoint’s Aaron Mulgrew, however, ChatGPT proved able to create malware at a level of complexity that had “previously been reserved for nation state attackers using many resources to develop each part of the overall malware”. In his experiment, Mulgrew, a self-proclaimed novice, used ChatGPT to piece together a program that stole data using steganography.

Mulgrew’s experiment demonstrates the coming commoditization of malware development. He estimated that what would once have taken a team of 5-10 people several weeks to build can now be done by one person in just a few hours. According to Mulgrew, “this is a concerning development, where the current toolset could be embarrassed by the wealth of malware we could see emerge as a result of ChatGPT.”

 

What is a Zero-Day Attack?

The program that Mulgrew developed performs what is referred to as a “zero-day attack”, a malicious event that exploits a vulnerability unknown to the developers of the affected software. In the case of Mulgrew’s malware, ChatGPT created the program in a manner that allowed it to slip past common controls such as antivirus scanning agents.

When a zero-day attack occurs, developers must scramble to detect the vulnerability and remediate the effects of the attack. With the advent of generative AI and its ability to create all-original malware, the number of zero-day attacks has the potential to increase exponentially in the coming months and years.

 

How Can You Prevent Zero-Day Attacks?

The IT world is still in the early days of mainstream generative AI, and much research is needed to determine the best way to respond to its malicious uses. The threat has already arrived, however, and companies must take action.

According to recent research from the Enterprise Strategy Group, storage systems and cloud-based data are the two most highly targeted infrastructure components in ransomware attacks. Cybercriminals are already focused on storage in the cloud, and the increasing popularity of generative AI only stands to compound this risk.

Cloud Storage Security (CSS) leverages the power of the unified, cloud-native CrowdStrike Falcon® security platform to identify ransomware, viruses, trojans, and worms, protecting businesses from advanced attacks in cloud storage. CrowdStrike Falcon is a leading enterprise threat protection platform built on advanced AI and ML.

Through an original equipment manufacturer (OEM) partnership with CrowdStrike, CSS provides customers with the CrowdStrike File Analyzer Software Development Kit (SDK) scanning engine, which uses market-leading ML technology and CrowdStrike’s massive corpus of malware samples to scan for malicious code. CrowdStrike’s File Analyzer SDK is a proven component of the CrowdStrike Falcon platform, which is powered by the CrowdStrike Security Cloud and world-class AI.
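As a simplified illustration of the pattern such products automate, the sketch below shows an AWS Lambda function that scans each newly uploaded S3 object and quarantines anything flagged as malicious. This is not the File Analyzer SDK: the scan_bytes() function is a trivial stand-in that recognizes only the standard, harmless EICAR antivirus test string, and the quarantine bucket name is hypothetical.

```python
# A simplified, hypothetical sketch of scan-on-upload for S3 objects.
# scan_bytes() detects only the EICAR test string; a production engine
# (such as an ML-based scanner) would take its place.
import boto3

s3 = boto3.client("s3")

# Standard, harmless antivirus test signature (EICAR).
EICAR_SIGNATURE = rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

QUARANTINE_BUCKET = "example-quarantine-bucket"  # hypothetical name


def scan_bytes(data: bytes) -> bool:
    """Return True if the payload looks malicious (EICAR test string only)."""
    return EICAR_SIGNATURE in data


def lambda_handler(event, context):
    # S3 put-object events list the bucket and key of each new object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        if scan_bytes(body):
            # Copy the flagged object to quarantine, then delete the original.
            s3.put_object(Bucket=QUARANTINE_BUCKET, Key=key, Body=body)
            s3.delete_object(Bucket=bucket, Key=key)
```

In practice this event-driven approach means objects are scanned before downstream applications consume them, which is how scanning engines can catch novel, AI-generated malware that signature lists alone would miss.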

 

Cloud Storage Security is an AWS Public Sector Partner with AWS Security Competency, an AWS Qualified Software offering, and an AWS Authority to Operate designation. Our antivirus solutions are available in AWS Marketplace – scan 500 GB in 30 days for free with a fully featured trial.

 

 

Tired of Reading?

Want to watch something instead?
