Back

Blogs

Manipulation of Generative AI

View All

Case Studies

Securing Cyber-Physical Systems for a Defence Manufacturer

View All

Upcoming Events

LEMA Summit 2024

View All

Webinars

Thoughts

Manipulation of Generative AI

7th Aug, 2024

Generative AI has been revolutionary across all industries by automating content creation, enhancing productivity, writing code and analysing data as well as much more.

Cyber criminals are also making use of generative AI as a powerful new tool to enhance their malicious actions. Misuse and exploitation of generative AI features and functions allows attackers to bypass the security limitations that are meant to enforce ethical and legal use of the tools. 

  • Prompt injection can be used to trick the AI model into outputting unauthorised responses with methods such as reverse psychology, encoded text, or hidden characters.  
  • Jailbreaking allows attackers to disable the safety restrictions with crafted prompts that alter the AI’s behaviour. 

Threats Leveraging Generative AI 

Phishing Attacks 

The use of AI allows quick creation of targeted or generic phishing emails that will lack the usual warning indicators like grammatical and spelling errors. These AI generated phishing emails are harder to detect by both users and automatic email filtering tools due to their added complexity. Generative AI also speeds up the creation of the phishing emails which means threat actors can carry out more campaigns. This extra speed and complexity dramatically increases the chances of the phishing campaign succeeding. 

Identity Impersonation

Deep fake technology is used to generate images and audio based off real people. Attackers have begun to exploit this in social engineering, scams, and spreading misinformation (such as impersonating political candidates). These attacks require a large amount of time and effort to set up so usually impersonate high profile people like celebrities, politicians and company executives, and can be reused in many attacks. In 2019, scammers convinced an energy company CEO to send €220,000 to them by impersonating the voice of his boss from the parent company on a phone call. 

Exploit and Malware Creation

With AI excelling at code creation, generative AI is perfect for crafting exploits and malware. A jailbroken AI model could be provided with prompts or example code that it can use and adapt into malware almost instantly. This speed of creation provides a real threat to cybersecurity firms, the quickly evolving malware could override typical signature-based detection and could render it obsolete. Attackers are able to feed information they have gathered about a target’s systems into AI along with a malware example to tailor the attack which could enhance the impacts and scale of the attack. 

Data Exfiltration

The training data that is used to create generative AI models can be stolen by threat actors. Proven exploits are available online to execute this attack. Although much of the training data is publicly available online, some of the data used may be sensitive or require a payment to view. However, the biggest threat this poses is when companies are using generative AI for customer support chatbots that have been trained using real customer interactions that could then disclose customer personal information. 

Authors

Toby Knott

Cyber Security Apprentice

Read Bio