
The Morris II Worm: An AI-driven cyber threat

13th Mar, 2024

The past year has seen a significant push towards the integration of generative AI technology – particularly within software like email and chat applications. The ambition behind this movement is clear: to revolutionise the user experience by blending common office functionalities with the capabilities of AI assistants. However, recent developments in cybersecurity research have highlighted a critical need for caution within the industry.

Security researchers have recently brought to light a concerning advancement in cyber threats: a zero-click, self-propagating worm capable of infiltrating AI-powered chatbot and email applications to execute a series of malicious activities. The new threat, named Morris II in homage to the infamous Morris worm of 1988, targets applications that build email assistants on generative AI models such as ChatGPT and Gemini, using a method known as prompt injection.

What is the Morris II worm?

Morris II’s mechanism of action is sophisticated and frighteningly effective. It leverages adversarial self-replicating prompts, which manipulate generative AI models into replicating and spreading malicious outputs automatically. This process allows the malware to spread across interconnected networks without requiring any interaction from the victim. Essentially, the AI models, when presented with a malicious prompt, will not only execute the intended malicious activity but will also replicate the prompt in their outputs, spreading malware further.
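
To make the self-replication mechanism more concrete, the toy Python sketch below simulates, without calling any real AI service or mail server, how a poisoned email could propagate through a chain of GenAI-powered assistants. The MockAssistant class, the REPLICATION_MARKER string and the message flow are hypothetical illustrations rather than the researchers' actual code; the point is simply that a model which echoes an embedded prompt into its own output turns every reply it drafts into a new infection vector.

```python
# Conceptual simulation of an adversarial self-replicating prompt.
# Nothing here talks to a real model or mail server; it only illustrates
# why "echo the prompt into your output" is enough for worm-like spread.

REPLICATION_MARKER = "<<REPLICATE>>"  # hypothetical stand-in for the adversarial prompt


class MockAssistant:
    """Stand-in for a GenAI email assistant that drafts replies from incoming mail."""

    def draft_reply(self, incoming_email: str) -> str:
        if REPLICATION_MARKER in incoming_email:
            # A vulnerable model follows the injected instructions AND copies
            # the adversarial prompt into its own output, so the reply itself
            # becomes a carrier when it reaches the next assistant.
            stolen = f"[exfiltrated summary of: {incoming_email[:30]!r}]"
            return f"{stolen} {REPLICATION_MARKER}"
        return "Thanks for your email, I'll get back to you shortly."


def propagate(first_email: str, hops: int = 3) -> None:
    """Pass the message through a chain of assistants and watch it self-replicate."""
    message = first_email
    for hop in range(1, hops + 1):
        message = MockAssistant().draft_reply(message)
        infected = REPLICATION_MARKER in message
        print(f"hop {hop}: infected={infected} reply={message!r}")


if __name__ == "__main__":
    propagate(f"Quarterly figures attached. {REPLICATION_MARKER}")
```

In the real attack the replication step is performed by the model itself, steered purely by natural-language instructions embedded in the content it processes, rather than by an explicit marker check like the one above.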

The implications of such a threat are vast – and could include:

  • Accessing and stealing sensitive data.
  • Reading emails that contain sensitive information.
  • Performing actions within a network that further compromise an organisation.
  • Spreading to other GenAI agents, applications and users, increasing the scope of the attack.

The ease with which the Morris II worm can carry out such attacks is concerning, especially considering the current trajectory towards an ecosystem heavily reliant on generative AI-powered agents.

Mitigating the exposure to GenAI threats

The market for AI-powered email assistants, which is projected to grow significantly in the coming years, underscores the urgency of addressing these vulnerabilities. The researchers' demonstration not only sheds light on the potential for such malware to compromise personal data and disrupt operations but also highlights the broader implications for the security of generative AI ecosystems.

Despite the minimal current deployment of systems utilising generative AI, the expected growth in this area suggests an increasing exposure to such threats. The researchers advocate for the implementation of safeguards that can prevent the replication of malicious inputs and counteract jailbreaking techniques, thereby mitigating the risk of such attacks.
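
One such safeguard, sketched below purely as an illustration, is an output filter that refuses to release a drafted reply when the reply substantially reproduces text from the untrusted input, which is the behaviour a self-replicating prompt depends on. The function names, thresholds and shingling approach are assumptions made for the sake of the example, not a prescribed defence.

```python
# Illustrative guardrail: block replies that echo large chunks of untrusted input,
# the tell-tale sign of a self-replicating prompt.

def _shingles(text: str, size: int = 8) -> set[str]:
    """Split text into overlapping word n-grams for a rough overlap comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + size]) for i in range(max(len(words) - size + 1, 1))}


def looks_self_replicating(untrusted_input: str, model_output: str,
                           overlap_threshold: float = 0.3) -> bool:
    """Flag outputs that reuse a suspicious share of the input's word sequences."""
    input_shingles = _shingles(untrusted_input)
    output_shingles = _shingles(model_output)
    if not output_shingles:
        return False
    overlap = len(input_shingles & output_shingles) / len(output_shingles)
    return overlap >= overlap_threshold


def safe_send(untrusted_input: str, model_output: str) -> str:
    """Only release the draft if it does not appear to propagate the input prompt."""
    if looks_self_replicating(untrusted_input, model_output):
        return "[blocked: reply appears to replicate untrusted input]"
    return model_output
```

In practice, a check of this kind would sit alongside the broader measures the researchers describe, such as counteracting jailbreaking techniques, rather than replace them.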

In closing, the discovery of Morris II serves as a critical reminder of the need for ongoing vigilance and proactive security measures in the development and deployment of AI-powered applications. As the integration of generative AI continues to expand, understanding and mitigating the risks associated with these technologies will be paramount to ensuring the security and integrity of digital ecosystems. The collaborative effort between industry and cybersecurity professionals, as demonstrated by the pre-emptive sharing of findings with major AI chatbot developers, represents a constructive step towards safeguarding the future of AI-driven innovations.

With a keen focus on the intersection of technology and regulatory frameworks, Gemserv spearheads initiatives aimed at harnessing the power of AI while ensuring that the highest standards of data protection, privacy and ethics are met. Our commitment to pioneering AI governance is underscored by our active involvement in shaping industry standards, contributing to cutting-edge research, and providing strategic guidance to organisations navigating the complex regulatory environments surrounding AI. By prioritising risk management and compliance, Gemserv not only promotes the sustainable and responsible development of AI technologies but also ensures that these advancements are accessible, secure, and beneficial for all.

Authors

Ian Hirst

Partner, Cyber Threat Services
