ISO 42001: Shaping trust in AI

4th Jul, 2024

Estimates about the future growth of Artificial Intelligence (AI) vary, but all point to a huge increase in uptake across industries and sectors. Some forecasts expect the global AI market to be worth more than half a trillion dollars within the next ten years.

Every new, fast-growing technology (cloud computing, for example) brings a multitude of risks, and all of them apply to AI. But AI also carries risks of its own, particularly around security and data protection. Its increasing use is likely to raise societal concerns such as privacy, safety and accountability, which will require specific considerations and safeguards.

Until recently, there had been little advice on these AI risk areas or how to address them. Last year, however, the UK National Cyber Security Centre (NCSC), in conjunction with several US agencies, including the Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA) and the Federal Bureau of Investigation (FBI), published guidelines for the secure development of AI technologies.

At around the same time, ISO published a new standard for the management of AI systems, ISO 42001, which is applicable to AI applications in any context. It is designed for organisations that develop, provide or use AI technologies, and it gives customers assurance that the organisation follows a structured framework, in line with industry best practice, to ensure its use of AI systems is as safe, secure and ethical as possible.

ISO 42001 consists of the standard clauses found in most other modern ISO standards, plus a set of controls across nine AI-specific domains. Usefully, it also includes implementation guidance for those controls, which for some other standards is published separately at an additional cost. Because it shares the harmonised high-level structure of the newer ISO standards, all the mandatory requirements are essentially the same, so they should be familiar to organisations already using standards such as ISO 27001, 9001 and 14001.

Why should my business use ISO 42001?

Implementing any ISO standard brings many benefits, but with the uncertainty around AI, adopting ISO 42001 can lead to a range of competitive advantages, such as:

  • Improved security and governance – especially where related standards such as ISO 27001 are already in place, as the two naturally complement each other
  • Demonstrable responsible use of AI – increasing stakeholder trust and confidence in a relatively new and developing area
  • Market differentiation and wider customer interest – ISO certifications are a common prerequisite in procurement

Can we get certified?

Although it is not yet possible to certify against the new standard, organisations can adopt it now as an assurance framework, putting them in a position to certify seamlessly when certification becomes available.

What should we do now?

Contact Gemserv for more advice. Our consultants are experienced at implementing a range of ISO management systems. They can help your organisation meet all the requirements of ISO 42001, giving you and your customers peace of mind around your AI systems and processes.

Authors

Brian Hopla

Information Security Consultant

Read Bio