[Image: Genie coming out of a lamp]

Should developers press pause on AI?

30th Mar, 2023

More than 1,000 tech industry figures have publicly called for a six-month halt to the training of new artificial intelligence systems, to give the industry time to assess the risks.

AI seems to have reached an inflexion point: it is finally powerful enough to capture the public imagination. Politicians are keen to capitalise on the potential of these systems to drive economic growth and the low-carbon economy, and ordinary citizens are signing up to see what the technology can do.

However, as these systems get more powerful and widespread, the risks associated with them also become more significant. The Future of Life Institute issued its open letter calling for a halt because it is concerned that the risks are not yet understood and so cannot yet be properly controlled.

Can you put the genie back in the bottle?

We can safely bet that any moratorium would not be fully respected. Chinese, and potentially even American, innovators – who are behind the main advances in this area – would find it difficult to accept a long period of potential commercial disadvantage.

The developers do seem to be working hard to identify and address risks, but the conflict of interest in ‘marking their own homework’ is clear. The technical report accompanying GPT-4 addresses risks ranging from hallucinations (where the system confidently provides inaccurate information), to the proliferation of weapons, to privacy and cybersecurity. However, the descriptions of the risks are limited, as is the information provided about how the system was developed. There is, for example, nothing about the future energy requirements of AI and their potential impact on international climate change commitments.

How much do we know about the risks of AI?

We know that the dataset used to train the system is likely to contain personal data that may not have been lawfully obtained, as well as material that is the intellectual property of rights holders who have not given permission. We know that people are only now starting to experiment with the ways these tools can be used – and without understanding that, it is simply impossible to assess all the risks.
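
To make the first point concrete, here is a minimal, purely illustrative Python sketch – not any developer's actual pipeline, and with a corpus and patterns invented for this example – showing how even naive pattern matching turns up obvious personal data in web-scraped text. Real training sets are filtered far more aggressively, yet personal information still slips through at scale.

```python
import re

# Hypothetical sample of web-scraped text (invented for illustration).
corpus = [
    "Contact Jane Doe at jane.doe@example.com for the full report.",
    "The committee met on 14 March to discuss the draft.",
    "Call our office on +44 20 7946 0958 to book a consultation.",
]

# Naive patterns for two common kinds of personal data.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

for i, doc in enumerate(corpus):
    hits = EMAIL.findall(doc) + PHONE.findall(doc)
    if hits:
        print(f"Document {i}: possible personal data: {hits}")
```

If crude regular expressions can flag personal data this easily in three sentences, a crawl of billions of pages will inevitably sweep some in – which is precisely why auditing what went into a training set matters.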

The debate reinforces the need for governments to take up the subject more quickly. The EU is making progress on the AI Act, which aims to regulate the use of certain types of AI. However, there is not yet a date for it to come into force, and other countries, including the UK, have no equivalent.

Laws like the AI Act provide ‘guard rails’ that help developers consider wider social and economic risks, but the problems envisaged by the signatories of the open letter go far beyond what can realistically be worked through in a six-month development moratorium. Regulators, politicians and think tanks need to consider what model of society we want and how AI should be incorporated into it. The rise of AI is inevitable, but it will require systemic changes, including retraining millions of adults and finding roles for those whose work is displaced by the technology.

Authors

Camilla Winlo

Head of Data Privacy

Nicolas Cambolin

Global Director of Data Intelligence at Talan
