
Protecting privacy rights in the age of AI

19th Jul, 2023

ChatGPT has accelerated the artificial intelligence (AI) revolution and brought the term ‘generative AI’ into our daily vocabulary. As AI rapidly evolves, how can organisations ensure they continue to protect privacy rights?

Put simply, generative AI is a type of artificial intelligence used to produce synthetic content, such as text, images, or other media, that resembles content created by a human. Its output, and its transformative potential for business and society, is already dazzling the world. The global market is estimated to be worth between £150 billion ($200 billion) and £1 trillion ($1.3 trillion) within a decade. As the market grows, there is an increasing need to verify that an implemented AI solution is working as expected, in a manner compliant with standards and legislation, including individuals’ privacy rights. AI creators must be able to demonstrate this.

To help regulate the field, assurance techniques and associated qualifications, like the IAPP’s ‘Artificial Intelligence Governance Body of Knowledge’, are emerging. However, until there is an established base of qualified AI assurance professionals for organisations to draw upon, it is hard to be certain the right governance is in place when integrating AI into processing operations.

The problem with AI’s algorithms

One critical risk in any system trained on data is inherent algorithmic bias. But what does that look like? In 2018, Amazon reportedly became aware that its internally developed recruitment tool was rating male candidates more highly, in part because the algorithm favoured masculine language on CVs. The example shows how bias creeps into ‘machine learning’, the subset of artificial intelligence that learns from patterns in data in order to make better future decisions. Amazon allegedly edited out the bias, but, just as humans find it difficult to erase something from memory once it has been learned, it is currently very difficult to remove bias from a trained model and ensure it has genuinely been ‘unlearned’. Not even the tech giants have cracked this yet. For individuals exercising their ‘Right to Erasure’, a reliable unlearning algorithm would give people greater control over their information and therefore their privacy.
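To make the point concrete, here is a minimal, purely illustrative sketch of how bias in historical hiring data becomes bias in a model’s learned weights. The CV snippets and outcomes below are invented for demonstration; this is not Amazon’s tool or data.

```python
# Minimal sketch: bias in historical training data surfaces as
# biased model weights. The CVs and hiring outcomes below are
# invented for illustration; this is not Amazon's system or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "captain of men's chess club, software engineer",
    "led men's coding society, backend developer",
    "women's society treasurer, software engineer",
    "women's hackathon winner, backend developer",
]
# Past hiring decisions encode past human bias (1 = hired).
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: terms correlated with past
# rejections (here, "women") receive negative coefficients that
# the model will then apply to every future CV it scores.
for term, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{term:12s} {weight:+.2f}")
```

Note that simply deleting the offending term from the vocabulary does not fix the problem: the model can rediscover the same pattern through correlated features, which is part of why genuine ‘unlearning’ is so hard.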

Without human intervention, biased and unfair decisions are more likely to go unchallenged. In the EU, the AI Act is currently under development, with the intention of regulating the use of AI. The draft Act classifies AI systems into four risk levels: unacceptable risk; high risk; limited risk; and low or minimal risk. Each level is accompanied by examples of the AI practices that would fall under it. Organisations considering introducing AI into the workplace should establish the level of risk associated with the desired system(s) and withdraw if a system is deemed to pose an ‘unacceptable risk’ to individuals. This does not negate their accountability obligations: they must still carry out a Data Protection Impact Assessment (DPIA) for any system, technology or process that poses a high risk to individuals under the GDPR, and complete an Algorithmic Impact Assessment (AIA) to assess the broader potential impacts.
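As a rough illustration of how that triage could be operationalised internally, the sketch below maps the draft Act’s four risk tiers to follow-up actions. The tier names come from the draft Act, but the actions attached to each tier are an assumption made here for demonstration purposes only, not legal advice or the Act’s actual text.

```python
# Illustrative sketch only: a simple triage helper mapping the
# draft EU AI Act's four risk tiers to governance actions. The
# action for each tier is an assumption for demonstration, not
# legal advice or the Act's actual requirements.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "low or minimal risk"

ACTIONS = {
    RiskTier.UNACCEPTABLE: "Do not deploy; withdraw the system.",
    RiskTier.HIGH: "Carry out a DPIA and an AIA before deployment.",
    RiskTier.LIMITED: "Meet transparency obligations, e.g. disclose AI use.",
    RiskTier.MINIMAL: "Document the assessment; monitor for changes.",
}

def triage(system_name: str, tier: RiskTier) -> str:
    """Return the agreed governance action for a proposed AI system."""
    return f"{system_name}: {tier.value} -> {ACTIONS[tier]}"

print(triage("CV-screening tool", RiskTier.HIGH))
```

Deciding which tier a system falls into still requires reading the Act’s annexes and taking advice; the value of writing the mapping down is simply that every proposed system passes through the same documented gate.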

Adding AI to your business

Within the workplace, introducing AI without all the necessary considerations may pose further difficulties. A business must be able to meaningfully assess, and explain to individuals, the outputs its AI produces, and deployment may raise ethical concerns such as embedded bias leading to discriminatory outcomes. To ensure fairness considerations are addressed under the existing data protection framework, the ICO has produced guidance on AI and data protection, and has created a toolkit for organisations to assess the risks of their own use, or intended use, of AI. But this guidance on its own may not be enough to protect workers’ existing privacy rights in the UK: some AI risks will arise in the gaps between existing regulatory remits, and it is critical that government intervention bridges those gaps and addresses the uncertainty by regulating the use of AI.

In May 2023, the Artificial Intelligence (Regulation and Workers’ Rights) Private Member’s Bill was introduced to the House of Commons. Its purpose is to protect the rights of individuals working alongside AI, and it would introduce a duty for employers to consult their workforce and trade unions before introducing AI into their operations. It is in employers’ best interests to carry out a meaningful consultation, one that considers not only the privacy risks but also how introducing AI could affect areas such as wellbeing and loneliness at work; staff in roles responsible for maintaining the AI, for example, may experience less human interaction. Even if the Bill progresses, it will be some time before such provisions become law.

Organisations must continue to consider how best to harness technological developments for their individual needs, without forgetting how doing so may impact their employees and customers. It’s vital that the most important issues associated with implementing generative AI are anticipated early on and addressed.
