Thoughts

Why AI can cause more problems than it solves

21st Aug, 2020

By Kaveh Cope-Lahooti, Senior Data Protection Consultant

Popular algorithms dictate every aspect of our daily lives, from marking exam papers and influencing what we buy, to determining which news pops up in our apps and social media feeds. But often no one knows how they do it, so no one is accountable.

We are at a critical juncture in society as machines seamlessly take the place of humans across industry, government and civil society. The digital and technological revolution is unstoppable, and, as we have seen with the machine-learning algorithm used to mark the exam papers of this year's graduating students, it comes with many ethical dilemmas and must be subject to scrutiny.

Most recently, the Office of Qualifications and Examinations Regulation (Ofqual) was forced to backtrack after it used an algorithm to predict A-level and GCSE students' results, which led to grades being generally lowered compared with both previous years' results and teachers' predictions. The regulator has since decided to use students' teacher-predicted grades, rather than the algorithm, to determine results.

Budgetary challenges and the shift towards digitised forms of commerce and business operations driven by the coronavirus pandemic have required both the public and private sectors to automate and innovate quickly. March and April saw what Microsoft's CEO Satya Nadella described as "two years' worth of digital transformation in two months". However, the speed of digitisation has in many cases outpaced organisations' ability to trial, test and explain the logic and effects of such systems.

The Ofqual algorithm aimed to meet these goals using a machine-learning system that determined students' grades from previous results for that school or area, rather than from teachers' predicted grades. (The latter are often used for university offers but tend to be overinflated.) Indeed, the use of AI and machine learning to make informed decisions with speed, accuracy and efficiency is seen as a key competitive advantage, whether between countries or companies. However, using a technology to make decisions that affect individuals' lives so significantly – such as determining their exam results – without the transparency needed to explain those decisions to students and parents was always going to cause reputational challenges.
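To make the mechanics concrete, below is a minimal, hypothetical sketch of the kind of standardisation model described above, in which students ranked by their school are mapped onto the school's historical grade distribution rather than scored on teacher predictions. The grade shares and cohort are invented for illustration and do not reflect Ofqual's actual specification.

```python
# A minimal, hypothetical sketch of a rank-based standardisation model:
# students ranked by their school are mapped onto the school's historical
# grade distribution. Illustrative only; not Ofqual's actual specification.

from typing import Dict, List

# Hypothetical share of each grade awarded at this school in previous years.
HISTORICAL_GRADE_SHARES: Dict[str, float] = {"A": 0.2, "B": 0.3, "C": 0.3, "D": 0.2}


def standardised_grades(ranked_students: List[str]) -> Dict[str, str]:
    """Carve the school's rank order into its historical grade proportions,
    ignoring individual teacher predictions."""
    n = len(ranked_students)
    grades: Dict[str, str] = {}
    start = 0
    for grade, share in HISTORICAL_GRADE_SHARES.items():
        end = start + round(share * n)
        for student in ranked_students[start:end]:
            grades[student] = grade
        start = end
    # Any rounding remainder falls into the lowest grade band.
    lowest = list(HISTORICAL_GRADE_SHARES)[-1]
    for student in ranked_students[start:]:
        grades[student] = lowest
    return grades


# A cohort of ten students, ordered by the school's own ranking.
print(standardised_grades([f"student_{i}" for i in range(1, 11)]))
```

Even in this toy version, the point of contention is visible: a student's grade depends on their rank and the school's past performance, not on their own predicted attainment.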

In essence, the core arguments against the scoring algorithm were not so much about inequities in scoring as about the lack of clarity over how grades were awarded. For example, students were confused as to what extent their academic history had been weighed against the algorithm-generated grade, and were unsure how they could challenge a grade or request a re-mark. Schools, meanwhile, were left wondering whether adjustments based on previous students' attainment at a UK-wide level could drag their own results down.

As such, the key challenge for organisations is to drive efficiencies with automated technologies in a way that has the buy-in of participants. Developers of AI systems will need to provide a transparent set of scoring criteria to the public, covering the scope of the personal data used in the algorithm and the factors or variables that influenced its decisions. In the case of students' grades, this could include details of the weighting given to a particular student's previous results and performance, teacher predictions, and average student scores at school and national level.
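As an illustration of what such a transparent set of criteria might look like in practice, the hypothetical record below documents the personal data used, the relative weighting of each factor and the appeal route. All names, weights and routes are invented placeholders, not details of any real scoring model.

```python
# Hypothetical sketch of how the scoring criteria behind an automated grading
# system could be documented for publication. Factor names, weights and the
# appeal route are invented placeholders, not any regulator's real model.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ScoringTransparencyRecord:
    personal_data_used: List[str]      # scope of personal data fed into the algorithm
    factor_weights: Dict[str, float]   # factors and their relative influence on the grade
    appeal_route: str                  # how an individual can challenge the outcome


record = ScoringTransparencyRecord(
    personal_data_used=["prior attainment", "teacher ranking", "school identifier"],
    factor_weights={
        "student_prior_results": 0.4,      # hypothetical weightings that sum to 1.0
        "teacher_prediction": 0.2,
        "school_historical_results": 0.3,
        "national_average": 0.1,
    },
    appeal_route="written appeal via the exam centre (placeholder)",
)

print(record)
```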

On top of this, AI systems that take on complex human decision-making – arguably what exam scoring and grade prediction involve – are required by the UK's Information Commissioner's Office (ICO) and other supervisory authorities to be subject to further scrutiny. In particular, where such systems are used in the public sphere, where accountability is much more strongly demanded, organisations and institutions need to be able to demonstrate that the system has been thoroughly interrogated and tested, or that they have consulted stakeholders such as teachers' bodies through an independent review of its costs and benefits.
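One hedged example of what "thoroughly interrogated and tested" could mean in practice is a simple pre-deployment check comparing the algorithm's grades against teachers' predictions, school by school, to flag systematic downgrading. The records and the 10% review threshold below are invented for illustration.

```python
# A rough pre-deployment check: compare the algorithm's grades with teachers'
# predictions, school by school, and flag systematic downgrading. The records
# and the 10% review threshold are invented for illustration.

from statistics import mean
from typing import Dict, List, Tuple

# Hypothetical (teacher-predicted points, algorithm-awarded points) per student.
results_by_school: Dict[str, List[Tuple[int, int]]] = {
    "school_a": [(48, 40), (40, 40), (32, 24)],
    "school_b": [(48, 48), (40, 40), (32, 32)],
}


def downgrade_rate(pairs: List[Tuple[int, int]]) -> float:
    """Share of students whose algorithm grade fell below the teacher prediction."""
    return mean(1.0 if awarded < predicted else 0.0 for predicted, awarded in pairs)


for school, pairs in results_by_school.items():
    rate = downgrade_rate(pairs)
    flag = "REVIEW" if rate > 0.10 else "ok"
    print(f"{school}: downgrade rate {rate:.0%} ({flag})")
```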

In the last week, the ICO has released guidance on AI and data protection, which will help organisations prepare for the roll-out of its AI Auditing Framework, the enforcement of which has been delayed due to coronavirus. Organisations keen on digitising with AI should use this financial year as an opportunity to continue automating their operations, whilst also preparing to meet the public and regulatory scrutiny that will undoubtedly come from deploying complex technologies.

Gemserv is currently advising multinational organisations on how to implement effective and ethical data governance, including preparing for the ICO's AI Auditing Framework. For more information, please contact bd@gemserv.com

Authors

Kaveh Cope-Lahooti

Principal Consultant - Data Privacy
