Thoughts

The future of AI in the public sector

25th Feb, 2022

Kaveh Cope-Lahooti, Senior Data Protection Consultant at Gemserv, attended the TechUK Roundtable: Feed into the Algorithmic Transparency Standard.

The event involved a discussion with representatives from industry, government and civil society around the Central Digital and Data Office's (CDDO) standard, which is currently being trialled with selected government departments. We were exclusively invited to provide feedback at this critical stage, before the standard is developed further.

The new standard is aimed at placing transparency obligations on public sector organisations using algorithms, following public concerns in recent years over the Ofqual student grade scoring scandal and challenges to the Home Office's use of AI to make visa decisions. In particular, the standard aims to remedy these concerns by introducing a template for providing information to the public where decisions are made using AI. This will allow public-sector organisations to provide information on the potential social or economic impacts of these decisions (such as potential discrimination in benefits or services, or unfair profiling), as well as details of the categories and sources of data used.

The standard is organised into two tiers. The first includes a short description of the algorithmic tool, including how and why the AI system is being used, and contact details of the organisation. The second includes more detailed information about how the tool works, the datasets that have been used to train the model, third parties with whom the data might be shared, risk assessments and risks identified with the system, and the level of human oversight over the AI that is involved.
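To make the two-tier structure concrete, the sketch below models a transparency record as simple Python dataclasses. This is purely illustrative: the field names are our own shorthand for the items listed above, not the CDDO's official template or schema.

from dataclasses import dataclass
from typing import List

# Illustrative sketch only: field names paraphrase the two tiers described
# above and are not the CDDO's official template.

@dataclass
class Tier1Summary:
    tool_description: str      # short description of the algorithmic tool
    how_it_is_used: str        # how the AI system is used in decision-making
    why_it_is_used: str        # rationale for using an algorithmic tool
    organisation_contact: str  # contact details of the responsible organisation

@dataclass
class Tier2Detail:
    how_the_tool_works: str
    training_datasets: List[str]    # datasets used to train the model
    third_party_sharing: List[str]  # third parties the data might be shared with
    identified_risks: List[str]     # risks identified through risk assessments
    human_oversight: str            # level of human oversight over the AI

@dataclass
class TransparencyRecord:
    tier1: Tier1Summary
    tier2: Tier2Detail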

Participants discussed key questions about a) the benefits of the standard, b) the obligations it was likely to place on public-sector organisations and their suppliers, and c) its effect on trust in government and new technologies. In particular, the roundtable members were generally of the opinion that the tool strikes a good balance: it requires useful transparency information to be provided, which should increase public trust in AI, without revealing commercially sensitive information or intellectual property. It was also noted that the standard is broadly in line with international norms, such as the transparency obligations under the Singaporean Personal Data Protection Commission's AI Model Framework, and that its risk assessment provisions align with the Canadian Government's Algorithmic Impact Assessment.

However, some members cautioned that the transparency information disclosed on the algorithm would need frequent updating, as the nature of machine learning systems meant that the parameters and categories of data used by such systems may vary rapidly over time. Others expressed concern that the standard may lead to excessive obligations on small service providers to local authorities or government, who would be required to disclose detailed information on their systems.

We very much enjoyed the vibrant discussion and look forward to the final standard being issued by the CDDO, which is expected from late 2022.

Authors

Kaveh Cope-Lahooti

Principal Consultant - Data Privacy
