Thoughts

The role of data protection in the age of big data and algorithms

3rd Feb, 2021

Data protection, and the use of our personal data, have had an enormous impact on our lives over the past decades.

We often hear that “data is the new oil”, and it is indeed at the core of our technologies and digital services, ‘fuelling’ many innovations in our daily lives. In the near future, thanks to Big Data technologies, we can expect delivery drones, autonomous vehicles and smart cities all to become everyday, mundane features in our lives.

These innovations rely in large part on massive databases that feed machine-learning technologies, extracting valuable information ranging from general trends to detailed profiles of data subjects’ interests.

In the early 2000s, the promises of data were great: letting Big Tech use our data would drive innovation, improve our lives and make content available for free on the internet, so long as we let go of part of our right to privacy. Most governments went along with this bargain, mindful that too much regulation could impede innovation and competitiveness.

So, where are we now in 2021? It is undeniable that some of the promises were kept. A great deal of content, services, information and knowledge is available for free. We have become so accustomed to this that the idea of reverting to paying for content in order to solve online privacy problems seems odd. However, it is also worth taking stock of the many setbacks we have encountered, as a society, in recent years with technologies built on big data and algorithms.

Let’s start with perhaps the most typical example: Facebook. From a dubious website set up by and for college students to compare women on campus, Facebook grew into a colossus able to unsettle democratic processes (remember Cambridge Analytica) and to hold a quasi-monopoly on picture-sharing and messaging services. All of those services are provided for free, in other words, in exchange for our data. But what exactly is Facebook’s main activity? Is it mainly a social media platform, or a profiling platform that lets marketing departments reach extremely granular audiences? What is the right balance between providing a free service that collects so much personal information and using that data for profiling? Even in 2021, there are no simple answers to these questions.

Another example is the Ofqual algorithm scandal in the UK in August 2020. With exams cancelled because of the Covid-19 pandemic, students’ final marks were calculated and moderated by an algorithm based on previous exam data. Unfortunately, the results were biased and sometimes outright discriminatory, inflating the marks of students from wealthy areas and deflating those of students from less privileged backgrounds. The backlash was enormous and showed that far too many organisations, public and private alike, are not only unaware of but simply unprepared for the innate limitations of algorithms.
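To make the mechanism concrete, here is a minimal, hypothetical sketch (not Ofqual’s actual model, which was considerably more complex) of how moderating an individual’s grade towards their school’s historical average can penalise a strong student from a historically low-performing school:

```python
# Hypothetical illustration only: NOT Ofqual's actual model.
def moderate(teacher_grade: float, school_history_mean: float,
             weight: float = 0.6) -> float:
    """Blend a teacher-assessed grade with the school's historical average."""
    return weight * school_history_mean + (1 - weight) * teacher_grade

# The same strong student (teacher grade 90) fares very differently
# depending only on where past cohorts at their school landed:
print(moderate(90, 55))  # 69.0 -- pulled down by the school's history
print(moderate(90, 85))  # 87.0 -- barely affected
```

The point of the toy model is that the individual’s own performance never changes; only the history of their school does, and that alone moves the outcome.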

While AI is already a substantial part of our lives (the question is in which aspects of our lives AI is already in use, not when it will be), the associated data protection rights are still in their infancy. It will be very interesting in this regard to follow the Uber drivers’ action in the Netherlands [1] against the algorithm used to dismiss them. However, giving data subjects proper, actionable rights to challenge an automated decision is useless if neither the organisations using AI nor the data subjects understand what is actually happening, what the potential biases are and how to address them.

Responsible, ethical AI requires more than white papers and codes of conduct: it demands active consideration of ethical issues from the earliest stages of designing an algorithm. Ofqual’s and Uber’s algorithms have been put in the spotlight, but how many biased AI systems are currently in use and causing damage?
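One practical starting point, well short of a full ethical review, is simply to measure whether an automated decision produces systematically different outcomes across groups. The sketch below is illustrative only, using hypothetical decision data and a single crude metric (the demographic-parity gap); real fairness auditing goes much further:

```python
from collections import defaultdict

# Hypothetical decisions from an automated system: (group, approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
for g in sorted(rates):
    print(f"group {g}: approval rate {rates[g]:.0%}")  # A: 67%, B: 33%

gap = max(rates.values()) - min(rates.values())
print(f"demographic-parity gap: {gap:.0%}")  # 33% -- worth investigating
```

A large gap does not prove discrimination, but it is exactly the kind of early warning an organisation should be looking for before, not after, a system is deployed.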

On another note, the renewed focus on digital advertising by the Information Commissioner’s Office [2] and, more generally, by Data Protection Authorities will be followed closely by privacy professionals. An important shift is anticipated for the digital advertising industry: a possible U.S. federal privacy law would have strong consequences, as will Apple’s new opt-in consent requirement for its Identifier for Advertisers (IDFA), increased enforcement of opt-in cookie banners, Google’s plan to phase out third-party cookies from its Chrome browser by January 2022, and challenges to the Internet Advertising Bureau’s (IAB) Transparency & Consent Framework v2.0. The resumed negotiations on the elusive ePrivacy Regulation are also worth monitoring closely.
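For readers less familiar with the mechanics: a “third-party” cookie is simply one set by a domain other than the site being visited, which is what lets an ad server recognise the same browser across many unrelated sites. A minimal sketch of the HTTP header involved, with “ads.example” as a placeholder domain:

```python
def tracking_cookie_header(user_id: str) -> str:
    """Build the kind of Set-Cookie header a cross-site ad server sends.

    'ads.example' is a placeholder domain. SameSite=None marks the cookie
    for cross-site (third-party) contexts, and browsers require Secure
    alongside it; Chrome's planned phase-out targets exactly this pattern.
    """
    return (f"Set-Cookie: uid={user_id}; Domain=ads.example; Path=/; "
            "Max-Age=31536000; SameSite=None; Secure")

print(tracking_cookie_header("abc123"))
```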

However, requiring companies to obtain informed consent from website visitors or mobile application users does not, by itself, protect data subjects’ privacy. Is there really a choice when the only alternative is not to use the service at all? And is the anticipated disappearance of third-party advertising technologies really such a positive development? In the case of Google Chrome, we can easily imagine that tracking will still happen, but will simply be done by the web browser itself. That would consolidate Google’s position in the online advertising market, making the Big Tech company the sole player with valuable insights into data subjects’ interests.

We can see from this quick set of examples that, despite the GDPR coming into effect in 2018, some of the questions that were acute back then remain in sharp focus in 2021: the effective regulation of large tech companies, the extent to which certain processing activities should rely on clear information or explicit consent, the effective rights of data subjects, and whether some processing activities or practices should simply be prohibited.

Some of these issues cannot be resolved by data protection alone. Social media bubbles and the virality of fake news, which caused huge problems at the beginning of this year, are a much wider matter than data protection, although they do start with the question of online users’ privacy: we face hermetic bubbles because people’s interests have been so finely profiled in order to keep them engaged on online platforms.

Public authorities are increasingly reviewing data protection practices through the lens of competition law, and the proposed Digital Services Act and Digital Markets Act take this a step further, providing tools to monitor and address market imbalances before they occur rather than after. There is no doubt that we will see vivid debates on this in the coming years, but we must ask: will these pieces of legislation be sufficient and adequate to address all of these issues?


References:

  1. Uber sued by drivers over ‘automated robo-firing’, BBC News, October 2020
  2. Adtech investigation resumes, ICO, January 2021
