One of the most important roles of the privacy team is to advocate for individuals.
It’s our job as privacy professionals to think about the ways that data processing could go wrong for the people whose data will be processed. Development teams can’t do it: their role is to think about the opportunities of new processing rather than the potential drawbacks. It’s also difficult, and requires a good working knowledge of what can go wrong, which most people simply don’t have.
Most people are simply too nice to imagine how bad actors think. They don’t consider how domestic abusers, hackers or industrial spies might look at a new process, because they lack the mindset and the information. They are also often too immersed in their own culture, generation and neurological style to realise how different something might seem to someone who thinks very differently.
A user-centred approach asks privacy professionals to consider the following:
What categories of individuals should you consider?
Often, these categories are drawn very widely, for example ‘employees’ or ‘children’. However, it may be more helpful to consider categories in light of the functionality and purpose of the processing. For example, if the processing includes location services, you might consider a category of ‘individuals potentially at risk if located without authorisation’. This will help you identify risks and controls.
What rights and freedoms do individuals have?
These rights and freedoms are likely to extend beyond data protection rights and include other kinds of rights such as consumer rights. Recognising all the relevant rights and freedoms will help you identify risks and controls.
What do users want, think and fear?
Regulators recommend using focus groups to understand how users think about processing activities, including what their concerns are and how they expect the processing to operate. It is unusual for privacy teams to have direct contact with the individuals affected by their organisation’s processing activities, but this may be simple to arrange, for example if marketing already runs focus groups. Users bring diversity of thought and experience, and can give surprising insights that the organisation would not have identified itself.
How can the organisation communicate its approach to privacy?
The user-centred approach to privacy should include privacy communications. This can be as simple as asking marketing to help write the privacy notice in the brand voice, or as complex as creating a privacy information area with content tailored to different user groups. The law requires organisations to communicate certain information, but it doesn’t limit them to communicating only those things. If specific risks for certain types of users have been identified, it may be a good idea to describe explicitly how those risks have been mitigated, especially if the mitigation requires individuals to take action themselves. For example, you might provide information targeted at people at risk of domestic violence that explains how to use their privacy settings to protect themselves.
How can the organisation listen and learn?
There will always be people with concerns that you have not considered previously. You might choose to explicitly invite individuals to submit privacy concerns or ideas for controls to help you address their concerns and continuously improve your privacy practices.
For many organisations, there is a commercial as well as an ethical advantage to a user-centred privacy approach. Good data protection practices build trust and encourage people to feel confident using new services and features. This approach may also help organisations understand which features to include and how to construct marketing messaging, while improving data protection for individuals at the same time. That’s what I call a win/win.