
[Image: a woman's face overlaid with computer graphics to represent data privacy]

Thoughts

ChatGPT: Opportunities and Privacy Issues

22nd Mar, 2023

Chatbots tackle what is possibly the best-known test in artificial intelligence: the Turing Test, which assesses whether a computer can be mistaken for a human. To pass it, the computer needs to be able to communicate convincingly with a human.

Chatbots have been getting closer to this standard for some time, but ChatGPT represents a real breakthrough and has galvanised the field, with the Big Tech players rushing to catch up and release their own natural language AI chatbots.  

Chatbots have a wide range of potential uses. Simple process-flow chatbots, of the kind sketched below, are increasingly used to power customer-service web chats. On the more nefarious side, ChatGPT has proven capable of passing law exams.
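To make the "process-flow" idea concrete, here is a minimal sketch of such a bot. Everything in it – the states, prompts and routes – is invented for illustration; a real deployment would encode actual business processes and sit behind a web-chat front end.

```python
# A minimal process-flow chatbot: a fixed decision tree of states,
# prompts and routes. All content here is invented for illustration.

FLOW = {
    "start": {
        "prompt": "Hi! Do you need help with 'billing' or 'delivery'?",
        "routes": {"billing": "billing", "delivery": "delivery"},
    },
    "billing": {
        "prompt": "Is this about an 'invoice' or a 'refund'?",
        "routes": {"invoice": "handoff", "refund": "handoff"},
    },
    "delivery": {
        "prompt": "Please enter your order number and an agent will follow up.",
        "routes": {},
    },
    "handoff": {
        "prompt": "Thanks - transferring you to a human agent.",
        "routes": {},
    },
}

def run_flow() -> None:
    state = "start"
    while True:
        node = FLOW[state]
        print(node["prompt"])
        if not node["routes"]:  # leaf node: the flow is complete
            return
        choice = input("> ").strip().lower()
        # Unrecognised input simply re-prompts; the bot never improvises.
        state = node["routes"].get(choice, state)

if __name__ == "__main__":
    run_flow()
```

The rigidity is deliberate: the bot can only ever say what it was scripted to say. That predictability is exactly what LLM-based chatbots such as ChatGPT trade away in exchange for fluency – which is where the considerations below come in.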

However, as exciting as the potential may be, these are still very early days, and any organisation looking to experiment with these AI chatbots needs to think carefully about how to do so safely and lawfully. Here are some of the key considerations:

  1. Are chatbots built on fundamentally lawful training datasets?

    When I asked ChatGPT how it was developed, it said: “The training data for ChatGPT comes from a wide variety of sources, including books, articles, websites and social media platforms. OpenAI trained the model on massive amounts of data, including the full text of the internet (excluding private or password-protected information), in order to give it a broad understanding of the English language.”

    Regulators have been quite strict about scraping publicly available data for machine learning, though. In May 2022, the ICO fined Clearview AI, a facial recognition platform, £7.5m “for using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition.” In addition, the ICO ordered Clearview AI to “stop obtaining and using personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.” OpenAI will be hoping that regulators are less concerned about data being used in this way for AI content generation than for facial recognition.

  2. Chatbot content may have unexpected consequences.

    One of the biggest issues new chatbots have historically faced is that, if users manipulate their inputs, they can very quickly start producing racist and sexist content once they come into contact with real humans. ChatGPT has controls built in to prevent that, but it is still in a beta testing phase and issues will inevitably continue to come to light. OpenAI also warns users in its FAQs that ChatGPT can ‘hallucinate’ and provide answers that are not true. It is therefore very important that organisations have robust processes in place to fact-check and review content before it is published (a minimal sketch of such a review gate follows this list). It’s also worth noting that Google’s Webmaster Guidelines prohibit ‘spammy automatically-generated content’ and that developers are creating tools that can recognise AI-generated content with a good degree of accuracy. Marketers hoping to use chatbots for SEO content may find the result is the opposite of the one intended.

  3. Chatbots may change their functionality unexpectedly.

    AI systems are difficult to explain, and there are examples of their algorithms and outputs changing significantly and without warning. Recently, the Italian data protection authority ordered the chatbot Replika to prevent erotic content from being shared with children. Rather than attempting to verify user ages, Replika simply stopped providing erotic content to any of its users. Some users – who were often socially vulnerable – had built relationships with their Replikas over many years and were left bereft. If Replika can remove a whole category of its functionality, any chatbot provider may do the same – causing issues for organisations that rely on one for core processes (a defensive fallback pattern is sketched after this list).
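On point 2, the “robust process” can be as simple as refusing to publish anything that lacks a recorded human sign-off. Below is a minimal sketch; `generate_draft` is a hypothetical placeholder for whatever chatbot API actually produces the text, not a real library call.

```python
# A minimal sketch of a publish gate for AI-generated content.
# `generate_draft` is a hypothetical placeholder, not a real library call.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer: str | None = None  # named human who fact-checked the draft

def generate_draft(prompt: str) -> Draft:
    return Draft(text=f"[AI-generated draft for: {prompt}]")

def publish(draft: Draft) -> None:
    # The gate: nothing reaches readers without a recorded human sign-off.
    if not draft.approved or draft.reviewer is None:
        raise PermissionError("draft has not passed human review")
    print(f"Published (reviewed by {draft.reviewer}): {draft.text}")

draft = generate_draft("summarise our refund policy")
draft.approved, draft.reviewer = True, "c.smith"  # recorded after fact-checking
publish(draft)
```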
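On point 3, the standard defence is to avoid letting a core process call a third-party chatbot directly: wrap the call, sanity-check the reply, and fall back to something dependable when the provider's behaviour changes. The sketch below assumes a hypothetical `provider_chat` function standing in for a vendor SDK.

```python
# A minimal sketch of insulating a core process from a third-party
# chatbot. `provider_chat` is a hypothetical stand-in for a vendor SDK
# call whose behaviour may change or disappear without notice.

def provider_chat(prompt: str) -> str:
    raise RuntimeError("feature withdrawn by provider")  # simulate point 3

def scripted_fallback(prompt: str) -> str:
    # Dull but dependable: keeps the business process running.
    return "Our assistant is unavailable; a colleague will reply by email."

def answer(prompt: str) -> str:
    try:
        reply = provider_chat(prompt)
        if not reply.strip():  # sanity-check the reply before trusting it
            raise ValueError("empty reply")
        return reply
    except Exception:
        # Degrade gracefully rather than letting the process fail outright.
        return scripted_fallback(prompt)

print(answer("Can I change my delivery address?"))
```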

It is likely that chatbots are here to stay and will be increasingly important to organisations. However, it is equally likely that the road will be bumpy in the immediate future. This is a novel technology and experiments should be carried out with appropriate controls and oversight.  

Authors

Camilla Winlo

Head of Data Privacy
