
AI bots and EU data protection - the recent actions by the Italian DPA against ChatGPT

Once more we see EU data protection authorities slow down innovation. This time it’s AI and the Italian Garante (data protection authority). Of course, organizations have to comply with the GDPR and other AI rules. On the other hand, the GDPR is a European piece of legislation that requires uniform interpretation and guidance. The recent order by the Italian Garante against ChatGPT came as a bit of a surprise. Below, we summarize the order and the Garante’s subsequent to-do list, and comment on both. The Garante’s order affects not only ChatGPT, but also all organizations in the EU that use ChatGPT or allow their employees to use it:

  • You might know: The Garante’s Order

On March 31, 2023, the Italian data protection authority (“Garante”) issued a temporary ban on data processing in Italy by the generative AI tool ChatGPT (“ChatGPT”) from OpenAI LP (“OpenAI”) (English translation). The Garante criticized OpenAI mainly for

i. not having verified the age of ChatGPT users;

ii. providing insufficient information to the data subjects (users and those whose data is processed) about the data processing; and

iii. having "no legal basis justifying the massive collection and storage of personal data" to "train" the chatbot.

Unfortunately, the publicly available order is not very detailed and contains little reasoning.

  • Update: The Garante provides a to-do list

After heavy criticism, the Garante published a "to-do list" for OpenAI on April 12, 2023. Fulfilling it by April 30 will allow ChatGPT to be put back into operation in Italy. The Garante requests:

i. An information notice describing the modalities and logic of the data processing required for the operation of ChatGPT, as well as the rights of data subjects (users and non-users).

ii. This notice, to be placed on the website, must be presented to users before they complete their registration (or, after reactivation of ChatGPT, to already registered users), and they must also declare that they are over 18 years old.

iii. The legal basis for processing user data shall be either consent or legitimate interest.

iv. Measures to ensure the exercise of data subject rights by users and non-users.

v. Introduction of an age verification system for logging into the service.

vi. OpenAI, in consultation with the Garante, must develop a public awareness campaign by May 15 (English translation).

  • Comment

The Garante certainly has valid points in its criticism. However, at least based on what is publicly available, the Garante’s statement is too vague. It creates more uncertainty than clarity and acts as a blocker to innovation. The decision is cited around the globe, and Europe, once more, is seen as innovation-unfriendly. Given that ChatGPT is not an Italian-only phenomenon, it would have been welcome to see an EU-wide comment, e.g. by the EDPB.

The Garante’s to-do list tries to reopen the door to innovation, especially as it remains unspecific on many points. For example:

  • Transparency: Users of bots like ChatGPT must be informed about how their data is processed. The Garante does not address how data subjects whose data forms part of the training data are to be informed. Providing this information could be "impossible" within the meaning of Art. 14(5)(b) GDPR, given the vast number of data subjects and the amount of data involved. For these data subjects, too, a notice on the ChatGPT website could be a solution.
  • Justification for users: The legal basis for processing the personal data of the bot’s users depends on how the bot processes that data, in particular how it uses the data to learn and develop. Contract performance (Art. 6(1)(b) GDPR) will be a main pillar; consent may, however, be necessary in certain cases.
  • Justification for all other data subjects (training data): The operator of the bot will certainly have to rely on legitimate interest as the justification for processing the training data. This may be challenging where health data is processed. In all cases, however, there are good arguments that personal data which is publicly available on the internet (e.g. accessible without a login, paid subscription or "friend connection") can be processed without consent (Art. 6(1)(f) GDPR and Art. 9(2)(e) GDPR). The details depend on how and what personal data the bot collects as training data.
  • Age verification: Europe generally struggles with mechanisms for age verification. The rules are clear in theory, but there are no widely accepted solutions in practice. Given that minors in the EU can validly consent from the age of 16, the Garante’s 18-year requirement is open to challenge. The reason for it may lie in the civil-law age of majority for entering into contracts.

The to-do list was a good step after the Garante’s somewhat surprising order. Organizations should monitor the situation. Europe should try harder to support innovation and come up with EU-wide approaches to interpreting the EU-wide GDPR. It remains to be seen whether the EDPB will publish guidance that sends positive signals from Europe to the rest of the world. In this regard, the UK shows what such an approach could look like - see our blog: A “light touch” approach to AI regulation in the UK

