Italy Temporarily Blocks ChatGPT AI Software Over Data Breach Concerns

Italy's government privacy watchdog announced on Friday that it is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach. The move comes as the country investigates a possible violation of the European Union's strict data protection rules. The Italian Data Protection Authority said that it is taking provisional action "until ChatGPT respects privacy," including temporarily limiting the company from processing Italian users' data.
The AI chatbot, developed by US-based OpenAI, is powered by large language models that mimic human writing styles based on a vast amount of digital books and online writings they have ingested. However, concerns about privacy violations and potential data breaches have been mounting. Some public schools and universities around the world have blocked the ChatGPT website from their local networks over student plagiarism concerns.

The Italian watchdog's move is unlikely to affect applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft's Bing search engine. However, the agency's statement cited the EU's General Data Protection Regulation and noted that ChatGPT suffered a data breach on March 20 involving "users' conversations" and information about subscriber payments.

OpenAI earlier announced that it had taken ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users' chat history. The company said that it had contacted those who might have been impacted and that the number of users whose data was actually revealed to someone else was extremely low.

Italy's privacy watchdog said OpenAI lacks a legal basis to justify the "massive collection and processing of personal data" used to train the platform's algorithms, and that the company does not notify users whose data it collects. The agency also said ChatGPT can sometimes generate, and store, false information about individuals. Finally, it noted there is no system to verify users' ages, exposing children to responses "absolutely inappropriate to their age and awareness."

As concerns grow about the artificial intelligence boom, a group of scientists and tech industry leaders published a letter on Wednesday calling for companies such as OpenAI to pause the development of more powerful AI models until the fall to give society time to weigh the risks. San Francisco-based OpenAI's CEO, Sam Altman, announced this week that he is embarking on a six-continent trip in May to talk about the technology with users and developers. The itinerary includes a stop in Brussels, where European Union lawmakers have been negotiating sweeping new rules to limit high-risk AI tools, as well as visits to Madrid, Munich, London and Paris.

In response to the data breach and concerns about ChatGPT and similar AI chatbots, European consumer group BEUC called on EU authorities and the bloc's 27 member nations to investigate. BEUC said that it could be years before the EU's AI legislation takes effect, so authorities need to act faster to protect consumers from possible risks.