Italy has become the first Western country to block advanced chatbot ChatGPT.
The Italian data-protection authority said there were privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft.
The regulator said it would ban and investigate OpenAI "with immediate effect".
Here is the original legalese document from the Italian authority:
Italian speaker here. Skimming it, the short version is: it's unclear what happens to the personal data of users (with reference to the requirements of GDPR), and there are no age checks, nor any guarantee that the AI won't serve porn or other inappropriate material to younger users. So it doesn't meet the safety standards that Italian (and EU) law requires, and it has to be blocked unless those issues are fixed.
Good luck RLHF'ing your way out of this, OpenAI. The age-verification requirement may be Italy-specific, but the data-protection concerns are EU-wide, and other countries might follow suit.
Does anyone have any guesses what caused this ban?
From what I understand, the reason has to do with GDPR, the EU's data protection law. It's pretty strict: you can't store people's data without their active consent, you can't store it without a demonstrable need (and "I wanna sell it and make moniez off it" doesn't count), you can't keep it past the end of that need, and you always have to give people the right to have their data deleted whenever they ask.
Now, this puts ChatGPT in an awkward position. Suppose you have a conversation that includes some personal data, that conversation gets used to fine-tune the model, and then you want to withdraw your data... how do you do that? Could the model one day just spit out your personal information to someone else? Who knows?
It's the interpretability and black-box problem all over again: you can't guarantee the model won't violate the law, because you don't even know how the fuck it works.