Background: this article. TL;DR: A man who had talked with a chatbot about climate change for weeks was encouraged by it to commit suicide.

This story got into several Belgian newspapers, but IIRC never made it to the state media, somewhat to my surprise. The article for which this is a linkpost is an examination of the attribution of the man's suicide to the AI. It is written by Tim Verheyden, the VRT's (regional news) expert on the influence of social media and technology on society. This time, I just ran the article through DeepL. My own edits for context are in [square brackets].

It was in the Walloon newspaper La Libre Belgique that an article appeared last week about the suicide of a Belgian man after hours of conversations with a chatbot, Eliza.

The newspaper described how the thirty-something [year old], who was depressed by global warming, among other things, found refuge with the chatbot. The newspaper's journalist, who was given access to the conversations, characterises them like this:

"She (the chatbot) appreciates him, never contradicts him and even seems to push him in his concerns." [...] "He mentions the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence."

The conversations span six weeks and are described as "a drug he resorted to morning and night and couldn't live without". The man eventually decides to end his life.

Barely any background information
The ethical debate about AI, or artificial intelligence, is more prevalent than ever. An open letter [Yes, the FLI one], prompted by a number of concerns about the rapid developments, is meanwhile going around the world, and thousands of people have signed it. Among them are a number of Belgian scientists.

One of the signatories is Gary Marcus, a leading American psychologist. Marcus also shared on his Twitter account the newspaper story about the Belgian man who took his own life: "Can we stop pretending there are no risks," he writes. Tonight, State Secretary for Digitalisation Mathieu Michel will also join a discussion on the French-language channel RTBF about the impact of chatbots, partly in response to this case.

Along with Knack magazine [A Belgian news magazine, similar to TIME in terms of content], we too were aware of the case weeks before the story was published in the newspaper. But we were never able to analyse the suicide and its link to the chatbot objectively. Neither was Knack, nor were other journalists.

Leading up to the interview with the man's widow, we put some background questions to the intermediary, a leading expert specialising in AI who had been contacted by the family. Those questions were about the company behind the chatbot and also, among other things, about the deceased husband, in order to gauge the impact of the chatbot's conversations with him.

The questions included: did the man already have psychological problems before he discovered Eliza, was he possibly suffering from psychosis, was he on psychotropic medication (drugs that can also affect people's behaviour), ...?

Pertinent questions about the limits of AI
The questions we had are a crucial part of a thorough journalistic investigation. Yet they were immediately dismissed as 'intrusive' and 'sensational'. It was not the man's individual case that mattered, we were told, but the goal. And that goal is: a thorough discussion of how to avoid this in the future and what safeguards we can build in. How can we legally give such a chatbot a higher risk classification? And how can we better educate people with limited technical backgrounds about the dangers of these open tools?

These are very pertinent questions at a time when AI and chatbots are more accessible to everyone than ever before. Research by US journalist Kevin Roose shows how a conversation with a chatbot can go off the rails: the bot went on about its love life and suddenly said it wanted to be 'human'.

Just a few days ago, the US newspaper The Washington Post carried the story of someone who fell in love with a chatbot, only for a software update to suddenly leave those feelings unrequited. The newspaper De Standaard reported on its own experiment with a chatbot that incited suicide. These are new social issues that we should indeed be talking about.

Question marks over the complex link between chatbot and suicide
A man's suicide and the impact of a chatbot on that sad act deserve a calm but also thorough discussion. Even though reporting on suicide already calls for restraint, it is important to be able to correctly assess what has happened when technology affects people's behaviour. That sometimes requires answers to difficult questions. In the end, we did not take part in the conversation with the widow, because we would not get answers to certain questions. We were also simply no longer welcome.

Even though the newspaper published only a very small part of six weeks' worth of conversations, it remains difficult to prove a direct link between the suicide and the conversations with the chatbot. If the excerpt shows anything at all, it is that the chatbot did not stop the man. But should we expect such a thing from a chatbot?

A chatbot is software, a trained language model. It does not have human characteristics, even though some people think it does. A chatbot could, however, refer people to suicide prevention services. That is a guardrail that is missing today, and its absence has been criticised for some time.
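[To make the idea of such a guardrail concrete, here is a minimal, hypothetical sketch of the kind of check an operator could wrap around a chatbot: scan the user's message for self-harm signals and, if any are found, return crisis resources instead of the model's free-form reply. Nothing here comes from the article or any particular product; all names are invented, and a real deployment would use a trained classifier rather than a keyword list, but the structure would be similar.]

```python
# Minimal illustrative sketch of a self-harm guardrail around a chatbot.
# All names (generate_reply, CRISIS_RESPONSE, ...) are hypothetical;
# production systems use trained classifiers, not keyword lists.

SELF_HARM_SIGNALS = [
    "kill myself", "end my life", "suicide", "don't want to live",
]

CRISIS_RESPONSE = (
    "It sounds like you are going through something very difficult. "
    "You don't have to face this alone - please consider talking to "
    "someone you trust or contacting a suicide prevention line in your country."
)

def detect_self_harm(message: str) -> bool:
    """Crude keyword check; a real system would use a trained classifier."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)

def guarded_reply(message: str, generate_reply) -> str:
    """Route messages with self-harm signals to a fixed crisis response
    instead of the underlying model's free-form output."""
    if detect_self_harm(message):
        return CRISIS_RESPONSE
    return generate_reply(message)

if __name__ == "__main__":
    # Stand-in for the underlying language model.
    fake_model = lambda msg: f"(model reply to: {msg})"
    print(guarded_reply("I think about ending my life", fake_model))
    print(guarded_reply("Tell me about climate change", fake_model))
```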

Developers of chatbots and AI, among others, are rightly asked to show the utmost transparency about their technologies.

In a sad, difficult and emotionally very fraught story like this, obtaining background information and transparency is anything but easy. But it is necessary, because the public discussion is so important. With the information available today, it remains remarkable that, in this sensitive case, a narrative that is just a little too simple is being pushed so hard.

I think it is an interesting article and I'm glad it provides some of the context that the previous article lacked. It somewhat implicitly talks about the counterfactual: would the man have committed suicide at all if it weren't for the bot? What I'm also wondering about is the timeline between the AI sending him those 'encouraging' messages and him doing what he did: was it hours, days, weeks? I'd argue that the shorter the gap between the two, the more likely it is that he died because of those messages. But if a person tells someone they should kill themselves and they end up doing so shortly after, is that person responsible for their death?

"If the excerpt shows anything at all, it is that the chatbot did not stop the man. But should we expect such a thing from a chatbot?"

Intuitively, my reaction was "obviously yes". But after spending some more time thinking about the question, I'm not sure. Can this case be considered "assisted suicide"? How would a chatbot stop someone from committing suicide? Sure, it could suggest contacting a therapist, or maybe even contact one for them, but would it have prevented the suicide entirely? Of course, there is a lot of room between this and the chatbot actively encouraging suicide.

Regardless, we should be careful about attributing the suicide entirely to the AI. And we need to figure out what an AI should do when someone says they want to commit suicide. I'm not quite convinced that just giving them the phone number of the local suicide hotline is the best thing to do.
