TL;DR: We collaborated with Reuters on an article and just released a paper on the impact of AI scams on elderly people.
Fred Heiding and I have spent multiple years studying how AI systems can be used for online fraud and scams. A few months ago, we got in touch with Steve Stecklow, a journalist at Reuters. We wanted to report on how scammers use AI to target people, with a focus on the elderly. Many individual stories have described how elderly people are frequent victims of scams and how AI has made that situation worse.
With Steve, we performed a simple study. We contacted two senior organizations in California and signed up some of their members. We tried different methods to...
Great discussion. I'd add that the question is context-dependent and somewhat ambiguous. Notably, our work shows that every AI model we tested conflicts with at least three of the eight prohibited AI practices outlined in the EU AI Act.
It's also worth noting that the only real difference between sophisticated phishing and marketing can be the sender's intent, which makes mitigation difficult. Measures by AI companies to prevent phishing might restrict legitimate use cases too heavily to be worthwhile.