Background

I recently received this message from a church that I used to be affiliated with:

"There is an email scam going out to all of [CHURCH_NAME]. It is supposedly from Pastor [John Doe] asking for your assistance in purchasing gift cards (or anything else) and requests asking for confidentiality. They are apparently going down the alphabet. We have so far received three reports and have only reached "B". There may also be a picture of Pastor [John] (taken from the website) attached to the email so that it really looks official.

The only valid church email addresses that Pastor [John] uses are:


jdoe@churchdomain.org
pastor@churchdomain.org

If you do receive such an email, please do not respond!! Promptly report it to https://support.google.com/mail/contact/abuse and also let us know in the office that you were hit. If you are unsure of an email please do not respond and immediately call the church office to verify. If it is afterhours, leave a voicemail on the church phone and I will call you back the next morning.

If you ever get a questionable email from any of the church staff, elders or deacons please call the church office to report and verify."

I've been expecting increasingly personalized scams in 2023, thanks to ChatGPT and similar models.

Hypothesis

Small churches, like this one, seem like an ideal target for such scams, due to the combination of:

  • extremely high trust environments
  • an older audience that is less adept at picking up on red flags for scam messages
  • largely unquestioning acceptance of messages from authority figures

Has anyone else on LessWrong seen what could be the beginning of more automated scam attempts?

If so, in what context did you see it, how credible/sophisticated was the attempt, and were there obvious AI giveaways inside the attempt?

I'm unsure whether this is an early false alarm, or the tip of a growing iceberg.


 

[anonymous] · 1y

So here's something super interesting.

Most scams deliberately include an obvious "tell" early in the message that it's a scam.

Why do they do this?  Because every person who falls for the scam initially will have to contact a scammer, who then sends further messages, makes phone calls, etc., to pull off the scam.  This requires labor hours to deal with each potential victim.

So by putting in an obvious tell, the only people who consume the scammer's time are those with some cognitive impairment (often age-related) that causes them to still think the scam is legit.  So they are easier to rob.

That is really interesting. To me, this implies that as costs for scammers fall, the threshold for a useful level of gullibility is about to drop dramatically, given how much cheaper server time is than human time (error bars here, since I don't actually know how much cheaper GPT-X calls will be than the time of an English-speaking human in a developing nation). If they are indeed 10x cheaper, that would likely lead to scams losing the obvious "tell".
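
For concreteness, here is a rough back-of-envelope sketch of that cost comparison. Every number in it (the hourly wage, minutes per reply, token count, and API price) is a made-up placeholder for illustration, not a real figure:

```python
# Back-of-envelope sketch: cost per follow-up reply to a responding victim.
# All numbers below are illustrative assumptions, not measured prices.

human_hourly_wage = 3.00        # assumed USD/hour for a human scammer's labor
minutes_per_reply = 15          # assumed time to hand-write one follow-up exchange
human_cost_per_reply = human_hourly_wage * minutes_per_reply / 60

llm_tokens_per_reply = 1_500    # assumed prompt + completion tokens per follow-up
llm_cost_per_1k_tokens = 0.002  # assumed API price, USD per 1,000 tokens
llm_cost_per_reply = llm_tokens_per_reply / 1_000 * llm_cost_per_1k_tokens

print(f"human follow-up: ${human_cost_per_reply:.3f} per reply")
print(f"LLM follow-up:   ${llm_cost_per_reply:.4f} per reply")
print(f"ratio:           ~{human_cost_per_reply / llm_cost_per_reply:.0f}x cheaper")
```

Under these assumed numbers the ratio comes out far above 10x, but the real ratio depends entirely on actual wages and API prices, which I haven't checked.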

[anonymous] · 1y

Well, more exactly, the tell would still be there; it would just become less obvious, since GPT-n isn't going to be free either.

Today or yesterday there were four spam messages from a user (now deleted) called huntshaded. They were clearly composed by a language model, likely prompted with text from the post they were commenting on. However, they were no more convincing to me than any other spam.

I have not seen any increase in spam quality or quantity, nor have I spoken to anybody who says they have.

I am aware of the fear that the current generation of LLMs could make social engineering attacks much cheaper and more effective, but so far I have not encountered so much as a proof of concept.

Me neither, @kithpendragon. I've seen a handful of things in the wild (mainly social media accounts, not scammers) that seem like they could be part of a mostly-automated content pipeline, but no compelling proof of concept or project either. Thanks for the data point!