Yet another report about malicious email attacks, this time using USAID's systems.

I would think that language-processing AI/ML tools -- like what I understand GPT-3 to be (which could be a complete mischaracterization) -- seem like a good area of application. But I've never heard any mention of such an application.

However, I know so little about this area that it might be both the wrong level of analysis for mitigating these threats and so costly that the marginal gain over what is already in place (assuming such tools play no current role) just doesn't make good sense.

Still, given how long these types of attacks have been used -- and, by implication, how well they seem to continue to work -- I wonder if such an approach might catch more of them before a careless or unwitting user reads the email or, worse, follows the links. In other words, could such an approach alert the user to more subtle language-based clues than they have the attention, capacity, or knowledge to pick up on in their day-to-day routine?
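To make that concrete, here is a minimal sketch of what flagging language-based clues could look like. It uses a simple bag-of-words classifier rather than anything GPT-3-sized, and the training examples, `flag_email` helper, and threshold are all illustrative assumptions, not a working defense:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data -- a real system would need a large labeled corpus
# of phishing and legitimate emails. These examples are illustrative only.
emails = [
    "Urgent: your account has been suspended, click here to verify now",
    "Your invoice is overdue, download the attached file immediately",
    "Final notice: confirm your password or lose access today",
    "Attached are the meeting notes from Tuesday's planning session",
    "Lunch is on me Friday if you can review the draft by then",
    "The quarterly report is ready for your review in the shared drive",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-like language, 0 = benign

# TF-IDF over words and word pairs, fed into a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

def flag_email(text, threshold=0.7):
    """Return a warning if the phishing score exceeds the (arbitrary) threshold."""
    score = model.predict_proba([text])[0][1]
    if score >= threshold:
        return f"WARNING: phishing-like language (score={score:.2f})"
    return f"no flag (score={score:.2f})"

print(flag_email("Please verify your account immediately or it will be closed"))
```

A real deployment would sit in the mail pipeline rather than in a script, but the shape of the idea -- score the text, warn the user above some threshold -- is the same.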

1 Answer

Google integrated their AI systems into their search and email products a few years back, so at some level I believe it's already been done, just not as a stand-alone client-side app.
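For what it's worth, a stand-alone client-side version isn't hard to picture. A hypothetical sketch, reusing the illustrative `flag_email` helper from the snippet above; the host, credentials, and folder name are placeholders:

```python
import email
import imaplib

def extract_text(msg):
    """Pull the first text/plain body out of a message (simplified)."""
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                return part.get_payload(decode=True).decode(errors="ignore")
        return ""
    return (msg.get_payload(decode=True) or b"").decode(errors="ignore")

def scan_inbox(host="imap.example.com", user="me", password="secret"):
    """Score every unread message in INBOX with the toy classifier."""
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    conn.select("INBOX", readonly=True)  # readonly: don't mark messages seen
    _, data = conn.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        print(msg["Subject"], "->", flag_email(extract_text(msg)))
    conn.logout()
```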