LESSWRONG
MessengerAI's Shortform

by MessengerAI
3rd Aug 2025
1 min read
I believe the existential risk from misaligned AI is extremely high because the AI arms race is a race to the bottom, and attempts at regulation will likely fail given the incentive not to fall behind competitors. Misaligned AI could also come from small unsupervised groups, other countries, or even a larger company's failed R&D project. Alignment requires everyone to agree that a certain methodology works and is the best path forward, which is unlikely.

So why is there so much focus on alignment and regulation, and so little on building credible defenses? Defensive AIs designed to monitor and counteract misaligned AI in digital space, along with more investment in cybersecurity and infrastructure hardening, may be a more valuable prospect for investment. I think someone developing a misaligned AI is not a matter of if, but when. Better to prepare than to hope everyone will simply agree on alignment protocols.
