If you already have lots of people's attention (for instance because you have a social media following or high-status credentials) and you're a US/UK citizen, your best available plan might be to run a political campaign with an AI pause as the agenda.
You’re unlikely to win the election, but it’ll likely shift the Overton window and give people hope that change is possible.
For most people, having a next step after “ok I read the blogposts and I’m convinced, now what?” is important. Voting or campaigning for you could be that next step.
I am glad you at least recognise the benefits of open source.
My preference order is:
As you say, I think open source today will at least help build the proof required to convince everyone to ban tomorrow.
I think we should go further: instead of hoping a benevolent leader will livestream the lab by choice, we should incentivise whistleblowers and cyberattackers to get the data out by any means necessary.
See also: Whistleblower database, Whistleblower guide
I like that you're discussing the question of purpose in a world where intelligences way smarter than you are doing all the useful knowledge work, and you are useless to your civilisation as a result. The frontier intelligences might have purpose (or they might not even care if they do), but you might not.
The post spends most of its time arguing why ASI is inevitable and only one final paragraph arguing why ASI is good. If you actually believed ASI was good, you would probably spend most of the post arguing that. Arguing that ASI is inevitable seems exactly like the sort of cope you would reach for if you thought ASI was bad, knew you were doing a bad thing by building it, and had to justify it to yourself.
Try the gpt-5-pro API in the playground; gpt-5 is worse. Use the API, not the consumer frontend.
Trajectory 3 is the obvious natural conclusion. He who controls the memes controls the world. AI-invented religions and political ideologies are coming soon. There are already billions of dollars invested in propaganda; that money will now get invested here.
I support a ban on AI research to prevent this outcome.
A world with very cheap materials and energy, but not cheap intelligence, will still have conflict.
People will still have a) differences in aesthetics and b) differences in their best-guess answers to moral and philosophical questions. They will almost certainly still try to accumulate all available resources in service of their ideology; no finite amount of resources will satisfy people. Risking even catastrophic outcomes (like nuclear war) could still be on the table.
Cheap intelligence is what allows you to start resolving the questions that lead to conflict in the first place, for instance by running gazillions of world simulations on gazillions of mind configurations.
I support more advancements in cyberhacking capabilities so that companies and govts are incapable of keeping secrets. Secrecy enables them to act against the wishes of the majority to an extent they otherwise couldn't.