Already told you yesterday, but great idea! I'll definitely be a part of it, and will try to bring some people with me.
Vignette.
The next task to fall to narrow AI is adversarial attacks against humans. Virulent memes and convincing ideologies become easy to generate on demand. A small number of people might see what is happening and try to shield themselves from dangerous ideas. They might even develop tools that auto-filter web content. Most of society becomes increasingly ideologized, with more decisions made on political rather than practical grounds. Educational and research institutions fill up with ideologues who crowd out real research. There are some wars. The lines of division run between people and their neighbours, so the wars are small-scale civil wars.
Researchers have been replaced with people parroting the party line. Society struggles to produce chips of the same quality as before. Depending on how far along renewables are, there may be an energy crisis. Ideologies targeted at baseline humans are no longer as appealing. The people who first developed the ideology-generating AI didn't share it widely, and the tech to AI-generate new ideologies is lost.
The clear scientific thinking needed for major breakthroughs has been lost. But people can still follow recipes, and make rare minor technical improvements to some things. Gradually, ideological immunity develops. The beliefs are still crazy by a truth-tracking standard, but they are crazy beliefs that imply relatively non-detrimental actions. Many years of high but stagnant tech pass, until the culture is ready to re-embrace scientific thought.
Oh, I didn't realize there was this event yesterday; I wrote an AI-safety-inspired short story independently 😅 If anyone would like to comment, feel free to leave me a GitHub issue:
https://peter.hozak.info/fiction/heat_death/prologue
Hey, I'm looking for a way to reach out (:
Would love to chat about getting started in writing essays like "What 2026 Looks Like", which was truly inspiring for me.
Is there any way to get in touch with one of you?
Thanks!
Hey! Exciting! How about you go ahead and write your first stab at it, and then post it online? You could then make a comment here or on What 2026 Looks Like linking to it.
AI Impacts is organizing an online gathering to write down how AI will go down! For more details, see this announcement, or read on.
Plan
1. Try to write plausible future histories of the world, focusing on AI-relevant features. (“Vignettes.”)
2. Read each other’s vignettes and critique the implausible bits: “Wouldn’t the US government do something at that point?” “You say twenty nations promise each other not to build agent AI; could you say more about why and how?”
3. Amend and repeat.
4. Better understand your own views about how the development of advanced AI may go down.
(5. Maybe add your vignette to our collection.)
This event will happen over two days, so you can come Friday if this counts as work for you, Saturday if it counts as play, or both if you are keen. RSVPing for particular days is somewhat helpful; let us know in the comments.
Date, Time, Location
The event will happen on Friday the 25th of June and Saturday the 26th.
It’ll go from 10am (California time) until probably around 4pm both days.
It will take place online, in the LessWrong Walled Garden. Here are the links to attend:
Friday
Saturday
Facebook event
FAQ
> Do I need literary merit or creativity?
No.
> Do I need to have realistic views about the future?
No, the idea is to get down what you have and improve it.
> Do I need to write stories?
Nah, you can just critique them if you want.
> What will this actually look like?
We’ll meet up online, discuss the project and answer questions, and then spend chunks of time (online or offline) writing and/or critiquing vignettes, interspersed with chatting together.
> Have you done this before? Can I see examples?
Yes, on a small scale. See here for some resulting vignettes. We thought it was fun and interesting.
> Any advice on how to get started?
We have lots! We can give it in the comments here, or on the day itself; just ask. You may be interested in this random future generator.
This event is co-organized by Katja Grace and Daniel Kokotajlo. Thanks to everyone who participated in the trial Vignettes Day months ago. Thanks to John Salvatier for giving us the idea. This work is supported by the Center on Long-Term Risk, my employer.