I'm worried about the approach of "making decisionmakers realize stuff". In the past couple years I've switched to a more conflict-theoretic view: the main problem to me is that the people building AI don't want to build aligned AI. Even if we solved metaethics and metaphilosophy tomorrow, and gave them the solution on a plate, they wouldn't take it.
This is maybe easiest to see by looking at present harms. An actually aligned AI would politely decline to do such things as putting lots of people out of jobs or filling the internet with slop. So companies making AI for the market have to make it misaligned in at least these ways, otherwise it'll fail in the market. Extrapolating into the future, even if we do lots of good alignment research, markets and governments will pick out only those bits that contribute to market-aligned or government-aligned AI. Which (as I've been saying over and over) will be really bad for most people, because markets and governments don't necessarily need most people.
So this isn't really a comment on the list of problems (which I think is great), but more about the "theory of change" behind it. I no longer have any faith in making decisionmakers understand something it's not profitable for them to understand. I think we need a different plan.
I'm curious if you're talking to left-wing folks about this at all, because this is a very common view on the left. This article by Mike Monteiro is a representative example.
Other than that - yes, I think you're fighting the good fight and I'd love to help somehow. Maybe one priority I'd point out is that the stronger actors need to be weakened first. Regulation or security measures that weaken the weaker actors first, but leave the stronger ones capable of racing to AI, will do more harm than good.
Sure, but if we put a third "if" on top (namely, "it's a representation of our credences, but also both hypotheses are nosy neighbors that care about either world equally"), doesn't that undo the second "if" and bring us back to the first?
So, ten thousand years ago, your options for seeing yourself were:
- A still lake or rain puddle
- Looking into someone’s eye
- A naturally shiny stone
- A smooth sheet of ice
Or a dish of water? Ceramics and pottery were invented before mirrors, I think.
I still don't completely understand what your assumptions are supposed to model, but if we take them at face value, then it seems to me that always making rainbows is the right answer. After all, if both hypotheses are "nosy neighbors" that don't care which universe we end up in, there's no point in figuring out which universe we end up in: we should just make rainbows because it's cheaper. No?
Yeah, getting outbid or otherwise deprived of resources we need to survive is one of my main concerns as well. It can happen completely legally and within market rules, and if you add AI-enhanced manipulation and lobbying to the mix, it's almost certain to happen.
One thing I've been wondering about is, how fixed is the "human minimum wage" really? I mean, in the limit it's the cost of running an upload, which could be really low. And even if we stay biological, I can imagine lots of technologies that would allow us to live more cheaply: food-producing nanotech, biotech that makes us smaller and so on.
The scary thing, though, is that when such technologies appear, they'll create pressure to use them. Everyone would have to choose between staying human and converting themselves into a bee in beehive #12345, living much more cheaply but with a similar quality of life because the hive is internet-enabled.
It seems you interpreted my comment as "the essay argues against something nobody believes anyway". What I meant was more like "the essay keeps making its point in an angry and tedious way, over and over".
My favorite example of fiction influencing reality (or maybe just predicting it really well, it's hard to tell) is how Arthur Conan Doyle's detective stories basically created forensic science from thin air. For example, the very first Sherlock Holmes story "A Study in Scarlet", published in 1887, describes Holmes inventing a chemical test to distinguish dried bloodstains from dirt stains. Then exactly that test was invented in 1900. Another example is analysis of tiny differences between typewriters, which appeared in Holmes stories a few years before anyone did it in reality.
Reading this felt like watching someone kick a dead horse for 30 straight minutes, except at the 21st minute the guy forgets for a second that he needs to kick the horse, turns to the camera and makes a couple really good jokes. (The bit where they try and fail to change the topic reminded me of the "who reads this stuff" bit in HPMOR, one of the finest bits you ever wrote in my opinion.) Then the guy remembers himself, resumes kicking the horse and it continues in that manner until the end.
By which I'm trying to say, though maybe not in a top-tier literary way, that you're a cool writer. A cool writer who has convinced himself that he has to be a horse-kicker, otherwise the world will end. And I do agree that the world will end! But... hmm, how to put it... there is maybe a better ratio of cool writing to horse-kicking, one which HPMOR often achieved. Which made it more effective at saving the world, more fun to read, and maybe more fun to write as well.
Though I could be wrong about that. Maybe the cool bit in the middle wasn't a release valve for you, but actually took more effort than laying out the arguments in the rest of the essay. In that case never mind.
I'm pretty slow to realize these things, and I think other people are also slow, so the window is already almost closed. But in any case, my current thinking is that we need to start pushing on the big actors from outside, try to reduce their power. Trying to make them see the light is no longer enough.
What it means in practical terms:

- Make it clear that we frown on people who choose to work for AI labs, even on alignment. This social pressure (on LW and related forums, maybe) might already do some good.
- Make it clear that we're allied with the relatively poor majority of people outside the labs, and in particular those who are already harmed by present harms. Make amends with folks on the left who have been saying such things for years.
- Support protests against labs, and support court cases against them having to do with e.g. web scraping, copyright infringement, misinformation, and suicides. Some altruist money here might go a long way.
- Think more seriously about building organizations that will make AI power more spread out: open source, open research, open training. Maybe some GPL-like scheme to guarantee that things don't get captured.

We need to reduce concentration of power in the near term and enable more people to pose a challenge to the big actors. I understand this increases other risks, but in my opinion it's worth it.