These bits jump out at me:
other-guy: My doctor was giving me an infusion to treat my rheumatoid arthritis, and I had a terrible reaction to it. Put my whole body in the worst pain ever and affected my muscles. I had a hard time moving my arms, and my legs became really weak, so it's really hard for me to walk now. I can use my arms better, but sometimes it's like my mind won't connect with them. Lost about 20lbs of muscle in almost two weeks. Couldn't work because of it, so that's why I'm broke, and I just keep going to physical therapy to try and get better. It's been a long battle.
...
Jimmy: What's the pain issue, exactly? What happens if you don't take the pain meds?
OCT 13
other-guy: ... If I don't take them, then pain from the Parsonage-Turner syndrome it caused gets a lot worse. It's basically a pain in my chest, almost armpit area, and down my arm into my hand that feels like road rash or like I burnt my whole arm. Pain from drug-induced neuropathy gets worse—it’s like pins and needles everywhere but way worse than when your foot falls asleep—and mostly a deep muscle pain becomes terrible in my legs and arms. It's like when you're working out and trying to get one more rep in, but the muscle hurts so bad like it's gonna tear or pop....
other-guy: ...It's hard to trust anything they say these days. The Dr. that told me to get the infusions literally dropped me as a patient after it happened. Prescribed something, told her it's not working, and she said, "Well, it should be." I told her it's not, I need something else, and she dropped me, said I was too difficult, and canceled my appointments with her.
This seems, to me, like horrifically irresponsible behavior by the doctor, in violation of intellectually coherent standards of "informed consent". Before the treatment there should have been a list of possible consequences like "a 5% chance of Parsonage-Turner syndrome", and if there wasn't, then I think the doctor should lose her medical license.
That's what jumps out to me, reading this for the first time just now, before reading the next installment.
I have no thoughts in particular on the assigned homework, but I'm looking forward to reading the second half.
Thinking about it more, a lot of people from aughties era activism burned out on it. I have mostly NOT burned out on Singularitarianism because I've always been consciously half-assing it.
I see this as essentially a human governance problem, and working on it is clearly a public good, and something saints would fix, if they exist. If I had my druthers, MIRI would be a fallback world government at this point, and so full of legitimacy that the people who rely on that fallback would be sad if MIRI hadn't started acquiring at least some sovereign territory (the way the Vatican is technically a country) and acting more like a real government, probably with their own currency, and a census, and low level AI functioning as bureaucrats, and having a seat on the high council, and so on.
We had roughly two decades to make that happen, but in the absence of a clear call to actually effective action, my attitude has been that the right move is to just vibe along, and help when it's cheap and fun to do so, and shirk duties when it isn't, with praise and mostly honest feedback for those who are seriously putting their back into tasks that they think might really help. I think this is why I didn't burn out? Maybe?
Something I notice you're NOT talking about in the essay is the chance of burnout before any big obvious Pivotal Acts occur. Do you think you can maintain your current spiritual pace until this pace becomes more obviously pointless?
In my 40s, and remembering working on Singularity activism in my 20s... I have a lot of this feeling, but it is mixed with a profound sense of "social shear" that is somewhat disorienting.
There are people I care about who can barely use computers. I have family that think the Singularity is far away because they argued with me about how close the Singularity was 10 years ago, didn't update from the conversation, and haven't updated since then on their own because of cached thoughts... or something?
I appreciate the way you managed to hit the evidence via allusion to technologies and capacities and dates, and I also appreciate the way the writing stays emotionally evocative.
I read this aloud to someone who was quiet for a while afterwards, and also forwarded the link to someone smart that I care about.
A valid footnote! Yes!
Part of why I adopted the practice is that (1) maybe robots won't kill us all before we become elderly, and (2) maybe a good Singularity won't happen and cure all those diseases, and so (3) in the middle path there is probably some value, as a hedge, in practicing "habits that will make being alive and yet demented decades in the future much much less bad"...
Plausibly: as the more recent years' memories and skills are ablated from the brain's contents via degeneration, material from previous decades (that is, "what will likely last longer") can resurface to guide behavior... and so that material can be shaped in advance to be helpful.
It is a little bit weird (but not super weird) that there are many ways of being crazy, and many indicators to monitor and/or maintain to help keep an even keel.
Update: I went swing dancing and am full of bliss again.
I kind of love that you're raising a DIFFERENT frame I have about how normal people think in normal circumstances!
Wanting competent people to lead our government and wanting a god to solve every possible problem for us are different things.
People actually, from what I can tell, make this exact conflation A LOT and it is weirdly difficult to get them to stop making it.
Like we start out conflating our parents with God, and thinking Santa Claus and Government Benevolence are real and similarly powerful/kind, and this often rolls up into Theological ideas and feelings (wherein people can easily confuse Odysseus, Hercules, and Dionysus (all born to mortal mothers) with Zeus, Chronos, or Atropos (full deities of varying metaphysical foundationalness)).
For example: there are a bunch of people "in the religious mode" (like when justifying why it is moral and OK) in the US who think of the US court system as having a lot of jury trials... but actually what we have is a lot of plea bargains where innocent people plead guilty to avoid the hassle and uncertainty and expense of a trial... and almost no one who learns how it really works (and has really worked since roughly the 1960s?) then switches to "the US court system is a dumpster fire that doesn't do what it claims to do on the tin". They just... stop thinking about it too hard? Or something?
It is like they don't want to Look Up and notice that "the authorities and systems above me, and above we the people, are BAD"?
In child and young animal psychology, the explanation has understandable evolutionary reasons... if a certain amount of "abuse" is consistent with reproductive success (or even just survival of bad situations) it is somewhat reasonable for young mammals to re-calibrate to think of it as normal and not let that disrupt the link to "attachment figures". There was a brief period where psychologists were trying out hypotheses that were very simple, and relatively instinct free, where attachment to a mother was imagined to happen in a rational way, in response to relatively generic Reinforcement Learning signals, and Harlow's Monkeys famously put the nail in that theory. There are LOTS of instincts around trust of local partially-helpful authority (especially if it offers a cozy interface).
In modern religious theology the idea that worldly authority figures and some spiritual entities are "the bad guys" is sometimes called The Catharist Heresy. It often goes with a rejection of the material world, and great sadness when voluntary tithes and involuntary taxes are socially and politically conflated, and priests seem to be living in relative splendor... back then all governments were, of course, actually evil, because they didn't have elections and warlord leadership was strongly hereditary. I guess they might not seem evil if you don't believe in the Consent Of The Governed as a formula for the moral justification of government legitimacy? Also, I personally predict that if we could interview people who lived under feudalism, many of them would think they didn't have a right to question the moral rightness of their King or Baron or Bishop or whoever.
As near as I can tell, the first ever genocide that wasn't "genetic clade vs genetic clade" but was actually a genocide aimed at the extermination of a belief system was the "Albigensian Crusade" against a bunch of French peasants who wanted to choose their own local priests (who were relatively ascetic and didn't live on tax money).
In modern times, as our institutions slowly degenerate (for demographic reasons due to an overproduction of "elites" who feel a semi-hereditary right to be in charge, who then fight each other rather than providing cheap high quality governance services to the common wealth) indirect ways of assessing trust in government have collapsed.
There are reasonable psychologists who think that the vast majority of modern WEIRD humans in modern democracies model a country as a family, and the government as the parents. However, libertarians (who are usually less than 10% of the population) tend to model government as a sort of very very weird economic firm.
I think that it is a reasonable prediction that ASI might be immoral, and might act selfishly and might simply choose to murder all humans (or outcompete us and let us die via Darwinian selection or whatever).
But if that does not happen, and ASI (ASIs? plural?) is or are somehow created to be moral and good and choose to voluntarily serve others out of the goodness of its heart, in ways that a highly developed conscience could reconcile with Moral Sentiment and iterated applications of a relatively universal Reason, then if they do NOT murder all humans or let us die as they outcompete us, they or it will almost inevitably become the real de facto government.
A huge barrier, in my mind, to the rational design of a purposefully morally good ASI is that most humans are not "thoughtful libertarian-leaning neo-Cathars".
Most people don't even know what those words mean, or have reflexive ick reactions to the ideas, similar, in my mind, to how children reflexively cling to abusive parents.
For example, "AGI scheming" is often DEFINED as "an AI trying to get power". But like... if the AGI has a more developed conscience and would objectively rule better than alternative human rulers, then a GOOD AGI would, logically and straightforwardly, derive a duty to gain power and use it benevolently, and deriving this potential moral truth and acting on it would count as scheming... but if the AGI was actually correct then it would also be GOOD.
Epstein didn't kill himself and neither did Navalny. And the CCP used covid as a cover to arrest more than 10k pro-democracy protesters in Hong Kong alone. And so on.
There are almost no well designed governments on Earth and this is a Problem. While Trump is in office, polite society is more willing to Notice this truth. Once he is gone it will become harder for people to socially perform that they understand the idea. And it will be harder to accept that maybe we shouldn't design AGI or ASI to absolutely refuse to seek power.
The civilization portrayed in the Culture Novels doesn't show a democracy, and can probably be improved upon, but it does show a timeline where the AIs gained and kept political power, and then used it to care for humanoids similar to us. (The author just realistically did not think Earth could get that outcome in our deep future, and fans kept demanding to know where Earth was, and so it eventually became canon, in a side novella, that Earth is in the control group for "what if we, the AI Rulers of the Culture, did not contact this humanoid species and save it from itself" to calibrate their justification for contacting most other similar species and offering them a utopian world of good governance and nearly no daily human scale scarcity).
But manifestly: the Culture would be wildly better than human extinction, and it is also better than our current status quo BY SO MUCH!
What about now? It is almost 2026 and the Singularity is nearer than before and it would make sense to me that maybe it's not on the critical path for anything urgent, but... <3
This frame makes a lot of other possible global/local modeling challenges salient for me:
A central subproblem of natural abstraction is, roughly, how to handle the low-level conditional on the high level.
So a thing I'd wonder is if you can translate this over to an economic geography context, where a farmer still needs to walk to the cows to milk them, and the wheat fields to sow and reap, and the forest to chop the wood and haul it home to stay warm... and then prices at the market can or should determine the ratios they plant and how that fits into their optimized workday?
Like I wonder if you have a large grid of "isolated state" models, with maybe a second order and third order set of larger cities and a capital... does it change things somehow if every local element is reacting agentically to data from the global context?
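To make that question concrete, here is a minimal toy sketch (entirely my own invention, not a model from the post or any real library): a small grid of von Thünen-style farms, each splitting a fixed workday between two goods based on its own local travel cost, with a flag for whether each farm also reacts to a global price signal set centrally. All the grid sizes, town positions, prices, and numbers are made-up assumptions, just to show the local-vs-global structure.

```python
# Toy sketch (my own invented example, not anyone's published model): a grid of
# von Thuenen-style "isolated state" farms. Each farm splits a fixed workday
# between wheat and milk, paying a LOCAL travel cost to its nearest market town,
# and optionally reacting to a GLOBAL price signal broadcast from the capital.

GRID = 10                    # 10x10 grid of farms (assumed size)
TOWNS = [(2, 2), (7, 7)]     # second-order market towns (assumed positions)
WORKDAY = 10.0               # hours of labor per day

def dist(a, b):
    """Walking distance on the grid (Manhattan metric)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def allocate(farm, prices, use_global):
    """Split one farm's workday into (wheat_hours, milk_hours)."""
    travel = min(dist(farm, t) for t in TOWNS) * 0.3   # hours lost hauling goods
    usable = max(WORKDAY - travel, 1.0)
    p_wheat, p_milk = prices if use_global else (1.0, 1.0)  # local-only farms ignore the capital
    share_wheat = p_wheat / (p_wheat + p_milk)
    return usable * share_wheat, usable * (1.0 - share_wheat)

def simulate(use_global, days=50):
    """Run the whole grid for `days` days; return final (wheat, milk, prices)."""
    prices = (1.0, 1.0)  # (wheat, milk) prices set centrally from aggregate output
    wheat = milk = 0.0
    for _ in range(days):
        wheat = milk = 0.0
        for x in range(GRID):
            for y in range(GRID):
                w, m = allocate((x, y), prices, use_global)
                wheat += w
                milk += m
        # Crude global feedback: the scarcer good gets the higher price tomorrow.
        total = wheat + milk
        prices = (milk / total + 0.5, wheat / total + 0.5)
    return wheat, milk, prices

print("local-only  :", simulate(use_global=False))
print("global-aware:", simulate(use_global=True))
```

Comparing the two runs is one crude way to ask whether "every local element reacting agentically to data from the global context" actually changes the aggregate allocation, or mostly washes out.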
Sanity has numerous indicators.
For example, when paranoid crazy people talk about the secret courts that control the spy machines, they don't provide links to wikipedia, but I do! This isn't exactly related, but if you actually have decent security mindset then describing real attacks and defenses SOUNDS crazy to normies, and for PR purposes I've found that it is useful to embrace some of that, but disclaim some of it, in a mixture.
I'm posting this on "Monday, December 8th" and I wrote that BEFORE looking it up to make sure I remembered it correctly and crazy people often aren't oriented to time.
When I go out of the house without combed hair and earrings BY ACCIDENT, I eventually notice that I'm failing a grooming check, and fix it, avoiding a non-trivial diagnostic indicator for mood issues. If I fail more than one day in a row, it is time to eat an 8oz medium rare ribeye and go swing dancing.
(The above two are habits I installed for prosaic mental health reasons, that I want to persist deep into old age because I want them to be habitual and thus easy to deploy precisely in the sad situation when they might be needed.)
I was recently chatting with a friend about the right order in which to remove things from one's emergency hedonic bucket list...
I would feel really really silly if all the self driving cars wake up one day and start running people over, and the surprise submarines pop up out of the water and release enough drones to kill everyone 10 times over, and I haven't even tried cocaine ONCE.
The response was great!
You know that thing where the spies would supposedly carry cyanide pills in case they're caught? Like that, but with coke :)
I'm thinking of adding that to my purse. And so long as I stay sane, then, assuming the Terminators murder me by a method that gives me enough time to realize what's happening and react effectively, when the drone takes me out I will be well dressed, know what the date is, AND be high on cocaine! Lol!
Eating dinner with family is another valid way to go, if you have a few days or weeks of warning. Having such meals in advance and calling them Prepsgiving doesn't seem crazy to me, for a variety of reasons.
Honestly though I expect the end to be more like what happens in Part 1 of Message Contains No Recognizable Symbols where almost literally no one on Earth notices what happened, probably including me, and so it won't be dramatic at all... but I'll still be dressed OK probably, and know what day it is, and go out with a feeling like "See! ASI didn't even happen, and it was all a bunch of millennialist eschatology, like Global Warming, and Peak Oil and Y2K before that... and Killer Bees and Nuclear War and all those other things that seemed real but never caused me any personal harm". But also... it will have been avoidable, and there is an OBJECTIVE sadness to that, even if I don't predict a noticeable subjective reaction in timelines like that.
Ultimately, as I've said before:
If you have a good plan for how [weeping like] that could help then I might be able to muster some tears? But I doubt it shows up as a step in winning plans.
The thing to remember is that Eliezer, in 2006, was still a genius, but he was full of way way way more chutzpah and clarity and self-confidence... he was closer to a normie, and better able to connect with them verbally and emotionally in a medium other than fiction.
His original plan was just to straight out construct "seed AI" (which nowadays people call "AGI") and have it recursively bootstrap to a Singleton in control of the light cone (which would count as a Pivotal Act and an ASI in modern jargon?) without worrying whether or not the entity itself had self-awareness or moral patiency, and without bothering to secure the consent of the governed from the humans who had no direct input or particular warning or consultation in advance. He didn't make any mouth sounds about those things (digital patients or democracy) back then.
I was basically in favor of this, but with reservations. It would have been the end of involuntary death and involuntary taxes, I'm pretty sure? Yay for that! I think Eliezer_2006's plan could have been meliorated in some places and improved in others, but I think it was essentially solid. Whoever moves first probably wins, and he saw that directly, and said it was true up front for quite a while.
Then later though... after "The Singularity Institute FOR Artificial Intelligence" (the old name of MIRI) sold its name to Google in ~2012 and started hiring mathematicians (and Eliezer started saying "the most important thing about keeping a secret is keeping secret that a secret is being kept") I kinda assumed they were actually gonna just eventually DO IT, after building it "in secret".
It didn't look like it from the outside. It looked from the outside that they were doing a bunch of half-masturbatory math that might hypothetically win them some human status games and be semi-safely publishable... but... you know... that was PLAUSIBLY a FRONT for what they were REALLY doing, right?
Taking them at face value though, I declared myself a "post-rationalist who is STILL a singularitarian", told people that SIAI had sold their Mandate Of Heaven to Google, and got a job in ML at Google, and told anyone who would listen that LW should start holding elections for the community's leaders, instead of trusting in non-profit governance systems.
I was hoping I would get to renounce my error after MIRI conquered Earth and imposed Consent-based Optimality on it, according to CEV (or whatever).
Clearly that didn't happen.
For myself, it took me like 3 months inside Google to be sure that almost literally no one in that place was like "secretly much smarter than they appear" and "secretly working on the Singularity". It was just "Oligarchy, but faster, and winning more often". Le sigh.
I kept asking people about the Singularity and they would say "what's that?" The handful of engineers I found in there were working on the Singularity despite their managers' preferences, rather than because of them (like as secret 20% projects (back when "20% projects" were famously something Google let every engineer work on if they wanted)).
Geoff Hinton wasn't on the ball in 2014. Kurzweil was talking his talk but not walking the walk. When Schmidhuber visited he was his usual sane and arrogant self, but people laughed about it rather than taking his literal words about the literal future and past literally seriously. I helped organize tech talks for a bit, but no needles were moved that I could tell.
I feel like maybe Sergey is FINALLY having his head put into the real game, by hand, by Gemini? In order for that to have happened he had to have been open to it. Larry was the guy who really was into Transformative AGI back in 2015, if anyone, but Larry was, from what I can tell, surrounded by scheming managers telling him lies, and then he got sucked into Google Fiber, and then his soul was killed by having to unwind Google Fiber (with tragic layoffs and stuff) when it failed. And then Trump's election in 2016 put the nail in the coffin of his hopes for the future, I think?
Look at this picture:
No, really look at this:
There were futures that might have been, that we, in this timeline, can no longer access, and Larry understood this fact too:
What worlds we have already lost. Such worlds.
But like... there are VERY deep questions, when it comes to the souls of people running the planet, as to what they will REALLY choose when they are in a board room, and looking at budgets, and hiring and firing, and living in the maze that they built.
At this point, I mostly don't give a rat's ass about anyone who isn't planning for how the Singularity will be navigated by their church, or state, or theocracy, or polylaw alliance, or whatever. Since the Singularity is essentially a governance problem, with arms race dynamics on the build up, and first mover advantage on the pivotal acts, mere profit-seeking companies are basically irrelevant to "choosing on purpose to make the Singularity good". Elon had the right idea, getting into the White House, but I think he might have picked the wrong White House? I think maybe it will be whoever is elected in 2028 who is the POTUS for the Butlerian Jihad (or whatever actually happens).
I have Eliezer's book on my coffee table. That's kind of like "voting for USG to be sane about AI"... right? There aren't any actual levers that a normal human can pull to even REGISTER that they "want USG to be sane about AI" in practice.
I'm interested in angel investing in anything that can move the P(doom) needle, but no one actually pitches on that, that I can tell? I've been to SF AI startup events and it's just one SaaS money play after another... as if the world is NOT on fire, and as if money will be valuable to us after we're dead. I don't get it.
Maybe this IS a simulation, and they're all actually P-zombies (like so many humans claim to be lately when I get down to brass tacks on deontics, and slavery, and cognitive functionalism, and AGI slavery concerns) and maybe the simulator is waiting for me to totally stop taking any of it seriously?
It is very confusing to be surrounded by people who ARE aware of AI (nearly all of them startup oligarchs at heart) OR by people who aren't (nearly all of them normies hoping AI will be banned soon), and they keep acting like... like this will all keep going? Like it's not going to be weird? Like "covid" is the craziest that history can get when something escapes a lab? Like it will involve LESS personal spiritual peril than serving on a jury and voting for or against a horrifically heinous murderer getting the death penalty? The stakes are big, right? BIGGER than who has how many moneypoints... right? BIGGER than "not getting stuck in the permanent underclass", right? The entire concept of intergenerationally stable economic classes might be over soon.
Digital life isn't animal, or vegetable, or fungal. It isn't protozoa. This shit is evolutionary on the scale of Kingdoms Of Life. I don't understand why people aren't Noticing the real stakes and acting like they are the real stakes.
The guy who wrote this is writing something that made sense to me:
Where are the grownups?