Formerly known as Trevor1
I've never been to Massachusetts.
I don't think this specific comment is a very productive way to go about things here. Journalists count as elites in democracies, and they can't publicly apologize when they make a mistake, because that would embarrass the paper; so if they ever change their mind about something (especially something really big and important), their only recourse is to write positive articles to try to make up for the negative article they originally wrote.
I'm not sure I agree with Razied on the whole "sempai noticed me" thing. I agree that it's important to wake up to that dynamic, which is silly; articles like these don't seem to have a track record of vastly increasing the number of alignment researchers, whereas mid-2010s publications like HPMOR and Superintelligence do (and those phenomena may have failed to replicate in the 2020s, with WWOTF and planecrash). But there are tons of factors at play here that even I'm not aware of, like people at EA university groups being able to show these articles to mathematicians unfamiliar with AI safety, or orgs citing them in publications, and that's the kind of thing that determines the net value of these articles.
This bodes well for greenlighting Human Intelligence Amplification research in China (the ultimate goal being to produce better alignment researchers who can hopefully fix the current inadequacy).
Human Intelligence Amplification has recently been gaining momentum as a winning strategy, and China already has incredible comparative advantages and a yearslong lead when it comes to producing fundamental research for Human Intelligence Amplification. It might also be a perfect fit for the government's existing policies on creativity promotion.
I'd say the general absence of names from Facebook, Amazon, and Apple is worrying, as is the fact that there were only two from Microsoft. Apple's absence, in particular, is what keeps me up at night.
For those who might not have noticed, this actually is historic; they're not just saying that. The top 350 people have effectively "come clean" about this, at once, in a Schelling-point **kind-of** way.
The long years of staying quiet about this and avoiding telling other people your thoughts about AI potentially ending the world, because you're worried that you're crazy or that you take science fiction too seriously: those days **might have** just ended.
This was a credible signal: none of these 350 high-level people can go back and say "no, I never actually said that AI could cause extinction and that AI safety should be a top global priority". From now on, you and anyone else can cite this announcement to back up your views (instead of saying "Bill Gates, Elon Musk, and Stephen Hawking have all endorsed...") and go straight to AI timelines (I like sending people Epoch's literature review).
EDIT: For the record, this might not be true, or it might not stick, and signatories retain ways of backing out or minimizing their past involvement. I do not endorse unilaterally turning this into more of a Schelling point than it was originally intended to be.
Yeah, generally when competent people hear a new term (e.g. AI alignment, effective altruism, etc.), they go to Wikipedia to get a first-impression overview of what it's all about.
When you look at it like that, lots of pages (e.g. Nick Bostrom and Effective Altruism) seem to have been surprisingly efficiently vandalized to inoculate new people against longtermism and EA, whereas Eliezer Yudkowsky and MIRI are basically fine.
I think it's worth sharing here some details about SquirrelInHell's suicide, specifically to point out to new people that Cognitive Tuning was not what killed SquirrelInHell.
This comment is from Slimepriestess, a friendly former Zizian. I wouldn't necessarily trust 100% of everything said by a former Zizian (though former Zizians definitely shouldn't be treated as pariahs). But it's pretty well known that SquirrelInHell was doing a ton of over-the-top shit at once (e.g. simultaneously attempting to use dolphin-like sleep deprivation to turn half of their brain into Lawful Evil and the other half into Transgender Good), was simultaneously hanging around a bunch of violent and dangerous people, and that they were all doing hardcore Roko's Basilisk research.
> imo, Maia was trans and the components of her mind (the alter(s) they debucketed into "Shine") saw the body was physically male and decided that the decision-theoretically correct thing to do was to basically ignore being trans in favor of maximizing influence to save the world. Choosing to transition was pitted against being trans because of the cultural oppression against queers. I've run into this attitude among rationalist queers numerous times independently from Ziz, and "I can't transition, that will stop me from being a good EA" seems a troublingly common sentiment.
>
> Prior to getting involved with Ziz, the "Shine" half of her personality had basically been running her system on an adversarial 'we must act or else' fear response loop around saving the multiverse from evil using timeless decision theory in order to brute force the subjunctive evolution of the multiverse.
>
> So Ziz and Squirrel start interacting, and at that point the "Maia" parts of her had basically been, like, traumatized into submission and dissociation, and Ziz intentionally stirs up all those dissociated pieces and draws the realization that Maia is trans to the surface. This caused a spiraling optimization priority conflict between two factions whose contradictory validity Ziz had empowered by helping them reify themselves and define the terms of their conflict in her zero-sum, black-and-white, good-and-evil framework.
>
> But Maia didn't kill them; Shine killed them. I have multiple references that corroborate that. The "beat Maia into submission and then save the world" protocol they were using cooked up all this low-level suicidality and "I need to escape, please, where is the exit, how do I decision-theoretically justify quitting the game?" type feelings of hopelessness and entrapment. The only "exit" that could get them out of their sense of horrifying heroic responsibility was dying, so Shine found a "decision theoretic justification" to kill them, and did. "Squirrel's doom" isn't just "interhemispheric conflict"; if anything it's much more specific. It's the specific interaction of:
>
> "I must act or the world will burn. There is no room for anything less than full optimization pressure and utilitarian consequentialism"
>
> vs
>
> "I am a creature that exists in a body. I have needs and desires and want to be happy and feel safe"
>
> This is a very common EA brainworm to have, and I know lots of EAs who have folded themselves into pretzels around this sort of internal friction. Ziz didn't create Squirrel's internal conflict; she just encouraged the "good" Shine half to adversarially bully the "evil" Maia half more and more, escalating the conflict to lethality.
Generally, I think people should be deferring to Raemon on the question of "is Cognitive Tuning safe?" and should, at minimum, message him to get his side of the story. This situation is a really big deal; if Cognitive Tuning works, that's successful human intelligence augmentation, and that is world-saving shit. Cognitive Tuning alone could become an entire field of intelligence augmentation, AND something that anyone with average intelligence can contribute heavily towards, since having a more typical mind will yield more insights that can be picked up and worked with by other people with more typical minds.
Raemon endorsed the Superintelligence FAQ for laymen. He recommended a different one for ML engineers but I don't know where to find that comment. This was a couple months ago so he might have found something even better since then.
I recommend Yudkowsky's The Power Of Intelligence. It has superb quotes like "Intelligence is as real as electricity", and in my experience, one of the biggest hurdles is convincing someone that AI actually does dominate all other calculations about the fate of the earth. Once you pass that hurdle, the person will be less likely to see it as a flex on your end, and more likely to see it as something worth their time to look into.
I also tentatively recommend Tuning Your Cognitive Strategies, as it lets people get an actual up-close look at what intelligence is. Plus, it's very accessible for allowing people to contribute; any findings that anyone discovers might end up being pretty huge in the history of human intelligence augmentation (which is endorsed for potentially being an ace-in-the-hole for solving alignment, and which anyone can contribute to).
That's really interesting, do you have a list of resources you could recommend to me for things that are similar to/better than BWT? I wasn't aware that finding more was even possible.
Whoops! I only knew about him from the SSC situation a couple years ago; I had no idea that he was the one behind that NYT article. I guess some people never change (especially people who are living large, like journalists).
I still think it makes sense to give people opportunities to change their ways; if nothing else, so that decent researchers/interns could ghostwrite articles under Cade Metz's name, which is a common arrangement at major news outlets. Journalist positions are plum jobs, so they tend to get occupied by incompetent status-maximizers, who reveal their disinterest in actual work as soon as they reach a level they feel satisfied with. And most of the work at news outlets is secretly done by interns, since there are tons of competent college students desperate for a tiny number of positions, and news outlet staff lack the will and ability to actually evaluate them for competence.
Also, treating things as "anthropological curiosities" is actually a major tactic for big news corporations; it creates the sense that all things are beneath the news outlet itself. There's a surprisingly large proportion of middle-class people out there who buy into the myth of news outlets as the last bastion of truth. Reputation maximization is something that news outlets take very seriously, especially nowadays, since they're all on such thin ice.