>slowing down for 1,000 years here in order to increase the chance of success by like 1 percentage point is totally worth it in expectation.
Is it? What meaning of "worth it" is used here? If you put it to a vote, as an option, I expect it would lose. People don't care that much about the happiness of distant future people.
And above all, the rule:
>Put forth the same level of desperate effort that it would take for a theist to reject their religion.
Because if you aren’t trying that hard, then—for all you know—your head could be stuffed full of nonsense as bad as religion.
I don't think it was particularly hard for me to part ways with religion? Fifteen-year-old me just accumulated too much sense that it was total bullshit. It was important enough to be promoted to my direct attention, but wrong enough for me to recognize it as such.
Hmmm. Maybe I was just not that invested in the boons a religious worldview gives you: that there is somebody looking out for you, that everything goes according to a good plan after all. I was not emotionally attached to this for some reason.
Am I just emotionally invested in different kinds of stuff, or am I just good at discarding wrong beliefs? Or maybe there is something wrong with the "emotional attachment" part of me.
Hmm. Yeah, it sure looks rigged as hell to be resolved by self-consistency/reflection toward the side of "care about everyone", but surely there is some percentage of kids who come out of this with reflectively stable redhead hatred? Or, I don't know, "nobody deserves care, not just redheads, but you should pretend to care in polite society"?
I'm not sure what the point of learning to draw like that is. You could just as well close one eye and imagine that you're tracing a photograph.
Draw whatever; I'd rather see people reinvent techniques than learn them.
How about more, uh, soft uncontrollability? Like, not "it subverted all of our compute and feeds us lies" but more "we train it to do A, which it interprets as only telling it to do A, and it does A, but its motivations are completely untouched".
Morality as a Coordination Subsidy and Morality as a Public Good.
Night-watchman state, distributed and embedded into heads, vs. doing something a lot of people want done, regardless of whether it's cleaner streets or children having homes.
The first thing seems to have done a lot of flaking off into the second one. Or maybe it didn't; maybe it was a process that shaped desires compatible with #1 out of assorted #2-type things.
Anthropic, GDM, and xAI say nothing about whether they train against Chain-of-Thought (CoT), while OpenAI claims they don't.
It sounds more like there is some kind of moderator who throttles smart things in an intelligent, targeted way. Which is my headcanon.
I overall agree with this framing, but I think that even in the Before, sufficiently bad mistakes can kill you, and in the After, sufficiently small mistakes wouldn't. So it's mostly a claim about how strongly mistakes start to be amplified at some point.
The Correct Alien, I think, should have made a few more funny errors.
Like, it names "love" and "respect" and "imitation" as alternatives to corrigibility, but all of them are kinda right? It should have thrown in some funny wrong guesses, like "cosplay" or "compulsive role-play of behaviors your progenitors did".
Or, for example, considering that the alien already thought about how humans are short-lived: "error-correcting/defending/preserving the previous progenitors' advice". That way of relating to your progenitors should have made it impossible for the Inebriated Alien to overwrite human motivations, because by now they are self-preserving even when wrong.
Come to think of it, those are also kind of right. I'm bad at making plausible errors.