Benjy_Forstadt has not written any posts yet.

According to Google, the Xhosa currently number between 9.2 and 9.6 million people, and are the second-largest ethnic group in South Africa. Looks like they've done fine for themselves.
It's weird you would use them as an intuition pump for human extinction being plausible, as if they were some extinct tribe.
Maybe you'll think I'm missing the main point of the essay, which is about what happens when memes or viruses jump species, but I don't think that works either. This argument proves too much. You could just as well say that since most ideas in the world don't need Xhosa brains to reproduce, they are not under selective pressure to keep the Xhosa alive, and so we should expect the world's memeplex to evolve in a manner maximally incompatible with Xhosa survival.
I understand that when a person feels a lot is on the line, it is often hard for them not to come across as sanctimonious. Maybe it’s unfair of me, but that is how this comes across to me, e.g. “people who allegedly care”.
Death with Dignity:
>Q2: I have a clever scheme for saving the world! I should act as if I believe it will work and save everyone, right, even if there's arguments that it's almost certainly misguided and doomed? Because if those arguments are correct and my scheme can't work, we're all dead anyways, right?
>
>A: No! That's not dying with dignity! That's stepping sideways out of a mentally uncomfortable world
>Do you like cinnamon when it's not combined with sugar? If not, is it really cinnamon per se that you like?
How do you feel about butter?
The order of our numeral notation mirrors the order of our spoken numerals. I’m not sure if there are any languages that consistently order additive numerals from smallest to largest - “two and fifty and three hundred” instead of “three hundred and fifty two”.
>💡Flipping the local ordering of pronunciation: If we're truly optimizing, we might as well say "twenty and hundred-three" while we're at it. The first words "and three-" don't tell you much until you know "three of what"? Whereas "and hundred-three" tells you the order of magnitude as soon as possible.
This suggestion is aesthetically in tension with your principle of ordering from smallest to largest. Why should we go with...
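As a side illustration (not anything proposed in the thread), the two additive orderings being contrasted can be made concrete with a small sketch; `additive_parts` is a hypothetical helper name:

```python
def additive_parts(n):
    """Split n into its nonzero place-value parts, e.g. 352 -> [300, 50, 2]."""
    parts = []
    place = 1
    while n:
        n, digit = divmod(n, 10)
        if digit:
            parts.append(digit * place)
        place *= 10
    return parts[::-1]  # largest part first, as in conventional numerals

parts = additive_parts(352)
# Conventional largest-to-smallest reading: "300 and 50 and 2"
print(" and ".join(map(str, parts)))
# Hypothetical smallest-to-largest reading: "2 and 50 and 300"
print(" and ".join(map(str, reversed(parts))))
```

The same parts, read in opposite orders, correspond to "three hundred and fifty two" versus "two and fifty and three hundred".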
I don’t think perfect surveillance is inevitable.
I would prefer it, though. I don’t know any other way to prevent people from doing horrible things to minds running on their computers. It wouldn’t need to be publicly broadcast though, just overseen by law enforcement. I think this is much more likely than a scenario where everything you see is shared with everyone else.
Unfortunately, my mainline prediction is that people will actually be given very strong privacy rights, and will be allowed to inflict as much torture on digital minds under their control as they want. I’m not too confident in this though.
Basically people tend to value stuff they perceive in the biophysical environment and stuff they learn about through the social environment.
So that reduces the complexity of the problem - it’s not a matter of designing a learning algorithm that both derives and comes to value human abstractions from observations of gas particles or whatever. That’s not what humans do either.
Okay then, why aren’t we star-maximizers or number-of-nation-states maximizers? Obviously it’s not just a matter of learning about the concept. The details of how we get values hooked up to an AGI’s motivations will depend on the particular AGI design, but will probably involve reward, prompting, scaffolding, or the like.
I don’t think the way you split things up into Alpha and Beta quite carves things at the joints. If you take an individual human as Beta, then stuff like “eudaimonia” is in Alpha - it’s a concept in the cultural environment that we get exposed to and sometimes come to value. The vast majority of an individual human’s values are not new abstractions that we develop over the course of our training process (for most people at least).
There is a difference between the claim that powerful agents are approximately well-described as being expected utility maximizers (which may or may not be true) and the claim that AGI systems will have an explicit utility function the moment they’re turned on, and maximize that function from that moment on.
I think this is the assumption OP is pointing out: “most of the book's discussion of AI risk frames the AI as having a certain set of goals from the moment it's turned on, and ruthlessly pursuing those to the best of its ability”. “From the moment it’s turned on” is pretty important, because it rules out value learning as a solution.
Edit: Retracted because some of my exegesis of the historical seed AI concept may not be accurate
The claim isn’t that minds are safe and nice by default. It’s that they’re not sociopaths.
If, in your view, most humans are basically ruthless sociopaths, then that’s good news, isn’t it? Sociopathic AIs would fit in well in our culture. It would mean our laws and norms do a remarkably good job of restraining us, so there’d be hope they’d do the same for future AIs.