Does the federal budget have a line item saying "Money for things that go wrong"? Because every year, something expensive goes wrong.

Sentience is one of the basic goods. If the sysop is non-sentient, then whatever computronium is used in the sysop is, with respect to sentience, wasted.

If we suppose that intelligences have a power-law distribution, and the sysop is the one at the top, we'll find that it uses up something around 20% to 50% of the accessible universe's computronium.
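The 20%-to-50% figure can be sanity-checked with a short sketch (my own illustration, not from the comment): under a Zipf-style power law where the k-th most intelligent agent has capability proportional to k^(-s), the top agent's share of the total depends strongly on the assumed exponent s.

```python
def top_share(exponent: float, n: int) -> float:
    """Fraction of the total held by rank 1 when rank k holds k**-exponent."""
    total = sum(k ** -exponent for k in range(1, n + 1))
    return 1.0 / total

# With a million agents, steeper power laws concentrate more at the top:
for s in (1.2, 1.5, 2.0):
    print(f"exponent {s}: top share = {top_share(s, 10**6):.0%}")
# exponent 1.2 gives roughly 19%, 1.5 roughly 38%, 2.0 roughly 61%
```

So the quoted 20%-to-50% range corresponds, under this toy model, to exponents somewhere between about 1.2 and 1.8; the exponent itself is a free assumption.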

That would be a natural (as in "expected in nature") distribution. But since the sysop needs to stay in charge, it will probably destroy any other AIs who reach the "second tier" of intelligence. So it will more likely have something like 70% to 90% of the universe's computronium.

Also, in this post-human world, there aren't large penalties for individuality. That is: In today's world, you can't add up 3 chimpanzee brains and get human-level intelligence. In the world of AIs, you probably can. This means that, to stay on top, the sysop will always need to reserve a majority of the universe's computronium for itself. Otherwise, the rest of the universe can gang up on it.
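The "gang up" condition above can be stated as a one-line check (my formalization, under the stated assumption that AI capabilities add linearly across a coalition):

```python
def sysop_is_safe(sysop_share: float) -> bool:
    """True if the sysop outmatches the coalition of everyone else.

    Assumes capability adds linearly, so the rest of the universe
    together wields (1 - sysop_share) of the total computronium.
    """
    return sysop_share > 1 - sysop_share  # i.e. a strict majority
```

Under this additivity assumption, exactly half is not enough; the sysop is safe only with a strict majority, which is why the comment concludes it must reserve most of the computronium for itself.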

So creating a non-sentient sysop means cutting the amount of sentient life you can support by at least half.

I recently had 2 occasions where

  • I said X
  • Someone else said, "No, Y"
  • I thought they were wrong
  • I later realized: not X, but Z
  • I then went back, looked, and saw that Y = Z

In both cases, I didn't understand Y the first time because I had expectations about how X would be misunderstood, looked for indications of them in Y, and, on finding terms and phrases that matched my expectations, stopped parsing Y prematurely.

This post has an enormous noise-to-content ratio. You gave only one example of a cost from using borrowed strength, and it was unsupported:

"But if no one had been able to use nuclear weapons without, say, possessing the discipline of a scientist and the discipline of a politician - without personally knowing enough to construct an atomic bomb and make friends - the world might have been a slightly safer place."

This is not clear; I would even say it's less than 50% probable. Many scientists, using heuristics against bias that turned out to be wrong in this case, underestimated the aggressiveness of the Soviet Union. Think of Bertrand Russell, Albert Einstein, and maybe Oppenheimer. I am cherry-picking, but I don't think a Union of Concerned Scientists could have gotten us through the 1960s without a war with the Soviet Union.

I didn't mean "proper subset". I mean that if there are organisms that experience pleasure but not fun (or vice versa), then it's more likely that there's an infinite number of possible "inherently good" noumena like pleasure, fun, and love, and that we've discovered only a small number of them.

That should have said "as qualitatively different and intrinsically good as fun?"

"Heaven is being perfect.": Even a circle can't be perfect, in the classical sense of being the best possible circle. Is a circle of 2cm radius better than a circle of 1cm radius? It is much more nonsensical to talk of a person being perfect. It is even more nonsensical to talk of a still-evolving species being perfect.

What's most interesting to me is that lizards don't have fun.

Maybe they have fun. But if they do, I'm pretty sure worms don't have fun. A discussion like this one, carried on by lizards (or worms), wouldn't have included the concept "fun".

And if you keep going back in time or down in size, I'm sure you'll find organisms that don't experience pleasure.

Are there other types of possible experiences as qualitatively different and intrinsically good? Are there infinitely many of them? Is charting the course based on "fun theory" like lizards charting the course of the future based on "basking on a hot rock theory"?

Probably. And if the set of organisms that experience pleasure is a proper subset of the set of organisms that experience fun, then the answer is even more likely to be yes.