It's actually not just about lie detection, because the technology starts to shade over into outright mind reading.

But even simple lie detection is an example of a class of technology that needs to be totally banned, yesterday[1]. In or out of court and with or without "consent"[2]. The better it works, the more reliable it is, the more it needs to be banned.

If you cannot lie, and you cannot stay silent without adverse inferences being drawn, then you cannot have any secrets at all. The chance that you could stay silent, in nearly any important situation, would be almost nil.

If even simple lie detection became widely available and socially acceptable, then I'd expect many, many people's personal relationships to devolve into constant interrogation about undesired actions and thoughts. Refusing such interrogation would be treated as "having something to hide" and would result in immediate termination of the relationship. Oh, and secret sins that would otherwise cause no real trouble would blow up people's lives.

At work, you could expect to be checked for a "positive, loyal attitude toward the company" on as frequent a basis as was administratively convenient. It would not be enough that you were doing a good job, hadn't done anything actually wrong, and expected to keep it that way. You'd be ranked straight up on your Love for the Company (and probably on your agreement with management, and very possibly on how your political views comported with business interests). The bottom N percent would be "managed out".

Heck, let's just have everybody drop in at the police station once a month and be checked for whether they've broken any laws. To keep it fair, we will of course have to apply all laws (including the stupid ones) literally and universally.

On a broader societal level, humans are inherently prone to witch hunts and purity spirals, whether the power involved is centralized or decentralized. An infallible way to unmask the "witches" of the week would lead to untold misery.

Other than wishful thinking, there's actually no reason to believe that people in any of the above contexts would lighten up about any given sin just because they discovered it was common. People have an enormous capacity to reject others for perceived sins.

This stuff risks turning personal and public life into utter hell.


  1. You might need to make some exceptions for medical use on truly locked-in patients. The safeguards would have to be extreme, though. ↩︎

  2. "Consent" is a slippery concept, because there's always argument about what sorts of incentives invalidate it. The bottom line, if this stuff became widespread, would be that anybody who "opted out" would be pervasively disadvantaged to the point of being unable to function. ↩︎

Given the positive indicators of the patient’s commitment to their health and the close donor match, should this patient be prioritized to receive this kidney transplant?

Wait. Why is it willing to provide any answer to that question in the first place?

It was mostly a joke and I don't think it's technically true. The point was that objects can't pass through one another, which means that there are a bunch of annoying constraints on the paths you can move things along.

No, the probes are instrumental and are actually a "cost of doing business". But, as I understand it, the orthodox plan is to get as close as possible to disassembling every solar system and turning it into computronium to run the maximum possible number of "minds". The minds are assumed to experience qualia, and presumably you try to make the qualia positive. Anyway, a joule not used for computation is a joule wasted.

You can choose or not choose to create more "minds". If you create them, they will exist and have experiences. If you don't create them, then they won't exist and won't have experiences.

That means that you're free to not create them based on an "outside" view. You don't have to think about the "inside" experiences of the minds you don't create, because those experiences don't and will never exist. That's still true even on a timeless view; they never exist at any time or place. And it includes not having to worry about whether or not they would, if they existed, find anything meaningful[1].

If you do choose to create them, then of course you have to be concerned with their inner experiences. But those experiences only matter because they actually exist.


  1. I truly don't understand why people use that word in this context or exactly what it's supposed to, um, mean. But pick pretty much any answer and it's still true. ↩︎

... but a person who doesn't exist doesn't have an "inside".

I already have people planning to grab everything and use it for something that I hate, remember? Or at least for something fairly distasteful.

Anyway, if that were the problem, one could, in theory, go out and grab just enough to be able to shut down anybody who tried to actually maximize. Which gives us another armchair solution to the Fermi paradox: instead of grabby aliens, we're dealing with tasteful aliens who've set traps to stop anybody who tries to go nuts expansion-wise.

It's not "just to expand". Expansion, at least in the story, is instrumental to whatever the content of these mind-seconds is.

Beyond a certain point, I doubt that the content of the additional minds will be interestingly novel. Then it's just expanding to have more of the same thing that you already have, which, from where I sit, is more or less identical to expanding just to expand.

And I don't feel bound to account for the "preferences" of nonexistent beings.

I had read it, had forgotten about it, hadn't connected it with this story... but didn't need to.

This story makes the goal clear enough. As I see it, eating the entire Universe to get the maximal number of mind-seconds[1] is expanding just to expand. It's, well, gauche.

Really, truly, it's not that I don't understand the Grand Vision. It never has been that I didn't understand the Grand Vision. It's that I don't like the Grand Vision.

It's OK to be finite. It's OK to not even be maximal. You're not the property of some game theory theorem, and it's OK to not have a utility function.

It's also OK to die (which is good because it will happen). Doesn't mean you have to do it at any particular time.


  1. Appropriately weighted if you like. And assuming you can define what counts as a "mind". ↩︎

I know this sort of idea is inspiring to a lot of you, and I'm not sure I should rain on the parade... but I'm also not sure that everybody who thinks the way I do should have to feel like they're reading it alone.

To me this reads like "Two Clippies Collide". In the end, the whole negotiated collaboration is still just going to keep expanding purely for the sake of expansion.

I would rather watch the unlifted stars.

I suppose I'm lucky I don't buy into the acausal stuff at all, or it'd feel even worse.

I'm also not sure that they wouldn't have solved everything they thought was worth solving long before they even got out of their home star systems, so I'm not sure I buy either the cultural exchange or the need to beam software around. The Universe just isn't necessarily that complicated.

CEV-ing just one person is enough for the "basic challenge" of alignment as described on AGI Ruin.

I thought the "C" in CEV stood for "coherent" in the sense that it had been reconciled over all people (or over whatever set of preference-possessing entities you were taking into account). Otherwise wouldn't it just be "EV"?

I think the kind of AI likely to take over the world can be described closely enough in such a way.

So are you saying that it would literally have an internal function that represented "how good" it thought every possible state of the world was, and then solve an (approximate) optimization problem directly in terms of maximizing that function? That doesn't seem to me like a problem you could solve even with a Jupiter brain and perfect software.
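To make the intuition concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical names; nothing here is anyone's actual proposal) of what "an internal function over every possible world state, maximized directly" would amount to. Even for toy worlds with n binary features, the search space is 2^n, which is the reason brute-force maximization stops being a thing any physical computer could do.

```python
# Illustrative sketch only: an agent that literally stores a utility
# function over complete world states and optimizes by enumerating them.
# With n binary features there are 2**n candidate states, so this blows
# up combinatorially long before "every possible state of the world".

from itertools import product

def naive_argmax_world(utility, n_features):
    """Enumerate all 2**n_features candidate states and return the best one."""
    best_state, best_score = None, float("-inf")
    for state in product([0, 1], repeat=n_features):  # 2**n_features tuples
        score = utility(state)
        if score > best_score:
            best_state, best_score = state, score
    return best_state

if __name__ == "__main__":
    # Toy "utility function": prefer states with more features switched on.
    print(naive_argmax_world(sum, n_features=20))  # 2**20 ~ 10**6 states: fine
    # But n_features = 300 would mean ~2**300 states, far more than atoms in
    # the observable universe; no Jupiter brain enumerates that.
```

The point of the sketch is only the scaling: any architecture that really worked this way would have to approximate heavily rather than optimize "directly in terms of" the function.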
