Rafael Harth

Sequences

Literature Summaries
Factored Cognition
Understanding Machine Learning

Wiki Contributions

Comments

Meditation course claims 65% enlightenment rate: my review

Ok, you're probably right. The one thing that confuses me about it is that I tend to think people's reports of their degree of suffering are reliable, but maybe that's not true. Or maybe the course created exceptions.

Many Gods refutation and Instrumental Goals. (Proper one)

If you're talking about the number of possible superintelligent AIs, then yeah, definitely. (I don't think it's likely that a large number of them will all be physically instantiated.)

Many Gods refutation and Instrumental Goals. (Proper one)

I tend to consider Roko's basilisk to be an information hazard and wasn't thinking or saying anything specific to that. I was only making the general point that any argument about future AI must take into account that the distribution is extremely skewed. It's possible that the conclusion you're trying to reach works anyway.

Many Gods refutation and Instrumental Goals. (Proper one)

If you build an AI to produce paperclips, then you've built it to add value to the economy, and that presumably works until it kills everyone -- that's what I meant. Like, an AI that's built to do some economically useful task but then destroys the world because it optimizes too far. That's still a very strong restriction on the total mind-space; most minds can't do useful things like building paperclips.

(Note that "Paperclip" in "Paperclip maximizer" is a stand-in for anything that doesn't have moral status, so it could also be a car maximizer or whatever. But even literal paperclips have economic value.)

Many Gods refutation and Instrumental Goals. (Proper one)

Any post or article that speculates about what future AIs will be like is about the probability distribution of AIs; it's just usually not framed that way.

I think you're making an error by equivocating between two separate meanings of "impossible to predict":

  • you can make absolutely no statements about the thing
  • you can't predict the thing narrowly

The latter is arguably true, but the former is obviously false. For example, one thing we can confidently predict about AI is that it will add value to the economy (until it perhaps kills everyone). This already shifts the probability distribution massively away from the uniform distribution.

If you actually pulled out a mind at random, it's extremely likely that it wouldn't do anything useful.

Like, I think most of the probability mass comes from a tiny fraction of mind space, probably a vanishingly small share of the entire thing, but the space is so large that even this fraction is massive. It does mean, though, that many-gods-style arguments don't work automatically.
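
As a toy version of the "shifts the distribution away from uniform" point (all numbers below are made up purely for illustration): even if "does something economically useful" has a tiny prior under uniform sampling from mind space, conditioning on the fact that the AI was built and deployed to be useful concentrates nearly all of the probability mass there.

```python
# Toy Bayes update illustrating why an AI is not a uniform draw from mind space.
# All numbers are invented for illustration.

prior_useful = 1e-20               # P(mind is economically useful) under uniform sampling
p_deployed_given_useful = 0.5      # P(an AI like this gets built and deployed | useful)
p_deployed_given_useless = 1e-30   # useless minds almost never get built on purpose

# P(useful | deployed) via Bayes' rule
numerator = p_deployed_given_useful * prior_useful
denominator = numerator + p_deployed_given_useless * (1 - prior_useful)
posterior_useful = numerator / denominator

print(f"prior:     {prior_useful:.1e}")
print(f"posterior: {posterior_useful:.3f}")  # ~1.0: far from the uniform distribution
```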

Meditation course claims 65% enlightenment rate: my review

This sounds like enlightenment to me. Enlightenment is the absence of suffering, not the absence of pain or bodily reactions. If both reports are true, then the person was reacting normally in terms of observable symptoms but not experiencing any suffering as a result.

[This comment is no longer endorsed by its author]
Many Gods refutation and Instrumental Goals. (Proper one)

But AI isn't pulled randomly out of mindspace. Your argument needs to be about the probability distribution of future AIs to work.

(We may not be able to align AI, but you can't jump from that to "and therefore it's like a randomly sampled mind".)

Many Gods refutation and Instrumental Goals. (Proper one)

I'm confused about the equivalence between "god" and "AI". I know the many gods objection in the context of Pascal's Wager. Are you talking about AI or about gods? Or is the idea that you expect a superintelligent AI in the future which will have the relevant properties of a god?

My initial reaction is that the analogy doesn't work because AI is unlikely to care at all about what you're doing.

How would Logical Decision Theories address the Psychopath Button?

The problem seems under-specified to me. What's Paul's utility function?

Expected (Social) Value

I think this is how I do make decisions in social contexts where I'm not running on instinct. I'm less convinced that expected social value is its own category; it seems more like applying regular expected value to a problem that happens to be social.
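
To illustrate what I mean by "just regular expected value", here's a minimal sketch; the actions, reaction probabilities, and payoffs are made up for the example.

```python
# Minimal sketch of ordinary expected value applied to a social decision.
# The actions, probabilities, and payoffs are illustrative only.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one action."""
    return sum(p * v for p, v in outcomes)

actions = {
    # P(reaction | action) and how much I value that reaction
    "give honest feedback": [(0.7, +2), (0.3, -3)],  # appreciated vs. offended
    "stay quiet":           [(1.0, 0)],              # nothing changes
}

for action, outcomes in actions.items():
    print(f"{action}: EV = {expected_value(outcomes):+.2f}")

best = max(actions, key=lambda a: expected_value(actions[a]))
print("choose:", best)
```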

Also, nitpick:

  1. Our estimated probabilities of a person doing X after we do Y can be wildly off.

It doesn't make too much sense to say that you assign the wrong probabilities. (What are the right probabilities?) It's more that your probability distribution has a lot of uncertainty.[1]

And (2) is also not really specific to social contexts.


  1. "entropy" would be the corresponding mathematical property ↩︎
