All of amit's Comments + Replies

Smart non-reductionists, philosophical vs. engineering mindsets, and religion

Our values determine our beliefs

I don't think the ugly duckling theorem (i.e. the observation that any pair of elements from a finite set share exactly half of the powerset elements that they belong to) goes far towards proving that "our values determine our beliefs". Some offhand reasons why I think that:

  • It should be more like "our values determine our categories".
  • There's still Solomonoff induction.
  • It seems like people with different values should still be able to have a bona fide factual disagreement that's not just caused by th
... (read more)
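The combinatorial fact the ugly duckling theorem rests on can be checked directly for a small set. A minimal sketch (the universe and its element names are arbitrary placeholders):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

universe = {"a", "b", "c", "d"}
subsets = powerset(universe)

# Subsets containing "a", and the subset of those that also contain "b".
with_a = [t for t in subsets if "a" in t]
with_a_and_b = [t for t in with_a if "b" in t]

print(len(with_a), len(with_a_and_b))  # prints "8 4": exactly half
```

For an n-element universe, "a" belongs to 2^(n-1) subsets and any pair {a, b} belongs jointly to 2^(n-2), so each companion element shares exactly half of a's subsets — which is why, absent a weighting over categories, no two elements are more "similar" than any others.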
A Rationalist's Account of Objectification?

Of course the evidence will never be very communicable to a wide audience

Why not? First obvious way that comes to mind: take someone that the audience trusts to be honest and to judge people correctly and have them go around talking to people who've had experiences and report back their findings.

-1 Will_Newsome (9y): That's a multi-step plan: at least one of those steps would go wrong. By hypothesis we're talking about transhuman intelligence(s) here (no other explanation for psi makes sense given the data we have). They wouldn't let you ruin their fun like that, per the law of conservation of trolling. (ETA: Or at least, it wouldn't work out like you'd expect it to.)
You're Calling *Who* A Cult Leader?

From this list

It follows from the assumption that you're not Bill Gates, don't have enough money to actually shift the marginal expected utilities of the charitable investment, and that charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.

The assumption whose violation your argument relies on is that you don't have enough money to shift the marginal expected utilities, when "you" are considered to be controlling the choices... (read more)

-6 private_messaging (9y)
Computation Hazards

An example of a computation that runs most algorithms is a mathematical formalism called Solomonoff induction.

Solomonoff induction is uncomputable, so it's not a computation. It would be correct if you had written:

An example of a computation that runs most algorithms could be some program that approximates the mathematical formalism called Solomonoff induction.

Also, strictly speaking no real-world computation could run "most" algorithms, since there are infinitely many and it could only run a finite number. It would make more sense to use an expression like "computations that search through the space of all possible algorithms".
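The usual way to "search through the space of all possible algorithms" despite only ever touching finitely many is dovetailing: start one new program per round and give every started program one more step. A toy sketch, where the "programs" are stand-in Python generators rather than a real universal machine:

```python
def dovetail(program_factories, rounds):
    """Interleave an unbounded stream of programs.

    In round k we start one new program and give every started
    program one more step. Every program eventually gets unboundedly
    many steps, yet after any finite number of rounds only finitely
    many programs have been touched -- illustrating why a real
    computation can sample the space of algorithms but never
    literally run "most" of them.
    """
    factories = iter(program_factories)
    running = []                      # started generators
    trace = []                        # (program_index, yielded value)
    for _ in range(rounds):
        try:
            running.append(next(factories)())  # start one new program
        except StopIteration:
            pass                               # stream exhausted
        for i, prog in enumerate(running):
            trace.append((i, next(prog)))      # one step each
    return trace

def make_counter(i):
    """Stand-in 'program' number i: a generator that counts forever."""
    def program():
        n = 0
        while True:
            yield n
            n += 1
    return program

# A billion candidate programs, but after 4 rounds only 4 were started.
trace = dovetail((make_counter(i) for i in range(10**9)), rounds=4)
```

Since the factory stream is a lazy generator expression, only the programs actually started are ever constructed; the 10**9 is just there to make the "infinitely many candidates, finitely many touched" point vivid.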

Computation Hazards

A function that could evaluate an algorithm and return 0 only if it is not a person is called a nonperson predicate. Some algorithms are obviously not people. For example, any algorithm whose output is repeating with a period less than gigabytes...

Is this supposed to be about avoiding the algorithms simulating suffering people, or avoiding them doing something dangerous to the outside world? Obviously an algorithm could simulate a person while still having a short output, so I'm thinking it has to be about the s... (read more)
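The point that output periodicity doesn't bound internal computation is easy to illustrate: a program can do arbitrarily elaborate internal work (here an ever-growing hash chain stands in for "simulating something") while emitting output with period 1. A minimal sketch:

```python
import hashlib

def constant_output_machine(steps):
    """Emit the same symbol forever while doing unbounded internal work.

    The internal state (a hash chain, standing in for an arbitrarily
    rich simulation) never appears in the output, so a predicate that
    only inspects output periodicity cannot bound what is being
    computed inside.
    """
    state = b"seed"
    out = []
    for _ in range(steps):
        state = hashlib.sha256(state).digest()  # arbitrary internal work
        out.append("0")                         # output has period 1
    return "".join(out)

print(constant_output_machine(8))  # prints "00000000"
```

So a periodic-output test can only work as a danger filter on the outside world, not as a nonperson predicate over what the algorithm simulates internally — which is the commenter's point.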

0 Alex_Altair (9y): I meant the first one. I was thinking that extremely brief "experiences" repeated over and over wouldn't constitute a person, and so I called it periodic output, but obviously that was wrong. I changed it for clarity.
The Truth Points to Itself, Part I

So you're searching for "the most important thing", and reason that this is the same as searching for some utility function, and then you note that one reason this question seems worth thinking about is because it's interesting, and then you refer to Schmidhuber's definition of interestingness (which would yield a utility function), and note that it is itself interesting, so maybe importance is the same as interestingness, because importance has to be itself important and (Schmidhuberian) interestingness satisfies this requirement by being itself... (read more)

That Alien Message

But their proteins aren't necessarily making use of the extra computational power. And we can imagine that the physics of our universe allows for super powerful computers, but we can still obviously make efficient inferences about our universe.

2 cousin_it (9y): It's an interesting question. I have a vague intuition that one of the reasons for evolution's awesomeness is that it optimizes using physics on all scales at once, where a human engineer would have focused on a more limited set of mechanisms. Protein folding in our universe is already pretty hard to simulate. In a more computationally capable universe, I'd expect evolution to push the basic machinery even further beyond our ability to simulate. No idea if the intuition is correct, though.
Cryonics without freezers: resurrection possibilities in a Big World

You're not saying that if I perform QS (quantum suicide) I should literally anticipate that my next experience will be from the past, are you? (AFAICT, if QS is not allowed, I should just anticipate whatever I would anticipate if I was going to die everywhere, that is, going to lose all of my measure.)

2 FAWS (9y): (Not Will, but I think I mostly agree with him on this point) There is no such thing as a uniquely specified "next experience". There are going to be instances of you that remember being you and consider themselves the same person as you, but there is no meaningful sense in which exactly one of them is right. Granted, all instances of you that remember a particular moment will be in the future of that moment, but it seems silly to only care about the experiences of that subset of instances of you and completely neglect the experiences of instances that only share your memories up to an earlier point. If you weight the experiences more sensibly, then in the case of a rigorously executed quantum suicide the bulk of the weight will be in instances that diverged before the decision to commit quantum suicide. There will be no chain of memory leading from the QS to those instances, but why should that matter?
Our Phyg Is Not Exclusive Enough

I thought the upshot of Eliezer's metaethics sequence was just that "right" is a fixed abstract computation, not that it's (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops it is.).

(Indeed just saying that it's a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it's some particular computation would be more like just plain ethics. The upshot does feel kind of underwhelming and obvious. This might be... (read more)

The upshot does feel kind of underwhelming and obvious. This might be because I just don't remember how confusing the issue looked before I read those posts.

BTW, I've had numerous "wow" moments with philosophical insights, some of which made me spend years considering their implications. For example:

  • Bayesian interpretation of probability
  • AI / intelligence explosion
  • Tegmark's mathematical universe
  • anthropic principle / anthropic reasoning
  • free will as the ability to decide logical facts

I expect that a correct solution to metaethics would pr... (read more)

0 Wei_Dai (9y): It's mentioned here [http://lesswrong.com/lw/sm/the_meaning_of_right/]: ETA: Just in case you're right and Eliezer somehow meant for that paragraph not to be part of his metaethics, and that his actual metaethics is just "morality is a fixed abstract computation", then I'd ask, "If morality is a fixed abstract computation, then it seems that rationality must also be a fixed abstract computation. But don't you think a complete 'solved' metaethics should explain how morality differs from rationality?"
Cryonics without freezers: resurrection possibilities in a Big World

The hypothesis is that the stars aren't real; they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the Great Filter puzzle (Fermi's paradox) and would imply that if we build an AI then that doesn't necessarily mean it'll get to eat all the stars.

If the SI wanted to fool us, why would it make us see something (a lifeless universe) that would make us infer that we're being fooled by an SI?

7 wedrifid (9y): It would seem it is trying to fool just the unenlightened masses. But the chosen few who see the Truth shall transcend all that...
Our Phyg Is Not Exclusive Enough

used later on identity

Yes.

and decision theory

No, as far as I can tell.

-1 David_Gerard (9y): Probably not, then. (The decision theory posts were where I finally hit a tl;dr wall.)