Our values determine our beliefs

I don't think the ugly duckling theorem (i.e. the observation that any pair of elements from a finite set share exactly half of the powerset elements that they belong to; a brute-force check of this is sketched after the list below) goes far towards proving that "our values determine our beliefs". Some offhand reasons why I think that:

  • It should be more like "our values determine our categories".
  • There's still Solomonoff induction.
  • It seems like people with different values should still be able to have a bona fide factual disagreement that's not just caused by their differing values.
  • It could be true in a theoretical sense but have little bearing on beliefs, values and disagreements in an everyday human context.
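
For concreteness, the counting fact is easy to verify by brute force. A quick Python sketch, with a made-up four-element set (none of this is from the original post):

```python
from itertools import combinations

def powerset(elements):
    """All subsets of a finite set, as frozensets."""
    elements = list(elements)
    return [frozenset(c) for r in range(len(elements) + 1)
            for c in combinations(elements, r)]

S = {"a", "b", "c", "d"}           # any finite set works the same way
subsets = powerset(S)

x, y = "a", "b"                    # any pair of distinct elements
containing_x = [t for t in subsets if x in t]
shared = [t for t in containing_x if y in t]

print(len(subsets))        # 16 subsets in total
print(len(containing_x))   # 8 of them contain x
print(len(shared))         # 4 contain both x and y: exactly half of x's subsets
```

In general, of the 2^(n-1) subsets containing x, exactly 2^(n-2) also contain y, for any finite set of size n ≥ 2 and any distinct x and y.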

(And even if we grant something like that, I see no reason to think that a "philosopher's mindset" would make you lean towards religion (because I don't know any convincing philosophical arguments for religious propositions, for one).)

Of course the evidence will never be very communicable to a wide audience

Why not? First obvious way that comes to mind: take someone the audience trusts to be honest and to judge people correctly, have them go around talking to people who've had such experiences, and have them report back their findings.

From this list

It follows from the assumption that you're not Bill Gates, don't have enough money to actually shift the marginal expected utilities of the charitable investment, and that charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.

the assumption whose violation your argument relies on is that you don't have enough money to shift the marginal expected utilities, when "you" are taken to be controlling the choices of all the donors who choose in a sufficiently similar way. I would agree that, given the right assumptions about the initial marginal expected utilities and about how more money would change the marginal utilities and marginal expected utilities, the possibility that this assumption is sometimes violated doesn't look like an entirely frivolous objection to a naively construed strategy of "give everything to your top charity".
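
To make that concrete, here's a toy model of what "shifting the marginal expected utilities" means here; the exponential-decay form and all the numbers are made up purely for illustration:

```python
import math

def marginal_utility(funding, base, scale):
    """Toy diminishing-returns model: utility per extra dollar decays as total funding grows."""
    return base * math.exp(-funding / scale)

def best_split(budget, funding_a, funding_b, step=1_000):
    """Greedily give `budget` away in `step`-sized chunks, each chunk going to
    whichever charity currently has the higher marginal utility per dollar."""
    give_a = give_b = 0
    for _ in range(int(budget // step)):
        mu_a = marginal_utility(funding_a + give_a, base=2.0, scale=1e7)
        mu_b = marginal_utility(funding_b + give_b, base=1.9, scale=1e7)
        if mu_a >= mu_b:
            give_a += step
        else:
            give_b += step
    return give_a, give_b

# A lone small donor can't move the margins, so everything goes to the top charity:
print(best_split(5_000, funding_a=1e6, funding_b=1e6))      # -> (5000, 0)
# "You" as the whole pool of like-minded donors can move them, so a split appears
# (the top charity gets a head start, after which the margins are kept roughly equal):
print(best_split(5_000_000, funding_a=1e6, funding_b=1e6))
```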

(BTW, it's not clear to me why mistrust in your ability to evaluate the utility of donations to different charities should end up balancing out to produce very close expected utilities. It would seem to have to involve something like Holden's normal distribution for charity effectiveness, or something else that would make it so that whenever large utilities are involved, the corresponding probabilities are necessarily correspondingly small.)
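
A minimal sketch of the kind of adjustment I have in mind, assuming a normal prior on true effectiveness and estimate noise that grows with the size of the claim (all numbers made up):

```python
def shrunk_estimate(claimed_utility, estimate_sd, prior_mean=0.0, prior_sd=1.0):
    """Standard normal-normal update: posterior mean for the true effectiveness,
    given a noisy estimate and a normal prior."""
    w_prior = 1 / prior_sd ** 2
    w_estimate = 1 / estimate_sd ** 2
    return (w_prior * prior_mean + w_estimate * claimed_utility) / (w_prior + w_estimate)

# If the noise in your estimate scales with the size of the claim, enormous claimed
# utilities get shrunk so hard that the adjusted values stop growing (here they
# actually shrink) as the claims get larger:
for claim in (10, 1_000, 1_000_000):
    print(claim, shrunk_estimate(claim, estimate_sd=claim))
```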

(edit: quickly fixed some errors)

An example of a computation that runs most algorithms is a mathematical formalism called Solomonoff induction.

Solomonoff induction is uncomputable, so it's not a computation. It would be correct if you had written:

An example of a computation that runs most algorithms could be some program that approximates a mathematical formalism called Solomonoff induction.

Also, strictly speaking no real-world computation could run "most" algorithms, since there are infinitely many and it could only run a finite number. It would make more sense to use an expression like "computations that search through the space of all possible algorithms".
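
What I have in mind is a dovetailer: a computation that, in the limit, gives every program in some enumeration unboundedly many steps, while at any finite time having started only finitely many of them. A toy sketch, with Python generators standing in for the programs of a universal machine (so the enumeration here is obviously not "all possible algorithms"):

```python
def dovetail(enumerate_programs, rounds):
    """In round n, start program n-1 and then run every started program for one
    more step.  In the limit each program gets unboundedly many steps, but after
    any finite number of rounds only finitely many programs have been touched."""
    started = []                      # programs started so far
    steps_done = {}                   # program index -> steps it has completed
    enumerator = enumerate_programs()
    for _ in range(rounds):
        started.append(next(enumerator))
        for i, program in enumerate(started):
            try:
                next(program)         # one more step of program i
                steps_done[i] = steps_done.get(i, 0) + 1
            except StopIteration:
                pass                  # program i has halted
    return steps_done

# Stand-in "programs": program i just counts up to i and halts.  A faithful version
# would enumerate the programs of a universal machine instead.
def toy_programs():
    i = 0
    while True:
        yield iter(range(i))
        i += 1

print(dovetail(toy_programs, rounds=6))   # {1: 1, 2: 2, 3: 3, 4: 2, 5: 1}
```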

A function that could evaluate an algorithm and return 0 only if it is not a person is called a nonperson predicate. Some algorithms are obviously not people. For example, any algorithm whose output is repeating with a period of less than a gigabyte...

Is this supposed to be about avoiding the algorithms simulating suffering people, or avoiding them doing something dangerous to the outside world? Obviously an algorithm could simulate a person while still having a short output, so I'm thinking it has to be about the second one. But then the notion of nonperson predicates doesn't apply, because it's about avoiding simulating people (that might suffer and that will die when the simulation ends). Also, a dangerous algorithm could probably do some serious damage with under a gigabyte of output. So having less than a gigabyte output doesn't really protect you from anything.

So you're searching for "the most important thing", and reason that this is the same as searching for some utility function; then you note that one reason this question seems worth thinking about is that it's interesting; then you refer to Schmidhuber's definition of interestingness (which would yield a utility function) and note that it is itself interesting; so maybe importance is the same as interestingness, because importance has to be itself important, and (Schmidhuberian) interestingness satisfies this requirement by being itself interesting.
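
(For reference, Schmidhuber's proposal, as I remember it from "Driven by Compression Progress", defines the interestingness, i.e. the intrinsic reward, of new data roughly as the compression progress it enables on your whole history:

$$ r_{\text{int}}(t) \;\propto\; C\big(h(\le t),\, p(t-1)\big) \;-\; C\big(h(\le t),\, p(t)\big) $$

where C(h, p) is the length of history h when compressed by compressor/predictor p, and p(t) is the compressor after it has learned from h(≤ t). So it does yield a utility function of sorts, defined over observation histories.)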

At this point I'm not very impressed. This seems to be the same style of reasoning that gets people obsessed with "complexity" or "universal instrumental values" as the ultimate utility functions.

At the end you say you doubt that interestingness is the ultimate utility function, too, but apparently you still think engaging in this style of reasoning is a good idea; we just have to take it even further.

At this point I'm thinking that it could go either way: you could come up with an interesting proposal in the class of CEV or "Indirect Normativity", which definitely are in some sense the result of going meta about values, or you could come up with something that turns out to be just another fake utility function in the class of "complexity" and "universal instrumental values".

But their proteins aren't necessarily making use of the extra computational power. And we can imagine that the physics of our universe allows for super powerful computers, but we can still obviously make efficient inferences about our universe.

You're not saying that if I perform QS I should literally anticipate that my next experience will be from the past, are you? (AFAICT, if QS is not allowed, I should just anticipate whatever I would anticipate if I were going to die everywhere, that is, going to lose all of my measure.)

I thought the upshot of Eliezer's metaethics sequence was just that "right" is a fixed abstract computation, not that it's (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops it is.).

(Indeed, just saying that it's a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it's some particular computation would be more like just plain ethics. The upshot does feel kind of underwhelming and obvious. This might be because I just don't remember how confusing the issue looked before I read those posts. It could also mean that Eliezer claiming that metaethics is a solved problem is not as questionable as it might seem. And it could also mean that metaethics being solved doesn't constitute as much progress as it might seem.)

The hypothesis is that the stars aren't real; they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the Fermi paradox (the apparent great filter) and would imply that if we build an AI, that doesn't necessarily mean it'll get to eat all the stars.

If the SI wanted to fool us, why would it make us see something (a lifeless universe) that would make us infer that we're being fooled by an SI?
