linas

Comments

Meetup : Austin, TX - HPMoR Wrap Party

I will come, unless I utterly space it off and forget.

Decision Theory FAQ

The FAQ states that Omega has/is a computer the size of the moon -- that's huge, but finite. I believe it's possible, even with today's technology, to create a randomizer that an Omega of this size cannot predict. However smart Omega is, one can always create a randomizer that Omega cannot break.

Decision Theory FAQ

Yes. I was confused, and perhaps added to the confusion.

Decision Theory FAQ

Hmm, the FAQ, as currently worded, does not state this. It simply implies that the agent is human, that Omega has made 1000 correct predictions, and that Omega has billions of sensors and a computer the size of the moon. That's large, but finite. One may assign some finite complexity to Omega -- say, 100 bits per atom times the number of atoms in the moon, whatever. I believe that one may devise pseudo-random number generators that can defy this kind of compute power. The relevant point here is that Omega, while powerful, is still not "God" (infinite, infallible, all-seeing), nor is it an "oracle" in the computer-science sense of the word: viz., a machine that can decide undecidable computational problems.
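To make the "randomizer Omega cannot break" idea concrete, here is a minimal sketch in Python, assuming the agent has access to an OS-backed cryptographically secure source of randomness. Predicting its output would require either compromising the OS entropy pool or brute-forcing the underlying keyspace, which is a different kind of obstacle than raw compute volume, however moon-sized:

```python
import secrets

def choose(options):
    """Pick one option using a cryptographically secure RNG,
    which is designed to be unpredictable to any computationally
    bounded observer -- including a finite Omega."""
    return secrets.choice(options)

decision = choose(["one-box", "two-box"])
print(decision)
```

This is only a sketch of the argument, of course: it shows that "finite but very large" and "able to predict everything" are not the same thing, not that Omega in the thought experiment actually fails.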

Decision Theory FAQ

Huh? Can you explain? Normally, one states that a mechanical device is "predictable": given its current state and some effort, one can discover its future state. Machines don't have the ability to choose. Normally, "choice" is something that only a system possessing free will can have. Is that not the case? Is there some other "standard usage"? Sorry, I'm a newbie here; I honestly don't know more about this subject than what I can deduce by my own wits.

Is risk aversion really irrational ?

There needs to be an exploration of addiction and rationality. Gamblers are addicted; we know some of the brain mechanisms of addiction -- some neurotransmitter A is released in brain region B, causing C to deplete, causing a dependency on the reward that A provides. This particular neuro-chemical circuit derives great utility from the addiction, thus driving the behaviour. By this argument, one might claim that addicts are "rational", because they derive great utility from their addiction. But is this argument faulty?

A mechanistic explanation of addiction says the addict has no control, no free will, no ability to break the cycle. But is it fair to say that a "machine has a utility function"? Or do you need to have free will before you can discuss choice?

The VNM independence axiom ignores the value of information

The collision I'm seeing is between formal, mathematical axioms and English-language usage. It's clear that Benelliot is thinking of the axiom in mathematical terms: dry, inarguable, much like the independence axioms of probability -- some statements about abstract sets. This is correct: the proper formulation of VNM is abstract and mathematical.

Kilobug is right in noting that information has value and ignorance has cost. But that doesn't subvert the axiom: the axioms are, by definition, mathematically correct. Rather, the way they were mapped to the example was incorrect -- the choices aren't truly independent.

It's also become clear that risk aversion is essentially the same idea as "information has value": people who are risk-averse are people who value certainty. This observation alone may well be enough to 'explain' the Allais paradox: the certainty of the 'sure thing' is worth something. All the Allais experiment does is measure the value of certainty.
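The "price of certainty" reading can be made concrete with a quick calculation. The numbers below are the usual textbook version of the first Allais choice (gamble 1A: $1M for sure; gamble 1B: 89% chance of $1M, 10% chance of $5M, 1% chance of nothing), not figures taken from the FAQ:

```python
# Expected values of the two standard Allais gambles.
ev_1a = 1.00 * 1_000_000                        # 1A: $1M with certainty
ev_1b = 0.89 * 1_000_000 + 0.10 * 5_000_000     # 1B: 89% $1M, 10% $5M, 1% $0

print(ev_1a, round(ev_1b))
# Most subjects still prefer 1A. On the value-of-certainty reading, the
# expected-value gap they forgo (about $390,000) is the price they are
# implicitly willing to pay for a guaranteed outcome.
```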

Decision Theory FAQ

Hmm. I just got a -1 on this comment ... I thought I posed a reasonable question -- I would even have thought it a "commonly asked question" -- so why would it get a -1? Am I misunderstanding something, or am I being unclear?

Decision Theory FAQ

How many times in a row will you be mugged before you realize that Omega was lying to you?

Decision Theory FAQ

OK, but this can't be a "minor detail"; it's rather central to the nature of the problem. The back-and-forth with incogn above tries to deal with this. Put simply: either Omega is able to predict, in which case EDT is right, or Omega is not able to predict, in which case CDT is right.

The source of entropy need not be a fair coin: even fully deterministic systems can have behavior so complex that prediction is untenable. Either Omega can predict, and knows it can predict, or Omega cannot predict, and knows that it cannot predict. The possibility that it cannot predict, yet is erroneously convinced that it can, seems ridiculous.
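The "deterministic yet practically unpredictable" point has a classic toy illustration (a sketch only, nothing to do with Omega's actual physics): the logistic map at r = 4 is completely deterministic, yet perturbing the seed by one part in a trillion produces a trajectory that soon bears no resemblance to the original, so any predictor with finite measurement precision must eventually fail:

```python
def max_divergence(x0, y0, steps, r=4.0):
    """Iterate the deterministic map x -> r*x*(1-x) from two seeds
    and return the largest gap observed between the trajectories."""
    x, y, worst = x0, y0, 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst

gap = max_divergence(0.3, 0.3 + 1e-12, 60)
print(gap)  # the 1e-12 perturbation has grown to order 1
```

Errors in this map roughly double each step, so a 1e-12 uncertainty saturates the whole unit interval within about forty iterations -- determinism without predictability.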
