
Nebu's Comments

Operationalizing Newcomb's Problem

Yeah, which I interpret to mean you'd "lose" (where getting $10 is losing and getting $200 is winning). Hence this is not a good strategy to adopt.

Operationalizing Newcomb's Problem
99% of the time for me, or for other people?

99% for you (see https://wiki.lesswrong.com/wiki/Least_convenient_possible_world )

More importantly, when the fiction diverges by that much from the actual universe, it takes a LOT more work to show that any lessons are valid or useful in the real universe.

I believe the goal of these thought experiments is not to figure out whether you should, in practice, sit in the waiting room or not (honestly, nobody cares what some rando on the internet would do in some rando waiting room).

Instead, the goal is to provide unit tests for different proposed decision theories, as part of research on developing self-modifying, superintelligent AI.
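To make the "unit test" framing concrete, here's a minimal sketch in Python of what such a test might look like. The $10/$200 figures are the ones discussed above; the predictor accuracy and the exact payoff mechanics (wait and get $200 only if the device predicted you'd wait, leave and get $10 regardless) are my own assumptions for illustration, not the post's.

```python
# A minimal sketch of a "unit test" for a decision theory on the waiting-room
# version of Newcomb's problem. Payoff mechanics and accuracy are assumptions.

def expected_payoff(waits: bool, predictor_accuracy: float) -> float:
    """Expected dollars for an agent who waits (one-boxes) or leaves (two-boxes)."""
    if waits:
        # The device correctly foresaw waiting with probability `predictor_accuracy`.
        return predictor_accuracy * 200 + (1 - predictor_accuracy) * 0
    # Leaving nets the small payout regardless of the prediction.
    return 10

def test_one_boxing_beats_two_boxing():
    accuracy = 0.99  # assumed accuracy of the fictional device
    assert expected_payoff(True, accuracy) > expected_payoff(False, accuracy)

test_one_boxing_beats_two_boxing()
```

A decision theory "passes" this kind of test if the action it recommends is the one with the higher payoff under the stipulated setup.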

12020: a fine future for these holidays

Any recommendations for companies that can print and ship the calendar to me?

Operationalizing Newcomb's Problem

Okay, but then what would you actually do? Would you leave before the 10 minutes is up?

Operationalizing Newcomb's Problem
Why do I believe that its accuracy for other people (probably mostly psych students) applies to my actions?

Because historically, in this fictional world we're imagining, when psychologists have said that a device's accuracy was X%, it turned out to be within 1% of X%, 99% of the time.

Overcoming Akrasia/Procrastination - Volunteers Wanted

I really should get around to signing up for this, but...

How much background technical knowledge do LW readers have?

It seems the survey is now closed, so I can't take it at the time I'm seeing this post.

A Plausible Entropic Decision Procedure for Many Worlds Living, Round 2
suppose Bob is trying to decide to go left or right at an intersection. In the moments where he is deciding to go either left or right, many nearly identical copies in nearly identical scenarios are created. They are almost entirely all the same, and if one Bob decides to go left, one can assume that 99%+ of Bobs made the same decision.

I don't think this assumption is true (and thus perhaps you need to put more effort into checking/arguing that it's true, if the rest of your argument relies on it). In the moments where Bob is trying to decide whether to go left or right, there is no a priori reason to believe he would choose one side over the other -- he's still deciding.

Bob is composed of particles with quantum properties. For each property, there is no a priori reason to assume that those properties (on average) contribute more strongly to causing Bob to decide to go left than to go right.

For each quantum property of each particle, an alternate universe is created where that property takes on some value. In a tiny proportion of these universes (though still infinitely many in absolute terms), "something weird" happens, like Bob spontaneously disappearing, or Bob spontaneously becoming Alice, or the left and right paths disappearing and leaving Bob stranded, etc. We'll ignore these possibilities for now.

In some of the remaining "normal" universes, the properties of the particles have proceeded in such a way as to trigger Bob to think "I should go left", and in others, they have proceeded in such a way as to trigger Bob to think "I should go right". There is no a priori reason to think that the proportion of the first type of universe is higher or lower than the proportion of the second type. That is, being maximally ignorant, you'd expect about 50% of Bobs to go left and 50% to go right.
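To illustrate the "maximally ignorant" picture, here's a rough Monte Carlo sketch in Python. Modelling each branch's decision as the sign of a sum of many symmetric micro-contributions is my own simplification, and the branch and contribution counts are arbitrary:

```python
import random

def simulate_branches(n_branches: int = 10_000, n_contributions: int = 101) -> float:
    """Return the fraction of branches in which Bob ends up going left."""
    left = 0
    for _ in range(n_branches):
        # Each micro-contribution is symmetric: no a priori bias toward left or right.
        total = sum(random.choice((-1, 1)) for _ in range(n_contributions))
        if total > 0:  # odd n_contributions, so there are no ties
            left += 1
    return left / n_branches

print(simulate_branches())  # roughly 0.5 under the symmetry assumption
```

Under that symmetry assumption the left-going branches come out at about 50%, which is the point: nothing in the setup privileges "99%+ of Bobs made the same decision".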

Going a bit more meta, if MWI is true, then decision theory "doesn't matter" instrumentally to any particular agent. No matter what arguments you (in this universe) provide for one decision theory being better than another, there exists an alternate universe where you argue for a different decision theory instead.

The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence

I see some comments hinting at this pseudo-argument, but I don't think I saw anyone make it explicitly:

Say I replace one neuron in my brain with a little chip that replicates what that neuron would have done. Say I replace two, three, and so on, until my brain is now completely artificial. Am I still conscious, or not? If not, was there a sudden cut-off point where I switched from conscious to not-conscious, or is there a spectrum and I was gradually moving towards less and less conscious as this transformation occurred?

If I am still conscious, what if we remove my artificial brain, put it in a PC case, and just let it execute? Is that not a simulation of me? What if we pause the chips, record each of their exact states, and instantiate those same states in another set of chips with an identical architecture?

If consciousness is a spectrum rather than a sudden cut-off point, how confident are we that the "simulations" you're claiming are not conscious (as in 0) aren't actually 0.0001 conscious?

Ductive Defender: a probability game prototype

I played the game "blind" (i.e. I avoided reading the comments before playing) and was able to figure it out and beat the game without ever losing my ship. I really enjoyed it. The one part that I felt could have been made a lot clearer was that the "shape" of the mines signals how quickly they move towards your ship; I think I only figured that out around level 3 or so.
