"All of Tim Tyler's points have been addressed in previous posts. Likewise the idea that evolution would have more shaping influence than a simple binary filter on utility functions. Don't particularly feel like going over these points again; other commenters are welcome to do so."
Or perhaps someone else will at least explain what "having more shaping influence than a simple binary filter on utility functions" means. It sounds like it's supposed to mean that all evolution can do is eliminate some utility functions. If that's what it means, I don't see how it's relevant.

"Complex challenges? Novelty? Individualism? Self-awareness? Experienced happiness? A paperclip maximizer cares not about these things."
But advanced evolved organisms probably will.

The paper-clipper is a straw man that is only relevant if some well-meaning person tries to replace evolution with their own optimization or control system. (It may also be relevant in the case of a singleton, but it would be non-trivial to demonstrate that.)

Vladimir, I don't mean to diss you, but I am running out of weekend and think it's better for me not to reply than to reply carelessly. I don't think I can do much more than repeat myself anyway.

"Phil, you can look at it another way: the commonality is that to win you have to make yourself believe a demonstrably false statement."
But I don't. The problem, phrased in a real-world situation that could possibly occur, is that a superintelligence is somehow figuring out what people are likely to do, or else is very lucky. The real-world solution is either

  1. if you know ahead of time that you're going to be given this decision, either pre-commit to one-boxing or try to game the superintelligence. Neither option is irrational, and neither takes any fancy math; one-boxing here amounts to positing that your commitment to one-boxing has a direct causal effect on what will be in the boxes.

  2. if you didn't know ahead of time that you'd be given this decision, choose both boxes.

You can't, once the boxes are on the ground, decide to one-box and think that's going to change the past. That's not the real world, and describing the problem in a way that makes it seem as though choosing to one-box actually CAN change the past is just spinning fantasies.

This is one of a class of apparent paradoxes that arise only because people posit situations that can't actually happen in our universe. Like the ultraviolet catastrophe, or being able to pick any point from a continuum.
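To make the arithmetic in the two cases explicit, here is a minimal sketch using the conventional Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) and an assumed 99%-accurate predictor; the specific numbers and function names are illustrative, not part of the problem statement.

```python
# Illustrative sketch, not part of the original argument: conventional Newcomb payoffs,
# with the opaque box filled only if the predictor predicted "one-box".
# The 0.99 predictor accuracy below is an assumption.

PAYOFF_TRANSPARENT = 1_000
PAYOFF_OPAQUE = 1_000_000


def expected_payoff_with_precommitment(policy: str, predictor_accuracy: float) -> float:
    """Case 1: you settle on a policy before the prediction is made, so the
    prediction tracks your policy with probability `predictor_accuracy`."""
    p_predicted_one_box = predictor_accuracy if policy == "one-box" else 1.0 - predictor_accuracy
    expected_opaque = p_predicted_one_box * PAYOFF_OPAQUE
    return expected_opaque if policy == "one-box" else expected_opaque + PAYOFF_TRANSPARENT


def payoff_after_boxes_placed(opaque_contents: int, choice: str) -> int:
    """Case 2: the boxes are already on the ground; their contents are fixed
    and your choice cannot change them."""
    return opaque_contents if choice == "one-box" else opaque_contents + PAYOFF_TRANSPARENT


if __name__ == "__main__":
    # Pre-commitment: one-boxing wins, because the commitment causally affects the prediction.
    for policy in ("one-box", "two-box"):
        print(policy, expected_payoff_with_precommitment(policy, predictor_accuracy=0.99))

    # Boxes already placed: two-boxing gains an extra $1,000 whatever the opaque box holds.
    for contents in (0, PAYOFF_OPAQUE):
        print(contents, payoff_after_boxes_placed(contents, "one-box"),
              payoff_after_boxes_placed(contents, "two-box"))
```

Under those assumptions, pre-committed one-boxing expects roughly $990,000 versus $11,000 for two-boxing, while in the fixed-boxes case two-boxing comes out exactly $1,000 ahead whatever the opaque box already contains.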

"Drexler's Nanosystems is ignored because it's a work of "speculative engineering" that doesn't address any of the questions a chemist would pose (i.e., regarding synthesis)."

It doesn't address any of the questions a chemist would pose after reading Nanosystems.

"As a reasonable approximation, approaching women with confidence==one-boxing on Newcomb's problem."

Interesting, although I would say "approaching women with confidence is an instance of a class of problems that Newcomb's problem is supposed to represent but does not." Newcomb's problem presents you with a situation in which the laws of causality are broken, and then asks you to reason out a solution assuming the laws of causality are not broken.