GreedyAlgorithm

Comments

The Informations told/implied to the Humans that they don't lie or withhold information. That is not the same as the Humans knowing that the Informations don't lie.

Brian, you want an answer to the real-world situation? Easy. First assume you have a source of inputs that is not antagonistic, as discussed. Then measure which deterministic pivot-choice algorithms would work best on large samples of the inputs, and use the best. Median-of-three is a great pivot-choosing algorithm in practice, we've found. If your source of inputs is narrower than "whatever people anywhere using my ubergeneral sort utility will input", then you may be able to do better. For example, I regularly build DFAs from language data. Part of this process is a sort. I could implement this plan and possibly find that, I don't know, median of first, last, and about-twenty-percent-of-the-way-in is in general better. I anticipate the effort would not be worth the cost, so I don't, but there you are.

You don't have to put constraints on the inputs; you can just measure them (or guess well!). The constraints are probably already there in real-world situations.
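To make that concrete, here's a rough sketch (Python) of the measure-and-pick approach; the pivot rules, the nearly-sorted input generator, and the sample counts are all made-up stand-ins for whatever your real inputs actually look like:

```python
import random
import sys
import time

sys.setrecursionlimit(10000)  # the first-element rule recurses deeply on sorted input

def quicksort(xs, choose_pivot):
    """Out-of-place quicksort, parameterized by the pivot-choosing rule."""
    if len(xs) <= 1:
        return xs
    pivot = choose_pivot(xs)
    lows = [x for x in xs if x < pivot]
    pivots = [x for x in xs if x == pivot]
    highs = [x for x in xs if x > pivot]
    return quicksort(lows, choose_pivot) + pivots + quicksort(highs, choose_pivot)

# Candidate pivot rules to measure against samples of your real inputs.
pivot_rules = {
    "first": lambda xs: xs[0],
    "middle": lambda xs: xs[len(xs) // 2],
    "median-of-three": lambda xs: sorted([xs[0], xs[len(xs) // 2], xs[-1]])[1],
    # The hypothetical variant mentioned above: first, ~20% of the way in, last.
    "median-of-3, 20% in": lambda xs: sorted([xs[0], xs[len(xs) // 5], xs[-1]])[1],
    "random": lambda xs: random.choice(xs),
}

def sample_input(n=1000, swaps=10):
    """Stand-in for real data: a nearly-sorted list, the common real-world case."""
    xs = list(range(n))
    for _ in range(swaps):
        i, j = random.randrange(n), random.randrange(n)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

for name, rule in pivot_rules.items():
    data = [sample_input() for _ in range(10)]
    start = time.perf_counter()
    for xs in data:
        assert quicksort(xs, rule) == sorted(xs)
    print(f"{name:>20}: {time.perf_counter() - start:.3f}s")
```

On nearly-sorted data the first-element rule should come out far slower than the others; swap in samples of your own inputs to see which rule wins for you.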

Brian, the reason we do that is to avoid the quicksort algorithm being stupid and choosing the worst-case pivot every time. The naive deterministic choices of pivot (like "pick the first element") do poorly on many of the permutations of the input which are far more probable than 1/n!, because of the types of inputs people give to sorting algorithms, namely already-sorted or nearly-sorted input. Picking the middle element does better because inputs sorted inside to outside are rarer, but they're still far more likely than 1/n! apiece. Picking a random element is a very easy way to say "hey, any simple algorithm I think up will do things that correlate with algorithms other people think up, so it will hit worst-case running times more often than I'd like, so I'll avoid correlation with other people".

There are variants of quicksort that completely avoid worst-case complexity by choosing the true median of the list each time. They incur an extra cost that makes the average case worse, though, and they're usually not the best choice, because we're almost always not trying to avoid the worst case for a single run; we're actually trying to make the average case faster.
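For concreteness, here's a sketch of that kind of variant; this is the generic textbook median-of-medians construction (not any particular library's code), and its constant factors are exactly why you'd rarely prefer it:

```python
def median_of_medians(xs, k):
    """Return the k-th smallest element (0-indexed) in worst-case linear time."""
    if len(xs) <= 5:
        return sorted(xs)[k]
    # Medians of groups of five, then recursively pick the median of those.
    groups = [xs[i:i + 5] for i in range(0, len(xs), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]
    pivot = median_of_medians(medians, len(medians) // 2)
    lows = [x for x in xs if x < pivot]
    highs = [x for x in xs if x > pivot]
    num_pivots = len(xs) - len(lows) - len(highs)
    if k < len(lows):
        return median_of_medians(lows, k)
    if k < len(lows) + num_pivots:
        return pivot
    return median_of_medians(highs, k - len(lows) - num_pivots)

def median_pivot_quicksort(xs):
    """Quicksort that always pivots on the true median: worst-case O(n log n),
    but with noticeably worse constant factors than ordinary quicksort."""
    if len(xs) <= 1:
        return xs
    pivot = median_of_medians(xs, len(xs) // 2)
    lows = [x for x in xs if x < pivot]
    pivots = [x for x in xs if x == pivot]
    highs = [x for x in xs if x > pivot]
    return median_pivot_quicksort(lows) + pivots + median_pivot_quicksort(highs)
```

(In practice, standard libraries more often dodge the worst case with introsort, i.e. quicksort that falls back to heapsort past a recursion-depth limit, rather than paying for true-median selection on every call.)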

If they're both about equally likely to reason as well, I'd say Eliezer's portion should be p * $20, where ln(p/(1-p)) = (1.0*ln(0.2/0.8) + 1.0*ln(0.85/0.15)) / (1.0 + 1.0) = 0.174, which gives p = 0.543. That's $10.87, and he owes NB merely fifty-six cents.

Amusingly, if it's mere coincidence that the actual split was 3:4 and in fact they split according to this scheme, then the implication is that we are trusting Eliezer's estimate 86.4% as much as NB's.
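Spelled out in code (Python; the pool_log_odds helper is mine, and the 4/7 comes from reading the actual 3:4 split as Eliezer taking $11.43 of the $20, consistent with the fifty-six cents above):

```python
from math import exp, log

def pool_log_odds(estimates, weights):
    """Weighted average of log-odds, mapped back to a probability."""
    z = sum(w * log(p / (1 - p)) for p, w in zip(estimates, weights)) / sum(weights)
    return 1 / (1 + exp(-z))

# Equal trust in the two estimates (0.2 and 0.85):
p = pool_log_odds([0.2, 0.85], [1.0, 1.0])
print(round(p, 3), round(20 * p, 2))      # 0.543 10.87

# Back out the weight w on the 0.2 estimate (relative to 1.0 on the 0.85 estimate)
# that makes the pooled share come out to the actual 3:4 split, i.e. p = 4/7:
target = log((4 / 7) / (3 / 7))
a, b = log(0.2 / 0.8), log(0.85 / 0.15)
w = (b - target) / (target - a)
print(round(w, 3))                        # 0.864
```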

"But sometimes experiments are costly, and sometimes we prefer to get there first... so you might consider trying to train yourself in reasoning on scanty evidence, preferably in cases where you will later find out if you were right or wrong. Trying to beat low-capitalization prediction markets might make for good training in this? - though that is only speculation."

Zendo, an inductive reasoning game, is the best tool I know of to practice reasoning on scanty evidence in cases where you'll find out if you were right or wrong. My view of the game: one player at a time takes the role of "reality", which is a single rule classifying all allowed things into two categories. The other players, based on a steadily growing body of examples of correctly classified things and the fact that that player made up the rule, attempt to be the first to determine the rule. This is fundamentally different from deduction games, which traditionally have small hypothesis spaces (Clue: 324; Mystery of the Abbey: 24; Mastermind: I've seen 6561), with each hypothesis being initially equiprobable.

I've seen variants that can be played online with letters or numbers instead of pyramids, but frankly they're not nearly as fun.

Here's what I was missing: the magnitudes of the amplitudes need to decrease when changing from one possible state to more than one. In drawing-on-2d terms, a small amount of dark pencil must change to a large amount of lighter pencil, not a large amount of equally dark pencil. So here's what actually occurs (I think):

A photon is coming toward E (-1,0)

A photon is coming from E to 1: (0, -1/sqrt(2))
A photon is coming from E to A: (-1/sqrt(2), 0)

A photon is coming from E to 1: (0, -1/sqrt(2))
A photon is coming from A to B: (0, -1/2)
A photon is coming from A to C: (-1/2, 0)

A photon is coming from E to 1: (0, -1/sqrt(2))
A photon is coming from B to D: (1/2, 0)
A photon is coming from C to D: (0, -1/2)

A photon is coming from E to 1: (0, -1/sqrt(2))
A photon is coming from D to X: (0, 1/(2 sqrt(2))) + (0, -1/(2 sqrt(2))) = (0, 0)
A photon is coming from D to 2: (1/(2 sqrt(2)), 0) + (1/(2 sqrt(2)), 0) = (1/sqrt(2), 0)

Detector 1 hits 1/2 of the time and detector 2 hits 1/2 of the time.
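A little Python to check that bookkeeping, under the toy rules I'm assuming from the post (with the normalization fix above): a full mirror multiplies the amplitude by i, and a half-silvered mirror sends amplitude/sqrt(2) straight through and amplitude*i/sqrt(2) down the bent path. The helper names are mine:

```python
from math import sqrt

def mirror(amp):
    """Full mirror: multiply the amplitude by i."""
    return amp * 1j

def half_silvered(amp):
    """Half-silvered mirror: (straight-through, bent) output amplitudes."""
    return amp / sqrt(2), amp * 1j / sqrt(2)

toward_E = -1 + 0j                         # "a photon is coming toward E": (-1, 0)
E_to_A, E_to_1 = half_silvered(toward_E)   # E is half-silvered
A_to_C, A_to_B = half_silvered(E_to_A)     # so is A
B_to_D = mirror(A_to_B)                    # B and C are full mirrors
C_to_D = mirror(A_to_C)
D_to_X = half_silvered(B_to_D)[1] + half_silvered(C_to_D)[0]   # amplitudes add at D
D_to_2 = half_silvered(B_to_D)[0] + half_silvered(C_to_D)[1]

for name, amp in [("detector 1", E_to_1), ("X", D_to_X), ("detector 2", D_to_2)]:
    print(name, round(abs(amp) ** 2, 3))   # squared magnitude = probability here
# detector 1 0.5, X 0.0, detector 2 0.5
```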

Okay, what happens in this situation: Take figure 2. The arrow coming in from the left? Replace it with figure 1, with its mirror relabeled E and detector 2 removed (replaced with figure 2). And lengthen the distance to detector 1 so that it's equal to the total distance to detector 2 in figure 2. And I guess call the detector 1 in figure 2 "X" for "we know you won't be getting any amplitude". Now what? Here's what I get...

A photon is coming toward E (-1,0)

A photon is coming from E to 1: (0, -1)
A photon is coming from E to A: (-1, 0)

A photon is coming from E to 1: (0, -1)
A photon is coming from A to B: (0, -1)
A photon is coming from A to C: (-1, 0)

A photon is coming from E to 1: (0, -1)
A photon is coming from B to D: (1, 0)
A photon is coming from C to D: (0, -1)

A photon is coming from E to 1: (0, -1)
A photon is coming from D to X: (0, 1) + (0, -1) = (0, 0)
A photon is coming from D to 2: (1, 0) + (1, 0) = (2, 0)

From this I conclude that detector 1 will register a hit 1/5 of the time and detector 2 will register a hit 4/5 of the time. Is that correct?
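And the same bookkeeping under the un-normalized convention I was using in this comment (half-silvered mirror: amplitude * 1 straight through, amplitude * i down the bent path); dividing the squared magnitudes by their total is where the 1/5 and 4/5 come from, though as the corrected calculation above shows, normalizing the amplitudes properly gives 1/2 and 1/2 instead:

```python
def half_silvered_raw(amp):
    """Un-normalized half-silvered mirror: (straight-through, bent) amplitudes."""
    return amp, amp * 1j

toward_E = -1 + 0j
E_to_A, E_to_1 = half_silvered_raw(toward_E)
A_to_C, A_to_B = half_silvered_raw(E_to_A)
B_to_D, C_to_D = A_to_B * 1j, A_to_C * 1j   # full mirrors at B and C
D_to_X = half_silvered_raw(B_to_D)[1] + half_silvered_raw(C_to_D)[0]
D_to_2 = half_silvered_raw(B_to_D)[0] + half_silvered_raw(C_to_D)[1]

weights = [abs(a) ** 2 for a in (E_to_1, D_to_X, D_to_2)]   # 1, 0, 4
print([w / sum(weights) for w in weights])                  # [0.2, 0.0, 0.8]
```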

The only way I can see p-zombieness affecting our world is if

a) we decide we are ethically bound to make epiphenomenal consciousnesses happier, better, whatever;
b) our amazing grasp of physics and how the universe exists leads our priors to indicate that even though it's impossible to ever detect them, epiphenomenal consciousnesses are likely to exist; and
c) it turns out doing this rather than that gives the epiphenomenal consciousnesses enough utility that it is ethical to help them out.

Lee,

I'd assume we can do other experiments to find this out... maybe they've been done? Instead of {98,100}, try all pairs of numbers from 90-110 or something?

Anon, Wendy:

Certainly finding out all of the facts that you can is good. But rationality has to work no matter how many facts you have. If the only thing you know is that you have two options:

  1. Save 400 lives, with certainty
  2. Save 500 lives, 90% probability; save no lives, 10% probability

then you should take option 2. Yes, more information might change your choice. Obviously. And not interesting. The point is that given this information, rationality picks choice 2.
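The arithmetic, assuming (as the post argues) you just multiply, valuing lives saved linearly:

```python
expected_lives = {
    "option 1": 1.0 * 400,            # certain
    "option 2": 0.9 * 500 + 0.1 * 0,  # 90% save 500, 10% save none
}
print(expected_lives)  # {'option 1': 400.0, 'option 2': 450.0}
```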