Optimization Process

Comments

A Semitechnical Introductory Dialogue on Solomonoff Induction

Hmm. If we're trying to argmax some function f over the real numbers, then the simplest algorithm would be something like "iterate over all mathematical expressions x; for each one, check whether the program 'iterate over all provable theorems, halting when you find one that says ∀y: f(y) ≤ f(x)' halts; if it does, return x."

...but I guess that's not guaranteed to ever halt, since there could conceivably be an infinite procession of ever-more-complex expressions, eking out ever-smaller gains on f. It seems possible that no matter what (reasonably powerful) mathematical language you choose, there are function-expressions with finite maxima at values not expressible in your language. Which is maybe what you meant by "as far as I know there can't be [an algorithm for it]."

(I'm assuming our mathematical language doesn't have the word "argmax", since in that case we'd pretty quickly stumble on the expression argmax_x f(x), verify that ∀y: f(y) ≤ f(argmax_x f(x)), and return it, which is obviously a cop-out.)
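(Toy illustration in Python of the enumerate-and-verify scheme above, with the "expression language" cut down to integer literals and the unbounded proof search replaced by an exhaustive check over a finite stand-in domain. That replacement is exactly the cheat that makes this version halt, and that the real-number case forbids.)

```python
from itertools import count

DOMAIN = range(-1000, 1001)      # finite stand-in for "all reals"

def f(x):
    return -(x - 37) ** 2        # example objective

def is_provably_max(x):
    # Stand-in for "iterate over all provable theorems, halting on a
    # proof of 'forall y: f(y) <= f(x)'". Decidable here only because
    # DOMAIN is finite.
    return all(f(y) <= f(x) for y in DOMAIN)

def argmax_by_enumeration():
    # Enumerate "expressions" in order of increasing complexity
    # (here: integers by increasing absolute value).
    for n in count():
        for x in (n, -n):
            if x in DOMAIN and is_provably_max(x):
                return x

print(argmax_by_enumeration())   # -> 37
```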

A Semitechnical Introductory Dialogue on Solomonoff Induction

The understanding I came away with: there are (at least) three stages of understanding a problem:

  1. You can't write a program to solve it.
  2. You can write a cartoonishly wasteful program to solve it.
  3. You can write a computationally feasible program to solve it.

"Shuffle-sort" achieves the second level of knowledge re: sorting lists. Yeah, it's cartoonishly wasteful, and it doesn't even resemble any computationally feasible sorting algorithm (that I'm aware of) -- but, y'know, viewed through this lens, it's still a huge step up from not even understanding "sorting" well enough to sort a list at all.

(Hmm, only marginally related but entertaining: if you reframe the problem of epistemology not as sequence prediction, but as "deduce what program is running your environment," then a Solomonoff inductor can be pretty fairly described as "consider every possible object of type EnvironmentProgram; update its probability based on the sensory input; return the posterior PDF over EnvironmentProgram-space." The equivalent program for list-sorting is "consider every possible object of type List<Int>; check if (a) it's sorted, and (b) it matches the element-counts of the input-list; if so, return it." Which is even more cartoonishly wasteful than shuffle-sort. Ooh, and if you want to generalize to cases where the list-elements are real numbers, I think you get/have to include something that looks a lot like Solomonoff induction, forcing countability on the reals by iterating over all possible programs that evaluate to real numbers (and hoping to God that whatever process generated the input list, your mathematical-expression-language is powerful enough to describe all the elements).)
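(Sketch of that even-more-wasteful sorter, assuming integer elements; the enumeration is dovetailed by an increasing bound so that every candidate list is eventually reached, and it only bothers with lists of the input's length, since no other length can match the element-counts:)

```python
from itertools import product
from collections import Counter

def enumeration_sort(xs):
    # "Consider every possible List<Int>; check if (a) it's sorted and
    # (b) it matches the element-counts of the input; if so, return it."
    n = len(xs)
    target = Counter(xs)
    bound = 0
    while True:
        for candidate in product(range(-bound, bound + 1), repeat=n):
            if (all(a <= b for a, b in zip(candidate, candidate[1:]))
                    and Counter(candidate) == target):
                return list(candidate)
        bound += 1

print(enumeration_sort([3, 1, 2]))  # -> [1, 2, 3]
```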

"New EA cause area: voting"; or, "what's wrong with this calculation?"

Ah! You're saying: if my "500k coin flips" model were accurate, then most elections would be very tight (with the winner winning by a margin of around 1/800, i.e. 0.125%), which empirically isn't what happens. So, in reality, if you don't know how an election is going to turn out, it's not that there are 500k fair coins, it's that there are either 500k 51% coins or 500k 49% coins, and the uncertainty in the election outcome comes from not knowing which of those worlds you're in. But, in either case, your chance of swinging the election is vanishingly small, because both of those worlds put extremely little probability-mass on the outcome being a one-vote margin.
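(A quick numerical check, using the normal approximation to the binomial and the post's 500k-voter model:)

```python
from statistics import NormalDist

n = 500_000

def p_exact_tie(p):
    # Normal approximation to P(exactly n/2 Heads) for Binomial(n, p).
    mu = n * p
    sigma = (n * p * (1 - p)) ** 0.5
    return NormalDist(mu, sigma).pdf(n / 2)

print(p_exact_tie(0.50))  # ~1.1e-3: in a fair-coin world, ties are plausible
print(p_exact_tie(0.51))  # ~4e-47: in a 51%-coin world, they're absurd
```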

(see also: johnwentworth's comment below)

"New EA cause area: voting"; or, "what's wrong with this calculation?"

Also, note that your probability of swinging the election is only 1/√n if the population is split exactly 50/50; it drops off superexponentially as the distribution shifts to one side or the other by √n voters or more.

Yesss, this seems related to shadonra's answer. If my "500k coin flips" model were accurate, then most elections would be very tight (with the winner winning by a margin of 1/800, i.e. 0.125%), which empirically isn't what happens. So, in reality, if you don't know how an election is going to turn out, it's not that there are 500k fair coins, it's that there are either 500k 51% coins or 500k 49% coins, and the uncertainty in the election outcome comes from not knowing which of those worlds you're in. But, in either case, your chance of swinging the election is vanishingly small, because both of those worlds put extremely little probability-mass on the outcome being a one-vote margin.

if you're actively pushing an election, not just voting yourself, then that plausibly has a much bigger impact than just your one vote

That... is... a very interesting corollary. Although... you only get the "superexponential" benefit in the case where you're far out on the tail of the PDF -- in the "500k 49% coins" world, throwing 100 votes for Heads instead of 1 would increase your chances of swinging the election by a factor of much, much more than 100x, but your probability of swinging the election is still negligible, since the 50% mark is, uh, 14 standard deviations out from the mean. Right?
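(Checking both halves of that, under the same binomial model:)

```python
from math import sqrt
from statistics import NormalDist

n, p = 500_000, 0.49
mu, sigma = n * p, sqrt(n * p * (1 - p))
dist = NormalDist(mu, sigma)
tie = n // 2

print((tie - mu) / sigma)  # ~14.1 -- the tie really is ~14 sigma out

# P(pivotal with 1 vote) ~ density at the tie itself; P(pivotal with
# 100 votes) ~ summed density over the 100 outcomes just short of it,
# each of which is exponentially more likely than the last.
one_vote = dist.pdf(tie)
hundred_votes = sum(dist.pdf(tie - j) for j in range(100))
print(hundred_votes / one_vote)  # ~1.3e3: far more than 100x...
print(hundred_votes)             # ~5e-44: ...but still utterly negligible
```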

"New EA cause area: voting"; or, "what's wrong with this calculation?"

Wait... your county has a GDP of over half a million dollars per capita? That is insanely high!

I agree! (Well, actually more like $100–200k/capita, because there are more people than voters, but still.) Sources: population, GDP, turnout.
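(Back-of-the-envelope with made-up illustrative numbers -- the real figures were behind the three links above:)

```python
# Hypothetical county figures, chosen only to show the voter/capita gap.
gdp = 100e9           # county GDP, $
population = 700_000  # county population
voters = 200_000      # ballots cast

print(gdp / voters)      # $500k per voter -- the startling figure
print(gdp / population)  # ~$143k per capita -- the corrected one
```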

"New EA cause area: voting"; or, "what's wrong with this calculation?"

Sure! I'm modeling the election as n coin flips: if there are more Heads than Tails, then candidate H wins, else candidate T wins.

If you flip n coins, each coin coming up Heads with probability p, then the number of Heads is binomially distributed with standard deviation √(np(1−p)), which I lazily rounded to √n.

The probability of being at a particular value near the peak of that distribution is approximately 1 / [that standard deviation]. ("Proof": numerical simulation of flipping 500k coins 1M times, getting 250k Heads about 1/800 of the time.)
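(That simulation, assuming numpy:)

```python
import numpy as np

n, trials = 500_000, 1_000_000
rng = np.random.default_rng(0)

# Flip 500k fair coins, a million times; count exact 250k-Heads ties.
heads = rng.binomial(n, 0.5, size=trials)
tie_rate = np.count_nonzero(heads == n // 2) / trials

print(tie_rate)        # ~0.0011, i.e. about 1/800-1/900
print(1 / np.sqrt(n))  # ~0.0014: the lazy 1/sqrt(n) estimate
```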

"New EA cause area: voting"; or, "what's wrong with this calculation?"
  • Possible answer: "Sure, it's individually rational for you to devote your energy to Getting Out The Vote instead of donating to charity, but the group-level rational thing for people to do is to donate to charity, rather than playing tug-o'-war against each other."

    Ugh, yeah, maybe. I see the point of this sort of... double-think... but I've never been fully comfortable with it. It sounds like this argument is saying "Hey, you put yourself at a 60% probability of being right, but actually, Outside View, it should be much smaller, like 51%." But, buddy, the 60% is already me trying to take the outside view! My inside view is that it's more like 95%!

    It sounds like down this path lies a discussion around how overconfident I should expect my brain to be (and therefore how hard I should correct for that). Which is important, sure, but also, ugh.

"New EA cause area: voting"; or, "what's wrong with this calculation?"
  • Possible answer: "You're doing a causal-decision-theory calculation here (assuming that your vote might swing the election while everything else stays constant); but in reality, we need to break out [functional decision theory or whatever the new hotness is], on account of politicians predicting and "pricing in" your vote as they design their platforms."

    Hmm, yeah, maybe. In which case, the model shouldn't be "my vote might swing the election," but instead "my vote will acausally incrementally change candidates' platforms," which I don't have very good models for.

"New EA cause area: voting"; or, "what's wrong with this calculation?"
  • Possible answer: "No election is decided by a single vote; if it's that close, it'll be decided by lawyers."

    Rebuttal: yeah, it's a little fuzzy, but, without having cranked through the math, I don't think it matters: my null hypothesis is that my vote shifts the probability distribution for who wins the legal battle in my desired direction, with an effect size around the same as in the naive lawyer-free model.
