In Luke's recent post on what sort of posts we would like to see more of, one suggestion was "Open Thread: Math". This suggestion has been voted up by (at least) 12 people. Since it's going to take me less than 2 minutes to type this post, I figured I might as well just go ahead and post the thread, rather than vote up the suggestion.

So, this is an open thread on mathematics. As things stand, I have no idea what the rules should be (I don't know what the people who voted up the post suggestion expected the rules to be), but I guess the general principle should be that we have maths questions which are vaguely related to LW-type ideas, as there are plenty of more appropriate fora for general mathematical discussion already out there.

It's been mentioned here on Less Wrong before, but I'll recommend it again - Project Euler. It's a set of 300+ math problems, each meant to be solved by designing an algorithm that runs in under a minute.

Getting into Project Euler last summer is likely the best move I've ever made to improve my programming skills. I'm not a programmer, but coding skills come in handy in lots of places, so I started working through the PE problems as a means of learning Python.

Since I started I've replaced almost all my casual reading with research into algorithms and math, and I've gone from a Python novice to a fairly advanced user without it ever seeming like work. Getting the right answer makes you feel smart, which is an ego-stroking way of making you continue.
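
(To give a flavor of what an early PE problem looks like in Python, here's a minimal sketch for Problem 2, which, if I remember right, asks for the sum of the even Fibonacci numbers below four million. This is just an illustration, not a model solution.)

```python
# Project Euler Problem 2 (from memory): sum the even-valued Fibonacci
# terms that do not exceed four million.
def even_fib_sum(limit=4_000_000):
    total, a, b = 0, 1, 2
    while a <= limit:
        if a % 2 == 0:
            total += a
        a, b = b, a + b
    return total

print(even_fib_sum())  # finishes instantly, comfortably within the one-minute rule
```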

I've replaced almost all my casual reading with research into algorithms and math

Where did you start? Did you have a vague idea of algorithms before? If you began with introductory texts which ones would you recommend? Did you know any other programming languages before you started on Python? Could one get a job based solely on knowing Python to your level, do you think?

I'd had a "Computer science for electrical engineers" course in school, which discussed data structures and algorithms from a high level (the usual sorting algorithm discussion, implementing a linked list, that kind of thing), but nothing too in-depth. I'd also had various programming experience before PE.

In solving PE problems I've mostly used Wikipedia and Mathworld for research, and sometimes I'll Google for lecture notes on a relevant topic.

I've used the Python skills I've picked up from PE in my job already. I think I could function in a more programming-oriented job now, though solving math problems doesn't give you much help in hooking into existing APIs or writing web services, which are probably pretty important.

I'd heard about it a few times, but finally just registered ... went through 6 problems; they're pretty neat, thanks :) My maths is getting rusty, and this is a nice boost.

I just did the first three - and I think my solution to problem 2 was fairly elegant!

Edit: My profile

I'll have to try this!

How many have you solved so far? I just got to level one the other day.

I started last June and am at 196 solved currently.

That's worth a bunch of upvotes. I think I got less than 100 done before losing interest. It was a great way to learn a new programming language (Haskell).

Couldn't reach level one until the submission rate limiter had cleared...

What should you use for the "ignorance prior" for the value of a continuous random variable whose underlying distribution is unknown? (For a random variable that is both discrete and finite, you can always use the uniform distribution as your prior, but that doesn't work when the variable can take an infinite number of values.) And you can't always use an improper prior, either...

Would you be surprised if the absolute value was bigger than 3^^^3? I'm guessing yes, very much so. So that's a reason not to use an improper prior.

If there's no better information about the problem, I sort of like using crazy things like Normal(0,1)*exp(Cauchy); that way you usually get reasonable smallish numbers, but you don't become shocked by huge or tiny numbers either. And it's proper.
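
(If you want to see what draws from that kind of prior look like, here's a quick NumPy sketch; the sampler names are standard NumPy, and the thresholds below are arbitrary.)

```python
import numpy as np

# Sketch: draws from Normal(0,1) * exp(Cauchy).
# Most draws are modest in magnitude, but the exp(Cauchy) factor produces
# heavy tails, so occasional draws are astronomically large or tiny
# (extreme Cauchy values can even overflow exp() to inf, which is the point).
rng = np.random.default_rng(0)
n = 100_000
samples = rng.normal(0.0, 1.0, n) * np.exp(rng.standard_cauchy(n))

print(np.median(np.abs(samples)))      # typically around 1
print(np.mean(np.abs(samples) > 1e6))  # small but nonzero fraction of huge draws
```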

Let's say that you know the variable is a real number in [0,1], but nothing else...

If it's a frequency, I believe the accepted prior is (x(1-x))^(-1/2), divided by pi if you want it normalized. This is known as the Jeffreys prior.
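
(Written out, and assuming I'm remembering the normalization right, that's just the Beta(1/2, 1/2) density:)

```latex
p(x) \propto \frac{1}{\sqrt{x(1-x)}}, \qquad
\int_0^1 \frac{dx}{\sqrt{x(1-x)}} = \pi
\quad\Longrightarrow\quad
p(x) = \frac{1}{\pi\sqrt{x(1-x)}}.
```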

Well, if you know it's in [0,1], then you've got a perfectly good uniform distribution.

What if I know that my uncertain parameter C from a model equation C*exp(T/T0) is in a certain range... but then I wonder whether I should instead be evaluating the exact same equation as exp(T/T0 + D) (with D = ln(C), C = exp(D))? Do I use the uniform distribution on C, which will correspond to a skewed distribution on D, or do I use the uniform distribution on D which will correspond to a skewed distribution on C?

There's a uniform prior that works just fine for this. (I think that EY's infinite set atheism might have left you with the notion that only discrete, finite sets can have uniform priors. This is false.)

Namely, if you take any interval in [0,1] of length p, then your prior probability of X lying in that interval equals p.

Obvious corollary: your prior probability for X equaling a particular real number is zero. No amount of evidence can let you conclude the exact value of X, just narrower and narrower intervals to which it belongs. If you're in a situation where you expect it's possible to find out that the answer is some exact real number, then you need to take a prior which accounts for this, and averages this "continuous" prior with a countable discrete prior over the exact values that it could conceivably take. (There are several different questions and objections you could raise here, but this actually works, and I'm happy to say more if you want.)

Yeah, you can have a uniform continuous distribution on a finite interval. The problem is that it's not actually uninformative: if you know nothing about x other than that x is in [0,1], you also know nothing about x^2 other than that it, too, is in [0,1] - but if you use the uniform distribution for x, then you're not using a uniform distribution for x^2... I think the Jeffreys prior was supposed to solve this problem, but I don't really understand what a Jeffreys prior is in general...
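
(A one-line check of the x-versus-x^2 point: if x is uniform on [0,1], the induced density on y = x^2 is)

```latex
P(x^2 \le y) = P(x \le \sqrt{y}) = \sqrt{y}
\quad\Longrightarrow\quad
p_{x^2}(y) = \frac{1}{2\sqrt{y}}, \qquad y \in (0,1],
```

(which piles probability up near 0 instead of spreading it evenly.)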

but if you use the uniform distribution for x, then you're not using a uniform distribution for x^2

Well, x^2 isn't an isometry, so you shouldn't expect it to leave the prior unchanged.

Let me put it this way: if Omega told you ve had a real number x between 0 and 1, and then ve told you that x^1000 was between 3/10 and 4/10, you probably should be more surprised than if ve told you that x^1000 was between 0 and 1/10. Yes, you could pick your prior to have a uniform distribution for x^1000 rather than for x, but that doesn't seem a natural choice in general.

I have also been wondering about when it's appropriate to use a uniform prior.

As an example (I think) of an instance where a uniform prior is not appropriate: NBA stats people calculate points per minute played for all the players. In some cases, bench players have higher points per minutes played than the starters. However, it does not follow that the bench player should be starting (ignoring defensive stats).

This is because bench players tend to enter the game at a time when they will play against the opposing team's bench. So, presuming that the defensive skills of the opponent's bench are less than the defensive skills of the opponent's starters, it's clear that there is a non-uniform level of defense maintained by the opponent during the game - i.e., bench players should have an easier time scoring than starters. So there is not a simple apples to apples comparison & conclusion based on this stat.

There's no general answer to that question.

I don't think there is any kind of general result for this. It depends on what you're trying to infer. For example, are you trying to find a scale parameter? Are you trying to find a shape parameter? I think the most popular approach is to find the Maximum Entropy distribution. Admittedly I don't know a lot about the math behind this.

Math-wise, basically you pick whatever distribution maximizes the integral from negative infinity to infinity of p(x) log(p(x)) with respect to x, multiplied by -1. That is, you maximize H[p] = -∫ p(x) log(p(x)) dx.

The crux of this making sense is that this value can be interpreted as the amount of information you expect to learn from hearing that x happened. Or more straightforwardly, it's how much you expect to not know about a particular variable/event. If you use log base 2, it's measured in the average number of yes/no questions needed to learn which outcome happened. For an explanation of why that's true, these articles are excellent.

The reason that you want to maximize this value in the distribution is that not doing so assumes that you have information that you don't actually have. Say you have 5 bits of entropy in the maximum entropy distribution, and 4 in some other one. If you choose the 4-bit one then you're basically making up information by thinking that you need one fewer yes/no question than you actually do.
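
(A quick numerical illustration of the 5-bits-versus-4-bits point, in Python; the second distribution below is just one I made up to get exactly 4 bits.)

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy -sum(p * log2(p)), ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

uniform_32 = np.full(32, 1 / 32)                     # uniform over 32 outcomes
half_support = np.concatenate([np.full(16, 1 / 16),  # uniform over only 16 of them
                               np.zeros(16)])

print(entropy_bits(uniform_32))    # 5.0 bits: five yes/no questions on average
print(entropy_bits(half_support))  # 4.0 bits: ruling out half the outcomes with
                                   # no evidence amounts to making up a bit
```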

What is a good introduction to Kolmogorov complexity and Solomonoff induction?

You might find http://singinst.org/blog/2007/06/25/solomonoff-induction/ interesting, though it uses a non-Turing complete example. I could write a post on it if there is demand.

There's demand :)

Umm, I'm also noting my demand, in case it's necessary.

I figured it out from Wikipedia...

This pdf seems to have an introduction, but it's technical.

Some links here. Especially this one.

A very nice computer science puzzle I have just heard. I don't know the solution, please don't spoil it.

Input: a word. Output: the longest prefix of the word that is also a suffix. So for a given w, find the longest x such that w=xy=zx, |y|>0.

Give a deterministic algorithm with the best possible asymptotics.

This is a subroutine of the Knuth-Morris-Pratt algorithm, which works in linear time.
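
(Spoiler warning for anyone still working on the puzzle. For everyone else, here's a minimal Python sketch of the relevant KMP subroutine, the failure/prefix function; this is my own illustration, not anything from the thread above.)

```python
def longest_prefix_suffix(w):
    """Length of the longest proper prefix of w that is also a suffix of w,
    via the Knuth-Morris-Pratt failure function. Runs in O(len(w)) time."""
    n = len(w)
    if n == 0:
        return 0
    fail = [0] * n  # fail[i]: longest proper prefix of w[:i+1] that is also its suffix
    k = 0
    for i in range(1, n):
        while k > 0 and w[i] != w[k]:
            k = fail[k - 1]   # fall back to the next-longest border
        if w[i] == w[k]:
            k += 1
        fail[i] = k
    return fail[-1]

print(longest_prefix_suffix("anana"))  # 3, i.e. "ana"
print(longest_prefix_suffix("abcab"))  # 2, i.e. "ab"
```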

Nice one. My answer, in rot13:

V svther gurer'f ab jnl jr pna trg guvf gb or yrff guna B(a), fvapr jr arrq gb purpx rirel punenpgre bs gur cersvk/fhssvk gb znxr fher gung vg ernyyl vf n cersvk naq fhssvk. Qrgnvyf ner yrsg nf na rkrepvfr gb gur ernqre. ;-)

Svefg, pbafgehpg n fhssvk gerr. Guvf gnxrf B(a) gvzr. Gura znex gur fgevat vgfrys nf n sbeovqqra fhssvk, juvpu gnxrf B(a) gvzr. Gura frnepu sbe nf zhpu bs gur ortvaavat bs gur fgevat nf cbffvoyr va gur fhssvk gerr, nibvqvat gur sbeovqqra fhssvk. Sbe rknzcyr, vs gur fgevat vf "nanan", gura lbh jbhyq ybbx ng gur urnq bs gur fhssvk gerr naq tb qbja gur "n" oenapu, gura qbja gur "an" oenapu, naq gura lbh jbhyq nibvq gur "an" oenapu (juvpu vf znexrq sbeovqqra), vafgrnq tbvat gb gur raq-bs-fhssvk znexre. Lbhe ybatrfg cersvk/fhssvk jbhyq gura or "nan".

Gbgny gvzr pbzcyrkvgl: B(a). Gbgny fcnpr: B(a). Fhssvk gerrf ner nznmvat.

I want to learn enough measure theory to deal with theoretical papers on stochastic processes, e.g. (warning: pdf). I need a text with lots of homework problems. Suggestions?

Disclaimer: I am not a probabilist.

One possibility would be some combination of the book Probability With Martingales by David Williams, and the lecture notes of James Norris available freely online here and here, depending on your taste. As the names suggest, both of these develop measure theory with probability theory in mind. Section A of Williams's book and the first set of Norris's notes cover basic measure theory, and sections B and C of Williams's book and the second set of Norris's notes cover more advanced topics (conditional expectation, Martingales, Brownian motion). I'm only really familiar with the former, not the latter, and depending on what exactly you need/want to know you might only work through the former, and maybe not even all of that.

Norris's notes in both cases are quite a bit terser and cover a little more material. Both the book and the notes have a large number of very helpful exercises compiled at the end. I remember finding some of Norris's exercises in particular a lot of fun!

Other books I'm aware of: D. H. Fremlin has an encyclopaedic set of volumes on measure theory available for free on his website here. I've found this useful as a reference. Some people like Rudin's Real and Complex Analysis for this kind of thing, but I wouldn't recommend it for your goals. (It's very difficult to jump into that book in the middle: you really need to follow the garden path from beginning to end, and I think that'd force you through a lot of functional analysis that you could survive without). Folland probably suffers from similar problems, but I'm not really familiar with it.

Good luck!

Many thanks!

Durrett's Probability: Theory and Examples has many good exercises, and works its way up to Brownian motion. From there, I don't know what to recommend. I tried a reading course with Kallenberg's Foundations of Modern Probability, which focuses heavily on those topics, but the book didn't work for me.

Cox's theorem requires finite additivity of probability, but not countable additivity. The Solomonoff prior, however, only makes sense given countable additivity. Is there any research on how this can be reconciled? Maybe there's a self evident axiom that can be used to prove countable additivity?

For probability to be a measure, which it's usually defined to be, it must be countably additive.

Why must probability be a measure? What is wrong with an agent that uses "sums" of infinite numbers of probabilities that do not satisfy countable additivity?

I guess there are lots of things to say here.

Firstly, note that countable additivity can only fail in one direction: the probability of a countable union of disjoint events can be strictly larger than the sum of the probabilities but it can't be strictly smaller.

Now suppose for a moment that we're only interested in discrete random variables that take at most countably many values. Then it's easy to see that any additive probability measure m which assigns probabilities to each singleton set can be uniquely decomposed as the sum of a sigma-additive measure k and an additive measure m* such that m*(S) = 0 for any finite set S. Perhaps m is one of a set of hypotheses that we're trying to evaluate by applying Bayesian reasoning to a data sample (consisting of some integer values). But then whichever data we receive, m* will assign it zero probability, and therefore drop out as irrelevant. The posterior probabilities will depend only on the sigma-additive components k.
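
(To spell out the decomposition I mean, as a sketch rather than a careful proof: take)

```latex
k(A) = \sum_{x \in A} m(\{x\}), \qquad m^*(A) = m(A) - k(A),
```

(so that k is sigma-additive, m* is finitely additive, and m*(S) = 0 for every finite S.)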

Clearly the continuous case is more complicated - I don't know whether one can make an analogous argument. (Perhaps it would depend on denying that we ever receive 'real numbers' in our data. Rather, all we get are finite amounts of information about each real.)

Perhaps you want something like a "Dutch book" argument to show that dropping countable additivity is irrational? Well, any such argument would presuppose that if you would accept each of the wagers A[n] then you must also accept the sum of the A[n] (as long as it's well-defined), but as long as you reject that presupposition, you remain perfectly self-consistent.

You make a lot of interesting points.

In your example with m*, what if we have a random variable that takes values equal to 1/n, for integer n? Then maybe we measure whether n is even. Wouldn't the probability of this depend on m* as well as k?

For the case of a Dutch book, the problem is that wagers do not satisfy countable additivity. Maybe we could identify some subset of wagers that do and use this in a proof?

Does anyone keep highly up-to-date with the machine learning, computer vision, and AI literature? Are there any "hot" new topics people are excited about?

There are tons of blogs that discuss the latest exciting developments in various fields of philosophy. That makes it really easy to keep up with the latest developments without paying to subscribe to all the major journals or even browsing the abstracts online to find the interesting few of each month.

Aren't there such blogs on the subjects of machine learning and AI? I would suspect so...

Yeah, there are a few. But for the most part, I think AI people don't tend to blog as much as philosophers.

Intention from motion seems pretty fascinating.

The link doesn't work for me. Is this what you're referring to? It's paywalled though -- I can only see the first few pages. Googling "intention from motion" brings up some more accessible papers.