John_Baez


Results from MIRI's December workshop

Thanks for writing this - I've added links in my article recommending that people read yours.

Probability, knowledge, and meta-probability

Ordinary probability theory and expected utility are sufficient to handle this puzzle. You just have to calculate the expected utility of each strategy before choosing a strategy. In this puzzle a strategy is more complicated than simply putting some number of coins in the machine: it requires deciding what to do after each coin either succeeds or fails to succeed in releasing two coins.

In other words, a strategy is a choice of what you'll do at each point in the game tree - just like a strategy in chess.

We don't expect to do well at chess if we decide on a course of action that ignores our opponent's moves. Similarly, we shouldn't expect to do well in this probabilistic game if we only consider strategies that ignore what the machine does. If we consider all strategies, compute their expected utility based on the information we have, and choose the one that maximizes this, we'll do fine.
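The "compute expected utility over whole strategies" idea can be sketched with backward induction. This is a hypothetical setup, not the exact numbers from the original puzzle: assume the machine is one of two types (pays out two coins with probability 0.9 or 0.1 per coin, equal prior), and we may insert at most ten coins, updating our beliefs after each result.

```python
from functools import lru_cache

# Hypothetical parameters: the machine is "good" or "bad" with equal
# prior probability; a good machine returns two coins with prob. 0.9
# per coin inserted, a bad one with prob. 0.1.
P_PAY_GOOD = 0.9
P_PAY_BAD = 0.1
HORIZON = 10  # at most 10 coins may be inserted

@lru_cache(maxsize=None)
def value(p_good, steps_left):
    """Expected net gain of playing optimally from this state.

    p_good     -- current posterior probability the machine is good
    steps_left -- how many more coins we are allowed to insert
    """
    if steps_left == 0:
        return 0.0
    # Probability the next coin pays out, averaged over machine types.
    p_pay = p_good * P_PAY_GOOD + (1 - p_good) * P_PAY_BAD
    # Bayesian posterior after observing a payout / no payout.
    p_win = p_good * P_PAY_GOOD / p_pay
    p_lose = p_good * (1 - P_PAY_GOOD) / (1 - p_pay)
    # EV of inserting one more coin, then continuing optimally down
    # the game tree (rounding keeps the cache keys finite).
    ev_insert = (-1
                 + p_pay * (2 + value(round(p_win, 10), steps_left - 1))
                 + (1 - p_pay) * value(round(p_lose, 10), steps_left - 1))
    # The optimal strategy stops (value 0) when inserting has negative EV.
    return max(0.0, ev_insert)

print(value(0.5, HORIZON))
```

Note that the recursion branches on what the machine does at every step, which is exactly the point: a strategy specifies a decision at each node of the game tree, not a fixed number of coins chosen in advance.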

I'm saying essentially the same thing Jeremy Salwen said.

Probability, knowledge, and meta-probability

Right: a game where you repeatedly put coins in a machine and decide whether or not to put in another based on what occurred is not a single 'event', so you can't sum up your information about it in just one probability.

Einstein's Arrogance

Once you assume:

1) the equations describing gravity are invariant under all coordinate transformations,

2) energy-momentum is not locally created or destroyed,

3) the equations describing gravity involve only the flow of energy-momentum and the curvature of the spacetime metric (and not powers or products or derivatives of these),

4) the equations reduce to ordinary Newtonian gravity in a suitable limit,

then Einstein's equations for general relativity are the only possible choice... except for one adjustable parameter, the cosmological constant.
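For reference, the field equations singled out by assumptions 1)-4), with the cosmological constant as the one adjustable parameter, are:

```latex
% Einstein's field equations with cosmological constant \Lambda;
% the left side is curvature, the right side is the flow of
% energy-momentum.
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu}
  = \frac{8 \pi G}{c^4} \, T_{\mu\nu}
```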

(First Einstein said this constant was nonzero, then he said that was the "biggest mistake in his life", and then it turned out he was right in the first place. It's not zero, it's roughly 0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001. So, a bit of waffling on this issue is understandable.)

It took Einstein about 10 years of hard work to figure this out, with a lot of help from the mathematician Marcel Grossmann, who taught him the required math. But by the time he talked to that reporter he knew this stuff. That's what gave him his confidence.

His assumptions 1)-4) could have been wrong, of course. But he was playing a strong hand of cards - and he knew it.

By the way, he did write a paper where he got the equations wrong and predicted a wrong value for the deflection of starlight by the Earth's gravitational field. But luckily he caught his mistake before the experiment was done. If he'd caught his mistake afterwards, lots of people would have thought he was just retroactively fudging his theory to fit the data.

How valuable is it to learn math deeply?

I agree that math can teach all these lessons. It's best if math is taught in a way that encourages effort and persistence.

One problem with putting too much time into learning math deeply is that math is much more precise than most things in life. When you're good at math, with work you can usually become completely clear about what a question is asking and when you've got the right answer. In the rest of life this isn't true.

So, I've found that many mathematicians avoid thinking hard about ordinary life: the questions are imprecise and the answers may not be right. To them, mathematics serves as a refuge from real life.

I became very aware of this when I tried getting mathematicians interested in the Azimuth Project. They are often sympathetic but feel unable to handle the problems involved.

So, I'd say math should be done in conjunction with other 'vaguer' activities.

Mathematicians and the Prevention of Recessions

According to Wikipedia:

As of December 31, 2012, the Treasury had received over $405 billion in total cash back on Troubled Assets Relief Program investments, equaling nearly 97 percent of the $418 billion disbursed under the program.

But TARP was just a small part of the whole picture. What concerns me is that there seem to have been somewhere between $1.2 trillion and $16 trillion in secret loans from the Fed to big financial institutions and other corporations. Even if they've been repaid, the low interest rates might represent a big transfer of wealth from the poor to the wealthy. And the fact that I'm seeing figures that differ by more than an order of magnitude is far from reassuring, too! The GAO report seems to be worth digging into. If not mathematicians, at least accountants could be helpful for things like this!

Robustness of Cost-Effectiveness Estimates and Philanthropy

Very nice article!

I too wonder exactly what you mean by

effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact.

Which kinds of qualitative analysis do you think are important, and why? Is that what you're talking about when you later write this:

Estimating the cost-effectiveness of health interventions in the developing world has proved to be exceedingly difficult, and this [weighs] in favor of giving more weight to inputs for which it’s possible to make relatively well-grounded assessments. Some of these are room for more funding, the quality of the people behind a project and historical precedent.

?

I also have a question. Did you spend time looking for ways in which projects could be more effective than initially expected, or only ways in which they could be less effective? For example: did you think much about the 'multiplier effects' where making someone healthier made them better able to earn a living, support their relatives, and help other people... thus making other people healthier as well?

Even if your only ultimate concern were saving lives - which seems narrow-minded to me, and also a bit vague since all these people eventually die - it seems effects like this tend to turn other good things into extra lives saved.

It could be very hard to quantify these multiplier effects. But just as you'll find many negative feedbacks if you look hard for them, like these:

  • Fathers may steal nets from pregnant mothers and sell them for a profit.

  • LLIN recipients may use the nets for fishing.

  • LLIN users may not fasten LLINs properly.

  • Mosquitoes may develop biological resistance to the insecticide used on LLINs.

there could also be many positive feedbacks you'd find if you looked for them. So I'm a bit concerned that you're listing lots of "low-probability failure modes" but no "low-probability better-success-than-expected modes".

A History of Bayes' Theorem

Maybe this is not news to people here, but in England, a judge has ruled against using Bayes' Theorem in court - unless the underlying statistics are "firm", whatever that means.

Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased)

I studied particle physics for a couple of decades, and I would not worry much about "mirror matter objects". Mirror matter is just one of many possibilities that physicists have dreamt up: there's no good evidence that it exists. Yes, maybe every known particle has an unseen "mirror partner" that only interacts gravitationally with the stuff we see. Should we worry about this? If so, we should also worry about CERN creating black holes or strangelets - more theoretical possibilities not backed up by any good evidence. True, mirror matter is one of many speculative hypotheses that people have invoked to explain some peculiarities of the Tunguska event, but I'd say a comet is a lot more plausible.

Asteroid collisions, on the other hand, are known to have happened and to have caused devastating effects. NASA currently rates the chances of the asteroid Apophis colliding with the Earth in 2036 at 4.3 out of a million. They estimate that the energy of such a collision would be comparable with a 510-megatonne thermonuclear bomb. This is ten times larger than the largest bomb actually exploded, the Tsar Bomba. The Tsar Bomba, in turn, was ten times larger than all the explosives used in World War II.

On the bright side, even if it hits us, Apophis will probably just cause local damage. The asteroid that hit the Earth in Chicxulub and killed off the dinosaurs released an energy comparable to a 240,000-megatonne bomb. That's the kind of thing that really ruins everyone's day.
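The chain of "ten times larger" comparisons above is easy to check with a few lines of arithmetic. The figures below are just the ones quoted in the comment (megatonnes of TNT equivalent), not independently sourced values:

```python
# Energy figures quoted above, in megatonnes of TNT equivalent.
apophis = 510        # estimated energy of an Apophis impact
tsar_bomba = 50      # largest bomb ever actually exploded
ww2_total = 5        # rough total of all explosives used in WWII
chicxulub = 240_000  # figure quoted for the dinosaur-killing impact

print(apophis / tsar_bomba)    # roughly 10: Apophis vs Tsar Bomba
print(tsar_bomba / ww2_total)  # roughly 10: Tsar Bomba vs all of WWII
print(chicxulub / apophis)     # how much bigger Chicxulub was
```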
