All of MatthewW's Comments + Replies

More generally, the words for the non-metric units are often much more convenient than the words for the metric ones. I think this effect is much stronger than any difference in convenience of the actual sizes of the units.

I think it's the main reason why many of the non-metric units are still more popular for everyday use than the metric ones in the UK, even though we've all learned metric at school for the last forty years or so.

In a scientific context I have definitely heard some metric units being given one-syllable pronunciations, for example "mg/ml" as "migs per mil" and mg/kg as "migs per kig".
This too. Centimetre and kilometre are four syllables each; inch and mile are one each.

It's slightly disconcerting to imagine some of the writing coming from the pen of an Anglican deacon.

The useful advice is in the first 5000 words of the essay, most importantly in the examples of bad writing. The 100 words or so of 'rules' are just a summary at the end.

This kind of teaching is common in other subjects. For example, in a Go textbook it's not rare to see a chapter containing a number of examples and a purported 'rule' to cover them, where the rule as stated is broken all the time in professional play. It would be a mistake to conclude that the author isn't a strong player, or that the chapter doesn't contain helpful advice. The 'rule' is ju... (read more)

It would however be reasonable to conclude that the author does not have strong analytic understanding of what exactly makes them a strong player/good writer, and be cautious about the more abstract parts of the advice, similar to how native speakers can tell you whether a sentence is grammatical, but are usually less reliable for giving you general rules than speakers who learned the language as adults to a high level of proficiency.

I don't find it off-putting, but it does make me feel I'm reading Lewis Carroll.

What's wrong with that?

Priors don't come into it. The expert was presenting likelihood ratios directly (though in an obscure form of words).

That isn't what was going on in this case. The expert wasn't presenting statistics to the jury (apparently that's already forbidden).

The good news from this case (well, it's news to me) is that the UK forensic science service both understands the statistics and has sensible written procedures for using them, which some of the examiners follow. But they then have to turn the likelihood ratio into a rather unhelpful form of words like 'moderately strong scientific support' (not to be confused with 'moderate scientific support', which is weaker), because bringing the likelihood ratios into court is forbidden.
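In odds form, the arithmetic the written procedures encode is simple. Here's a toy sketch with made-up numbers (the figures and the verbal-scale cutoff are illustrative, not the forensic science service's actual ones):

```python
# Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio.
# The expert computes the likelihood ratio; the jury (implicitly) supplies
# the prior odds from the rest of the evidence.

def posterior_odds(prior_odds, likelihood_ratio):
    """Update prior odds by a likelihood ratio."""
    return prior_odds * likelihood_ratio

# Hypothetical: the mark is 100x more probable if it came from the
# defendant's shoe than from a random shoe (LR = 100). On verbal scales
# of this kind, an LR around 100 might be reported as something like
# 'moderately strong support' (mapping invented for illustration).
prior = 0.001          # hypothetical prior odds from other evidence
print(posterior_odds(prior, 100))
```

The point of the 'form of words' controversy is that only the verbal label, not the likelihood ratio of 100 itself, reaches the jury.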

(Bayes' Theorem itself doesn't really come into this case.)

This isn't quite "a judge has ruled that [Bayes' theorem] can no longer be used", but I don't think it's good.

The judges decided that using a formula to calculate likelihood isn't allowed in cases where the numbers plugged into the formula are themselves uncertain (paragraph 86), and using conservative figures apparently doesn't help.

Paragraph 90 says that it's already established law that Bayes' theorem and likelihood ratios "should not be used", but I think it means "shouldn't be talked about in front of the jury".

Paragraph ... (read more)

Thanks for the link.

I think paragraphs 80 to 86 are the key paragraphs.

They're declaring that using a formula isn't allowed in cases where the numbers plugged into the formula are themselves uncertain.

But in this case, where there was uncertainty in the underlying data the expert tried to take a conservative figure. The judges don't seem to think that helps, but they don't say why. In particular, para 108 iv) seems rather wrongheaded for this reason.

(It looks like one of the main reasons they overturned the original judgement was that the arguments in cour... (read more)

Well, I'm in the UK, and there's no law against using IQ-style tests for job applicants here. Is that really the case in the US? (I assume the "You're a terrorist" bit was hyperbole.)

Employers here still often ask for apparently-irrelevant degrees. But admission to university here isn't noticeably based on 'generic' tests like the SAT; it's mostly done on the grades from subject-specific exams. So I doubt employers are treating the degrees as a proxy for SAT-style testing.

In your third speculation, I think the first and second category have got swapped round.

I edited it and fixed that, and also the numbers, which weirdly all turned into 1s when I copy-pasted.
You're completely right. Thank you for reading it carefully enough to make that observation.

I don't think there's much need for heuristics like "rate of effectiveness change times donation must be much smaller - say, a few percent of - effectiveness itself."

If you're really using a Landsburg-style calculation to decide where to donate to, you've already estimated the effectiveness of the second-most effective charity, so you can just say that effectiveness drop must be no greater than the corresponding difference.
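As a sketch of that comparison with made-up numbers (the effectiveness figures and the linear-drop model are hypothetical, just to show the shape of the calculation):

```python
# Donate to the top charity only if its marginal effectiveness, after the
# drop caused by your own donation, still beats the second-best charity.
# 'Effectiveness' here is, say, lives saved per dollar at the margin.

def should_donate_to_top(eff_top, eff_drop_per_dollar, donation, eff_second):
    """Compare post-donation effectiveness against the runner-up charity."""
    eff_after = eff_top - eff_drop_per_dollar * donation
    return eff_after >= eff_second

# Hypothetical figures: the drop from a $10,000 donation is far smaller
# than the gap to the second-best charity, so the whole sum goes to the top.
print(should_donate_to_top(eff_top=2e-4, eff_drop_per_dollar=1e-9,
                           donation=10_000, eff_second=1.5e-4))
```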

That's an excellent point that I managed to completely miss. Thank you. I'll try to add an endnote to that effect.

If it's a belief you've previously thought of as obvious and left unexamined, then this is probably a useful heuristic. Otherwise, no.

You say 'the world', but it seems to me you're talking about a region which is a little smaller.

I'm not sure the correction is that relevant. The US and the EU together make up about 40% of global GDP (PPP). Several smaller economies with nearly identical conditions and restrictions (Canada, New Zealand, Australia, South Africa, Norway, Switzerland ...) add up to another 3% or so. Most states in Latin America have similar legal prohibitions as well; they are not as well enforced, but avoiding them still imposes costs. And that's to say nothing of Japan or other developed East Asian economies (though to be fair, losses there are probably much smaller than in the developed West and perhaps even Latin America). The other half of the world then bears a massive opportunity cost due to the described half's inefficiency. Converting this loss into number of lives or quality of life is a depressing exercise. Fortunately that is only a problem if you care about humans.
Correction accepted.

Yes, several of these models look like they're likely to run into trouble of the Goodhart's law type ("Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes").

Principal component analysis of UK political views, from a few years back:

Thanks -- that's roughly what I was talking about.

I think trolley problems suffer from a different type of oversimplification.

Suppose in your system of ethics the correct action in this sort of situation depends on why the various different people got tied to the various bits of track, or on why 'you' ended up being in the situation where you get to control the direction of the trolley.

In that case, the trolley problem has abstracted away the information you need (and would normally have in the real world) to choose the right action.

(Or if you have a formulation which explicitly mentions the 'mad philosopher' and you take that bit seriously, then the question becomes an odd corner case rather than a simplifying thought experiment.)

Exactly. Context is very important. You can't just count deaths. For example, the example AlanCrowe gave above has an obvious answer because the military has a clear context: soldiers have already committed their lives and themselves to being 'one of a number'. Based on the limited information of this trolley problem, I think my answer would have to consider that the entire universe would be a better place if 5 people died being run over by an unwitting machine than if 1 person died because he was deliberately pushed by one of his fellows.

Taking the constraints of the trolley problem at face value, one action a person might consider is asking the fat man to jump. If asked, ethically, the man should probably say yes. Given that, I am not sure it would be ethical to ask him. Finally, since the fat man could anticipate your asking, it might be most moral to prevent him from jumping.

Thus over the course of a comment, I have surprised myself with the position that not only should you not push the man, you should prevent him from jumping if it occurs to him to do so. (That is, if his decision is impulsive, rather than a life commitment of self-sacrifice. I would not prevent a monk or a marine from saving the 5 people.)

I wonder whether the 'right answers' are what the subject of the photograph was actually feeling, what an expert intended the photograph to represent, or what most people respond.

I wonder that too. Some of the pictures of women look like they could be movie stars or models. The "fantasizing" eyes look more like what you'll see on the cover of a fashion magazine than like anything I've ever seen in real life.

I think it's quite normal that if someone is acknowledged by their peers to be among the very best at what they do, they won't waste much time with status games.

There's an exception if doing what they do requires publicity to bring in sales or votes.

Excellent point.

It would become a mind game: you'd have to explicitly model how you think Omega is making the decision.

The problem you're facing is to maximise P(Omega rewards you|all your behaviour that Omega can observe). In the classical problem you can substitute the actual choice of one-boxing or two-boxing for the 'all your behaviour' part, because Omega is always right. But in the 'imperfect Omega' case you can't.

It's still not clear to me why playing mind games is a better strategy than just one-boxing, even in the 60% case. But I do understand your point about independence assumptions.

To get to the conclusion that against a 60% Omega you're better off to one-box, I think you have to put in a strong independence assumption: that the probability of Omega getting it wrong is independent of the ways of thinking that the player is using to make her choice.
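Under that independence assumption, the expected values are easy to sketch (using the standard hypothetical payoffs of $1,000,000 and $1,000):

```python
# Expected-value sketch for Newcomb's problem with an Omega that is right
# only 60% of the time, ASSUMING its error rate is independent of how the
# player reasons (the strong assumption discussed above).

BIG, SMALL = 1_000_000, 1_000  # standard hypothetical payoffs
p = 0.6                        # Omega's accuracy

# One-boxing: you get BIG only when Omega correctly predicted one-boxing.
ev_one_box = p * BIG
# Two-boxing: you always get SMALL, plus BIG in the 40% of cases where
# Omega wrongly predicted one-boxing and filled the opaque box.
ev_two_box = SMALL + (1 - p) * BIG

# One-boxing still has the higher expected value under this assumption.
print(ev_one_box, ev_two_box)
```

Drop the independence assumption, and you're back to modelling how Omega's accuracy correlates with your decision procedure, i.e. the mind games described above.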

I think that's really the original problem in disguise (it's a 100% Omega who rolls dice and sometimes decides to reward two-boxing instead of one-boxing). The analysis if all you know is that Omega is right 60% of the time would look different.

How exactly different?

So Nabarro explicitly says that he's talking about a possibility and not making a prediction, and ABC News reports it as a prediction. This seems consistent with the media-manufactured scare model.

A simple fix would be to not bother publishing a top contributors list.

I'm not sure this is a good idea. If there's something that empirically makes us look like we're not being rational, we should deal with that issue; hiding the data is not a good solution. However, I do have to wonder what the point of having the top contributors list is in the first place. I'm not even sure that total karma is a useful metric, since one person could have much higher-quality comments than another but post much more rarely, and yet the person with high-quality comments presumably should receive more attention and have their comments more closely attended to. It might be nice to display average karma, not just total karma. However, I suspect this would still give Clippy a fairly high karma, so if you object to Clippy this won't solve anything. Also note that many upvotes are not connected to the quality of remarks in any strong sense. See for example this comment by Eliezer that is now at +53, which is presumably connected to the unique status that Eliezer has as the founder of LW.

It seems to me that the arguments so lucidly presented elsewhere on Less Wrong would say that the machine is conscious whether or not it is run, and indeed whether or not it is built in the first place: if the Turing machine outputs a philosophical paper on the question of consciousness of the same kind that human philosophers write, we're supposed to take it as conscious.

It is useful to distinguish the properties "a subsystem C of X is conscious in X" and "C exists in a conscious way" (which means that additionally X=reality). I think Nisan expresses that idea in the parent comment.
The machine considered has the property of being conscious in its context X (i.e. X = the system containing the machine, the producers of its input and consumers of its output). The machine exists in a conscious way if additionally X = reality.

For me, Go helped to highlight certain temptations to behave irrationally, which I think can carry over to real life.

One was the temptation to avoid thinking about parts of the board where I'd recently made a mistake.

And if I played a poor move and my opponent immediately refuted it, there was a temptation to try to avoid seeming foolish by dreaming up some unlikely scheme I might have had which would have made the exchange part of the plan.

Fair enough. I should have said "there are ideas which are useful heuristics in Go, but not in real life", rather than talking about "sound reasoning".

The "I'm committed now" one can be a genuinely useful heuristic in Go (though it's better if you're using it in the form "if I do this I will be committed", rather than "oh dear, I've just noticed I'm committed"). "Spent so much effort" is in the sense of "given away so much", rather than "taken so many moves trying".

It also teaches "if you're behind, try to rock the boat", which probably isn't great life advice.

Well, a similar lesson may well apply in cases where you are losing, and have got to win. Society might want to try and arrange things so not too many people feel that way, though.

You can think of "don't play aji-keshi" as saying "leave actions which will close down your future options as late as possible", which I think can be a useful lesson for real life (though of course the tricky part is working out how late 'as possible' is).

Go teaches the sort of intuitions that are useful but really vague compared to LW-type material. Overall, you can get really strong at Go simply by deciding to avoid the emotional mistakes typical of zero-sum games; actual reading, position analysis and planning are of much less importance.

The first is certainly valid reasoning in Go, and I phrased it in a way that should make that obvious. But you can also phrase it as "I've spent so much effort trying to reach goal X that I'm committed now", which is almost never sound in real life.

For the second, I'm not thinking so much of tewari as a fairly common kind of comment in professional game commentaries. I think there's an implicit "and I surely haven't made a mistake as disastrous as a two point loss" in there.

It's probably still not sound reasoning, but for most players t... (read more)

Thanks to go, I've learned NOT to think like this, but to adjust according to the new information that flows in. It seems rather weird that you can get two totally opposite lessons from the same game.
I agree. I think that this phrasing significantly changes the meaning of what you said originally. I interpreted the original as assigning +infinity utilons to winning the game, and asserting that goal X must be achieved to accomplish that. I think that's completely valid, but the goal structure in life is so much more complicated than it is in go that it doesn't really transfer. Your rewording sounds more like the sunk costs fallacy to me, but I think that it's terrible reasoning in go as well as in life.

And on point 2: that would make it valid reasoning. It might not be useful reasoning for life in general (as it's much harder to tell whether you made a mistake than it is in go), but I think it's still valid.

Seven stones is a large handicap. Perhaps they're better than the average club player in English-speaking countries, but I think the average Korean club player is stronger than Zen.

On the other hand, there are some ways of thinking which are useful for Go but not for real life. One example is that damaging my opponent is as good as working for myself.

Another example is that, between equal players, the sunk costs fallacy is sometimes sound reasoning in Go. One form is "if I don't achieve goal X I've lost the game anyway, so I might as well continue trying even though it's looking unlikely". Another form (for stronger players than me) is "if I play A, I will get a result that's two points worse than I could have had if I played B earlier, so I can rule A out."

Is that really the sunk costs fallacy? I think it's valid reasoning: play the moves that give you the best chance of winning even if that chance is looking slimmer. I think the sunk costs fallacy is more like a failure to be flexible, e.g. insisting on making some stones live when you could have a larger benefit elsewhere by sacrificing them. (And that thinking is punished quite harshly in go.)

I don't think that's sound reasoning; you could have made a mistake since having played B, and A might be the best current option. FWIW, I'm a reasonably strong go player; it's easy to lie with tewari analysis, which is what it sounds like you're talking about.
Right. That's because Go is a zero-sum game while real life is not.

Do you have a reference for the 'discover that the previous version was incomplete' part?

I'm not sure that's a safe assumption (and if they were told, the article really should say so!)

If you did the experiment that way, you wouldn't know whether or not the effect was just due to deficient arithmetic.

One group was told that under race-neutral conditions, the probability of a black student being admitted would decline from 42 percent to 13 percent and the probability of a white student being admitted would rise from 25 percent to 27 percent. The other group was told that under race-neutral admissions, the number of black students being admitted would decrease by 725 and the number of white students would increase by 725. These two framings were both saying the same thing, but you can probably guess the outcome: support for affirmative action was much h

... (read more)
I assume they were also told the total number of black and white students, in which case the information would be the same.

Further, it's plausible that if you had a 'budget' of N prison places and M police officers for drink-driving deterrence, the most effective way to deploy it would be to arrange for a highish probability of an offender getting a short prison sentence, plus a low probability of getting a long sentence (because we know that a high probability of being caught has a large deterrent effect, and also that people overestimate the significance of a small chance of 'winning the lottery').

So the 'high sentence only if you kill' policy might turn out to be an efficient one (I don't suppose the people who set sentencing policy are really thinking along these lines, though).

But people also play Martingale systems on roulette. These have a good chance of going well, and a small chance of going really, really badly. So people don't just overestimate small chances. I think they tend to overestimate the probability of events that benefit them, but this may depend on whether they are in near mode or far mode. If they were in far mode they might begin to fret more and more about the small probabilities.

The article is saying that you can't affect your sentence by showing skill at drunk driving, other than by using the (very indirect) evidence provided by showing that nobody died as a result.

I think it's a sound point, given that the question is about identical behaviour giving different sentences.

If you're told that two people have once driven over the limit, that A killed someone while B didn't, and nothing more, what's your level of credence that B is the more skilled drunk driver?

Pretty high, actually. Drunkenness is a red herring here. Let's put it another way: if you're told that A once killed someone accidentally while B didn't, what's your credence that B is better at friggin' not killing people accidentally? You seem to imply the credence should be low. Why on Earth? I say it's pretty high, because A has demonstrated a very low level of said skill.

I think Hofstadter could fairly be described as an AI theorist.

So could Robin Hanson.

When writing on the internet, it is best to describe children's ages using years, not their position in your local education system.