Johnicholas:

Isn't there a bias something like: "If something actually happens, then people believe that it was foreseeable before it happened."?

Hindsight Bias and, to an extent, Taleb's Narrative Fallacy. This whole topic is quite Talebian. How do we plan for disasters we can't foresee? As Robin says,

There is a vast space of possible things that can go wrong, so each plan will have to cover a pretty wide range of scenarios.

While there might be a very wide range of causes for disasters, the possible effects are likely to be fewer. A government can plan for a crisis by making a shortlist of bad things and planning how to limit their effects, and can deal with the underlying cause on the fly. By analogy with emergency medicine: deal with easy-to-see life-threatening symptoms first, and figure out the underlying cause later, once the patient is stabilised. In financial disaster planning, decide beforehand how to deal with known knowns like high unemployment, high interest rates, inflation and so on, then figure out how to deal with the failing banks as you go along.

I think Robin has won this argument. Removing rhetorical flourishes makes a post easier to criticise in the comments section. You shouldn't be deliberately trying to make your statements more or less persuasive; just say what you want to say as clearly as you can and let other contributors thrash it out in the comments. That is probably part of Robin's point about the importance of academic style: it makes peer review easier.

"I am 87% confident you will burst into flames"

Ah, at last a practical application of the observation that Bayesians cannot agree to disagree.

Virtual environments create possibilities for shock. The ability to torture a (non-sentient) simulated version of someone you hate, or to engage in sexual activities that would be illegal in the real world, comes to mind.

Also what if, given the opportunity to live forever in eutopia, most minds freely choose the hardscrapple frontier? Even if the chances of death are significant?

Thom Blake:

I don't find this surprising at all, other than that it occurred to a consequentialist. Being a virtue ethicist and something of a Romantic, it seems to me that the best world will be one of great and terrible events, where a person has the chance to be truly and tragically heroic.

Every life a work of art! That sounds like my kind of future.

Well... first of all, the notion that "ideas are generated by combining other ideas N at a time" is not exactly an amazing AI theory; it is an economist looking at, essentially, the whole problem of AI, and trying to solve it in 5 seconds or less. It's not as if any experiment was performed to actually watch ideas recombining. Try to build an AI around this theory and you will find out in very short order how useless it is as an account of where ideas come from...

But more importantly, if the only proposition you actually use in your theory is that there are more ideas than people to exploit them, then this is the only proposition that can even be partially verified by testing your theory.

This is a good idea, though. Why doesn't someone combine economics and AI theory? You could build one of those agent-based computer simulations where each agent is an entrepreneur searching the (greatly simplified) space of possible products and trading the results with other agents. Then you could tweak the parameters of one agent's intelligence and see which circumstances lead to explosive growth and which lead to flatlining.
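A toy version of that simulation could be sketched in a few lines. Everything here is a hypothetical illustration, not an existing model: "intelligence" is just how many product ideas an agent can evaluate per round, and the product space is a fixed set of indices with pseudo-random values.

```python
import random

# Toy agent-based sketch: each entrepreneur samples points in a
# simplified "product space" and earns income from the best product
# found each round. All parameters are illustrative, not calibrated.

PRODUCT_SPACE = 10_000   # number of distinct possible products
ROUNDS = 50

def product_value(idx, seed=42):
    # Deterministic pseudo-random value in [0, 1) for each product.
    return random.Random(idx * 1_000_003 + seed).random()

class Entrepreneur:
    def __init__(self, intelligence):
        # "Intelligence" = how many ideas the agent evaluates per round.
        self.intelligence = intelligence
        self.wealth = 0.0

    def step(self, rng):
        ideas = rng.sample(range(PRODUCT_SPACE), self.intelligence)
        best = max(product_value(i) for i in ideas)
        self.wealth += best  # income from the best product this round

def run(intelligences, seed=0):
    rng = random.Random(seed)
    agents = [Entrepreneur(k) for k in intelligences]
    for _ in range(ROUNDS):
        for agent in agents:
            agent.step(rng)
    return [agent.wealth for agent in agents]

print(run([1, 5, 25]))  # wealth rises with intelligence
```

Tweaking the intelligence parameter (or adding trade between agents) is where the interesting experiments would start; this sketch only shows the search half of the idea.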

And obviously we're not looking for software that lets our users throw sheep at one another. The Internet already offers enough ways to waste time, thank you. More like - how people can find each other geographically and meet up

This is an interesting idea, since local groups of rationalists raise the possibility of Overcoming Bias becoming a political project. We've discussed the fact that institutional irrationality causes resources to be misallocated and lives to be lost, so why don't we aim to make more people aware of that fact? Evidence Based Medicine has already been a triumph, so why not try Evidence Based Everything? As far as I'm aware there is no organisation dedicated to encouraging Bayesianism in national and corporate governance, so why don't we form one?

Bo:

It's impossible for me to imagine a tiered system that wouldn't degenerate into a status competition. Can you think of examples of one that hasn't?

Anonymous BBSs avoid the problem of status-seeking commenters - overcomingbiaschan!

The third fallacy of teleology is to commit the Mind Projection Fallacy with respect to telos, supposing it to be an inherent property of an object or system. Indeed, one does this every time one speaks of the purpose of an event, rather than speaking of some particular agent desiring the consequences of that event.

I'm vaguely reminded of The Camel Has Two Humps. Perhaps it's the case that some people naturally have a knack for systemisation, while others are doomed to repeat the mind projection fallacy forever.

According to Norvig, Holmes is a Bayesian, though I think it would be cool if there were a mystery story whose sleuth-protagonist made explicit use of stats and probability.
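As a toy illustration of what such explicit reasoning might look like, here is a sketch of a Bayesian update over suspects given one piece of evidence. The suspects, priors, and likelihoods are all invented for the example.

```python
# Toy Bayesian update over suspects, given one piece of evidence.
# Suspects, priors, and likelihoods are invented for illustration.

priors = {"butler": 0.5, "gardener": 0.3, "heiress": 0.2}

# P(evidence | suspect): probability of mud on the carpet
# if that suspect is the culprit.
likelihoods = {"butler": 0.1, "gardener": 0.8, "heiress": 0.2}

def update(priors, likelihoods):
    # Bayes' rule: posterior ∝ prior × likelihood, then normalise.
    unnormalised = {s: priors[s] * likelihoods[s] for s in priors}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}

posteriors = update(priors, likelihoods)
print(posteriors)  # the muddy footprints shift most probability to the gardener
```

A sleuth-protagonist doing this out loud, with numbers, would be the fictional counterpart of the "87% confident" line above.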