Comments

andreas · 11y · 470

"I design a cell to not fail and then assume it will and then ask the next 'what-if' questions," Sinnett said. "And then I design the batteries that if there is a failure of one cell it won't propagate to another. And then I assume that I am wrong and that it will propagate to another and then I design the enclosure and the redundancy of the equipment to assume that all the cells are involved and the airplane needs to be able to play through that."

Mike Sinnett, Boeing's 787 chief project engineer

andreas · 12y · 50

The game theory textbook "A Course in Microeconomic Theory" (Kreps) addresses this situation. Quoting from page 516:

We will give an exact analysis of this problem momentarily (in smaller type), but you should have no difficulty seeing the basic trade-off; too little punishment, triggered only rarely, will give your opponent the incentive to try to get away with the noncooperative strategy. You have to punish often enough and harshly enough so that your opponent is motivated to play [cooperate] instead of [defect]. But the more often/more harsh is the punishment, the less are the gains from cooperation. And even if you punish in a fashion that leads you to know that your opponent is (in her own interests) choosing [cooperate] every time (except when she is punishing), you will have to "punish" in some instances to keep your opponent honest.
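
To make the trade-off concrete, here is a minimal simulation sketch (not from Kreps; the payoffs, noise rate, and strategy are assumed for illustration): two players repeat a prisoner's dilemma, each observing the other's action through noise, and each answers an apparent defection with a fixed number of punishment rounds. Comparing punishment lengths shows the tension the quote describes: short punishments deter weakly, while long ones burn more rounds on noise-triggered mutual defection.

```typescript
// Sketch: repeated prisoner's dilemma with noisy observation and
// finite punishment phases. All numbers are illustrative.
type Action = "C" | "D";

const PAYOFF: Record<Action, Record<Action, number>> = {
  // PAYOFF[me][opponent] = my payoff; standard ordering T > R > P > S
  C: { C: 3, D: 0 },
  D: { C: 5, D: 1 },
};
const NOISE = 0.05; // chance an action is misperceived
const ROUNDS = 10000;

// Each player cooperates unless punishing; an observed "D" (possibly
// a noise artifact) triggers `punishLength` rounds of defection.
function simulate(punishLength: number): number {
  let punish: number[] = [0, 0]; // remaining punishment rounds per player
  let total = 0;
  for (let t = 0; t < ROUNDS; t++) {
    const acts = punish.map((p): Action => (p > 0 ? "D" : "C"));
    total += PAYOFF[acts[0]][acts[1]] + PAYOFF[acts[1]][acts[0]];
    // Each player observes the other's action through noise.
    const seen = acts.map((a): Action =>
      Math.random() < NOISE ? (a === "C" ? "D" : "C") : a
    );
    punish = punish.map((p, i) =>
      p > 0 ? p - 1 : seen[1 - i] === "D" ? punishLength : 0
    );
  }
  return total / ROUNDS; // average joint payoff per round
}

for (const len of [1, 5, 20]) {
  console.log(`punishment length ${len}: avg joint payoff ${simulate(len).toFixed(2)}`);
}
```

Running this for a few punishment lengths makes the "less are the gains from cooperation" cost visible directly as the average payoff.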

andreas · 13y · 70

I am more motivated to read the rest of your sequence if the summary sounds silly than if I can easily see the arguments myself.

andreas · 13y · 130

Back when Eliezer was writing his metaethics sequence, it would have been great to know where he was going, i.e., if he had posted ahead of time a one-paragraph technical summary of the position he set out to explain. Can you post such a summary of your position now?

andreas · 13y · 30

Now, citing axioms and theorems to justify a step in a proof is not a mere social convention to make mathematicians happy. It is a useful constraint on your cognition, allowing you to make only inferences that are actually valid.

When you are trying to build up a new argument, temporarily accepting steps of uncertain correctness can be helpful (if mentally tagged as such). This strategy can move you out of local optima by prompting you to think about what further assumptions would be required to make the steps correct.

Techniques based on this kind of reasoning are used in the simulation of physical systems and in machine inference more generally (tempering). Instead of exploring the state space of a system at the temperature you are actually interested in, which permits only very particular moves between states ("provably correct reasoning steps"), you explore at a higher temperature that makes it easier to move between states ("arguments"). Afterwards, you check how probable the state you moved to is when evaluated at the original temperature.
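
A minimal sketch of the idea, as parallel tempering on an assumed 1-D double-well energy (the energy, temperatures, and step size are illustrative choices, not from any particular source): a cold chain that would stay stuck in one mode exchanges states with a hot chain, and each proposed exchange is accepted by re-evaluating the states at the chains' own temperatures.

```typescript
// Sketch: parallel tempering (replica exchange) on a 1-D double-well
// energy E(x) = (x^2 - 1)^2 / 0.05. The barrier at x = 0 is high at
// the cold temperature, so the cold chain alone stays in one well.
const energy = (x: number) => (x * x - 1) ** 2 / 0.05;

function metropolisStep(x: number, temp: number, stepSize: number): number {
  const proposal = x + stepSize * (2 * Math.random() - 1);
  const logAccept = (energy(x) - energy(proposal)) / temp;
  return Math.log(Math.random()) < logAccept ? proposal : x;
}

const TEMPS = [1.0, 10.0]; // cold (target) and hot (exploration)
let states = [-1.0, -1.0]; // both chains start in the left well
let swapsAccepted = 0;
const coldSamples: number[] = [];

for (let t = 0; t < 50000; t++) {
  states = states.map((x, i) => metropolisStep(x, TEMPS[i], 0.5));
  if (t % 10 === 0) {
    // Swap move: "check how probable the state you moved to is" by
    // evaluating each state at the other chain's temperature.
    const [xc, xh] = states;
    const logRatio =
      (energy(xc) - energy(xh)) / TEMPS[0] +
      (energy(xh) - energy(xc)) / TEMPS[1];
    if (Math.log(Math.random()) < logRatio) {
      states = [xh, xc];
      swapsAccepted++;
    }
  }
  coldSamples.push(states[0]);
}

const rightFraction =
  coldSamples.filter(x => x > 0).length / coldSamples.length;
console.log(`fraction of cold samples in right well: ${rightFraction.toFixed(2)}`);
console.log(`swaps accepted: ${swapsAccepted}`);
```

The swap acceptance ratio is exactly the Metropolis ratio for the joint distribution of the two chains, so the cold chain still targets the original distribution while borrowing the hot chain's mobility.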

andreas · 13y · 60

As you wish: Drag the link on this page to your browser's bookmark bar. Clicking it on any page will turn all links black and remove the underlines, making links distinguishable from black plain text only through changes in mouse pointer style. Click again to get the original style back.
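
The link itself is not preserved in this copy, but the effect described can be sketched as a small style toggle (a hypothetical reconstruction, not the original bookmarklet's code):

```typescript
// Hypothetical reconstruction: toggle a style override that renders
// links as plain black text with no underline. A bookmarklet would
// wrap this, minified, in a javascript: URL.
const STYLE_ID = "plain-links-toggle"; // assumed marker id
const existing = document.getElementById(STYLE_ID);
if (existing) {
  existing.remove(); // second click: restore the original link style
} else {
  const style = document.createElement("style");
  style.id = STYLE_ID;
  style.textContent =
    "a, a:visited { color: black !important; text-decoration: none !important; }";
  document.head.appendChild(style);
}
```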

andreas · 13y · 20

See also: A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points, which treats the Liar paradox as an instance of a generalization of Cantor's theorem (there is no onto mapping from N to 2^N).

The best part of this unified scheme is that it shows that there are really no paradoxes. There are limitations. Paradoxes are ways of showing that if you permit one to violate a limitation, then you will get an inconsistent system. The Liar paradox shows that if you permit natural language to talk about its own truthfulness (as it, of course, does), then you will have inconsistencies in natural language.
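
For reference, the diagonal argument behind Cantor's theorem, which the Liar sentence mirrors by playing the role of the diagonal object (the standard argument, not quoted from the paper):

```latex
% No f : N -> 2^N is onto: the diagonal set D escapes the range of f.
\begin{aligned}
&\text{Given } f : \mathbb{N} \to 2^{\mathbb{N}},
 \text{ let } D = \{\, n \in \mathbb{N} : n \notin f(n) \,\}. \\
&\text{If } D = f(m) \text{ for some } m, \text{ then }
 m \in D \iff m \notin f(m) = D, \\
&\text{a contradiction; hence } D \text{ is not in the range of } f.
\end{aligned}
```

The Liar sentence is the analogous diagonal object for the truth predicate: it is true if and only if it is not true.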

andreas · 13y · 20

Do you think that your beliefs regarding what you care about could be mistaken? That you might tell yourself that you care more about being lazy than about getting cryonics done, but that in fact, under reflection, you would prefer to get the contract?

andreas · 13y · 100

Please stop commenting on this topic until you have understood more of what has been written about it on LW and elsewhere. Unsubstantiated proposals harm LW as a community. LW deals with some topics that look crazy on surface examination; you don't want people who dig deeper to stumble on comments like this and find actual crazy.

andreas · 13y · 20

Similarly, inference (conditioning) is incomputable in general, even if your prior is computable. However, if you assume that observations are corrupted by independent, absolutely continuous noise, conditioning becomes computable.
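
A minimal sketch of why a noise density helps (a Gaussian example chosen for illustration, not the construction used in the computability result): conditioning on an exact continuous observation asks for a probability-zero event, but absolutely continuous noise gives every prior sample a positive weight, so posterior expectations can be approximated to any desired accuracy.

```typescript
// Sketch: conditioning on a noisy observation by likelihood weighting.
// Prior, noise level, and observation are illustrative choices.
const gaussianDensity = (x: number, mean: number, sd: number) =>
  Math.exp(-((x - mean) ** 2) / (2 * sd * sd)) / (sd * Math.sqrt(2 * Math.PI));

// Sample from a standard normal prior via Box-Muller.
const samplePrior = () =>
  Math.sqrt(-2 * Math.log(1 - Math.random())) *
  Math.cos(2 * Math.PI * Math.random());

const NOISE_SD = 0.1;
const observed = 1.5; // noisy observation of the latent variable

// Exact conditioning would demand prior samples with latent == observed,
// an event of probability zero. The noise density instead assigns each
// sample a positive weight, making the posterior mean estimable.
let weightedSum = 0;
let weightTotal = 0;
for (let i = 0; i < 100000; i++) {
  const latent = samplePrior();
  const w = gaussianDensity(observed, latent, NOISE_SD);
  weightedSum += w * latent;
  weightTotal += w;
}
console.log(`posterior mean estimate: ${(weightedSum / weightTotal).toFixed(3)}`);
```

Without the noise density there is nothing to weight by, which is one way to see where computability can fail in the general case.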
