All of Paul_Crowley's Comments + Replies

I've read countless papers on crypto, and they mostly seem pretty formal to me - what are people comparing them to? Is it really worse in other fields? There is some variation - DJB's style is distinctly less formal than other authors' - but my perception is that papers in, for example, network engineering seem a lot less formal than crypto papers. I think there's plenty of room to improve the readability of crypto papers by encouraging less formality.

One trivial example of signalling here is the way everyone still uses the Computer Modern font. This is a...

I'd have more sympathy with Luke (and thus more forgiveness for Lucas) if instead of the whole X-Wing moving when he tries it, we see a much less dramatic effect; perhaps aerials that were drooping stand up, or the flaps lift gently, or some such.

However, in such films the plausibility of the character's behaviour is always sacrificed in the interests of better visuals, or better drama; cf. the zillion ludicrous excuses scriptwriters present for characters not telling each other what's going on.

Regret of rationality in games isn't a mysterious phenomenon. Let's suppose that after the one round of PD we're going to play, I have the power to destroy a billion paperclips at the cost of one human life, and Clippy knows that. If Clippy thinks I'm a rational outcome-maximizer, then it knows that whatever threats I make, I'm not going to carry them out, because carrying them out will have no payoff when the time comes. But if it thinks I'm prone to irrational emotional reactions, it might conclude that I'll carry out my billion-paperclip threat if it defects, and so cooperate.
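A minimal sketch of the point, with payoff numbers invented purely for illustration: once Clippy assigns even a tiny probability to the threat actually being carried out, defection stops being its best response.

```python
# Illustrative payoffs for Clippy, in paperclips. All numbers are made up.
CLIPPY_DEFECT_GAIN = 5            # extra paperclips Clippy gains by defecting
THREAT_COST = 1_000_000_000       # paperclips destroyed if the threat is carried out

def clippy_expected_payoff(defect: bool, p_threat_carried_out: float) -> float:
    """Clippy's expected paperclip payoff for this round."""
    payoff = float(CLIPPY_DEFECT_GAIN) if defect else 0.0
    if defect:
        payoff -= p_threat_carried_out * THREAT_COST
    return payoff

# Against a known outcome-maximizer who will never carry out the threat,
# defecting is strictly better:
assert clippy_expected_payoff(True, 0.0) > clippy_expected_payoff(False, 0.0)

# But a one-in-a-million credence in the threat is enough to flip the decision:
assert clippy_expected_payoff(True, 1e-6) < clippy_expected_payoff(False, 1e-6)
```

The size of the threatened loss is doing all the work: the flip happens as soon as `p_threat_carried_out * THREAT_COST` exceeds the gain from defecting.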

If I could prevent only one of these events, I would prevent the lottery.

I'm assuming that this is in a world where there are no payoffs to the LHC; we could imagine a world in which it's decided that switching the LHC on is too risky, but before it is mothballed a group of rogue physicists try to do the riskiest experiment they can think of on it out of sheer ennui.

In what context is $10 trillion not a huge amount of money? It's approximately the entire US national debt, but the difference is nearly enough to pay off the entire debt of the third world; it's what the UK government spends in ten years. If I had that kind of wealth, after I'd cleared all third world debts, I'd carpet developing nations everywhere with infrastructure like roads and such, and pay for literacy and clean water everywhere, and I'd still have money left over.

Nick Tarleton: sadly, it's my experience that it's futile to try and throw flour over the dragon.

Tomorrow I will address myself to accusations I have encountered that decoherence is "unfalsifiable" or "untestable", as the words "falsifiable" and "testable" have (even simpler) probability-theoretic meanings which would seem to be violated by this usage.

Doesn't this follow trivially from the above? No experiment can determine whether or not we have souls, but that counts against the idea of souls, not against the idea of their absence. If decoherence is the simpler theory, then lack of falsifiability counts against the other guys, not against it.

Roland: yes, at least one. Where did you give up and why?

This is what I thought at first, but on reflection, it's not quite right.

Could you explain a little more the distinction between the position preceding this remark and that following it? They seem like different formulations of the same thing to me.

Manfred · 7 points · 11y
I'll give it a shot. Solomonoff induction doesn't even mention photons, so the statement about the photon doesn't follow directly from it. Solomonoff induction just tells you about the general laws, which then you can use to talk about photons. So "belief in the implied invisible" means you're going through this two-step process, rather than directly computing probabilities about photons.

Heterophenomenology!

Sorry, I thought it needed saying.

Caledonian: you can stop talking about wagering credibility units now, we all know you don't have funds for the smallest stake.

Ben Jones: if we assume that Omega is perfectly simulating the human mind, then when we are choosing between B and A+B, we don't know whether we are in reality or the simulation. In reality, our choice does not affect the million, but in the simulation it will. So we should reason "I'd better take only box B, because if this is the simulation, then that choice determines whether or not I get the million in reality".
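A minimal sketch of that reasoning, assuming the standard $1,000 / $1,000,000 Newcomb payoffs: if the simulation is perfectly accurate, the policy you run in the simulation is the same policy you run in reality, so Omega fills box B exactly when that policy is to one-box.

```python
# Toy Newcomb payoffs in dollars (the usual $1,000 in A, $1,000,000 in B).
def payoff(one_box: bool) -> int:
    """Payoff for a fixed policy, given a perfectly accurate simulation."""
    million_in_b = one_box          # Omega fills B iff the simulated you one-boxes
    b = 1_000_000 if million_in_b else 0
    a = 1_000
    return b if one_box else a + b

assert payoff(one_box=True) == 1_000_000
assert payoff(one_box=False) == 1_000
```

The two-boxer's "my choice can't affect what's already in the boxes" is true within reality, but false of the policy as a whole, which is evaluated in both places.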

  1. 400 people die, with certainty.
  2. 90% chance no one dies; 10% chance 500 people die.

ITYM 1. 100 people die, with certainty.
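Whichever number the certain option reads, the expected-value arithmetic for the gamble is a one-liner:

```python
# Option 2: 90% chance no one dies; 10% chance 500 people die.
expected_deaths_gamble = 0.90 * 0 + 0.10 * 500
assert expected_deaths_gamble == 50.0

# The certain option kills more people in expectation under either reading:
assert 400 > expected_deaths_gamble
assert 100 > expected_deaths_gamble
```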

Obviously there's another sort of discounting that does make sense. If you offer me a choice of a dollar now or $1.10 in a year, I am almost certain you will make good on the dollar now if I accept it, whereas there are many reasons why you might fail to make good on the $1.10. This sort of discounting is rationally hyperbolic, and so doesn't lead to the paradoxes of magnitude over time that you highlight here.
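One standard way to make this precise (an illustrative derivation, not something from the comment itself): suppose the promised payoff survives to time t with probability e^(-rt), but the hazard rate r is itself uncertain, say exponentially distributed with rate LAMBDA. Averaging over that uncertainty gives E[e^(-rt)] = LAMBDA/(LAMBDA + t) = 1/(1 + t/LAMBDA), which is exactly a hyperbolic discount curve. A quick Monte Carlo check:

```python
import math
import random

random.seed(0)

LAMBDA = 2.0     # prior: hazard rate r ~ Exponential(rate=LAMBDA), so E[r] = 1/LAMBDA
N = 200_000      # Monte Carlo samples

def survival(t: float) -> float:
    """Estimate E[exp(-r*t)]: the chance the promised payoff still materialises at t."""
    return sum(math.exp(-random.expovariate(LAMBDA) * t) for _ in range(N)) / N

for t in (0.5, 1.0, 5.0):
    exact = LAMBDA / (LAMBDA + t)   # closed form: 1 / (1 + t/LAMBDA), i.e. hyperbolic
    assert abs(survival(t) - exact) < 0.01
```

So an agent who discounts exponentially at every *known* rate, but is uncertain which rate applies, behaves hyperbolically in aggregate - which is the sense in which discounting for counterparty risk is "rationally hyperbolic".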

waveman · 0 points · 9y
Good point. More generally, as the Wikipedia article notes (http://en.wikipedia.org/wiki/Hyperbolic_discounting#Criticism), exponential discounting is only correct if you are equally certain of the payoffs at all the different times. More broadly, it assumes no model error: whatever decision model you are using, you need to be 100% certain of it to justify exponential discounting. Nassim Taleb points out that quite a few alleged biases are actually quite rational once model error is taken into account, and he includes a derivation of why the hyperbolic discounting formula is actually valid in many situations: Silent Risk, Section 4.6, "Psychological pseudo-biases under second layer of uncertainty". Draft at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2392310
gwern · 9 points · 12y
Yes, that discounting makes sense, but it's explicitly not what Eliezer is talking about. His very first sentence: (Also, I don't see how that example is 'hyperbolic'.)

Some have said this essay is a poor, ad hominem criticism of Objectivism. It isn't a criticism of Objectivism per se at all, and isn't meant to be - it is intended to answer the question "how did a belief system that ostensibly venerates reason and independent thought give rise to cult-like behaviour?" Thus discussion of the merits of Objectivism itself doesn't address the question, while an account of Rand's life sheds a lot of light.

And of course, Eliezer has already quoted the scripture of the prophet Brian, who sayeth:

"Look. You've got it all wrong. You don't need to follow me. You don't need to follow anybody! You've got to think for yourselves. You're all individuals! You're all different! You've all got to work it out for yourselves! Don't let anyone tell you what to do!" (Life of Brian, scene 19)

If this is the same Caledonian who used to post to the Pharyngula blog, he's barred from there now with good reason.

Is there a cognitive bias at work that makes it hard for people not to feed trolls?

_Gi: you have described exactly my lottery strategy, as well as that of Patti Smith:

Every night before I go to sleep
I find a ticket, win a lottery
Scoop them pearls up from the sea
Cash them in and buy you all the things you need...

It may be that I need to read one of those links in the previous post, but - I tend to imagine that AIs will need to have upbringings of some sort. We acquire morality much as we acquire knowledge - does it suffice for the AIs to do the same?