Dagon

Just this guy, you know?


LOL!  If you think an executor (or worse, an heir if the estate is already settled) is going to pay $100K to a rando based on a 5-year-old LessWrong post, you have a VERY different model of humanity than I do.  Even more so if the estate didn't include any mention of it or money earmarked for it.

I think you're starting with the wrong prior.  Whether there are aliens anywhere in the universe (or even in our past lightcone) is only distantly relevant.  What matters is the prior for "aliens physically present on Earth, now, at a scale (quantity and size) and tech level that makes them very intermittently and unreliably detectable".  The second prior is orders of magnitude smaller than the first.

I do agree with your logic about the inequalities, but the magnitude of difference matters a lot.  I give pretty low values for p[whistleblower|aliens] - p[whistleblower|no aliens] and the like, EVEN WHILE agreeing that it's greater than zero.
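The point about magnitudes can be made concrete with a toy Bayes update (all numbers here are hypothetical, chosen only to illustrate the shape of the argument): even if a whistleblower is twice as likely in the aliens world, a tiny prior barely moves.

```python
# Hypothetical numbers, for illustration only.
prior_aliens = 1e-9                # prior for "hidden aliens on Earth, now"
p_whistle_given_aliens = 0.02      # whistleblowers are a bit more likely...
p_whistle_given_no_aliens = 0.01   # ...but happen in the no-aliens world too

# Bayes factor from observing a whistleblower claim:
bayes_factor = p_whistle_given_aliens / p_whistle_given_no_aliens  # 2.0

# Update the odds, then convert back to a probability.
prior_odds = prior_aliens / (1 - prior_aliens)
posterior_odds = prior_odds * bayes_factor
posterior = posterior_odds / (1 + posterior_odds)

print(posterior)  # ~2e-9: the probability doubled, but remains negligible
```

The likelihood ratio being greater than one (both conditionals nonzero and unequal) is exactly the "greater than zero" concession above; the conclusion turns on the prior, not on the sign of the difference.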

I agree with your conclusion, including the fact that we need some value for p[aliens that could easily remain hidden, but are screwing with us], and that this may even be the majority of the weight for p[our observations|aliens].  

I should be clearer yet.  I'm wondering how you distinguish "the community in aggregate has gone (just somewhat) horribly wrong" from "I don't think this particular mechanism works for everyone, certainly not me".  

If making actual wagers makes you uncomfortable, don't do it.  If analyzing many of your beliefs in a bet-like framing (probability distribution of future experiences, with enough concreteness to be resolvable at some future point) is uncomfortable, I'd recommend giving that part of it another go, as it's pretty generally useful as a way to avoid fuzzy thinking (and fuzzy communication, which I consider a different thing).

In any case, thanks for the discussion - I always appreciate hearing from those with different beliefs and models of how to improve our individual and shared beliefs about the world.

Thanks for the detail - it makes me realize I responded unclearly.  I don't understand your claim (presumably based on this offer of a wager) that for "the LessWrong community in aggregate, something has gone horribly horribly wrong."

I don't disagree with most of your points - betting is a bit unusual (in some groups; in some it's trivially common), there are high transaction costs, and practical considerations outweigh the information value in most cases.  

I don't intend to say (and I don't THINK anyone is saying) you should undertake bets that make you uncomfortable.  I do believe (but tend not to proselytize) that aspiring rationalists benefit a lot by using a betting mindset in considering their beliefs: putting a number to it and using the intuition pump of how you imagine feeling winning or losing a bet is quite instructive.  In cases where it's practical, actually betting reifies this intuition, and you get to experience actually changing your probability estimate and acknowledging it with an extremely-hard-to-fool-yourself-or-others signal.

I don't actually follow the Chesterton's fence argument.  What is the taboo you're worried that you don't understand well enough to break (in some circumstances)?  "Normies don't do this" is a rotten and decrepit enough fence that I don't think it's sufficient on its own for almost anything that's voluntarily chosen by participants and has plausibly low externalities (not provably low, of course, but it's not much of a fence to start with).

Why would someone who's built up a reputation in the LW/rationalist/etc. community wreck it, publicly and on-the-record, over <$50k USD?

A lot can happen in 5 years.  The OP could die.  The bettor could die.  And who knows, maybe the evidence of aliens is just deniable enough that it doesn't cost reputation to claim a win.

Do you mean "when can we distinguish an exponential from a logistic curve"?  I dunno, but I do know that many things which look exponential turn out to slow down after a finite (and small) number of doublings.

THAT is a crux.  Whether any component of it is exponential or logistic is VERY hard to know until you get close to the inflection.  Absent "sufficiently advanced technology" like general-purpose nanotech (able to mine and refine, or to convert existing materials into robots and factories in very short time), there is a limit to how parallel the building of the AI-friendly world can be, and a limit to how fast it can convert.
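The difficulty of telling the two curves apart can be sketched numerically (a toy model; the growth rate, carrying capacity, and time units are all made up): an exponential and a logistic with the same early growth rate agree closely until the logistic nears its inflection.

```python
import math

def exponential(t, f0=1.0, r=0.5):
    """Pure exponential growth: f0 * e^(r*t)."""
    return f0 * math.exp(r * t)

def logistic(t, f0=1.0, r=0.5, K=1e6):
    """Logistic growth with carrying capacity K; ~exponential while f << K."""
    return K / (1 + (K / f0 - 1) * math.exp(-r * t))

# The inflection of this logistic is near t = ln(K)/r ~ 27.6.
for t in (0, 10, 20, 30):
    ratio = logistic(t) / exponential(t)
    print(t, round(ratio, 3))  # ~1.000 at t=10, ~0.98 at t=20, ~0.23 at t=30
```

Observing the series up to t=20 gives almost no power to distinguish the hypotheses; by t=30 (past the inflection) they have diverged by a factor of four.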

For that path, it takes AI that's capable enough for all industrial (and non-industrial) tasks.  But you also need all the physical plant (both the factories and the compute power to distribute to the tasks) that the AI uses to perform these industrial tasks.  

I think it's closer to 20 years than 5 before the capabilities are developed, and possibly longer until the knowledge/techniques for the necessary manufacturing variants can be adapted to non-human production.  And it's easy to underestimate how long it takes just to build stuff, even if automated.

It's not clear it's POSSIBLE to convert enough stuff without breaking humanity badly enough that they revolt and destroy most things.  Whether that kills everyone, reverts the world to the bronze age, or actually gets control of the AI is deeply hard to predict.  It does seem clear that converting that much matter won't be quick.  

Parts of it do match (free money, to be repaid years from now), parts don't (large liability years from now if the OP is correct, preference for crypto as an irrevocable money transfer, desire for public agreement and public adjudication).  The trust level implied by "accepting party has final say" and "hold all the money for years" is much higher than normal, which often indicates a scam.  The fact that I don't see the scam (despite knowing a bit about common ones) is some evidence that it isn't one.  The non-specificity of terms (which payment method(s) to use, what odds they'll take, what min/max amount to consider) could go either way.

If OP were trolling for suckers or running an overpay/refund/revoke scam, they'd scale out rather than picking just one target - offer a bet to all takers, in hopes that multiple will be duped.  That doesn't seem to be happening.  

Note that it can fail to be real without being a scam.  An over-simple offer that is regretted before payment is irrevocable means no bet occurs, but that's not scammy, it's just over-aggressive signaling in wanting to make a bet and then avoiding the pain of actually making the payment.  This is where I put most of my probability weight on failure (though some to scam, of course).  

I don't doubt that a lot is wrong with the LW community, both in aggregate and among many individuals. I'm not sure WHAT wrongness you're pointing out, though.  

There are good reasons for exploring normie behavior and being careful of things you don't understand (Chesterton's fence).  They mostly apply strongly when talking about activities at scale, especially if they include normies in the actor or patient list.   

Wagering as a way to signal belief, to elicit evidence of different beliefs, and to move resources to the individuals who are less wrong than the market (or counterparty in a 2-party wager) is pretty well-studied, and the puzzle of why most humans don't do it more is usually attributed to those illegible reasons, which include signaling, status, and other outside-of-wager considerations.  

IMO, that's enough understanding to tear down the fence, at least when people who choose not to participate aren't penalized for that choice.

That seems so clear to me that I'm surprised there can be any objection.  Can you restate why you think this indicates "horribly wrong", either as a community, or as the individuals choosing to offer wagers?
