benjamincosman
Urbana, IL, USA

As a partial answer, note that you will get extremely different results for P(H|E) depending on your choice of hypothesis H and evidence E. In particular, while naively E here is "Ms X won the lottery 4 times", I think you'd still be posting the same question if instead you'd heard that "Mr Y won the lottery 4 times", or more importantly, any other extremely unlikely positive-valence event ("Ms Z was struck by a bullet in the tiny area that happened to be protected by an object in her pocket", etc). Which means that your E should perhaps be "some lucky unlikely thing happened to someone at some point in history", which no longer seems very low probability after all.
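To make the "someone, somewhere" point concrete, here's a toy calculation; every number in it is a made-up assumption for illustration, not real lottery data:

```python
from math import exp

p_event = 1e-9       # assumed chance per person-year of one specific
                     # "miraculous" event (e.g. a 4th lottery win)
person_years = 1e11  # rough order of magnitude of human life-years so far

# Poisson approximation: probability that NO one, ever, experiences it
p_none = exp(-p_event * person_years)  # e^(-100), astronomically small

print(1 - p_none)  # => 1.0: essentially certain to happen to someone
```

And since "some lucky unlikely thing" covers many distinct event types, not just repeat lottery wins, the effective number of trials is even larger.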

Also the jump from "there's something weird here" to "the Christian god did it" is too big - even if we do decide that some extraordinary explanation is required, our hypothesis space should at the very least include other gods :) and perhaps more realistically, things like bugs in the lottery program, collusion with corrupt lottery officials, etc.

ProjectLawful.com: Eliezer's latest story, past 1M words

The author's (user)name is always the final line of the inset box. If there are three lines in that box (e.g. "Carissa Sevar // to-let-you-in // lintamande"), then lintamande is the author, Carissa Sevar is the character, and "to-let-you-in" can be ignored (it's some sort of thematic tag for the character).

Authors other than the main two don't appear for an extremely long time, so I'd worry about that after getting that far :)

Salvage Epistemology

Who is “we”? This is more or less what I’ve observed from 100% of my (admittedly small sample size of) rationalist acquaintances who have taken psychedelics.

Salvage Epistemology

it’s especially egregious when you apply this “salvage epistemology” approach to, say, taking drugs

I'm not so certain of that? Of the two extreme strategies "Just Say No" and "do whatever you want man, it feels goooooood", Just Say No is the clear winner. But when I've interacted with reasonable-seeming people who've also done some drugs, their approach looks more like "here's the specific drug we chose, here's our safety protocol, here's everything that's known to science about the effects, and as you can see the dangers are non-zero but low, and we think the benefits are also non-zero and outweigh those dangers". And (anecdotally of course) they and all their friends who act similarly appear to be leading very functional lives; no one they know has gotten into any trouble worse than the occasional bad trip or exposure to societal disapproval (neither of which was ultimately a big deal, and both of which were clearly listed in their dangers column).

Now it's quite possible they're still wrong in the end - maybe there are serious long-term effects that no one has figured out yet; maybe the "everyone they know turns out ok" heuristic is masking the fact that they're all getting really lucky, and/or availability bias (the ones who don't turn out ok disappear from view); etc. And you can certainly optimize for safety by responding to all this with "Just Say No". But humans quite reasonably don't optimize purely for safety, and it is not at all clear to me that what these people have chosen is crazy.

Dath Ilan vs. Sid Meier's Alpha Centauri: Pareto Improvements

Simon's services are only offered to those who submit a treaty over a full range of possible outcomes. Alice could try to bully Bob into accepting a bullshit treaty ("if I win you give me X; if I lose you still give me X"), but just like today Bob has the backup plan of refusing arbitration and actually waging war. (Refusing arbitration is allowed; it's only going to arbitration and then not abiding by the result that is verboten.) And Alice could avoid committing to concessions-on-loss by herself refusing arbitration and waging war, but she wouldn't actually be in a better position by doing so, since the real-world war also involves her paying a price in the (to her mind) unlikely event that Bob wins and can extract one. Basically the whole point is that, as long as the simulated war and the agreed-upon consequences of war (minus actual deaths and other stuff) match a potential real-world war closely enough, then accepting the simulation should be a strict improvement for both sides regardless of their power differential and regardless of the end result, so both sides should (bindingly) accept.
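Here's a toy expected-value check of that last claim (all payoffs invented; the only structural assumption is that the treaty extracts exactly what a real war would, minus the war damage):

```python
p_bob_wins = 0.3    # Simon's assessed probability that Bob wins
alice_spoils = 100  # what Alice extracts from Bob if she wins
bob_spoils = 80     # what Bob extracts from Alice if he wins
damage_alice = 40   # Alice's cost of actually fighting, win or lose
damage_bob = 40     # Bob's cost of actually fighting, win or lose

# Expected value of a real war, for each side:
ev_war_alice = (1 - p_bob_wins) * alice_spoils - p_bob_wins * bob_spoils - damage_alice
ev_war_bob = p_bob_wins * bob_spoils - (1 - p_bob_wins) * alice_spoils - damage_bob

# Expected value of the simulated war with a matching treaty:
# same stakes, no fighting costs.
ev_sim_alice = ev_war_alice + damage_alice
ev_sim_bob = ev_war_bob + damage_bob

# Strict improvement for both sides, for ANY p_bob_wins and any payoffs,
# as long as actually fighting costs each side something:
assert ev_sim_alice > ev_war_alice and ev_sim_bob > ev_war_bob
```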

Dath Ilan vs. Sid Meier's Alpha Centauri: Pareto Improvements

Good point, so continuing with the superhuman levels of coordination and simulation: instead of Alice and Bob saying "we're thinking of having a war" and Simon saying "if you did, Bob would win with probability p"; Alice and Bob say "we've committed to simulating this war and have pre-signed treaties based on various outcomes", and then Simon says "Bob wins with probability p by deploying secret weapon X, so Alice you have to pay up according to that if-you-lose treaty". So Alice does learn about the weapon but also has to pay the price for losing, exactly like she would in an actual war (except without the associated real-world damage).
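A minimal sketch of that ordering (function names and outcome labels are my own invention): both treaties are signed before the simulation runs, so neither side can learn the secret-weapon outcome and then back out:

```python
def run_arbitration(alice_treaty, bob_treaty, simulate_war):
    """Each treaty maps every possible outcome to a payout, and both are
    signed *before* the simulation, so Alice learning about weapon X from
    the result doesn't let her renege on the if-you-lose terms."""
    outcome = simulate_war()  # e.g. "bob_wins_via_weapon_X"
    return alice_treaty[outcome], bob_treaty[outcome]

payouts = run_arbitration(
    alice_treaty={"alice_wins": +100, "bob_wins_via_weapon_X": -80},
    bob_treaty={"alice_wins": -100, "bob_wins_via_weapon_X": +80},
    simulate_war=lambda: "bob_wins_via_weapon_X",  # stand-in for Simon
)
print(payouts)  # (-80, +80): Alice pays per the pre-signed treaty
```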

Dath Ilan vs. Sid Meier's Alpha Centauri: Pareto Improvements

As an example, part of your military strength might be your ability to crash enemy systems with zero-day software exploits (or any other kind of secret weapon they don't yet have counters for). At least naively, you can't demonstrate you have such a weapon without rendering it useless. Though this does suggest an (unrealistically) high-coordination solution to at least this version of the problem: have both sides declare all their capabilities to a trusted third party, who then figures out the likely costs and chances of winning for each side.
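To illustrate the shape of that solution (the scoring rule and all names here are invented): each side privately declares its full capability list, and the arbiter publishes only the aggregate assessment, so no zero-day gets burned:

```python
def assess(alice_caps, bob_caps):
    """Arbiter's toy estimate of P(Bob wins): count each side's weapons
    that the other side has no counter for. Only this number is published;
    the declared capability lists themselves stay secret."""
    def uncountered(mine, theirs):
        return {c for c in mine
                if not c.startswith("counter:") and "counter:" + c not in theirs}
    a = len(uncountered(alice_caps, bob_caps))
    b = len(uncountered(bob_caps, alice_caps))
    return 0.5 if a + b == 0 else b / (a + b)

# Bob's zero-day counts toward his strength precisely because Alice has
# declared no counter for it - and Alice never learns it exists:
print(assess({"tanks", "air_force"},
             {"tanks", "counter:air_force", "zero_day_exploit"}))  # ~0.67
```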

A Quick Guide to Confronting Doom

Note that any surviving observer must always see that all previous prophets of existential doom were wrong - otherwise they wouldn't be around to observe anything. (Yay anthropics. I'm not sure if or how much this should change our reasoning, though.)

Fiction: My alternate earth story.

Neat story :) A few typos to fix:

  • "coal minor" -> coal miner
  • "went rouge" -> went rogue
  • "their I was" -> there I was

Lies Told To Children

Your feelings about lies depend on the context - for example I assume you'd be willing to play the game "Two Truths and a Lie", and you would not feel harmed by the certainty that someone is lying to you? In fact people enjoy the game, since seeing through the lie is a fun puzzle.

Now outside of games like that, the majority of the time someone lies to you on Earth, it's to profit at your expense - they want to take your stuff, your vote, etc. So with the exception of games, you've quite reasonably developed strong negative emotions about being lied to, and those emotions may transfer even to the rarer cases where the lie isn't directly hurting you.

But dath ilan is extremely high trust and high coordination; from childhood you will experience that the vast majority of the times someone lies to you, it's clearly-in-retrospect for your own benefit, and the vast majority of the remaining times, it's solidly for the benefit of Civilization and they're willing to eventually tell you the truth and pay for your inconvenience. So while you still try to see through lies whenever you can, it's much more like the Earth game setting: most lies are just harmless puzzles. So you don't grow up with the same internalized feelings that anyone lying to you is hurting you? (Which also reduces the price they'd have to pay you, since they're not trying to compensate you for an Earth-level of negative emotions.)
