Good to know. In that case the above solution is actually even safer than that.

Plausible deniability, yes, and reason-agnostic. It's hard to know why someone might not want to be known to have their address here, but with my numbers above they would have the statistical backing that 1 in 1000 addresses will appear in the set by chance. Someone who wants to deny it could say: "for every address actually in the set, 1000 will appear to be, so there's only a 1 in 1000 chance I actually took the survey!" (Naively, of course; rest in peace rationalist@lesswrong.com.)
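To make that deniability argument concrete, here is a back-of-the-envelope sketch. The candidate pool size is my assumption; the 1/1000 collision rate is the figure above:

```python
# Assumed: an adversary checks 1,000,000 candidate addresses, and each one
# matches the published set by pure chance with probability 1/1000.
pool_size = 1_000_000
collision_rate = 1 / 1000

expected_false_positives = pool_size * collision_rate  # ~1000 chance matches
apparent_matches = 1 + expected_false_positives        # 1 real address + noise

# Probability that a given matching address actually took the survey:
p_actually_took_survey = 1 / apparent_matches
print(p_actually_took_survey)  # ~0.001, i.e. roughly "1 in 1000"
```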

Thanks for your input. Though ideally we wouldn't have to go through an email server, it may just be required at some level of security.

As for the patterns, the nice thing is that with a small output space in the millions, there are tons of overlapping reasonable addresses even if you pin it down to a domain. The set of English first-and-last-name combos, even without any numbers, is already much larger than 10 million, so even targeted domains should have plenty of collisions.
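A rough pigeonhole count of that claim; the name-list sizes are assumptions, and the 10 million output space is the figure above:

```python
# Assumed counts for illustration; real name lists differ.
first_names = 5_000        # common English first names
last_names = 20_000        # common English surnames
output_space = 10_000_000  # size of the hashed output space

candidates = first_names * last_names  # 100,000,000 first.last-style addresses
collisions_per_output = candidates / output_space
print(collisions_per_output)  # ~10 plausible addresses per output value
```

So even under a conservative count, every output value has several plausible preimages on average.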

I have done something similar using draw.io for arguments regarding a complex feature. Each point often had multiple counterpoints, which themselves sometimes split into other points. I think this is only necessary for certain discussions and should probably not be the default though.

I'm a software developer and father interested in:

  • General rationality: e.g., WHY does Occam's Razor work? Does it work in every universe?
  • How rationality can be applied to thinking critically about CW/politics in a party-agnostic way
  • Concrete understanding of how weird arguments (Pascal's Wager, the Simulation Hypothesis, Roko's B, etc.) do or don't work
  • AI SOTA, e.g., what could/should/will OpenAI release next?
  • Long-term AI arguments, from "nothing burger" all the way to Yudkowsky
  • Physics, specifically including quantum physics and cosmology
  • LessWrong community expansion/outreach

Time zone is central US. I also regularly read Scott Alexander.

I am concerned about your monetary strategy (unless you're rich). Let's say you're absolutely right that LW is overconfident, and that there is actually a 10% chance of aliens rather than 0.5%. Then this is a good deal: 20x!

But only on the margin.

Depending on your current wealth, it may only be rational to put a few hundred dollars into this particular bet. If you make many bets of this type (low probability, high payoff, great expected returns), each with a small fraction of your wealth, you should expect to make money. But if you make only 3 or 4 of them, you are more likely to lose money: you are loading all your gains into a small fraction of outcomes in exchange for huge payouts, and most outcomes end with you losing money.
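A sketch of that effect using the 10% / 20x numbers from above, computed as an exact binomial tail (stake sizes and bet counts are my illustrative assumptions):

```python
from math import comb

def p_lose_money(n_bets, p_win=0.10, payout=20):
    """Probability of a net loss after n_bets independent $1 bets,
    each paying $payout (gross) with probability p_win."""
    # Net loss iff total winnings < total staked: wins * payout < n_bets.
    max_losing_wins = (n_bets - 1) // payout
    return sum(comb(n_bets, k) * p_win**k * (1 - p_win)**(n_bets - k)
               for k in range(max_losing_wins + 1))

print(p_lose_money(4))    # ~0.66: with only 4 bets you usually walk away poorer
print(p_lose_money(100))  # ~0.02: with 100 small bets a net loss is unlikely
```

Same positive edge in both cases; only the number of independent bets changes.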

See, for example, the St. Petersburg paradox, which has infinite expected return but very finite actual value given the limited assets of the banker and/or the player.
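A quick sketch of how a finite bankroll collapses the "infinite" expectation, using the standard formulation (payouts 1, 2, 4, … with probabilities 1/2, 1/4, 1/8, …):

```python
def capped_st_petersburg_ev(bankroll):
    """Expected payout when the banker can pay at most `bankroll`."""
    ev, p, payout = 0.0, 0.5, 1.0
    while payout < bankroll:
        ev += p * payout  # each uncapped doubling round contributes exactly 0.5
        p /= 2
        payout *= 2
    ev += 2 * p * bankroll  # all remaining outcomes are capped at the bankroll
    return ev

print(capped_st_petersburg_ev(4))      # 2.0
print(capped_st_petersburg_ev(2**20))  # 11.0: a ~$1M banker makes it worth ~$11
```

The expected value grows only logarithmically in the banker's wealth, which is why the uncapped "infinite" expectation is never realizable.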

Smaller sums are more likely to elicit each party's probabilities accurately. For example, if Elon Musk offers me $5,000 to split between two possible outcomes, I will allocate it close to my beliefs; but if he offers me $5 million, I'll allocate about $2.5 million to each, because either one is a transformative amount of money.

People are more likely to be rational with their marginal dollar because they price in the value of staying solvent. The first $100k in my bank account IS worth more than the second; hence the saying, a (non-marginal) bird in the hand is worth two in the bush.
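One standard way to model that is logarithmic (diminishing) utility. A sketch, assuming a hypothetical $10k of pre-existing wealth so log(0) never appears:

```python
from math import log

baseline = 10_000  # assumed pre-existing wealth
utility = lambda wealth: log(baseline + wealth)

first_100k = utility(100_000) - utility(0)
second_100k = utility(200_000) - utility(100_000)
print(first_100k > second_100k)         # True
print(round(first_100k / second_100k))  # the first $100k is worth ~4x the second
```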

Good to know! I'll look more into it.

I agree that's all it is, but you can make all the same general statements about any algorithm.

The problem is that some people hear you say "constructed" and "nothing special", and then conclude they can reconstruct it any way they wish. It may be constructed, and not special in a cosmic sense, but it's not arbitrary. Not all heuristics are equal for any given goal.

I'm not saying "the experts can be wrong"; I'm saying these aren't even experts.

Pick any major ideology or religion you think is false. One way or another (they can't all be right!), the "experts" in these areas aren't experts; they are basically insane, babbling on at length about things that aren't real at all, which is what I think most philosophy experts are doing. Making sure you aren't one of them is the work of epistemology, which The Sequences cover well. In other words, I view the philosopher experts you are citing as largely [Phlogiston](https://www.lesswrong.com/posts/RgkqLqkg8vLhsYpfh/fake-causality) experts.
