Comments

Two clicks away I read this: "Once this cohort fills, we do not expect to be accepting registrations and this price again."

https://45daystoawakening.com/sales-page-412549911595138654190

This shifts my odds that this is a scam by a factor of 10 or so, maybe even 100 (it's something I'd rarely, if ever, expect to see on a legitimate course pitch). I still have to decide what a reasonable prior should be. Any suggestions?

https://www.lesswrong.com/tag/odds
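
A minimal sketch of that update in odds form; the 1:99 prior and the likelihood ratio of 10 are placeholder assumptions, not recommendations:

```python
# Odds-form Bayes update: posterior odds = prior odds * likelihood ratio.
# Both numbers below are illustrative assumptions, not estimates.
prior_odds = 1 / 99          # hypothetical prior odds that the course is a scam
likelihood_ratio = 10        # the "factor of 10 or so" from the quoted sales line
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)        # ~0.09, i.e. roughly 9% under these assumptions
```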

I don't understand how

“We have forgotten that the first purpose of government is not the economy, it is not health care, it is defending the country from attack.”

was a smarter-than-one-would-have-guessed response to 9/11. Had anyone forgotten to hire soldiers and fund the intelligence services before 9/11? Why was preventing 9/11 more important than reducing the number of traffic fatalities by, say, 30% (thereby saving about 10,000 lives per year)? Or preventing 30% of the 45,000 yearly deaths due to lack of health insurance? What am I missing?
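
For scale, a rough back-of-the-envelope comparison of the figures above; the only number added here is the roughly 3,000 deaths of 9/11 itself:

```python
# Back-of-the-envelope comparison using the figures cited in the comment above.
traffic_lives_per_year = 10_000            # from a 30% cut in traffic fatalities, as stated
uninsured_lives_per_year = 0.30 * 45_000   # 30% of the 45,000 yearly deaths cited
sept_11_deaths = 3_000                     # rough one-off death toll of the attack, not an annual figure

print(traffic_lives_per_year, uninsured_lives_per_year, sept_11_deaths)
# -> 10000 13500.0 3000
```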

y=x/(1-x) is not the bijection that he asserts it is, [...]. It's a function that maps [0,1] onto [1,\infty] as a subset of the topological closure of R.

How is that not a bijection? Specifically, a bijection between the sets [0,1] and [0,\infty], which seems to be exactly the claim EY is making.
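
For what it's worth, a short check of that claim, assuming the extended-real convention that 1/0 = \infty:

```latex
% Sketch: the map and its inverse on the extended reals.
\[
  y = \frac{x}{1-x}\colon [0,1] \to [0,\infty],
  \qquad
  x = \frac{y}{1+y}\colon [0,\infty] \to [0,1],
\]
% with 1 \mapsto \infty and \infty \mapsto 1. Each map inverts the other,
% so y = x/(1-x) is a bijection between [0,1] and [0,\infty].
```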

On a broader point, EY was not calling into question the correctness or consistency of mathematical concepts or claims, but whether they have any useful meaning in reality. He was not talking about the map; he was talking about the territory and how we may improve the map to better reflect the territory.

It seems dangerous to say, before running the experiment, that there is a “scientific belief” about the result.

I don't understand what the danger is. It simply seems true that there is a scientific belief about the result in this case.

But if you already know the “scientific belief” about the result, why bother to run the experiment?

I can immediately see two reasons:

  • scientific beliefs can be wrong, and
  • the only way to strengthen scientific beliefs is through experiments that could have falsified them.

I recently also came across this idea under the names virtue signaling (to members of her community) and loyalty badge (to her community or doctrine). The more outlandish the story, the stronger the signal or badge.

Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only slightly increases your chance of being right, will still mess up your mapping.

I only need to assume that everybody else, or at least many other people, are as irrationally optimistic as I am; then the effect of optimism on the world could well be significant and amount to a 20% change, couldn't it? That assumption is not at all far-fetched.

I am not sure, but there seem to be a couple of apostrophes missing in the sentence

[...] if were going to improve our skills of rationality, go beyond the standards of performance set by hunter-gatherers, well need deliberate beliefs [...]

I would be interested to see whether computing P(A and B) falsely as the average of P(A) and P(B) would model the error well. That way, any detail that fits the very unlikely primary event well increases its perceived likelihood.
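
A minimal sketch of that averaging model; all probabilities are made-up illustrative numbers:

```python
# Conjunction-fallacy sketch: the correct product rule versus the
# hypothesised "averaging" error model. Probabilities are illustrative.
p_primary = 0.01   # very unlikely primary event A
p_detail = 0.80    # plausible-sounding detail B, given the story

correct = p_primary * p_detail          # P(A and B): adding a detail can only lower it
averaged = (p_primary + p_detail) / 2   # error model: averaging instead of multiplying

print(correct)    # 0.008 -- below P(A), as probability theory requires
print(averaged)   # 0.405 -- above P(A): the fitting detail raises the perceived likelihood
```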

What empirical evidence do we have that rationality is trainable like a martial art? How do we measure (changes in) rationality skills?

I can see natural situations where scope insensitivity seems to be exactly the right stance:

  • Assuming we are ignorant about the absolute value of saving 4,500 lives.
  • Assuming all potentially affected people contribute, on average, the same scope-insensitive (constant) amount. Then the contribution per saved life becomes a constant: for 45 saved out of 200 we have 200 contributions to save 45 lives, and for 45,000 saved out of 200,000 we have 200,000 contributions to save 45,000. That seems to make perfect sense (see the sketch after this list).
  • Assuming that the number of people who get to know about a problem is proportional to the problem's size. Hence the number of people who can (and on average will) contribute to its solution is proportional to the problem's size, and hence each individual contribution need not be proportional to that size. That is not at all a bad (implicit) assumption to have, IMHO.
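
A minimal sketch of the constant-contribution assumption from the second bullet, using the numbers above and an illustrative per-person contribution of 1 unit:

```python
# If everyone affected contributes the same scope-insensitive amount,
# the contribution per saved life is the same regardless of scale.
contribution_per_person = 1.0   # illustrative; the value does not matter for the ratio

for affected, saved in [(200, 45), (200_000, 45_000)]:
    total = affected * contribution_per_person   # everyone affected chips in equally
    per_life = total / saved                     # contribution borne per life saved
    print(affected, saved, per_life)             # per-life figure is ~4.44 in both cases
```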

It even seems to me that any personal contribution must be intrinsically scope insensitive with respect to the denominator (the "out of how many" birds/humans/...), because no single person can possibly pay alone for the solution of a problem that affects a billion humans.
