Jake_NB

The effective altruism movement, and the 80,000 Hours project in particular, seem to be stellar implementations of this line of thinking.
Also seconding the doubts about the refrain from saving puppies - at the very least, extending compassion to other clusters in mindspace not too far from our own seems necessary for consistency. It may not be the -most- cost-effective cause, but that's no reason to write it off as a mere personal interest.
Really liked this one. One thing that bugs me is the recurring theme of "you can't do anything short of the unreasonably high standard of Nature". This goes against the post "Where Recursive Justification Hits Bottom", and against how most good science and practically all good engineering actually gets done. I trust that later posts address this in some way, and the point I'm touching on is somewhat covered in the rest of the collection, but it could stand to be pointed out more clearly here.
It's true that Nature doesn't care about your excuses. No matter your justification for cutting corners, either you did it right or you didn't. …
I know I'm way behind for this comment, but still: this point of view makes sense on one level - saving additional people is always(?) virtuous, and you don't hit a ceiling of utility. But, and this is a big one, this is a very simplistic model of virtue calculus, and the things it neglects turn out to have a huge and dangerous impact.
> Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.
First case in point: can a surgeon harvest organs from a healthy innocent bystander to...
Actually, Brennan's idea is common knowledge in physics: energy arises as the generator of time translation, both in GR and in QFT, so there is nothing new here.
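For concreteness, here is the standard textbook statement of that idea (my own sketch, not from the post): in quantum theory the Hamiltonian generates time translations, and Noether's theorem ties time-translation symmetry to energy conservation.

```latex
% Time evolution is generated by the Hamiltonian H:
i\hbar \, \frac{\partial}{\partial t} \, |\psi(t)\rangle = H \, |\psi(t)\rangle,
\qquad |\psi(t)\rangle = e^{-iHt/\hbar} \, |\psi(0)\rangle .
% Noether's theorem: invariance of the action under t \to t + \epsilon
% yields a conserved charge, and that charge is the energy E.
```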
Great observation. One inaccuracy: velocity in special relativity isn't quite analogous to acceleration in GR, since we can locally measure acceleration, and therefore tell whether we're accelerating or the rest of the universe is. (Unless you also count spacetime itself as part of "the rest of the universe", in which case it's best to say so explicitly or avoid the issue altogether.) The actual equivalence is between accelerating and sitting still (or moving at constant velocity) in a gravitational field.
Another interesting point: this kind of "character of law" reasoning in the absence of experimental possibilities is the modus operandi of theoretical high-energy physics, and many scientists are trained in ways to make progress anyway under these conditions. Most aren't doing as well as Einstein, but arguably things have gotten much harder to reason through at these levels of physics.
Cool story, great insights, but I gotta say: huge planning fallacy on Jeffreyssai's part. He sets rigid deadlines for breakthroughs without actual experience with them or careful consideration of their internal mechanisms, and when the past examples are few and very diverse.
I do agree that speed is important, but maybe let's show some humility about things that humans are apparently hard-wired to be bad at.
> If there were something else there instead of quantum mechanics, then the world would look strange and unusual.
If there were something else instead of quantum mechanics, it would still be what there is and would still add up to normality.
About a few of the violations attributed to the collapse postulate: collapse wouldn't be the only phenomenon with a preferred reference frame of simultaneity - the CMB also defines one. Maybe a little less fundamental, but nonetheless a seemingly general property of our universe. This next part I'm less sure about, but locality suggests that Nature also has a preferred basis for wavefunctions, namely the position basis (as opposed to, say, momentum). As for "acausal" - since nothing here states that the future affects the past, I assume it's a rehash of the special-relativity violation. Not that I'm a fan of collapse, but we shouldn't double-count the evidence.
Also, to quote you, models that are surprised by facts do not gain points by this - neither does Mr. Nohr as he fails to imagine the parallel world that actually is.
Just one quick note: this formulation of Bayes' theorem implicitly assumes that the A_j are not only mutually exclusive but also exhaustive - they cover the entire hypothesis space under consideration, so the probability of their union (not their joint probability) is 1.
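A minimal numeric sketch of that assumption (the hypotheses and numbers are hypothetical): the denominator is the law of total probability, which only normalizes correctly because the priors over the exhaustive partition sum to 1.

```python
# Bayes' theorem over a mutually exclusive AND exhaustive partition {A1, A2}.
priors = {"A1": 0.5, "A2": 0.5}       # must sum to 1 for the formula to work
likelihoods = {"A1": 0.8, "A2": 0.2}  # P(B | A_j) for some observation B

# P(B) by the law of total probability - valid only because the A_j are exhaustive.
evidence = sum(priors[a] * likelihoods[a] for a in priors)

# Posterior P(A_j | B) = P(B | A_j) * P(A_j) / P(B)
posteriors = {a: priors[a] * likelihoods[a] / evidence for a in priors}

print(posteriors)  # posteriors again sum to 1
```

If the A_j left part of the hypothesis space uncovered, `evidence` would undercount P(B) and the "posteriors" would no longer be probabilities of anything.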
>A refused claim is (legally) an event that was never covered by the insurance, and is therefore irrelevant if the question is "take policy A or not at all".
This implicitly assumes all legal cases are clear-cut, independent of perspective and of one's legal resources; and also perfect knowledge of the fine print (desirable, but not always realistic).
Usually the insurance company has more relevant experience, better specialized lawyers and generally more resources to spend fighting claims than the consumer. So they can often successfully claim that your particular circumstances are not covered after the fact, even when you believe they were.
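To illustrate (a toy expected-cost sketch; the premium, loss size, and probabilities are all made up): once you account for a nonzero chance that a claim you believed was covered gets successfully refused, a policy that looks worthwhile on paper can stop being so.

```python
# Toy comparison: expected annual cost with vs. without a policy.
premium = 500.0      # hypothetical annual premium
loss = 20_000.0      # hypothetical size of the insured loss
p_loss = 0.03        # chance the loss occurs in a year
p_refused = 0.2      # chance the insurer successfully refuses the claim

cost_uninsured = p_loss * loss
# With insurance you pay the premium, and still eat the loss when the claim is refused.
cost_insured = premium + p_loss * p_refused * loss

print(cost_uninsured, cost_insured)  # with these numbers, insurance comes out worse
```

Expected cost isn't the whole story, of course - risk aversion is the usual reason to buy insurance even at negative expected value - but the refusal probability clearly belongs in the comparison: with `p_refused = 0` the same numbers favor buying the policy.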