
The effective altruism movement, and the 80,000 Hours project in particular, seem to be stellar implementations of this line of thinking.

Also seconding the doubts about refraining from saving puppies - at the very least, extending compassion to other clusters in mindspace not too far from our own seems necessary for consistency. It may not be the -most- cost-effective option, but that's no reason to dismiss it as a mere personal interest.

Really liked this one. One thing that bugs me is the recurring theme of "you can't do anything short of the unreasonably high standard of Nature". This goes against the post "Where Recursive Justification Hits Bottom" and against how most good science and practically all good engineering actually gets done. I trust that later posts address this in some way, and the point I'm touching on is somewhat covered in the rest of the collection, but it could stand to be pointed out more clearly here.

It's true that Nature doesn't care about your excuses. No matter your justification for cutting corners, either you did it right or you didn't. Win or lose. But it's not as if your reasoning for leaving some black boxes unopened doesn't matter. In practice, with limited time, limited information, and limited reasoning power, you have to choose your battles to get anything done. You may be taken by surprise by traps you ignored, and they will not care to hear your explanations about research optimization - which is exactly why you have to make an honest and thorough risk assessment, to minimize the actual chance of that happening while still getting somewhere. As in, you know, do your actual best, not some obligatory "best". It may very well still not suffice, but it is your actual best.

The other lessons seem spot on.

I know I'm way behind for this comment, but still: this point of view makes sense on one level - saving additional people is always(?) virtuous, and you don't hit a ceiling of utility. But, and this is a big one, this is a very simplistic model of virtue calculus, and the things it neglects turn out to have a huge and dangerous impact.

Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.

First case in point: can a surgeon harvest organs from a healthy, innocent bystander to save the lives of five people in dire need of those organs? Assume they match and there is no other donor - an unfortunately plausible scenario. According to this view, we must say that they not only can, but should, since the surgeon is damned as a murderer either way, so they might as well stack up the lower number of bodies. I hope I don't need to explain how that goes south. The lesson is that there must be some distinction between taking a harmful action and refraining from a (net) positive one.

Another case: suppose I'm in a position to save lives on a daily basis, e.g. as an ER doctor. If a life not saved is a life lost, then every hour I rest - or, you know, have fun - is another dead body on my scoreboard. The same goes for anyone doing their best to save lives in any way other than the single optimal one, the one with the maximal expected number of lives saved. And that one optimal route, if we're never allowed to rest, leads to burnout very quickly and loses lives in the long run. So we must find (immediately!) the One Best Way, or be doomed to be perpetual mass murderers.

As Zach Weinersmith (and probably others) once said, "the deep lesson to learn from opportunity cost is that we're all living every second of our lives suboptimally". We're not very efficient accuracy engines, and most likely not physically able to carry out any particular plan to perfection (or even close), so almost all of the time we'll get things at least somewhat wrong. So we'll definitely be mass murderers by way of failing to save lives, but... then... aren't we better off dead? And in that case, are lives lost really that bad...?

And you can't really patch this neatly. You can't say that it's only murder if you know how to save them, because then the ethical thing would be to be very stupid and unable to determine how to save anyone. This is also related to a problem I have with the Rationalist Scoreboard of log(p) that Laplace runs at the Great Spreadsheet Above.

And even if you try to fix this by allowing that we maintain ourselves so as to save more lives in the long run, then 1) we don't know exactly how much self-maintenance that warrants, and 2) our best attempt at it will end up with everyone miserable, just trying to maximize lives saved rather than actually living, since pain/harm is typically much easier to produce, and more intense, than pleasure.

And, of course, all of this is before we consider human biases and social dynamics. If we condemn the millionaire who saves lives inefficiently, we're probably drawing attention away from the many others who don't even do that. And since in this arena it's much easier to attract criticism than to earn praise (and praise, broadly speaking, is a strong motivation for people to try to be virtuous), many people would see this and give up altogether.

The list goes on, but my rant can only go so long, and I hope that some of the holes in this approach are now more apparent.

Actually, Brennan's idea is common knowledge in physics - energy is derived as the generator of time translations, both in GR and in QFT - so there is nothing new here.
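
To spell out the standard statement I have in mind (textbook notation, not taken from the post): in quantum theory the Hamiltonian, i.e. the energy operator, generates time translations,

$$ i\hbar\,\frac{\partial}{\partial t}\lvert\psi(t)\rangle = \hat H\,\lvert\psi(t)\rangle, \qquad \lvert\psi(t)\rangle = e^{-i\hat H t/\hbar}\,\lvert\psi(0)\rangle \ \ \text{(for time-independent } \hat H\text{)}, $$

and, via Noether's theorem, energy is the conserved charge associated with time-translation symmetry.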

Great observation. One inaccuracy: constant velocity in special relativity isn't quite analogous to acceleration in GR, since we can locally measure acceleration and therefore tell whether we're accelerating or the rest of the universe is. That is, unless you also count spacetime itself as part of "the rest of the universe", in which case it's best to say so explicitly or avoid the framing altogether. The actual equivalence is between accelerating and staying at constant velocity (or at rest) in a gravitational field.

Another interesting point is that this kind of "character of law" reasoning in the absence of experimental possibilities is the MO of theoretical high-energy physics, and many scientists are trained in ways to make progress under these conditions anyway. Most aren't doing as well as Einstein, but arguably things have gotten much more difficult to reason through at these levels of physics.

Cool story, great insights, but I gotta say, huge planning fallacy on Jeffreyssai's part: giving rigid deadlines for breakthroughs without actual experience with them or careful consideration of their internal mechanisms, and when the past examples are few and very diverse.

I do agree that speed is important, but maybe let's show some humility about things that humans are apparently hard-wired to be bad at.

If there were something else there instead of quantum mechanics, then the world would look strange and unusual.

If there were something else instead of quantum mechanics, it would still be what there is and would still add up to normality.

Regarding a few of the violations attributed to the collapse postulate: it wouldn't be the only phenomenon with a preferred reference frame of simultaneity - the CMB defines one too. Maybe a little less fundamental, but nonetheless a seemingly general property of our universe. This next part I'm less sure about, but locality implies that Nature also has a preferred basis for wavefunctions, namely the position basis (as opposed to, say, momentum). As for "acausal" - since nothing here states that the future affects the past, I assume it's a rehash of the special-relativity violation. Not that I'm a fan of collapse, but we shouldn't double-count the evidence.

Also, to quote you, models that are surprised by facts do not gain points by this - neither does Mr. Nohr as he fails to imagine the parallel world that actually is.

Just one quick note: this formulation of Bayes' theorem implicitly assumes that the A_j are not only mutually exclusive but also exhaustive - they cover the entire theory space we consider, so their probabilities sum to 1.
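
For reference, the form I mean (standard notation, not copied from the post) is

$$ P(A_i \mid X) \;=\; \frac{P(X \mid A_i)\,P(A_i)}{\sum_j P(X \mid A_j)\,P(A_j)}, $$

and the denominator only equals $P(X)$ when the A_j are mutually exclusive and jointly exhaustive, i.e. $\sum_j P(A_j) = 1$.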

I know I'm really late with this, but what do you count as "studying science"? Making a career of it? Does being an engineer count (I guess it does)? Or is getting (an amount of knowledge equivalent to) a B.Sc. enough? Maybe even less than that - learning cool nuggets of science as a hobby? I think this should be better defined. If only a career counts, I'm afraid the main inhibitor is not interest but fear for career prospects: most often, when I hear people's reasons not to pursue a career in science, it's that they don't think they'll make a good living out of it, or that it's hard and they don't think they'll make it. If it's the hobbyist population you're worried about, I think it's pretty decent after factoring in access to prerequisite knowledge, free time, and upbringing - though there is a LOT of room for improvement on that front. Those who genuinely don't find science interesting seem to feel that way mostly because of bad experiences with teachers or the social stigma around "nerds", as far as I've seen.
