Jordan Arel


Yes, that’s the main place I’m still uncertain: the ten combinations of three 1’s have to be mutually exclusive for their probabilities to simply add, and that’s what I’m having trouble visualizing. If you rolled six dice, the chance that either three pre-selected specific dice would all be 1’s or the other three dice would all be 1’s could (very nearly) just be added together.

But since you have five dice, and you are asking whether one set of three will be 1’s or another, overlapping set will be 1’s, you have to somehow make these events mutually exclusive. That is actually the part I left out (GPT told me this, so I’m not sure, but it sounds sensible): you also take the chance that the two leftover dice will both not be 1’s. There’s a 9/10 chance that each will not be a 1, so a 0.81 chance that both will not be 1’s, and you multiply this 0.81 by the 1/1,000 for each set of three 1’s. So that slightly lowers that part of the estimate to (1/1,000) × 10 × 0.81 = 0.81%.

So you have excluded the extra 1’s from the sets of three 1’s, but then you have to do the same calculation for the sets of four 1’s and the one set of five 1’s. The set of five 1’s is very easy: there’s a 1/10 chance that each die lands on 1, so all five together is (1/10)^5 = 1/100,000, adding only 0.001% to the final total. The sets of four 1’s are roughly 18 times less likely than the sets of three 1’s, because one of the not-1’s has to become a 1 (a factor of 9) and there are only five such sets instead of ten (another factor of 2). You have to roll four 1’s and one not-1, so (1/10,000) × 0.9 × 5 = 0.045%.

0.81% + 0.001% + 0.045% = 0.856%

Still not 100% sure because I’m bad at combinatorics, but this seems pretty likely to be correct. I’m mainly going off the 1/1,000 intuition for any given set of three 1’s, repeated ~10 times because there are ten ways to choose three dice out of five, and the rest sounds sensible.
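For anyone who wants to double-check the arithmetic, here is a small Python sketch (my own check, not part of the original GPT exchange) that computes the exact probability two ways: the binomial formula and a brute-force enumeration of all 10^5 equally likely outcomes.

```python
from math import comb
from itertools import product

# Exact probability of at least three 1's when rolling five fair 10-sided dice,
# via the binomial formula: sum over k = 3, 4, 5 of C(5, k) * (1/10)^k * (9/10)^(5-k).
p_formula = sum(comb(5, k) * (0.1 ** k) * (0.9 ** (5 - k)) for k in range(3, 6))

# Brute-force check: enumerate all 10^5 equally likely outcomes and count
# those containing three or more 1's.
hits = sum(1 for rolls in product(range(1, 11), repeat=5) if rolls.count(1) >= 3)
p_brute = hits / 10 ** 5

print(f"binomial formula: {p_formula:.5%}")  # ~0.85600%
print(f"brute force:      {p_brute:.5%}")    # ~0.85600%
```

Both agree with the 0.856% figure above.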

I’m quite sure now; I came to the same conclusion independently of GPT after getting a hint from it, a hint I had already almost guessed myself.

A woman being in the top 10% on any given characteristic is essentially the same as rolling a 10-sided die and having it come up 1 (this was the actual problem I presented to GPT, and when it answered, it did so in what looked like a hybrid of code and text, so I’m fairly sure it is computing this somehow).

What was clearly wrong with the first math was that if I roll just three dice, there would already be a (1/10)^3, or 1/1,000, chance of getting all 1’s. And if I roll five dice, there would be a much higher, not lower, chance that I get at least three 1’s.

When rolling five dice, there are 10 different possible combinations of those five dice that contain exactly three 1’s. It’s a little more complicated than this, but almost all of the probability mass comes from rolling exactly three 1’s, since rolling four or five 1’s is far less likely. So you get very close (much closer than needed for a Fermi estimate) to the answer by simply multiplying the 10 possible combinations by the 1/1,000 chance that each of those combinations is all 1’s, for a total of about 1/100, or ~1%. Pretty basic once you see it; I would be surprised if this is incorrect.
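As a sanity check on that ~1% figure, here is a quick Monte Carlo simulation (again my own addition, not from the GPT exchange). With enough trials it lands near the exact 0.856% rather than the rounded 1% approximation:

```python
import random

# Estimate the probability of at least three 1's when rolling five 10-sided dice.
TRIALS = 1_000_000
hits = 0
for _ in range(TRIALS):
    rolls = [random.randint(1, 10) for _ in range(5)]
    if rolls.count(1) >= 3:
        hits += 1

print(f"estimated P(at least three 1's) ~ {hits / TRIALS:.4%}")  # around 0.86%
```

The ~1% back-of-the-envelope number overshoots slightly because summing 10 × 1/1,000 double-counts the outcomes where four or five dice are 1’s; the 0.81 factor and the separate four-1’s and five-1’s terms discussed above are what correct for that.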

Ah dang, sorry, I was not aware of this. I brute-force re-taught myself how to do this quickly: 10^5 / (5-2)! = 100,000 / 6, i.e. about 1 in 16,666. You are right, that was off by more than a factor of ten! Thanks for the tip.

Edit: agghh, I hate combinatorics. This seemed way off to me; I thought the original seemed correct. GPT had originally explained the math, but I didn’t understand the notation. After working on the problem again for a while, I had it explain its method to me in easier-to-understand language, and I’m actually pretty sure it was correct.

Ah, thanks for the clarification, this is very helpful. I made a few updates, including changing the title of the piece and adding a note about this in the assumptions. Here are the assumption and footnote I added, which I think explain my views on this:

Whenever I say “lives saved” this is shorthand for “future lives saved from nonexistence.” This is not the same as saving existing lives, whose loss may cause profound emotional pain for the people left behind and which some may consider more tragic than future people never being born.[6]

Here is footnote 6, which I kept out of the main text for brevity:

This post originally used the term “lives saved” without mentioning nonexistence, but JBlack on LessWrong pointed out that the term “lives saved” could be misleading in that it equates saving present lives with creating new future lives. While I take the total view and so feel these are roughly equivalent (if we exclude flow-through effects, including the emotional pain caused to those the deceased leaves behind), those who take other views, such as the person-affecting view, may feel very differently about this.

Here is a related assumption I added based on an EA Forum comment:

I assume a zero discount rate for the value of future lives, meaning I assume the value of a life does not depend on when that life occurs.

I hope this shows why I think the term is not unjustified; I certainly was not intending to be willfully deceptive, and I apologize if it seemed that way. I believe quite strongly in the equal value of all conscious experience, and this includes future people, so for me “lives saved” or “lives saved from nonexistence” carries the correct emotional tone and moral connotations. I can definitely respect that other people may feel differently.

I am curious whether this clarifies our difference in intuitions, or if there is some other reason you see the ending of a life as worse than the non-existence of life.

As to your second objection, I think that for many people, whether murdering some people in order to save others is a good idea is a moral question separate from which altruistic actions we should take to have the most positive impact. I am certainly not advocating murdering billions of people.

But whether saving present people or (in expectation) saving many more unborn future people is a better use of altruistic resources seems to be largely a matter of temperament. I have heard a few discussions of this, and they never seem to make much sense to me. For me it is literally as simple as future people being further away in time, which is just another dimension, not really any different from the spatial dimensions, except that time flows in one direction and so we have much less information about it.

But uncertainty only calls into question whether or not we have impact in expectation; for me it has no bearing on the reality of that impact or the moral value of those lives. I cannot seem to comprehend why other people value future people less than present people, assuming equal ability to influence either. I would really like for there to be some rational resolution, but it always feels like people are talking past each other in these discussions. If one child is tortured today, it cannot somehow be morally equivalent to ten children being tortured tomorrow. If I could ensure one person lives a life overflowing with joy today, I would be willing to forego this if I knew with certainty I could ensure one hundred people live lives overflowing with joy in one hundred years. I don’t feel like there is a time limit on morality; to be honest, it still confuses me why exactly some people feel otherwise.

You also mentioned something about differing percentages of the population. Many of these questions don’t work in reality because there are a lot of flow-through effects, but if you ignore those, I also don’t see how 8,000 people today suffering lives of torture could be better than 8 early humans a couple hundred thousand years ago suffering lives of torture, even if that means it was 1/1,000,000 of the population in the first case (just a wild guess) and 1/1,000 of the population in the second case.

These questions might be complicated if you take the average view of population ethics instead of the total view, and I actually do give some credence to the average view, but I nonetheless think the amount of value created by averting x-risk is so huge that it probably outweighs these considerations, at least for the risk-neutral.

Interesting objections!

I mentioned a few times that some, and perhaps most, x-risk work may have negative value ex post. I go into detail in footnote 13 about how such work could plausibly be negative.

It seems somewhat unreasonable to me, however, to be virtually 100% confident that x-risk work is as likely to have zero or negative value ex ante as it is to have positive value.

I tried to account for the extreme difficulty of influencing the future by giving the work relatively low efficacy: in the moderate case, 100,000 (hopefully extremely competent) people working on x-risk for 1,000 years only cause a 10% reduction in x-risk in expectation, in other words effectively a 90% likelihood of failure. In the pessimistic estimate, 100,000 people working on it for 10,000 years only cause a 1% reduction in x-risk.

Perhaps this could be a few orders of magnitude lower still, say 1 billion people working on x-risk for 1 million years only reduce existential risk by 1 in 1 trillion in expectation (if these numbers seem absurd, you can use lower numbers of people or years, but this increases the number of lives saved per unit of work). This would make the pessimistic estimate have very low value, but the moderate estimate would still be highly valuable (10^18 lives per minute of work).
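To make the shape of that estimate concrete, here is a minimal sketch of the arithmetic. The total number of future lives is a placeholder I’ve inserted purely for illustration (the original post derives its own figure), so only the structure of the calculation, not the printed outputs, should be read off this:

```python
# Fermi sketch: expected future lives saved per person-minute of x-risk work.
# TOTAL_FUTURE_LIVES is a hypothetical placeholder, NOT the figure used in the post.
TOTAL_FUTURE_LIVES = 1e40           # placeholder assumption for illustration only
WORK_MINUTES_PER_YEAR = 2_000 * 60  # roughly 2,000 working hours per year

def lives_saved_per_minute(workers: float, years: float, risk_reduction: float) -> float:
    """Expected future lives saved per person-minute, given total effort and efficacy."""
    total_person_minutes = workers * years * WORK_MINUTES_PER_YEAR
    return TOTAL_FUTURE_LIVES * risk_reduction / total_person_minutes

# Moderate case from the text: 100,000 people, 1,000 years, 10% risk reduction.
print(f"moderate:    {lives_saved_per_minute(1e5, 1e3, 0.10):.2e}")
# Pessimistic case: 100,000 people, 10,000 years, 1% risk reduction.
print(f"pessimistic: {lives_saved_per_minute(1e5, 1e4, 0.01):.2e}")
# More extreme variant: 1 billion people, 1 million years, a 1-in-1-trillion reduction.
print(f"extreme:     {lives_saved_per_minute(1e9, 1e6, 1e-12):.2e}")
```

Swapping the placeholder for whatever estimate of total future lives the post actually uses changes the absolute numbers; the point of the sketch is just that the result scales linearly with the assumed efficacy and inversely with the total person-minutes of work.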

All that is to say: while you could be much more pessimistic, I don’t think it changes the conclusion by that much, except in the pessimistic case, unless you have extremely high certainty that we cannot predict what is likely to help prevent x-risk. I did give two more pessimistic scenarios in the appendix, which I say may be plausible under certain assumptions, such as 100% certainty that x-risk is inevitable. I will add that this case is also valid if you assume 100% certainty that we can’t predict what will reduce x-risk, as I think this is a valid point.

Hm, logically this makes sense, but I don’t think most agents in the world are fully rational; hence the continuing threats of nuclear war despite mutually assured destruction and extremely negative-sum outcomes for everyone. I think this could be made much more dangerous by much more powerful technologies. If there is a strong offense bias, and there is even a single sufficiently powerful agent willing to kill others, and another agent willing to strike back despite being unable to defend itself by doing so, this could result in everyone dying.

The other problem is that there may be an apocalyptic terrorist of the Unabomber / anti-natalist / negative-utilitarian type who is able to access this technology and simply decides to kill everyone.

I definitely think a multipolar situation decaying into a unipolar one is a possibility. I guess one thing I’m trying to do is weigh how likely that is against other scenarios, in which multipolarity leads to mutually assured destruction or apocalyptic terrorism.