Well, in the prior comment, I was coming at it as an egoist, as the example demands. It's totally clear to me that a second of torture isn't a billion billion billion times worse than getting a dust speck in my eye, and that there are only about 1.5 billion seconds in a 50-year period. That leaves about a 10^10 : 1 preference for the torture.
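As a quick sanity check on the seconds figure above (a minimal sketch; the exact number depends on whether you count leap days, but it doesn't change the order of magnitude):

```python
# Roughly how many seconds are in 50 years? (ignoring leap days)
SECONDS_PER_YEAR = 365 * 24 * 3600      # 31,536,000
seconds_in_50_years = 50 * SECONDS_PER_YEAR
print(seconds_in_50_years)               # 1,576,800,000 -- about 1.5 billion
```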
I reject the notion that each (time, utility) event can be calculated in the way you suggest. Successive speck-type experiences for an individual (or 1,000 successive dust specks for each of 1,000,000 individuals) over the time period we are talking about would easily overtake 50 years of torture. It doesn't make sense to tally (number of torture time-units; 1 person for 50 years in this case) × (some quantification of the disutility of a unit of torture) vs. (number of specks) × (some quantification of the disutility of a single speck).
The universe is made up of distinct beings (animals included), not the sum of utilities (which is just a useful construct.)
All of this is to say:
If we are to choose for ourselves between these scenarios, I think it is incredibly bizarre to prefer 3^^^3 satisfying lives plus one indescribably horrible life over 3^^^3 lives that are each infinitesimally better than the alternative. I think doing so ignores basic human psychology, whence our preferences arise.
I'd take it.
I find your choice/intuition completely baffling, and I would guess that far fewer than 1% of people would agree with you on this, for whatever that's worth (surely it's worth something.) I am a consequentialist and have studied consequentialist philosophy extensively (I would not call myself an expert), and you seem to be clinging to a very crude form of utilitarianism that has been abandoned by pretty much every utilitarian philosopher (not to mention those who reject utilitarianism!). In fact, your argument reads like a reductio ad absurdum of the point you are trying to make. To wit: if we think of things in equivalent, additive utility units, we get the result that torture is preferable. But that is absurd, and I think almost everyone would be able to appreciate the absurdity when faced with the 3^^^3 lives scenario, even if you gave everyone a one-week lecture on scope insensitivity.
So... I don't think I want you to be one of the people to initially program AI that might influence my life...
True: my expected value would be 50 years of torture, but I don't think that changes my argument much.
I'm not sure I understand what you're trying to say. (50*365)/3^^^3 (which is basically the same thing as 1/3^^^3) days of torture wouldn't be anything at all, because it wouldn't be noticeable. I don't think you can divide time to that extent from the point of view of human consciousness.
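For scale: 3^^^3 is Knuth up-arrow notation, where 3^^^3 = 3^^(3^^3), a power tower of 3s whose height is 3^^3 = 7,625,597,484,987. A small sketch (a hypothetical helper, computing only the feasible cases; 3^^^3 itself is far too large to ever evaluate, which is the point about 1/3^^^3 being indistinguishable from zero):

```python
def up(a, n, b):
    """Knuth's up-arrow operation: a followed by n arrows, then b."""
    if n == 1:
        return a ** b        # one arrow is ordinary exponentiation
    if b == 0:
        return 1             # base case of the recursion
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# up(3, 3, 3) would be 3^^^3: a tower of 7,625,597,484,987 threes,
# utterly infeasible to compute or even to write down.
```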
I don't think the math in my personal utility-estimation algorithm works out significantly differently depending on which of the cases is chosen.
To the extent that you think that, and that doing so is reasonable, I suppose it would undermine my argument that the personal-choice framework is the wrong way of looking at the question. I would choose the speck every day, and it seems like a clear choice to me, but perhaps that just reflects the bias this thought experiment was meant to bring out.
I am not convinced that this question can be converted into a personal choice where you face the decision of whether to take the speck or a 1/3^^^3 chance of being tortured. I would avoid the speck and take my chances with torture, and I think that is indeed an obvious choice.
I think a more apposite application of that translation might be:
If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day, I would always choose the speck, because I would never want to endure the inevitable 50 years of torture.
The difference is that framing the question as a one-off individual choice obscures the fact that in the example proffered, the torture is a certainty.