I think there's another element to this: moral judgement. Genocide is seen as an active choice. Somebody (or some group) is perpetrating this assault, which is horrible and evil. Many views of extinction don't have a moral agent as the proximal cause - it's an accident, or a fragile ecosystem that tips over via distributed little pieces, or something else that may be horrible but isn't evil.
Ignorant destruction is far more tolerated than intentional destruction, even if the scales are such that the former is the more harmful.
It shouldn't matter to those who die, but it does matter to the semi-evolved house apes who are pontificating about it at arm's length.
Note that "Canada" is not a simple, singular viewpoint. There is probably no "best" approach that perfectly satisfies all parties. Even more so, "Trump" isn't a coherent agent that can be modeled simply.
Some basics that I'd use to suggest a path:
Most of this points to: don't negotiate or retaliate, just announce that Canada continues to seek free trade, and sees the value to Canadian citizens of trade with all nations. Trump is within his rights to punish US citizens with high duties, but Canadians don't see the point and intend to maintain low duties and easy trade.
Really, it's different kinds of fear, and different tolerances for different anticipated pain. Entrepreneurs tend to have fear of mediocrity rather than fear of failure. I really disagree with the implied weights in your asymmetries:
Consider the asymmetry: You can ask out 100 people, apply to 1,000 jobs, or launch 50 failed startups without any lasting harm, but each attempt carries the possibility of life-changing rewards. Yet most people do none of these things, paralyzed by phantom risks.
Not universal at all. For some, getting rejected by 2 people is crippling. I could barely apply to 20 jobs over 6 months when I got laid off a few years ago (I'm a very senior IC, and applying is not "send a resume", it's "learn about the company, find a referral or 2nd-degree contacts, get lunch with senior executives, sell myself"). I've launched only 3 startups, one of which did "eh, OK" and the others drained me, and I'm well aware I never want to do any of that ever again.
If you say, "get tough, so it doesn't hurt as much to fail", I kind of agree, but also that's way easier said than done. I fully disagree that it's only about fear, and fully disagree that this advice applies to a very large percentage of even the fairly well-educated and capable membership of LessWrong.
Superintelligence that both lets humans survive (or revives cryonauts) and doesn't enable indefinite lifespans is a very contrived package.
I don't disagree, but I think we might not agree on the reason. Superintelligence that lets humanity survive (with enough power/value to last for more than a few thousand years, whether or not individuals extend beyond 150 or so years) is pretty contrived.
There's just no reason to keep significant amounts of biological sub-intelligence around.
I don't think I agree with #3 (and I'd frame #2 as "localities of space-time gain the ability to sense and model things", but I'm not sure if that's important to our miscommunication). I think each of the observers happens to exist, and observes what it can independently of the others. Each of them experiences "you-ness", and none are privileged over the others, as far as any 3rd observer can tell.
So I think I'd say
I don't think active verbs are justified here - not necessarily "created", "placed", or "assigned".
I don't know for sure whether there is a god's eye view or "outside" observation point, but I suspect not, or at least I suspect that I can never get access to it or any effects of it, and can't think of what evidence I could find one way or the other.
I think it goes to our main point of agreement: there is ambiguity in what question is being asked. For Sleeping Beauty, the ambiguity is WHAT future experience, and for WHOM, she is calculating a probability of. I was curious if you can answer that for your universe question: whose future experience will be used to resolve which probability was appropriate to use for the prediction?
Math is math, and at the end of the day the SB problem is just a math problem.
No, it's also an identity/assumption problem. Probability is subjective - it's an agent's estimate of future experience. In the sleeping beauty case, there is an undefined and out-of-domain intuition about "will it be one or two individuals having this future experience?" We just don't have any identity quantification experience in the case of split/merge from this memory-wipe setup.
The unstated disagreement is whether it's one or two experiences that resolve the probability. This ambiguity is clear from the fact that simplifications into clearly-distinct people don't trigger the same confusions. The memory-wipe is the defining element of this problem.
And to tie this to the universe question - how will the probability be resolved? What future experience are you predicting with either interpretation?
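To make the ambiguity concrete, here's a minimal simulation sketch, assuming the standard setup (one awakening on heads, two on tails, with the memory wipe in between). The only difference between the two numbers is which set of future experiences gets counted when the probability is resolved:

```python
import random

# Standard Sleeping Beauty setup (assumed): heads -> one awakening,
# tails -> two awakenings with a memory wipe in between.
TRIALS = 100_000

heads_per_experiment = 0   # score once per coin flip ("one experience resolves it")
heads_per_awakening = 0    # score once per awakening ("each experience resolves it")
total_awakenings = 0

for _ in range(TRIALS):
    heads = random.random() < 0.5
    heads_per_experiment += heads
    for _ in range(1 if heads else 2):
        total_awakenings += 1
        heads_per_awakening += heads

print("resolved per experiment:", heads_per_experiment / TRIALS)          # ~0.5
print("resolved per awakening: ", heads_per_awakening / total_awakenings)  # ~0.333
```

Both numbers are correct answers to well-posed questions; the disagreement is entirely about which question the wager (or the universe) will actually pay out on.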
The Efficient Markets Hypothesis has plenty of exceptions, but this is too coarse-grained and distant to be one of them. Don't ask "what will happen, so I can bet based on that", ask "what do I believe that differs widely from my counterparties' beliefs". This possibility is almost certainly "priced in" to the obvious bets (TSMC).
That said, you may be more correct than the sellers of long-term puts, so maybe it'll work out. Having a theory and then examining the details and modeling the specific probabilities is exactly what you should be doing. Have you looked at prices and premia for those specific investments? A quick spreadsheet of win/loss in various future paths with as close to real numbers as possible goes a long way.
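For instance, here's the kind of quick sketch I mean, with entirely made-up premia, strikes, and path probabilities (none of these numbers come from your post or from real quotes):

```python
# Hypothetical back-of-envelope for a long-dated put (e.g. on TSMC).
# Every number below is an assumption for illustration, not a real quote.
premium = 12.0   # assumed cost per share of the put
strike = 150.0   # assumed strike price

# (scenario, assumed probability, assumed underlying price at expiry)
paths = [
    ("no disruption",       0.80, 200.0),
    ("moderate disruption", 0.15, 130.0),
    ("severe disruption",   0.05,  60.0),
]

expected_pnl = 0.0
for name, prob, price in paths:
    pnl = max(strike - price, 0.0) - premium   # per-share profit/loss of the put
    expected_pnl += prob * pnl
    print(f"{name:>20}: P&L {pnl:+7.2f} at prob {prob:.0%}")

print(f"\nexpected P&L per share: {expected_pnl:+.2f}")
```

The bet only makes sense if your probabilities and severities differ enough from whatever the option premium already implies.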
I'm not a rationalist, and I don't think I hit all your best posts and comments, just some of the mediocre ones (though now that I think about it, that COULD BE all your best, by sheer luck). Do I still get a Boo?
Oh, willful risk-taking ALSO gets a pass, or at least less-harsh judgement. The distinction is between "this is someone's intentional outcome" for genocide, and "this is an unfortunate side-effect" for x-risk.