How Should Canada Negotiate with Trump on Tariffs?
What’s the wisest way for Canada to negotiate with Trump-era tariffs and why? I’d like to hear from folks with a background in economics, game theory, or negotiation strategy.
I have thought a lot about anthropics.
In an infinite universe, there are infinitely many identical observers. You cannot define a uniform probability distribution over a countably infinite sample space, and appealing to infinite cardinalities does not help, because the sets you want to compare are the same size. So you cannot ask for the probability that it is Monday or Tuesday upon flipping tails, because there are infinitely many observers in both cases.
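To see why, here is the standard obstruction, stated for a countably infinite set of observers (a textbook fact about probability, not anything special to anthropics):

```latex
% Suppose a uniform distribution gave each of countably many
% observers o_1, o_2, \ldots the same probability p.
% Countable additivity would then force
\Pr\Big(\bigcup_{n=1}^{\infty}\{o_n\}\Big)
  \;=\; \sum_{n=1}^{\infty} p \;=\;
\begin{cases}
0 & \text{if } p = 0,\\
\infty & \text{if } p > 0,
\end{cases}
% and neither value equals 1, so no such uniform distribution exists.
```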
Do you agree that anthropic questions like these are meaningless if we live in an infinite universe?
This is not the same as CEV (coherent extrapolated volition). CEV involves the AI extrapolating a user’s idealized future values and acting to implement them, overriding current preferences if needed, whereas my model forbids that. In my framework, the AI never drives or predicts value change; it simply provides accurate world models and optimal plans based on the user’s current values, which only the user can update.
CEV also assumes convergence; my model protects normative autonomy and allows value diversity to persist.
I want to retain the ability to update my values over time, but I don’t want those updates to be the result of manipulative optimization by a superintelligence. Instead, the superintelligence should supply me with accurate empirical data and valid inferences, while leaving the choice of normative assumptions—and thus my overall utility function and its proxy representation (i.e., my value structure)—under my control. I also want to engage in value discussions (with either humans or AIs) where the direction of value change is symmetric: both participants have roughly equal probability of updating, so that persuasive force isn’t one-sided. This dynamic can be formally modeled as two agents with evolving objectives or changing…
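As a minimal sketch of that last idea, here is one toy way to code the symmetric-update dialogue; the class names, the linear update rule, and the fair-coin choice of who updates are all illustrative assumptions of mine, not part of the proposal above.

```python
import random

class Agent:
    """A toy agent whose value vector only it may update."""
    def __init__(self, values):
        self.values = list(values)

    def consider(self, other, rate=0.1):
        # Nudge each of this agent's values toward the interlocutor's.
        # The update is applied by the agent to itself; nothing
        # external overwrites self.values directly.
        self.values = [v + rate * (o - v)
                       for v, o in zip(self.values, other.values)]

def symmetric_dialogue(a, b, rounds=100, p_update=0.5):
    """Each round a fair coin decides who updates, so in expectation
    neither side exerts more persuasive force than the other."""
    for _ in range(rounds):
        if random.random() < p_update:
            a.consider(b)
        else:
            b.consider(a)

a = Agent([1.0, 0.0])
b = Agent([0.0, 1.0])
symmetric_dialogue(a, b)
print(a.values, b.values)  # both vectors drift toward a shared middle
```

The point of `p_update = 0.5` is that neither participant's values dominate the exchange in expectation, which is the symmetry condition described above.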
Of course it is, but I'm a functionalist.
I'd like to offer a counterargument that, I'll admit, can get into some gnarly philosophical territory quite quickly.
Premise 1: We are not simulated minds—we are real, biological observers.
Premise 2: We can treat ourselves as a random sample drawn from the set of all conscious minds, with each mind weighted by some measure—i.e., a way of assigning significance or “probability” to different observers. The exact nature of this measure is still debated in cosmology and philosophy of mind.
Inference: If we really are typical observers (as Premise 2 assumes), and yet we are not simulated (as Premise 1 asserts), then the measure must assign significantly greater weight to real biological observers than…
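One hedged way to make that inference explicit, writing \(\mu\) for the (debated) measure; the notation here is mine, not the original poster's:

```latex
% Let mu(sim) and mu(bio) denote the total measure assigned to
% simulated and to biological observers, respectively.
% Premise 2 (typicality) gives
\Pr(\text{we are simulated})
  \;=\; \frac{\mu(\mathrm{sim})}{\mu(\mathrm{sim}) + \mu(\mathrm{bio})},
% and Premise 1 says this event does not obtain for us, which is
% only unsurprising if the ratio is small, i.e. if
\mu(\mathrm{bio}) \;\gg\; \mu(\mathrm{sim}).
```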
This sounds like another case where the logic says it's right but it's probably not, and I don't know why.
Also, does this imply that a technologically mature civilization could plausibly create uncountably many conscious minds? What about other cardinalities? This, I suppose, could have weird implications for the measure problem in cosmology.
I'm not sure if I understand, but sounds interesting. If true, does this have any implications for ethics more broadly, or are the implications confined only to our interpretation of computations?
Maybe. But what do you mean by "you can narrow nothing down other than pure logic"?
I interpret the first part—"you can narrow nothing down"—to mean that the simulation argument doesn't help us make sense of reality. But I don't understand the second part: "other than pure logic." Can you please clarify this statement?
Thank you, I feel inclined to accept that for now.
But I'm still not sure, and I'll have to think more about this response at some point.
Edit: I'm still on board with what you're generally saying, but I feel skeptical of one claim:
> It seems to me the main ones produce us via base physics, and then because there was an instance in base physics, we also get produced in neighboring civilizations' simulations of what other things base physics might have done in nearby galaxies so as to predict what kind of superintelligent aliens they might be negotiating with before they meet each other.
My intuition tells me there will probably be superior methods of…
I think I understand your point. I agree with you: the simulation argument relies on the assumption that physics and logic are the same inside and outside the simulation. In my eyes, that means we may either accept the argument's conclusion or discard that assumption. I'm open to either. You seem to be, too—at least at first. Yet, you immediately avoid discarding the assumption for practical reasons:
> If we have no grasp on anything outside our virtualized reality, all is lost.
I agree with this statement, and that's my fear. However, you don't seem bothered by that fact. Why not? The strangest thing is that I think you agree with my claim:…
In his essay Epistemic Learned Helplessness, LessWrong contributor Scott Alexander wrote,
> Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
I can't help but agree with Scott Alexander about the simulation argument. No one has ever refuted it, in my book. However, the argument carries a dramatic and, in my eyes, frightening implication for our existential situation.
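For readers who want the quantitative core, Bostrom's original 2003 paper expresses (up to notation) the fraction of observers with human-type experiences who are simulated as:

```latex
f_{\mathrm{sim}} \;=\; \frac{f_p \, f_I \, \bar{N}}{f_p \, f_I \, \bar{N} + 1},
% where f_p is the fraction of civilizations that reach a posthuman
% stage, f_I the fraction of those willing to run ancestor
% simulations, and \bar{N} the average number of such simulations
% each runs. Unless the product f_p f_I \bar{N} is very small,
% f_sim is close to 1; hence the trilemma.
```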
Joe Carlsmith's essay, Simulation Arguments, clarified some nuances, but ultimately the argument's…