I'd also like an EPUB version that is as stripped-down as possible. I guess it might be necessary to prepend the character's name to know who is saying what, but I find the rest very distracting and hard to read.
"Most of us will find it easy to believe that nuclear war becomes more probable if Putin is massively defeated"
How do you know? What do you mean by "Putin is massively defeated"? Do you mean Putin is ousted, or just that Putin's attack on Ukraine ends up looking bad? It is hard to know what will happen then. Depending on what happens, the danger of future nuclear war may either drop or rise.
Who will draw what conclusions is hard to judge (particularly for those in positions of influence within the Russian elite).
I think that Ukraine will not become a NATO country regardless of what happens. This seems to be the general consensus.
I was thinking specifically of maximizing the value function (desires) across the agents interacting with each other. Or, more specifically, adapting the system so that it self-maintains the "maximizing the value function (desires) across the agents" property.
An example is an economic system which seeks to maximize total welfare. Current systems, though, don't maintain themselves: more powerful agents take over the control mechanisms (or adjust the market rules) so that they are favoured (lobbying, cheating, ignoring the rules, undermining enforcement). Similar problems occur in other types of coalitions.
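To make the capture problem concrete, here is a toy sketch (my own illustration, not from any real system): a mechanism splits a fixed budget to maximize total welfare, and the outcome shifts once one powerful agent gets its utility weighted more heavily, i.e. once it has captured the control mechanism. The utility function and weights are made-up assumptions.

```python
# Toy welfare-maximizing mechanism and what "capture" does to it.
from itertools import product

def utility(share):
    # Hypothetical diminishing-returns utility: square root of the share received.
    return share ** 0.5

def best_allocation(weights, budget=9):
    # Brute-force search over integer allocations of the budget, one share per agent.
    best, best_welfare = None, float("-inf")
    for alloc in product(range(budget + 1), repeat=len(weights)):
        if sum(alloc) != budget:
            continue
        welfare = sum(w * utility(s) for w, s in zip(weights, alloc))
        if welfare > best_welfare:
            best, best_welfare = alloc, welfare
    return best

print(best_allocation([1, 1, 1]))  # equal weights: even split (3, 3, 3)
print(best_allocation([3, 1, 1]))  # captured weights: lopsided (7, 1, 1)
```

The mechanism itself is unchanged in both runs; only the weights differ, which is the point: whoever controls the weights controls the outcome.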
Postulating a more powerful agent that enforces this maximization property (an aligned super AGI) is cheating, unless you can describe how this agent works and self-maintains both itself and this goal.
However, finding a system of agents that self-maintains this property with no "super agent" might lead to solutions for AGI alignment, or might prevent the creation of such a misaligned agent.
I read a while ago that the design/theory of corruption-resistant systems is an area that has not received much research.
How do you know they don't generalize? As far as I know, no one has solved these problems for coalitions of agents, whether human, theoretical, or otherwise.
What do you mean by "technical" here?
I think solving the alignment problem for governments, corporations, and other coalitions would probably help solve the alignment problem for AGI.
I guess you are saying that even if we could solve the above alignment problems, it would still not go all the way to solving it for AGI? What particular gaps are you thinking of?
Yes, the positive reframing step from TEAM (a version of CBT) / "Feeling Great" by Dr. David Burns is missing from the above, as is the "Magic Dial" step.
A bit odd, as I would have guessed that the above lists were taken directly from "Feeling Great" or from his website.
An agent typically maximizes their expected utility, i.e. they make the choices under their control that lead to the highest expected utility.
If they predict that their efforts to solve aging and mitigate other risks to themselves have minimal effect on expected utility, they will spend most of their time playing Factorio while they can. This leads to the maximum expected utility.
If they spend all their time trying not to die, and then they die anyway, their total utility will be zero.
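A toy expected-utility calculation makes the trade-off explicit (all numbers are my own, purely illustrative): the agent chooses between grinding on survival and enjoying leisure now.

```python
# Compare two strategies by expected utility.
def expected_utility(p_survive, future_utility, utility_now):
    # Utility enjoyed now, plus future utility weighted by survival probability.
    return utility_now + p_survive * future_utility

# Assumption: working on survival barely moves p_survive but costs all leisure time.
work_on_survival = expected_utility(p_survive=0.11, future_utility=100, utility_now=0)
play_factorio = expected_utility(p_survive=0.10, future_utility=100, utility_now=5)

print(play_factorio > work_on_survival)  # True: leisure wins when the effect is small
```

If the survival work instead moved p_survive substantially (say to 0.5), the comparison would flip, which is why the prediction about how much effect the effort has is doing all the work in the argument.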
I think you can compare modern chess programs with each other to evaluate this.
Some comparisons have been made between different modern chess engines in TCEC (the Top Chess Engine Championship).
Stockfish is particularly well adapted to using lots of cores: it has a much larger advantage over the other modern programs when many CPU cores are available, as its developers have optimized hash-table contention very well.
If you compare NNUE Stockfish to classic Stockfish, there is also the question of how much strength Stockfish NNUE loses when playing on hardware that does not support SIMD.
Similarly, you can compare Leela Chess with and without a GPU.
On the old vs new:
Old chess engines will have been optimized to use much less memory, whereas modern chess engines use a very large hash table.
I.e. 3 ("Software that is better-adapted to new, bigger computers") is quite a large component.
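For readers unfamiliar with what that hash table does, here is a minimal sketch (my own illustration, far simpler than a real engine's implementation) of a transposition table: search results are cached by position key so repeated positions are not re-searched, and a bigger table means fewer evictions.

```python
# Minimal fixed-size transposition table with an "always replace" scheme.
TABLE_SIZE = 2 ** 20  # modern engines use gigabytes; old engines used far less

table = [None] * TABLE_SIZE  # slots indexed by hash of the position key

def store(position_key, score):
    # Real engines prefer depth-based replacement; this always overwrites.
    table[position_key % TABLE_SIZE] = (position_key, score)

def probe(position_key):
    entry = table[position_key % TABLE_SIZE]
    # An index collision can land on another position's entry, so verify the full key.
    if entry is not None and entry[0] == position_key:
        return entry[1]  # cached evaluation
    return None

store(0xDEADBEEF, 42)
print(probe(0xDEADBEEF))  # 42
print(probe(0x12345678))  # None (not cached)
```

Giving an old engine a modern-sized table (or denying a modern engine one) is one way to estimate how much of the strength gap is memory rather than algorithms.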
I am not sure what exactly you mean by "predicting". You can tell the donor a different amount than you are internally expecting to obtain.