Among constant-relative-risk-aversion (CRRA) utility functions, the Kelly criterion is optimal iff your utility is logarithmic. For proof, see Samuelson (1971), The "Fallacy" of Maximizing the Geometric Mean in Long Sequences of Investing or Gambling.
I think a fixed-proportion betting rule (i.e. "bet P% of your bankroll" for fixed P) can only be optimal for CRRA utility functions, because if relative risk aversion varies with wealth then the betting rule must also vary with wealth. But I'm not sure how to prove that.
ETA: Actually I think it shouldn't be too hard to prove that using the definition of CRRA. You could do something like this: assume a fixed-proportion betting rule (for some constant P) is optimal, then calculate the implied relative risk aversion and show that it must be constant.
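For what it's worth, here's a sketch of the easier forward direction (CRRA implies the optimal betting fraction doesn't depend on wealth), using a simple binary bet; the notation (f, b, p, γ) is mine, not from Samuelson.

```latex
% Setup: stake a fraction f of wealth w; win net odds b with probability p,
% lose the stake with probability q = 1 - p.
% CRRA utility: u(w) = w^{1-\gamma}/(1-\gamma) for \gamma \neq 1, and u(w) = \log w for \gamma = 1.
\begin{align*}
\mathbb{E}[u(W)]
  &= p\,u\!\big(w(1+fb)\big) + q\,u\!\big(w(1-f)\big) \\
  &= w^{1-\gamma}\,\frac{p(1+fb)^{1-\gamma} + q(1-f)^{1-\gamma}}{1-\gamma}
     && (\gamma \neq 1) \\
  &= \log w + p\log(1+fb) + q\log(1-f)
     && (\gamma = 1).
\end{align*}
% Wealth enters only as a positive factor w^{1-\gamma} or an additive constant \log w,
% so \arg\max_f is independent of w: the optimal rule is "bet a fixed fraction of your
% bankroll." For \gamma = 1, the first-order condition gives the Kelly fraction
% f^* = (pb - q)/b. The ETA above describes the converse: assume a fixed-proportion
% rule is optimal and show the implied relative risk aversion -w\,u''(w)/u'(w) is constant.
```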
There is a harder second-order question of "what sorts of videos maximize watch time, and will those be bad for my child?" Hastings's evidence points toward "yes", but I don't think the answer is obvious a priori. (The things YouTube thinks I want to watch are almost all good or neutral for me; YMMV.)
For posterity, I would just like to make it clear that if I were ever cloned, I would treat my clone as an equal, and I wouldn't make him do things I wouldn't do—in fact I wouldn't try to make him do anything at all; we'd make decisions jointly.
(But of course my clone would already know that, because he's me.)
(I've spent an unreasonable amount of time thinking about how to devise a fair decision procedure between me and my clone to allocate tasks and resources in a perfectly egalitarian way.)
FWIW I think Habryka was right to call out that some parts of my comment were bad, and the scolding got me to think more carefully about it.
Are you also lifting weights? I'm quite confident that you can gain muscle while taking retatrutide if you lift weights.
IIRC GLP-1 agonists cause more muscle loss than "old-fashioned" dieting, but the effect of resistance training far outweighs the extra muscle loss.
My question is, how do you make AI risk known while minimizing the risk of paradoxical impacts? "Never talk about it" is the wrong answer, but I expect there's a way to do better than we've done so far. This seems like an important thing to try to understand.
I don't do this on purpose, but I feel like 90% of what I write about AI is something Eliezer already said at some point.
Yeah I pretty much agree with what you're saying. But I think I misunderstood your comment before mine, and the thing you're talking about was not captured by the model I wrote in my last comment; so I have some more thinking to do.
I didn't mean "can be trusted to take AI risk seriously" as "indeterminate trustworthiness but cares about x-risk"; I meant something more like "the conjunction of trustworthy + cares about x-risk".
I plan on writing something longer about this in the future, but people use "alignment" to refer to two different things. Basically, thing 1 is "ASI solves ethics and then behaves ethically" and thing 2 is "ASI does what people want it to do". Approximately nobody is working on thing 1, only on thing 2, and thing 2 doesn't get us a solution to non-alignment problems.