MichaelDickens

Comments

What Is The Basin Of Convergence For Kelly Betting?
MichaelDickens · 17h · 110

For constant-relative-risk-aversion (CRRA) utility functions, the Kelly criterion is optimal iff you have logarithmic utility. For proof, see Samuelson (1971), The "Fallacy" of Maximizing the Geometric Mean in Long Sequences of Investing or Gambling.

I think only CRRA utility functions have a fixed-proportion betting rule (i.e., "bet P% of your bankroll" for some fixed P), because if relative risk aversion varies with wealth, then the optimal betting fraction must also vary. But I'm not sure how to prove that.

ETA: Actually, I think it shouldn't be too hard to prove this from the definition of CRRA. You could assume a fixed-proportion betting rule with some constant P is optimal, calculate the implied relative risk aversion, and show that it must be constant.
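
A minimal sketch of that argument, under the standard binary-bet setup (win at odds b with probability p, bet a fraction f of current wealth w), skipping regularity conditions:

% Sketch only: a wealth-independent optimal betting fraction forces CRRA.
% Expected utility of betting a fraction f of wealth w:
\[
  \max_{f}\; p\,u\big(w(1+fb)\big) + (1-p)\,u\big(w(1-f)\big)
\]
% First-order condition at the optimal fraction f^*:
\[
  p\,b\,u'\big(w(1+f^{*}b)\big) = (1-p)\,u'\big(w(1-f^{*})\big)
  \quad\Longleftrightarrow\quad
  \frac{u'\big(w(1+f^{*}b)\big)}{u'\big(w(1-f^{*})\big)} = \frac{1-p}{p\,b}.
\]
% If the same f^* is optimal at every wealth level w (and across a rich enough
% family of bets (p, b)), then u'(\lambda w)/u'(\mu w) must not depend on w,
% which forces u'(w) = c\,w^{-\gamma}, i.e. constant relative risk aversion:
\[
  R(w) \equiv -\frac{w\,u''(w)}{u'(w)} = \gamma \quad \text{for all } w.
\]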

Hastings's Shortform
MichaelDickens · 1d · 40

There is a harder second-order question of "what sorts of videos maximize watch time, and will those be bad for my child?" Hastings's evidence points toward "yes", but I don't think the answer is obvious a priori. (The things YouTube thinks I want to watch are almost all good or neutral for me; YMMV.)

Knowing Whether AI Alignment Is a One-Shot Problem Is a One-Shot Problem
MichaelDickens · 1d · 20

Oops! Fixed.

Your Clone Wants to Kill You Because You Assumed Too Much
MichaelDickens · 2d · 20

For posterity, I would just like to make it clear that if I were ever cloned, I would treat my clone as an equal, and I wouldn't make him do things I wouldn't do. In fact, I wouldn't try to make him do anything at all; we'd make decisions jointly.

(But of course my clone would already know that, because he's me.)

(I've spent an unreasonable amount of time thinking about how to devise a fair decision procedure between me and my clone to allocate tasks and resources in a perfectly egalitarian way.)

GradientDissenter's Shortform
MichaelDickens · 4d · 40

FWIW I think Habryka was right to call out that some parts of my comment were bad, and the scolding got me to think more carefully about it.

Brendan Long's Shortform
MichaelDickens · 4d* · 22

Are you also lifting weights? I'm quite confident that you can gain muscle while taking retatrutide if you lift weights.

IIRC GLP-1 agonists cause more muscle loss than "old-fashioned" dieting, but the effect of resistance training far outweighs the extra muscle loss.

AI safety undervalues founders
MichaelDickens · 4d · 52

My question is, how do you make AI risk known while minimizing the risk of paradoxical impacts? "Never talk about it" is the wrong answer, but I expect there's a way to do better than we've done so far. This seems like an important thing to try to understand.

Tapatakt's Shortform
MichaelDickens · 5d · 20

I don't do this on purpose, but I feel like 90% of what I write about AI is something Eliezer already said at some point.

GradientDissenter's Shortform
MichaelDickens · 6d · 52

Yeah, I pretty much agree with what you're saying. But I think I misunderstood your comment before mine, and the thing you're talking about was not captured by the model I wrote in my last comment, so I have some more thinking to do.

I didn't mean "can be trusted to take AI risk seriously" as "indeterminate trustworthiness but cares about x-risk"; I meant it more like "the conjunction of trustworthy and cares about x-risk".

GradientDissenter's Shortform
MichaelDickens · 6d* · 40

ETA: I think this comment is missing some important things, and I endorse Habryka's reply more than I endorse this comment.

> Like, the most important thing to estimate when evaluating a political candidate is their trustworthiness and integrity! It's the thing that would flip the sign on whether supporting someone is good or bad for the world.

I agree that this is an important thing that deserved more consideration in Eric's analysis (I wrote a note about it on Oct 22 but then forgot to include it in my post yesterday). But I don't think it's too hard to put into a model (although it's hard to find the right numbers to use). The model I wrote down in my note is:

  • 30% chance Bores would oppose an AI pause / strong AI regulations (b/c it's too "anti-innovation" or something)
  • 40% chance Bores would support strong regulations
  • 30% chance he would vote for strong regulations but not advocate for them
  • 90% chance Bores would support weak/moderate AI regulations

My guess is that 2/3 of the EV comes from strong regulations and 1/3 from weak regulations (I came up with a justification for this earlier today, but it's too complicated to fit in this comment), so these considerations reduce the EV to 37% of the original estimate (i.e., they roughly divide the EV by 3).
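
A sketch of one way these numbers could combine to give roughly 37%. The combination rule here is an assumption (opposing strong regulations cancels an equal probability of supporting them, and voting without advocating contributes nothing to the strong-regulation share); the actual calculation isn't spelled out above:

# Sketch: one possible way to combine the probabilities above into ~37%.
# Assumed (not stated in the comment): "oppose" cancels an equal probability
# of "support", and "vote but not advocate" contributes nothing to the
# strong-regulation share of the EV.
p_oppose_strong = 0.30   # opposes an AI pause / strong AI regulations
p_support_strong = 0.40  # supports strong AI regulations
p_support_weak = 0.90    # supports weak/moderate AI regulations

ev_share_strong = 2 / 3  # guessed share of EV from strong regulations
ev_share_weak = 1 / 3    # guessed share of EV from weak regulations

net_strong = p_support_strong - p_oppose_strong  # 0.10
ev_multiplier = ev_share_strong * net_strong + ev_share_weak * p_support_weak

print(f"EV retained: {ev_multiplier:.0%}")  # EV retained: 37%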

FWIW I wouldn't say "trustworthiness" is the most important thing, more like "can be trusted to take AI risk seriously", and my model is more about the latter. (A trustworthy politician who is honest about the fact that they don't care about AI safety will not be getting any donations from me.)

Posts

Knowing Whether AI Alignment Is a One-Shot Problem Is a One-Shot Problem · 31 karma · 3d · 2 comments
Epistemic Spot Check: Expected Value of Donating to Alex Bores's Congressional Campaign · 56 karma · 7d · 1 comment
Things I've Become More Confident About · 7 karma · 18d · 0 comments
Outlive: A Critical Review · 66 karma · 5mo · 4 comments
How concerned are you about a fast takeoff due to a leap in hardware usage? [Question] · 9 karma · 5mo · 7 comments
Why would AI companies use human-level AI to do alignment research? · 29 karma · 7mo · 8 comments
What AI safety plans are there? · 17 karma · 7mo · 3 comments
Retroactive If-Then Commitments · 8 karma · 10mo · 1 comment
A "slow takeoff" might still look fast · 5 karma · 3y · 3 comments
How much should I update on the fact that my dentist is named Dennis? [Question] · 2 karma · 3y · 3 comments
MichaelDickens's Shortform · 2 karma · 4y · 141 comments