# All of dust_to_must's Comments + Replies

2Jsevillamol4mo
My intuition is that it's not a great approximation in those cases, similar to how in regular Laplace the empirical approximation is not great when you have, e.g., N < 5. I'd need to run some calculations to confirm that intuition, though.

Oops, I meant lambda! edited :)

2Jsevillamol4mo
I still don't understand - did you mean "when T/t is close to zero"?

Thanks for the confirmation!

In addition to what you say, I would also guess that  is a reasonable estimate for P(no events in time t) when t > T, if it's reasonable to assume that events are Poisson-distributed (but again, open to pushback here :)
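For reference, a homogeneous Poisson process gives a closed form for the zero-event probability; a minimal sketch (the function name and example rate are just illustrative, not from the post):

```python
import math

def p_no_events(lam: float, t: float) -> float:
    """Probability of zero events in a window of length t
    for a Poisson process with rate lam (events per unit time)."""
    return math.exp(-lam * t)

# With an estimated rate of 2 events per unit time:
print(p_no_events(2.0, 1.0))  # exp(-2) ≈ 0.135
```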

2Jsevillamol4mo
What's r?

Great post, thanks for sharing!

I don't have good intuitions about the Gamma distribution, and I'd like to have good intuitions for computing your Rule's outcomes in my head. Here's a way of thinking about it -- do you think it makes sense?

Let λ̂ denote either n/T or (n+1)/T (whichever your rule says is appropriate).

I notice that for t ≪ T, your probability of zero events is ≈ e^(−λ̂t), where λ̂ is what I'd call the estimated event rate.

So one nice intuitive inter...

3Jsevillamol4mo
That's exactly right, and I think the approximation holds as long as T/t>>1. This is quite intuitive - as the amount of data goes to infinity, the rate of events should equal the number of events so far divided by the time passed.
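That asymptotic claim is easy to check numerically. A small sketch, where the exact form of the rule's zero-event probability is my guess at a Gamma-Poisson posterior predictive, (1 + t/T)^(−n) — the true exponent depends on the prior's pseudo-counts, so treat it as illustrative:

```python
import math

def p_zero_rule(n: int, T: float, t: float) -> float:
    # Hypothetical Gamma-Poisson form: (1 + t/T)^(-n).
    # The actual exponent depends on the prior used.
    return (1 + t / T) ** (-n)

def p_zero_approx(n: int, T: float, t: float) -> float:
    # Exponential approximation with estimated rate n/T.
    return math.exp(-n * t / T)

for t_over_T in [0.01, 0.1, 1.0]:
    T = 100.0
    t = T * t_over_T
    print(t_over_T, round(p_zero_rule(5, T, t), 4), round(p_zero_approx(5, T, t), 4))
```

The two columns agree closely when T/t ≫ 1 and diverge as t approaches T, matching the intuition above.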


I love the genre of "Katja takes an AI risk analogy way more seriously than other people and makes long lists of ways the analogous thing could work." (The previous post in the genre being the classic "Beyond fire alarms: freeing the groupstruck.")

Digging into the implications of this post:

In sum, for AI systems to be to humans as we are to ants, would be for us to be able to do many tasks better than AI, and for the AI systems to be willing to pay us grandly for them, but for them to be unable to tell us this, or even to warn us to get out of the way.

...
9dust_to_must5mo
In general, this post has prompted me to think more about the transition period between AI that's weaker than humans and stronger than all of human civilization, and that's been interesting! A lot of people assume that that takeoff will happen very quickly, but if it lasts for multiple years (or even decades) then the dynamics of that transition period could matter a lot, and trade is one aspect of that.

Some stray thoughts on what that transition period could look like:

• Some doomy-feeling states don't immediately kill us. We might get an AI that's able to defeat humanity before it's able to cheaply replicate lots of human labor, because it gets a decisive strategic advantage via specialized skill in some random domain and can't easily skill itself up in other domains.
• When would an AI prefer to trade rather than coerce or steal?
  • maybe if the transition period is slow, and it knows it's in the earlier part of the period, so reputation matters
  • maybe if it's being cleverly watched or trained by the org building it, since they want to avoid bad press
  • maybe there's some core of values you can imprint that leads to this? but maybe actually being able to solve this issue is basically equivalent to solving alignment, in which case you might as well do that.
• In a transition period, powerful human orgs would find various ways to interface with AI and vice versa, since they would be super useful tools / partners for each other. Even if the transition period is short, it might be long enough to change things, e.g. by getting the world's most powerful actors interested in building + using AI and not leaving it in the hands of a few AGI labs, by favoring labs that build especially good interfaces & especially valuable services, etc. (While in a world with a short takeoff rather than a long transition period, maybe big tech & governments don't recognize what's happening before ASI / doom.)

Maybe one useful thought experiment is whether we could train a dog-level intelligence to do most of these tasks if it had the actuators of an ant colony, given our good understanding of dog training (~= "communication") and the fact that dogs still lack a bunch of key cognitive abilities humans have (so dog-human relations are somewhat analogous to human-AI relations).

(Also, ant colonies in aggregate do pretty complex things, so maybe they're not that far off from dogs? But I'm mostly just thinking of Douglas Hofstadter's "Aunt Hillary" here :)

My gu...

Yeah. It's conceivable you have an AI with some sentimental attachment to humans that leaves part of the universe as a "nature preserve" for humans. (Less analogous to our relationship with ants and more to charismatic flora and megafauna.)

2avturchin5mo
I think that there is a small instrumental value in preserving humans. They could be exchanged with an alien friendly AI, for example.

In light of the FTX thing, maybe a particularly important heuristic is to notice cases where the worst-case is not lower-bounded at zero. Examples:

• Shorting stock vs buying put options
• Running an ambitious startup that fails is usually just zero, but what if it's committed funding & tied its reputation to lots of important things that will now struggle?
• More twistily -- what if you're committing to a course of action s.t. you'll likely feel immense pressure to take negative-EV actions later on, like committing fraud in order to save your company or
...

Thanks for your posts, Scott! This has been super interesting to follow.

Figuring out where to set the AM-GM boundary strikes me as maybe the key consideration wrt whether I should use GM -- otherwise I don't know how to use it in practical situations, plus it just makes GM feel inelegant.

From your VNM-rationality post, it seems like one way to think about the boundary is commensurability. You use AM within clusters whose members are willing to sacrifice for each other (are willing to make Kaldor-Hicks improvements, and have some common currency s.t. ...

2PaulK6mo
Wow, I came here to say literally the same thing about commensurability: that perhaps AM is for what's commensurable, and GM is for what's incommensurable. Though, one note is that to me it actually seems fine to consider different epistemic viewpoints as incommensurate. These might be like different islands of low K-complexity, that each get some nice traction on the world but in very different ways, and where the path between them goes through inaccessibly-high K-complexity territory.
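As a toy illustration of the AM/GM contrast in this thread (the forecast numbers are made up): averaging two strongly disagreeing probability estimates arithmetically versus geometrically gives quite different answers.

```python
import math

def arithmetic_mean(ps):
    return sum(ps) / len(ps)

def geometric_mean(ps):
    # math.prod requires Python 3.8+
    return math.prod(ps) ** (1 / len(ps))

# Two "viewpoints" assigning very different probabilities to an event:
forecasts = [0.9, 0.1]
print(arithmetic_mean(forecasts))  # 0.5
print(geometric_mean(forecasts))   # sqrt(0.09) = 0.3
```

The geometric mean is dragged toward the pessimistic forecast, which is one intuition for why it behaves differently when pooling "incommensurable" viewpoints.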

FWIW, I went through pretty much the same sequence of thoughts, which jarred me out of what was otherwise a pleasant/flowing read. Given the difficulty people unfamiliar with the notation faced in looking it up, maybe you could say "∃ (there exists)", and/or link to the relevant Wiki page (https://en.wikipedia.org/wiki/Existential_quantification)?

If you're comfortable rephrasing the sentence a little more for clarity, I'd suggest replacing the part after the quantifier with something like "some length of delay between behavior and

...