That's a good point, though I do still think you need the right motivation. Where you're convinced you're right, it's very easy to skim past passages that are 'obviously' incorrect, and fail to question assumptions. (More generally, I do wonder what's a good heuristic for this - clearly it's not practical to constantly go back to first principles on everything; I'm not sure how to distinguish [this person is applying a poor heuristic] from [this person is applying a good heuristic to very different initial beliefs].)
Perhaps the best would be a combination: a conversation which hopefully leaves you with the thought that you might be wrong, followed by the book to allow you to go into things on your own time without so much worry over losing face or winning.
Another point on the cause-for-optimism side is that being earnestly interested in knowing the truth is a big first step, and I think that description fits everyone mentioned so far.
I'd guess that reciprocal exchanges might work better for friends: I'll read any m books you pick, so long as you read the n books I pick.
Less likely to get financial ick-factor, and it's always possible that you'll gain from reading the books they recommend.
Perhaps this could scale to public intellectuals where there's either a feeling of trust or some verification mechanism (e.g. if the intellectual wants more people to read [some neglected X], and would willingly trade their time reading Y for a world where X were more widely appreciated).
Whether or not money is involved, I'm sceptical of the likely results for public intellectuals - or in general for people strongly attached to some viewpoint. The usual result seems to be a failure to engage with the relevant points. (perhaps not attacking head-on is the best approach: e.g. the asymmetrical weapons post might be a good place to start for Deutsch/Pinker)
Specifically, I'm thinking of David Deutsch speaking about AGI risk with Sam Harris: he just ends up telling a story where things go ok (or no worse than with humans), and the implicit argument is something like "I can imagine things going ok, and people have been incorrectly worried about things before, so this will probably be fine too". Certainly Sam's not the greatest technical advocate on the AGI risk side, but "I can imagine things going ok..." is a pretty general strategy.
The same goes for Steven Pinker, who spends nearly two hours with Stuart Russell on the FLI podcast, and seems to avoid actually thinking in favour of simply repeating the things he already believes. There's quite a bit of [I can imagine things going ok...], [People have been wrong about downsides in the past...], and [here's an argument against your trivial example], but no engagement with the more general points behind the trivial example.
Steven Pinker has more than enough intelligence to engage properly and re-think things, but he just pattern-matches any AI risk argument to [some scary argument that the future will be worse] and short-circuits to enlightenment-now cached thoughts. (to be fair to Steven, I imagine doing a book tour will tend to set related cached thoughts in stone, so this is a particularly hard case... but you'd hope someone who focuses on the way the brain works would realise this danger and adjust)
When you're up against this kind of pattern-matching, I don't think even the ideal book is likely to do much good. If two hours with Stuart Russell doesn't work, it's hard to see what would.
Unless I've confused myself badly (always possible!), I think either's fine here. The | version just takes out a factor that'll be common to all hypotheses: [p(e+) / p(e-)]. (since p(Tk & e+) ≡ p(Tk | e+) * p(e+))
Since we'll renormalise, common factors don't matter. Using the | version felt right to me at the time, but whatever allows clearer thinking is the way forward.
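To make the common-factor point concrete, here's a tiny numeric sketch (all weights invented for illustration): dividing every hypothesis's weight by the same p(e+) leaves the renormalised posterior unchanged, so the & and | versions agree.

```python
# Illustrative check (weights invented): a factor common to all hypotheses,
# e.g. p(e+), cancels when we renormalise, so p(Tk & e+) and p(Tk | e+)
# yield the same posterior.

def normalise(weights):
    total = sum(weights)
    return [w / total for w in weights]

p_e_plus = 0.6                                # assumed marginal p(e+)
joint = [0.12, 0.30, 0.18]                    # p(Tk & e+) for three hypotheses
conditional = [w / p_e_plus for w in joint]   # p(Tk | e+)

for a, b in zip(normalise(joint), normalise(conditional)):
    assert abs(a - b) < 1e-12                 # same posterior either way
```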
Taking your last point first: I entirely agree on that. Most of my other points were based on the implicit assumption that readers of your post don't think something like "It's directly clear that 9 OOM will almost certainly be enough, by a similar argument".
Certainly if they do conclude anything like that, then it's going to massively drop their odds on 9-12 too. However, I'd still make an argument of a similar form: for some people, I expect that argument may well increase the 5-8 range more (than proportionately) than the 1-4 range.
On (1), I agree that the same goes for pretty-much any argument: that's why it's important. If you update without factoring in (some approximation of) your best judgement of the evidence's impact on all hypotheses, you're going to get the wrong answer. This will depend highly on your underlying model.
On the information content of the post, I'd say it's something like "12 OOMs is probably enough (without things needing to scale surprisingly well)". My credence for low OOM values is mostly based on worlds where things scale surprisingly well.
But this is a bit weird; my post didn't talk about the <7 range at all, so why would it disproportionately rule out stuff in that range?
I don't think this is weird. What matters isn't what the post talks about directly - it's the impact of the evidence provided on the various hypotheses. There's nothing inherently weird about evidence increasing our credence in [TAI by +10OOM] and leaving our credence in [TAI by +3OOM] almost unaltered (quite plausibly because it's not too relevant to the +3OOM case).
Compare the 1-2-3 coins example: learning y tells you nothing about the value of x. It's only ruling out any part of the 1 outcome in the sense that it maintains [x_heads & something independent is heads], and rules out [x_heads & something independent is tails]. It doesn't need to talk about x to do this.
You can do the same thing with the [TAI first at k OOM] case - call that Tk. Let's say that your post is our evidence e, and that e+ stands for [e gives a compelling argument against T13+]. Updating on e+ you get the following for each k:
Initial hypotheses: [Tk & e+], [Tk & e-]
Final hypothesis: [Tk & e+]
So what ends up mattering is the ratio p[Tk | e+] : p[Tk | e-]. I'm claiming that this ratio is likely to vary with k.
Specifically, I'd expect T1 to be almost precisely independent of e+, while I'd expect T8 to be correlated. My reason on T1 is that I think something radically unexpected would need to occur for T1 to hold, and your post just doesn't seem to give any evidence for/against that. I expect most people would change their T8 credence on seeing the post and accepting its arguments (if they've not thought similar things before). The direction would depend on whether they thought the post's arguments could apply equally well to ~8 OOM as to 12.
Note that I am assuming the argument ruling out 13+ OOM is as in the post (or similar). If it could take any form, then it could be a more or less direct argument for T1.
Overall, I'd expect most people who agree with the post's argument to update along the following lines (but smoothly):
T0 to Ta: low increase in credence
Ta to Tb: higher increase in credence
Tb+: reduced credence
with something like (0 < a < 6) and (4 < b < 13). I'm pretty sure a is going to be non-zero for many people.
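A toy version of this update (all numbers invented - not anyone's actual credences): multiply each prior p(Tk) by an assumed likelihood p(e+ | Tk), then renormalise. A near-independent T1 barely moves in absolute terms, while T8 and T12 absorb most of the mass T13+ loses.

```python
# Invented numbers for illustration only - not anyone's actual credences.
priors = {"T1": 0.05, "T8": 0.25, "T12": 0.40, "T13+": 0.30}
# Assumed likelihoods p(e+ | Tk): T1 ~independent of the evidence,
# T8/T12 positively correlated with it, T13+ strongly disfavoured.
likelihood = {"T1": 0.50, "T8": 0.65, "T12": 0.70, "T13+": 0.05}

unnorm = {k: priors[k] * likelihood[k] for k in priors}
total = sum(unnorm.values())
posterior = {k: v / total for k, v in unnorm.items()}

# T1 stays near its prior in absolute terms, while T8 and T12 absorb
# most of the mass that T13+ loses.
```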
[[ETA: I'm not claiming the >12 OOM mass must all go somewhere other than the <4 OOM case: this was a hypothetical example for the sake of simplicity. I was saying that if I had such a model (with zwomples or the like), then a perfectly good update could leave me with the same posterior credence on <4 OOM. In fact my credence on <4 OOM was increased, but only very slightly.]]
First I should clarify that the only point I'm really confident on here is the "In general, you can't just throw out the >12 OOM and re-normalise, without further assumptions" argument.
I'm making a weak claim: we're not in a position of complete ignorance w.r.t. the new evidence's impact on alternate hypotheses.
My confidence in any specific approach is much weaker: I haven't looked at much of the relevant data.
That said, I think the main adjustment I'd make to your description is to add the possibility of sublinear scaling of compute requirements with current techniques. E.g. if beyond some threshold meta-learning efficiency benefits are linear in compute, and non-meta-learned capabilities would otherwise scale linearly, then capabilities could scale with the square of compute - i.e. compute requirements scale with its square root (feel free to replace with a less silly example of your own).
This doesn't require "We'll soon get more ideas" - just a version of "current methods scale" with unlucky (from the safety perspective) synergies.
So while the "current methods scale" hypothesis isn't confined to 7-12 OOMs, the distribution does depend on how things scale: a higher proportion of the 1-6 region is composed of "current methods scale (very) sublinearly".
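The scaling point above is just log arithmetic; a minimal sketch (the alpha values and the 12 OOM target are assumed for illustration): if effective capability goes as compute ** alpha, then capability_ooms = alpha * compute_ooms, so the raw-compute OOMs needed for a fixed target shrink as alpha grows.

```python
# Toy arithmetic for the scaling point (alpha values assumed): taking logs
# of capability = compute ** alpha gives
#   capability_ooms = alpha * compute_ooms,
# so the raw-compute OOMs needed for a fixed capability target shrink
# as alpha grows.

def ooms_needed(target_capability_ooms, alpha):
    return target_capability_ooms / alpha

target = 12.0  # suppose 12 OOMs of effective compute suffice (per the post)
assert ooms_needed(target, alpha=1) == 12.0  # linear scaling
assert ooms_needed(target, alpha=2) == 6.0   # quadratic scaling: 1-6 region
```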
My p(>12 OOM | sublinear scaling) was already low, so my p(1-6 OOM | sublinear scaling) doesn't get much of a post-update boost (not much mass to re-assign). My p(>12 OOM | (super)linear scaling) was higher, but my p(1-6 OOM | (super)linear scaling) was low, so there's not too much of a boost there either (small proportion of mass assigned).
I do think it makes sense to end up with a post-update credence that's somewhat higher than before for the 1-6 range - just not proportionately higher. I'm confident the right answer for the lower range lies somewhere between [just renormalise] and [don't adjust at all], but I'm not at all sure where.
Perhaps there really is a strong argument that the post-update picture should look almost exactly like immediate renormalisation. My main point is that this does require an argument: I don't think it's a situation where we can claim complete ignorance over the impact on other hypotheses (and so renormalise by default), and I don't think there's a good positive argument for [all hypotheses will be impacted evenly].
Yes, we're always renormalising at the end - it amounts to saying "...and the new evidence will impact all remaining hypotheses evenly". That's fine once it's true.
I think perhaps I wasn't clear with what I meant by saying "This doesn't say anything...". I meant that it may say nothing in absolute terms - i.e. that I may put the same probability on [TAI at 4 OOM] after seeing the evidence as before.
This means that it does say something relative to other not-ruled-out hypotheses: if I'm saying the new evidence rules out >12 OOM, and I'm also saying that this evidence should leave p([TAI at 4 OOM]) fixed, I'm implicitly claiming that the >12 OOM mass must all go somewhere other than the 4 OOM case.
Again, this can be thought of as my claiming e.g.:
[TAI at 4 OOM] will happen if and only if zwomples work
There's a 20% chance zwomples work
The new 12 OOM evidence says nothing at all about zwomples
In terms of what I actually think, my sense is that the 12 OOM arguments are most significant where [there are no high-impact synergistic/amplifying/combinatorial effects I haven't thought of]. My credence for [TAI at <4 OOM] is largely based on such effects. Perhaps it's 80% based on some such effect having transformative impact, and 20% on we-just-do-straightforward-stuff. [Caveat: this is all just off the top of my head; I have NOT thought for long about this, nor looked at much evidence; I think my reasoning is sound, but specific numbers may be way off]
Since the 12 OOM arguments are of the form we-just-do-straightforward-stuff, they cause me to update the 20% component, not the 80%. So the bulk of any mass transferred from >12 OOM, goes to cases where p([we-just-did-straightforward-stuff and no strange high-impact synergies occurred]|[TAI first occurred at this level]) is high.
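The mixture update described above can be sketched in a few lines (the prior and the size of the boost are invented; only the 80/20 split comes from the text): the 12 OOM argument scales just the straightforward-scaling component, so the overall credence on [TAI < 4 OOM] barely moves.

```python
# Sketch of the mixture update above. The 80/20 split is from the text;
# the prior p_lt4 and the boost factor are invented for illustration.

p_lt4 = 0.10                        # assumed prior p(TAI < 4 OOM)
w_synergy, w_straight = 0.8, 0.2    # split given in the text
boost = 1.3                         # assumed boost to the straightforward route

# Only the straightforward-scaling fifth of the credence is scaled up.
posterior_lt4 = p_lt4 * (w_synergy + w_straight * boost)
# The overall increase is slight, matching "increased, but only very slightly".
```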
It's not entirely clear to me either. Here are a few quick related thoughts:
Of course none of this makes it any less of a problem (to the extent it's bringing down collective QoL) - but possibly a difficult problem that we'd expect to exist.
Solutions-wise, my main thought is that you'd want to find a way to channel signalling-waste efficiently into public goods - so that personal 'waste' becomes a collective advantage (hopefully).
It is also worth noting that not all 'wasteful' spending is bad for society. E.g. consider early adopters of new and expensive technology: without people willing to 'waste' money on the Tesla Roadster, getting electric cars off the ground may have been a much harder problem.
We do gain evidence on at least some alternatives, but not on all the factors which determine them. If we know something about those factors, we usually can't just renormalise: renormalising is a good default, but it amounts to an assumption of ignorance.
Here's a simple example:
We play a 'game' where you observe the outcome of two fair coin tosses, x and y. You score:
1 if x is heads
2 if x is tails and y is heads
3 if x is tails and y is tails
So your score predictions start out at:
1 : 50%
2 : 25%
3 : 25%
We look at y and see that it's heads. This rules out 3. Renormalising would get us:
1 : 66.7%
2 : 33.3%
3 : 0%
This is clearly silly, since we ought to end up at 50:50 - i.e. all the mass from 3 should go to 2. This happens because the evidence that falsified the 3-point outcome was insignificant to the question "did you score 1 point?". On the other hand, if we knew nothing about the existence of x or y, and only knew that we were starting from (1: 50%, 2: 25%, 3: 25%) and that 3 had been ruled out, it'd make sense to renormalise.
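The coin game above can be checked exactly by enumerating the four (x, y) outcomes, conditioning on y = heads, and comparing against naive renormalisation:

```python
from itertools import product
from fractions import Fraction

half = Fraction(1, 2)

def score(x, y):
    # Scoring rule from the example: 1 if x heads, else 2/3 depending on y.
    if x == "H":
        return 1
    return 2 if y == "H" else 3

# Prior over scores: enumerate the four equally likely (x, y) outcomes.
prior = {1: Fraction(0), 2: Fraction(0), 3: Fraction(0)}
for x, y in product("HT", repeat=2):
    prior[score(x, y)] += half * half
assert prior == {1: half, 2: Fraction(1, 4), 3: Fraction(1, 4)}

# Correct update: condition on y = heads.
posterior = {1: Fraction(0), 2: Fraction(0), 3: Fraction(0)}
for x, y in product("HT", repeat=2):
    if y == "H":
        posterior[score(x, y)] += half * half
total = sum(posterior.values())
posterior = {s: p / total for s, p in posterior.items()}
assert posterior == {1: half, 2: half, 3: Fraction(0)}  # 50:50, not 2:1

# Naive renormalisation over {1, 2} instead gives the silly 2/3 : 1/3.
naive = {s: prior[s] / (prior[1] + prior[2]) for s in (1, 2)}
assert naive[1] == Fraction(2, 3)
```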
In the TAI case, we haven't only learned that 12 OOM is probably enough (if we agree on that). Rather we've seen specific evidence that leads us to think 12 OOM is probably enough. The specifics of that evidence can lead us to think things like "This doesn't say anything about TAI at +4 OOM, since my prediction for +4 is based on orthogonal variables", or perhaps "This makes me near-certain that TAI will happen by +10 OOM, since the +12 OOM argument didn't require more than that".
If you have a bunch of hypotheses (e.g. "It'll take 1 more OOM," "It'll take 2 more OOMs," etc.) and you learn that some of them are false or unlikely (e.g. only a 10% chance of it taking more than 12), then you should redistribute the mass over all your remaining hypotheses, preserving their relative strengths.
This depends on the mechanism by which you assigned the mass initially - in particular, whether it's absolute or relative. If the strongest evidence for some hypotheses takes the form of specific absolute probability estimates, then you can't just renormalise when you falsify others.
E.g. consider we start out with these beliefs:
If [approach X] is viable, TAI will take at most 5 OOM; 20% chance [approach X] is viable.
If [approach X] isn't viable, 0.1% chance TAI will take at most 5 OOM.
30% chance TAI will take at least 13 OOM.
We now get this new information:
There's a 95% chance [approach Y] is viable; if [approach Y] is viable, TAI will take at most 12 OOM.
We now need to reassign most of the 30% mass we have on 13+ OOM, but we can't simply renormalise: we haven't (necessarily) gained any information on the viability of [approach X]. Our post-update [TAI <= 5 OOM] credence should remain almost exactly 20%. Increasing it to ~27% would not make any sense.
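The [approach X] / [approach Y] numbers above can be computed directly (the 5% residual mass on 13+ after the Y evidence is an assumption made to show the naive move):

```python
# Numbers from the example above; the residual 13+ mass is assumed.
p_X = 0.20               # p([approach X] viable)
p_le5_given_notX = 0.001 # 0.1% chance of TAI <= 5 OOM without X

prior_le5 = p_X * 1.0 + (1 - p_X) * p_le5_given_notX  # ~0.2008, i.e. ~20%

# The Y evidence is (by assumption) independent of X's viability, so the
# correct posterior for [TAI <= 5 OOM] is still ~20%.
posterior_le5 = prior_le5

# Naive renormalisation would instead scale up all the sub-13 mass:
prior_13plus = 0.30
residual_13plus = 0.05   # assumed mass surviving on 13+ after the Y evidence
naive_le5 = prior_le5 * (1 - residual_13plus) / (1 - prior_13plus)
# ~0.27 - the unjustified jump the example warns against.
```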
For AI timelines, we may well have some concrete, inside-view reasons to put absolute probabilities on contributing factors to short timelines (even without new breakthroughs we may put absolute numbers on statements of the form "[this kind of thing] scales/generalises"). These probabilities shouldn't necessarily be increased when we learn something giving evidence about other scenarios. (the probability of a short timeline should change, but in general not proportionately)
Perhaps if you're getting most of your initial distribution from a more outside-view perspective, then you're right.
Oh I'm not claiming that non-wasted wealth signalling is useless. I'm saying that frivolous spending and saving send very different signals - and that saving doesn't send the kind of signal tEitB focuses on.
Whether a public saving-signal would actually help is an empirical question. My guess is that it wouldn't help in most contexts where unwise spending is currently the norm, since I'd expect it to signal lack of ability/confidence. Of course I may be wrong.
When considering status, I think wealth is largely valued as an indirect signal of ability (in a broad sense). E.g. compare getting $100m by founding a business vs winning a lottery: the lottery winner gets nothing like the status bump that the business founder gets. This is another reason I think spending sends a stronger overall signal in many contexts: it (usually) says both [I had the ability to get this wealth] and [I have enough ability that I expect not to need it].