Sorry, but aren't we in a fast-takeoff world by the point of WBE? What's the disjunctive world with WBE but no recursive self-improvement?
He posted on Twitter a request to talk to people who feel strongly here.
Yeah, re-reading, I realise I was unclear. Given your claim "by the time we get to 2000 in that, such AGIs will be automating huge portions of AI R&D," I'm asking the following:
Hopefully that made the questions clearer.
Sorry for a slightly dumb question, but in your part of the table you set 2000 as the year before the singularity, and your explanation is that 2000-second tasks jump to singularity. Is your model of fast takeoff then contingent on recursive self-improvement being so much more effective that "more special sauce for intelligence" becomes somewhat redundant as a crux? I'm having trouble envisioning a 2000-second task + more scaling and tuning --> singularity.
An additional question: what is your model of falsification for, say, a 25-second task vs. a 100-second task in 2025? Reading your old vignettes, it seems like you really nailed the Diplomacy AI part.
Also, slightly pedantic, but there's a typo in 2029 in Richard's guess.