Understood, and agreed, but I'm still left wondering about my question as it pertains to the first sigmoidal curve, the one representing STEM-capable AGI. Not trying to be nitpicky; I'm just wondering how we should reason about the likelihood that the plateau of that first curve is not already far above the current limit of human capability.
One reason to think so might be that irreducible complexity makes things very hard for us at around the same level at which it would make them hard for a (first-gen) AGI. But a reason to think the opposite would be that we ha...
As a result, rather than indefinite and immediate exponential growth, I expect real-world AI growth to follow a series of sigmoidal curves, each eventually plateauing before different types of growth curves take over to increase capabilities based on different input resources (with all of this overlapping).
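To make that shape concrete, here is a minimal sketch of the picture being described: several logistic curves with staggered onsets, each plateauing at its own ceiling while the next takes over, so the overlapping sum grows in stages rather than as one indefinite exponential. The specific ceilings, rates, and midpoints below are made-up illustrative values, not anything claimed in the comment.

```python
import math

def logistic(t, ceiling, rate, midpoint):
    """A single sigmoidal capability curve: slow start, rapid growth, plateau."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def total_capability(t, curves):
    """Overlapping sigmoids: each new curve's growth phase begins before
    the previous one has fully plateaued, so the total rises in stages."""
    return sum(logistic(t, *params) for params in curves)

# Three illustrative curves, each driven by a different input resource,
# with staggered midpoints so their growth phases overlap.
curves = [
    (1.0, 1.0, 5.0),   # (ceiling, rate, midpoint)
    (2.0, 0.8, 12.0),
    (4.0, 0.6, 20.0),
]

early = total_capability(0, curves)
late = total_capability(40, curves)
# At late times the total approaches the sum of the ceilings (1 + 2 + 4 = 7),
# i.e. growth plateaus unless a new curve with a higher ceiling kicks in.
```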
Hi Andy - how are you gauging the likely heights of the AI-capability sigmoidal plateaus relative to the current ceiling of human capability? Unless I'm misreading your position, it seems like you are presuming that the sigmoidal curves will...
Might LLMs help with this? You could have a 4.3 million word conversation with an LLM (given longer context windows than are currently available), which could then, in parallel, have similarly long conversations with arbitrarily many members of the organization, addressing each member's specific confusions individually and perhaps escalating novel confusions back to you for clarification. In practice, members of the organization may not engage for long enough unless the LLMs are entertaining enough, but perhaps that lack of seductiveness is temporary.
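As a rough sketch of what that workflow might look like, here is a toy version of the fan-out-and-escalate pattern: one conversation per member run in parallel, with confusions the shared context already covers answered directly and novel ones routed back to the leader. Everything here is a hypothetical placeholder; `llm_respond` is a canned function standing in for a real model call over the long shared context.

```python
import concurrent.futures

def llm_respond(shared_context, member_question, known_confusions):
    """Stand-in for an LLM turn. A real system would query a model whose
    context window holds the leader's full multi-million-word conversation."""
    if member_question in known_confusions:
        return ("answered", known_confusions[member_question])
    # Novel confusion: escalate to the leader for clarification.
    return ("escalate", member_question)

def run_org_conversations(shared_context, member_questions, known_confusions):
    """Run one conversation per organization member in parallel;
    return per-member answers plus the list of escalated questions."""
    answers, escalations = {}, []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {
            pool.submit(llm_respond, shared_context, question, known_confusions): member
            for member, question in member_questions.items()
        }
        for fut in concurrent.futures.as_completed(futures):
            member = futures[fut]
            status, payload = fut.result()
            if status == "escalate":
                escalations.append((member, payload))
            else:
                answers[member] = payload
    return answers, escalations
```

The design choice worth noting is that only genuinely novel confusions consume the leader's attention; everything already covered by the shared context is handled in parallel.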