Vaatzes

Comments
On Pruning an Overgrown Garden
Vaatzes · 3mo · 30

Quantity has a quality all its own. I think you're absolutely correct, and you point out a good reason why self-moderation can become insufficient once this "critical mass" is reached. My advantage is that ours is not a forum-based platform but mostly a chat, so it is much more likely that at least one moderator sees each message, or at least the most obviously problematic ones. Would you say that, as quantity increases, effective moderation becomes key?

How concerned are you about a fast takeoff due to a leap in hardware usage?
Vaatzes · 3mo · 20

To me, the idea of "fully human-level capable AI" is a double myth. It works only insofar as we do not try to ascribe concrete capabilities to the model. Anything human-level that can be parallelized is by definition superhuman; that is why it is a myth to me in the first place. Additionally, human-level capabilities make very little sense to me as a property of a model. Is it a brain simulation that feels boredom and longs for human rights? Or is it "just" a very general problem-solving tool, something akin to an actually accurate vision-language model? This is a categorical difference.

Accurate, general problem-solving tools are far more likely and, in the wrong hands, can probably cause far more harm than a "virtual human" ever could. On the other hand, the simulated brain raises many more ethical concerns, I would say.

To actually answer the question, I'm not concerned about a fast takeoff. There are multiple reasons for this:

  • Anything remotely close in performance to the hypothesized $10 billion model will also come with a massive incentive to be deployed, which would serve as a heads-up.
  • As far as I'm aware, almost all systems of intelligence that we have built thus far do scale, but they scale with strongly diminishing returns; a "mere" increase in training time, model size, and data availability & quality is likely insufficient (see the toy sketch after this list).
  • Then there remains the entire discussion of whether the model can actually act in the world, and of its goals deviating to the point of being a threat to someone, let alone to everyone.
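
To make the diminishing-returns point concrete, here is a minimal sketch assuming a Chinchilla-style power-law loss curve, L(N, D) = E + A/N^α + B/D^β. All constants are made up for illustration, not fitted to any real model:

```python
# Illustrative only: a Chinchilla-style power-law loss curve,
# L(N, D) = E + A / N**alpha + B / D**beta, with made-up constants.
# The point: each 10x increase in parameters and data shaves off
# less absolute loss than the previous 10x did.

E, A, B = 1.7, 400.0, 410.0   # hypothetical irreducible loss and scale factors
alpha, beta = 0.34, 0.28      # hypothetical scaling exponents

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss for n_params parameters trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

prev = None
for step in range(5):
    n, d = 1e9 * 10**step, 2e10 * 10**step   # scale params and data 10x per step
    cur = loss(n, d)
    note = "" if prev is None else f"  (improvement over previous step: {prev - cur:.3f})"
    print(f"{n:.0e} params, {d:.0e} tokens -> loss {cur:.3f}{note}")
    prev = cur
```

Under these (hypothetical) exponents, every further 10x step in parameters and data buys a smaller absolute improvement than the one before it, which is what I mean by scaling with strongly diminishing returns.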

Yes, until we set rigorous terms and prove otherwise, there certainly remains a possibility. But compared to "mundane" worries like climate change and socioeconomic inequality, this potential existential threat does not even register.

Posts

On Pruning an Overgrown Garden · 3mo