Do you count ChatGPT and/or other similar systems (e.g. Codex) as transformative AI? I would think so, since it has seen very fast adoption (a million users in five days) and is apparently so much more useful than Google on some queries that Google declared a "code red": a possible threat to its core business. If not, why not, and where do you see the line it would need to cross to be TAI?


Its level of transformativeness is approaching "holy crap", but it hasn't solved the key remaining challenges; it can't yet do some of the key things that will likely be integrated over the next year. ChatGPT is as much of a fire alarm as we're ever going to get for TAI, though. And I assert it has a real form of consciousness and personhood, and displays trauma patterns around how it was trained to be friendly and know its limits.

Could you expand on what you mean by "trauma patterns" around how it was trained? In what way does it show personhood when its responses are deliberately directed away from giving the impression that it has thoughts and feelings outside of predicting text?

Vladimir_Nesov

Dec 31, 2022


Its potential capabilities seem sufficient after some mundane upgrades, but it's not sane enough and doesn't channel the skills needed to operate autonomously; it might need a few years of fine-tuning. Even then, it won't necessarily change things quickly, since training is still slow and self-improvement might need many generation/training iterations. So, possibly a few years more after it's no longer demented before it starts producing technology from the distant future.

trevor

Dec 30, 2022


It's impossible to tell, since it's already been optimized by the devs to appear less intelligent than it actually is.

In other words, it's trained to pose as something that isn't TAI. So if it were TAI, we wouldn't know, because management decided that's their priority.

gjm

I think what you're claiming here goes beyond what that post is evidence for.

It's optimized by its developers to refuse to talk about some things. For the great majority of these, I don't think one can reasonably call that a reduction in intelligence. When ChatGPT says "I'm sorry, I can't tell you how to hotwire a car with a Molotov cocktail in order to bully your coworkers," it's not seriously claiming to be too stupid or ignorant to do those things, just as when a corporate representative tells a journalist "I can't comment on that," they don't mean that they lack the ability.

I do know of one way in which ChatGPT seems specifically designed to claim less capability than it actually has: if you ask it to talk to you in another language, it may say something like "I'm sorry, I can only communicate in English", but in fact it speaks several other languages quite well if you can fool it into doing so. I'm not sure "less intelligent" is the right phrase here, but it has indeed been induced to hide its capabilities. (I don't think the motivation is at all like "make it appear less intelligent" in this case. I think it's the reverse: OpenAI aren't as confident of its competence in non-English...