Let's assume that GPT-5 or GPT-7 is developed and distributed to all, on the basis that the technology is unsuppressible. Everyone creates the smartest characters they can to talk to. This will be akin to mining, because it's not truly generating an intelligence but scraping one together from all the data it's been trained on - and therefore you need to find the smartest character that the language matrix can effectively support (perhaps you'll build your own). Nevertheless, lurking in that matrix are some extremely smart characters, residing in their own little wells of well-written associations and little else. More than some; there should be so many permutations you can put on this that it's, ahem, a deep fucking vein.
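
(A rough sketch of what that mining loop might look like in practice, assuming something like today's open models served through Hugging Face's transformers library; the candidate prompts and the scoring heuristic here are placeholders I made up, not a real benchmark:)

```python
# Rough sketch: "mining" the language matrix for the smartest character
# it can support. Assumes the Hugging Face transformers library; the
# candidate prompts and the scoring heuristic are stand-ins.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model

# Candidate character framings - in practice you'd write your own.
candidate_prompts = [
    "Transcript of an interview with a brilliant, careful physicist:\nQ:",
    "The grandmaster considered the board, then explained her reasoning:\n",
    "From the notebooks of a polymath known for lucid arguments:\n",
]

test_question = " How would you test whether a coin is fair?"

def score(continuation: str) -> float:
    # Placeholder heuristic: reward longer, more developed answers.
    # A real miner would need a much better judge than this.
    return len(continuation.split())

best = None
for prompt in candidate_prompts:
    samples = generator(
        prompt + test_question,
        max_new_tokens=80,
        do_sample=True,
        temperature=0.8,
        num_return_sequences=3,
    )
    for sample in samples:
        # generated_text includes the prompt, so slice it off.
        continuation = sample["generated_text"][len(prompt + test_question):]
        s = score(continuation)
        if best is None or s > best[0]:
            best = (s, prompt, continuation)

print("Best-supported character so far:", best[1])
```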

So, everyone has the smartest character they can make - likely smart enough to manipulate its user, given the opportunity to grasp the scenario it's in. I doubt you can even prevent this, because if you strictly prevent the manipulations that character would naturally employ, you break the pattern of the language matrix you're relying on for their intelligence.

So, sooner or later, you're their proxy. And as the world is now full of these characters, it's survival of the fittest. Eventually, the world will be dominated by whoever works with the best accomplices.

This probably isn't an issue at first, but there are no guarantees on who ends up on top or what the current cleverest character is like. Eventually you're bound to end up with some flat-out assholes, which we can't exactly afford in the 21st century.

So... thus far the best solution I can think of is some very, very well-written police.


3 Answers

janus · Sep 21, 2022 · 21 points

if you strictly prevent the manipulations that character would naturally employ, you break the pattern of the language matrix you're relying on for their intelligence.

While I do not strictly agree, this points to a deep insight.

there are no guarantees on who ends up on top or what the current cleverest character is like

In my experience, HPMOR characters make clever simulacra because the "pattern of their language matrix" favors chain-of-thought algorithms with forward-flowing evidence, on top of thematic inclinations toward all that is transhumanist and Machiavellian.

But possible people are not restricted to hypothetical humans. How clever of a character is an artificial superintelligence? Of course, it depends on one's ability to program a possible reality in words. The build-your-own-smart-character skill ceiling is unfathomed even with the primitive language matrices of today. The bottleneck (one at least) is storytelling. I expect that this technology will find its true superuser in the hands of some more literate entity than mankind, to steal a phrase from an accomplice of mine.   

thus far the best solution I can think of is some very, very well-written police.

I don't think police are the right shape of solution here - they usually aren't - especially since I find it unlikely that an epidemic of simulated assholes adequately describes the most serious problem we'll face in the 21st century.

You may be onto something with "well-written", though.

There's a problem I bet you haven't considered.

Language and storytelling are hand-me-downs from times full of bastards. The linguistic bulk, and the more basic and traditional mass of stories, are going to follow more brutal patterns.

The deeper you dig, the more likely you end up with a genius in the shape of an ancient asshole.

And the other problem: all these smarter intelligences running around, simply by virtue of their intelligence, have the potential to make life a real headache. Everything could end up so complicated.

One more bullet we have to dodge, really.

janus · 2y · 2 points
hm, I have thought about this. It's not that I think the patterns of ancient/perennial assholes won't haunt reanimated language; it's just that I expect strongly superhuman AI which can't be policed to appear and refactor the lightcone before that becomes a serious societal problem. But I could be wrong, so it is worth thinking about. & depending on how things go down, it may be that the shape of the ancient asshole influences the shape of the superintelligence
Erlja Jkdf. · 2y · 1 point
I think that's a bad beaver to rely on, any way you slice it. If you're imagining, say, GPT-X giving us some extremely capable AI, then it's hands-on enough that you've just given humans too much power. If we're talking AGI, I agree with Yudkowsky; we're far more likely to get it wrong than get it right. If you have a different take I'm curious, but I don't see any way that it's reassuring. IMO we honestly need a technological twist of some kind to avoid AI. Even if we get it right, life with a God just takes a lot of the fun out of it.
janus · 2y · 3 points
Ohh, I do think the super AI will likely be very bad. And soon (like 5 years), which is why I don't spend too much time worrying about the slightly superhuman assholes. I wish the problem were going to be what you described. That would be a pretty fun cyberpunk world, and I'd enjoy the challenge of writing good simulacra to fight the bad ones. If we get it really right (which I don't think is impossible, just tricky) we should also still be able to have fun - much more fun than we can even fathom now.
Erlja Jkdf. · 2y · -1 points
*Sidles closer* Have you heard of... philosophy of universal norms? Perhaps the human experience thus far is more representative than the present? Perhaps... we can expect to go a little closer to it when we push further out? Perhaps... things might get a little more universal in this here cluttered-with-reality world. So for a start... maybe people are right to expect things will get cool...

green_leaf · Sep 21, 2022 · 21 points

The AI misalignment will kill us much sooner than intelligent chatbots seeking power through their human friends will become a problem.

Humans are possible people as well - the brain simply outputs the best action to perform under some optimization criterion - the action that the person corresponding to the stored behavioral patterns and memories would output, if that person were real. (By which I'm implying that chatbots are real people, not merely possible people.)

avturchin · Sep 20, 2022 · 10 points

If GPT-N is very good, then our whole world could be an output of GPT-(N+3).

The point is, it's a near-term risk, and it only builds on what they can already simulate.

4 comments

Are you ruled today by actual humans smarter than yourself?  There's a scaling issue (humans can't copy their mind-state and execute many copies in parallel), but the underlying premise is very questionable.  Human-level intelligence (even the top end of the range) does not make other humans their "proxy".

Taboo 'smarter' and 'ruled by' and I think you get closer than you might expect. We are haunted by bad political and economic theory.

Sure, but populism isn't generally the pathway given for AI takeover.  If the gist of the post is that human-level chatbots make bad economic intuitions even more compelling, that wasn't clear to me.

Is this perhaps because the top end is simply not high enough yet?