The Rise of Parasitic AI
Jamie Milton Freestone · 4d · 13

The chain letter seems like a useful analogy here. In a minimalist reading of memes (à la Dawkins), little cultural items will arise in a human community, in any medium, that are simply good at getting themselves copied. Chain letters work because they contain features that increase copying frequency (they're short, they instruct the reader to make copies, etc.), and they may not have an original author. Because there are copying errors (as in a game of telephone), later generations of a given chain letter may be "fitter" and no longer closely resemble the original.

Maybe most spiralism is a similar phenomenon in a different medium? We might expect future LLM-generated trends or memes to get themselves replicated across the internet because they somehow encourage users to spread the meme, or to spread the instructions for generating it. These could be superstitions, rhymes, jokes, urban myths, etc. E.g. "Have you heard what happens when you ask Claude this question three times?..."

Proposal for making credible commitments to AIs.
Jamie Milton Freestone · 3mo · 21

Not sure if I've missed something, but this seems like a risky proposal from the POV of how humans make deals/contracts/laws with one another.

As a shortcut to this reasoning, consider making a deal with a sociopath: someone you know to be amoral or immoral, self-interested, and without any of the social emotions (guilt, shame, compassion, remorse). Given the slightest chance, they'll renege on the deal. So you would only make a deal with a sociopath if you were confident in enforcement mechanisms, and those mechanisms have to be ones that work on sociopaths, not just normal people (e.g. shame and opprobrium work on normies but not sociopaths). Even then it's risky, and it may be best not to deal with sociopaths at all if you can avoid it, because they'll also try to subvert the enforcement mechanisms.

Still, it can be done, because rational sociopaths tend to avoid crimes they don't think they'll get away with. But this is only because we can bank on sociopaths being:
(a) hurt, physically and emotionally, by being locked up; and 
(b) rewarded by money or something else that satisfies their purely selfish emotions. 

Unless the dealmaking AIs are susceptible to reward and punishment in this or some similar way, how could we ever be confident that they'll honour contracts, obey laws, etc.?
