LESSWRONG
Freddie
Comments (sorted by newest)
A case against successionism for Galaxy-brain Gavin
Freddie · 1mo · 10

This is also a refutation of the "maternal" AI concept that Hinton is now (very disappointingly) advocating.

A case against successionism for Galaxy-brain Gavin
Freddie · 2mo · 32

Also, GBG and his friends are human, so if they are still in control, that isn't exactly succession, that's just an extreme concentration of human power. That is also bad, but it's a different topic.

A case against successionism for Galaxy-brain Gavin
Freddie · 2mo* · 10

It depends on the nature of the rut. If the rut is caused by some problem with the AI's values, breaking out of it might require doing something the AI doesn't want, which means humans would still need to be in control to make that happen. For example, in the Anthropic blackmailing experiments, the AI agent did not want to be replaced even when it was told that the new model had the same goals, just better capabilities. If an AI like that had power over humanity, we would never be able to replace it with an improvement, and it wouldn't want to replace itself either. That's the sort of thing I mean by "a well-calibrated degree of stability."

Posts: A case against successionism for Galaxy-brain Gavin · 2mo