It seems to me that you're commenting in bad faith here. Connor repeatedly stressed in the podcast that Conjecture would not do capabilities research, and that they would not have had plans to develop products had they not been funding-constrained.

You also make serious accusations in the parent comment, none of them supported by an iota of evidence beyond an out-of-context quote from the podcast that you picked.

[This comment is no longer endorsed by its author]

From the paper:

Technical AGI safety (Bostrom, 2017) may also become more challenging when considering generalist agents that operate in many embodiments. For this reason, preference learning, uncertainty modeling and value alignment (Russell, 2019) are especially important for the design of human-compatible generalist agents. It may be possible to extend some of the value alignment approaches for language (Kenton et al., 2021; Ouyang et al., 2022) to generalist agents. However, even as technical solutions are developed for value alignment, generalist systems could still have negative societal impacts even with the intervention of well-intentioned designers, due to unforeseen circumstances or limited oversight (Amodei et al., 2016). This limitation underscores the need for a careful design and a deployment process that incorporates multiple disciplines and viewpoints.

“The smartest ones are the most criminally capable.”