ShowMeTheProbability's Shortform

by ShowMeTheProbability · 15th Aug 2022 · 1 min read
3 comments, sorted by top scoring
[-] ShowMeTheProbability · 3y · 11

The lack of falsification criteria for AGI (unresearched rant)

Situation: Lots of people are talking about AGI and AGI safety, but nobody can point to one. This is a Serious Problem, and a sign that you are confused.

Problem:

  • Currently proposed AGI tests are ad-hoc nonsense (https://intelligence.org/2013/08/11/what-is-agi/)
  • Historically, when these tests are passed, the goalposts are shifted (the Turing test was passed by fooling humans, which is incredibly subjective and relatively easy).

Solution:

  • A robust and scalable test of abstract cognitive ability.
  • A test that could be passed by a friendly AI in such a way as to communicate co-operative intent, without all the humans freaking out.

Would anyone be interested in such a test, so that we can actually detect the subject of our study?

[-] Raemon · 3y · 31

Becoming capable of building such a test is essentially the entire field of AI alignment. (Yes, we don't have the ability to build such a test, and that's bad, but the difficulty lives in the territory. MIRI's previously stated goal was specifically to become less confused.)

[-] ShowMeTheProbability · 3y* · 10

Thanks for the feedback!

I'll see if my random idea can be formalised in such a way as to constitute a (hard) test of cognition that is satisfying to humans.
