Chapin Lenthall-Cleary

Comments

If your AGI definition excludes most humans, it sucks.
Chapin Lenthall-Cleary · 2mo

>AGI discussed in e.g. superintelligence

The fact that this is how you're clarifying it shows my point. While I've heard people talking about "AGI" (meaning weak ASI) having significant impacts, it's seldom discussed as leading to x-risks. To give maybe the most famous recent example, AI 2027 ascribes the drastic changes and extinction specifically to ASI. Do you have examples in mind of credible people specifically talking about x-risks from AGI? Bostrom's book refers to superintelligence, not general intelligence, as the superhuman AI.

Also, I don't find the argument that we should keep using muddled terms in order to avoid changing their meaning very compelling, at least when it's easy to clarify meanings where necessary.

If your AGI definition excludes most humans, it sucks.
Chapin Lenthall-Cleary · 3mo

Yeah, spikiness has been an issue, but the floor is starting to get mopped up. The "unable to correctly cite a reference" thing isn't quite fair anymore, though even current SOTA systems aren't reliable in that regard. 

The point about needing background is good, though the AI might need specialized training/instructions for specialized tasks in the same way a human would. There's no way it could know a particular organization's classified operating procedures from the factory, for example. Defining (strong) AGI as being able to perform every computer task at the level of the median human who has been given appropriate training (once it's had the same training a human would get, if necessary) seems sensible. (I guess you could argue that that's technically covered under the simpler definition.)

If your AGI definition excludes most humans, it sucks.
Chapin Lenthall-Cleary · 3mo

I totally agree that AGI is about that. That's why I cite novel problem-solving ability in the qualification. The major issue with Steve Byrnes's argument is that he demands this ability at a human-genius level for AGI. I elaborated upon this issue in this comment on one of his posts.


 

Nobody is Doing AI Benchmarking Right
Chapin Lenthall-Cleary · 3mo

Data contamination and handing the AI labs a good benchmark to optimize against are very real worries, and are basically the reason we haven't published Starburst yet (though we have sent it to a few people who wanted to try it). Fortunately, providing a leaderboard and scores that can be used for projections doesn't carry either of those risks, but still has benefits for forecasting, public awareness, etc. (obviously in proportion to how well-known it is). We may make a short version public, since that has no data contamination and less capability-boosting risk. Still undecided on that.

Regarding the paper, not exactly surprising to see that result for a 109M parameter model trained on a narrow task. 

We do already have a leaderboard. Maybe a dedicated website rather than Substack would be nicer.

Foom & Doom 1: “Brain in a box in a basement”
Chapin Lenthall-Cleary · 3mo

I fully agree with you that AGI should be able to figure out things it doesn't know, and that this is a major blindspot in benchmarks. (I often give novel problem-solving as a requirement, which is very similar.) My issue is that there is a wide range of human abilities in this regard. Most/all humans can figure things out to some extent, but most aren't that good at it. If you give a genius an explanation of basic calculus and a differential equation to figure out how to solve, it won't be that difficult. If you give the same task to an average human, it isn't happening. Describing AGI as being able to make a $1b/yr company or develop innovative science at a John von Neumann level is describing a faculty that most/all humans have, but at a level vastly above where most humans are.

Most of my concern about AI (and why I am, unlike you, most worried about improved LRMs) stems from the fact that current SOTA systems' ability to figure things out is within the human range and fairly rapidly climbing across it. (Current systems do have limitations that few humans have in other faculties, like time horizons and perception, but those issues are decreasing with time.) Also, even if we never reach ASI, AI with problem-solving on par with normal smart humans, especially when coupled with other faculties, could have massively bad consequences.

A Medium Scenario
Chapin Lenthall-Cleary · 3mo

The whole no-one-can-agree-on-what-AGI-is thing is damn true, and a real problem. Cole and I have a joke that it's not AGMI (Artificial Gary Marcus Intelligence) unless it solves the hard problem of consciousness, multiplies numbers of arbitrary length without error (which humans can't do perfectly even with paper, and obviously can't without), and various other things, all at once. A recent post with over 250 karma said that LLMs aren't AGI because they can't make billion-dollar businesses, which almost no humans can do, and no humans can do quickly.

As for the most likely way to get AGI, the case is quite strong for LRMs with additional RL around things like long-term memory and reducing hallucinations, since those systems are, in many ways, nearly there, and there are no clear barriers to them making it the rest of the way.

Foom & Doom 1: “Brain in a box in a basement”
Chapin Lenthall-Cleary · 3mo

> LLMs are very impressive, but they’re not AGI yet—not by my definition. For example, existing AIs are nowhere near capable of autonomously writing a business plan and then founding a company and growing it to $1B/year revenue, all with zero human intervention. By analogy, if humans were like current AIs, then humans would be able to do some narrow bits of founding and running companies by ourselves, but we would need some intelligent non-human entity (angels?) to repeatedly intervene, assign tasks to us humans, and keep the larger project on track.

This is an insane AGI definition/standard. Very few humans can make billion-dollar businesses, and the few who can take years to do so. If that were the requirement for AGI, almost all humans wouldn't qualify. Indeed, if an AI could make billion-dollar-a-year businesses on demand, I'd wonder whether it was (weak) ASI.

(Not saying that current systems qualify as AGI, though I would say they're quite close to what I'd call weak AGI. They do indeed have severe issues with time horizons and long-term planning. But a reasonable AGI definition shouldn't exclude the vast majority of humans.)

A Medium Scenario
Chapin Lenthall-Cleary · 3mo

I don't disagree with you that this probably wouldn't lead to extinction. That said, I'd expect birth rates to crater. The average teen already spends nearly 5 hours a day on current social media, and there's every reason to expect AI social media that can generate something close to the most addictive possible video to be far more effective. Add in AI "boyfriends"/"girlfriends" that are far more attractive, supportive, etc. than any human, and many of the biological impulses that drive people to form families are diverted away from real relationships and families. Only luddites and people with fairly strong desires to have real families would have much chance of overcoming the strong influences against it.

My confidence here is decent, but not amazing. Assuming the scenario happens roughly as described otherwise, I'd give roughly a 70% chance of a massive (as in far worse than currently predicted) population crash happening, but not extinction on (direct) account of it. Obviously, this is heavily contingent upon how addictive AI social media could be, which could be less or far more than described.

A Medium Scenario
Chapin Lenthall-Cleary · 3mo

As cesspool said, this isn't really meant to mean outright extinction, but I would expect a(n even more massive than already predicted) population collapse in this scenario. The line was basically talking about the end of human endeavor (which, to be fair, is somewhat intertwined with a population collapse in multiple ways). At a guess, I'd expect people having kids for religious/philosophical/moral reasons to become much rarer in this scenario, but not entirely go away. 

A Medium Scenario
Chapin Lenthall-Cleary · 3mo

I appreciate it.

This scenario doesn't predict that LLMs can't be AGI. As depicted, the idea is that something based upon LLMs (with CoT, memory, tool use, etc.) is able to reach strong AGI (able to do anything most people can do on a computer), but is only able to reach the intelligence of the mid-to-high-90th percentile person. Indeed, I'd argue that current SOTA systems should basically be considered very weak/baby-AGI (with a bunch of non-intelligence-related limitations). 

The limitation depicted here, which I think is plausible but far from certain, is that high intelligence requires a massive amount of compute, which models don't have access to. If this limit exists, there's more cause to suspect it's a fundamental limitation than a limitation specific to LLM-based models. In the scenario, it's envisioned that researchers try non-LLM-based AIs too, but run up against the same fundamental limits that make ASI impossible without far more compute than is feasible.

Posts

- Anthropic Is Going All In On Ability Without Intelligence? [Question] · 2 karma · 2mo · 0 comments
- If your AGI definition excludes most humans, it sucks. · 18 karma · 3mo · 7 comments
- A Medium Scenario · 18 karma · 3mo · 12 comments
- Nobody is Doing AI Benchmarking Right · 20 karma · 3mo · 12 comments