ParrotRobot
Karma: 2310

Comments (sorted by newest)

ParrotRobot's Shortform
ParrotRobot · 1mo · 10

Nice connection! I’d totally overlooked this. 

ParrotRobot's Shortform
ParrotRobot · 1mo · 90

A simple “null hypothesis” mechanism for the steady exponential rise in METR task horizon: shorter-horizon failure modes outcompete longer-horizon failure modes for researcher attention.

That is, with each model release, researchers solve the biggest failure modes of the previous system. But longer-horizon failure modes are inherently rarer, so it is not rational to focus on them until shorter-horizon failure modes are fixed. If the distribution of horizon lengths of failures is steady, and every model release fixes the X% most common failures, you will see steady exponential progress.


It’s interesting to speculate about how the recent possible acceleration in progress could be explained under this framework. A simple formal model:

  • There is a sequence of error types e_1, e_2, e_3, etc.
  • The first s error types have already been solved, such that errorrate(e_1) = … = errorrate(e_s) = 0. The error frequencies of the long tail e_{s+1}, e_{s+2}, … decay exponentially.
  • With each model release (t -> t+1), researchers can afford to fix n error types.
  • The METR time horizon is inversely proportional to the total error rate, sum(errorrate(e_i) for all i). (A toy simulation of this model follows below.)
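
Here is a minimal sketch of this model in Python. The geometric tail, the decay rate, and the number of fixes per release are all assumed illustrative values, not anything estimated from METR's data:

```python
import numpy as np

# Toy simulation of the model above. Assumed: error type e_i occurs with
# frequency proportional to r**i, and each model release fixes the n most
# frequent unsolved error types.
r = 0.9        # decay rate of the error-frequency tail (assumed)
n = 5          # error types fixed per model release (assumed)
releases = 20  # number of model releases to simulate

rates = r ** np.arange(1, 2001)  # errorrate(e_i) for i = 1..2000 (truncated tail)
solved = 0                       # the first `solved` error types are fixed

for t in range(releases):
    total_error = rates[solved:].sum()  # only unsolved error types contribute
    horizon = 1.0 / total_error         # METR horizon ∝ 1 / total error rate
    print(f"release {t:2d}: horizon = {horizon:10.2f}")
    solved += n                         # fix the n most common remaining failures
```

With a geometric tail, each release multiplies the horizon by roughly r^(-n), so the horizon grows by a constant factor per release: steady exponential progress.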

Under this model, there are only two ways progress can speed up: the distribution becomes shorter-tailed (maybe AI systems have become inherently better at generalizing, such that solving the most frequent failure modes now generalizes to many more failures), or the time it takes to fix a failure mode has decreased (perhaps because RLVR offers a more systematic way to solve any reliably measurable failure mode).

(Based on a tweet I posted a few weeks ago)

Four ways learning Econ makes people dumber re: future AI
ParrotRobot · 2mo · 100

A concrete suggestion for economists who want to avoid bad intuitions about AI but find themselves cringing at technologists’ beliefs about economics: learn about economic history.

It’s a powerful way to broaden one’s field of view with regard to what economic structures are possible, and the findings do not depend on speculation about the future, or taking Silicon Valley people seriously at all.

I tried my hand at this in this post, but I’m not an economist. A serious economist or economic historian can do much better.

ParrotRobot's Shortform
ParrotRobot · 2mo · 10

Edited!

ParrotRobot's Shortform
ParrotRobot · 2mo · 30

This is a consequence of decreasing returns to scale! Without decreasing returns to scale, humans could buy some small territory before their labor becomes obsolete and run a non-automated economy on just that territory; the territory being small would be no problem, since without decreasing returns a scaled-down economy is proportionally no less productive.

ParrotRobot's Shortform
ParrotRobot · 2mo · 10

Wow, I hadn’t read it carefully. Their argument does make sense, and it’s intuitive. Arguably this is sort of happening with AI datacenter investment, where companies like Microsoft are reallocating their limited cash flow away from employees (i.e., laying people off) so they can afford to build AI data centers.

A funny thing about their example is that labor would be far better off if it “walled itself off” in autarky. In their example, wages fall to zero because of capital flight: because there is an “AK sector” that can absorb an infinite amount of capital at high returns, it is impossible to invest in labor-complementary capital unless wages are zero. So my intuition that humans could “always just leave the automated society” still applies; their example just rules it out by assumption.

ParrotRobot's Shortform
ParrotRobot · 2mo* · 80

Here’s a random list of economic concepts that I wish tech people were more familiar with. I’ll focus on concepts, not findings, and on intuition rather than exposition.

  • The semi-endogenous growth model: There is a tug of war between diminishing returns to R&D and growth in R&D capacity from economic growth. For the rate of progress to even stay the same, R&D capacity must continually grow. (A toy simulation follows after this list.)
  • Domar aggregation: With few assumptions, overall productivity growth depends on sector-specific productivity growth in proportion to sectors’ revenues. If a sector is 11% of GDP, the economy is “11% bottlenecked” on it.
  • Why wages increase exponentially with education level: This is empirically observed to be roughly true (the Mincer equation), but why? A simple explanation: the opportunity cost of education is proportional to the wage you can earn with your current level of education. So, to be worthwhile, one more year of education needs to increase your wage by a certain percentage, no matter your current level. If each year of education earns people 10% more, wages will look exponential in years of schooling.
    • This is basically “P = MC”, but applied to human capital.
  • Automation only decreases wages if the economy becomes “decreasing returns to scale”. This post has a good explanation. Intuition: if humans don’t have to compete with automated actors for things that humans can’t produce (e.g., land or energy), humans could always just leave the automated society and build a 2025-like economy somewhere else.
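
A toy simulation of the semi-endogenous tug of war from the first bullet. The law of motion dA/dt = R(t)^lam · A^phi is the standard textbook form, and all parameter values here are my assumptions, chosen only to illustrate the qualitative point:

```python
# Toy semi-endogenous growth model. Ideas grow as dA/dt = R(t)**lam * A**phi,
# with phi < 1 capturing diminishing returns to the existing stock of ideas A.
lam, phi, steps = 1.0, 0.5, 200

def growth_rates(researchers):
    """Proportional growth rate of the idea stock A at each step."""
    A, rates = 1.0, []
    for t in range(steps):
        dA = researchers(t) ** lam * A ** phi
        rates.append(dA / A)
        A += dA
    return rates

flat = growth_rates(lambda t: 1.0)           # constant R&D capacity
growing = growth_rates(lambda t: 1.02 ** t)  # R&D capacity grows 2% per period

print(f"constant R&D: growth rate falls from {flat[0]:.2f} to {flat[-1]:.4f}")
print(f"growing R&D:  growth rate goes from {growing[0]:.2f} to {growing[-1]:.4f}")
```

With constant R&D capacity the growth rate decays toward zero; with exponentially growing capacity it settles near lam·g_R/(1−phi). That is the tug of war in one line: progress stays exponential only because R&D capacity keeps growing.
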
ParrotRobot's Shortform
ParrotRobot · 2mo · 30

Is it valuable for tech & AI people to try to learn economics? I very much enjoy doing so, but it certainly hasn’t led to direct benefits or directly relevant contributions. So what is the point? (I think there is a point.)

It’s good to know enough not to be tempted to jump to conclusions about AI impact. I’m a big fan of the kind of arguments that the Epoch & Mechanize founders post on Twitter. A quick “wait, really?” check can dispel assumptions that AI must immediately have a huge impact, or conversely that AI can’t have an unprecedentedly fast impact. This is good for sounding smart, but not directly useful (unless I’m talking to someone who is confused).

I also feel like economic knowledge helps give meaning to the things I personally work on. The most basic version of this is when I familiarize myself with quantitative metrics of the impact of past technologies (“comparables”) and try to keep track of how the stuff I work on compares. I think it’s the same joy that some people get from watching sports and trying to quantify how players and teams are performing.

ParrotRobot's Shortform
ParrotRobot · 2mo · 32

In that world, I think people wouldn’t say “we have AGI”, right? Since it would be obvious to them that most of what humans do (what they do at that time, which is what they know about) is not yet doable by AI.

Your preferred definition would leave the term AGI open to a scenario where 50% of current tasks get automated gradually using technology similar to today’s (i.e., normal economic growth). It wouldn’t feel like “AGI arrived”; it would feel like “people gradually built more and more software over 50 years that could do more and more stuff”.

ParrotRobot's Shortform
ParrotRobot · 2mo · 10

I’m worried that the recent AI exposure versus jobs papers (1, 2) are still quite a distance from an ideal identification strategy of finding “profession twins” that differ only in their exposure to AI. Occupations that are differently exposed to AI are different in countless other ways that are correlated with the impact of other recent macroeconomic shocks.

Posts

Explosive growth from substitution: the case of the Industrial Revolution · 3mo · 9 karma · 1 comment
Economists should track the speed and magnitude of AI implementation projects · 5mo · 1 karma · 0 comments
ParrotRobot's Shortform · 5mo · 1 karma · 22 comments