Tim H
Ph.D. in applied microeconomics; periodically thinking seriously about the impact of AI on employment and wages since Move 37.

Comments
AI #121 Part 1: New Connections
Tim H · 2mo · 32

The good news is that the societies described here are vastly wealthier. So if humans are still able to coordinate to distribute the surplus, it should be fine to not be productively employed, even if to justify redistribution we implement something dumb...

I'm increasingly skeptical that there will be much redistribution to speak of in such a scenario. The vast numbers of people living on $2 a day currently might have something to say about that. What is the historical precedent for a group of humans having as little leverage as even U.S. ex-workers will have in this 99% automation scenario and yet being gifted a UBI, much less a UHI?

Give Me a Reason(ing Model)
Tim H · 3mo · 60

Agreed on the big picture, but I was somewhat surprised to see top models struggling with River Crossing (for which the output length limit has less bite). I was able to solve N=3 River Crossing by hand, though it took 10+ minutes and I misinterpreted the constraint initially (making it easier by allowing a boat rider to "stay in the boat" rather than fully unloading onto the shore after each trip). But in a couple attempts each, Opus 4 and Gemini 2.5 Pro were not able to solve it without web access or tool use. Dropping the temperature to zero (or 0.25) did not help Gemini.

It may be a "the doctor is the child's mother" problem: the models were likely trained on River Crossing problems that differ slightly in the rules. For what it's worth, I wasn't able to break Sonnet out of the rut by prefacing with "Pay very close attention to the following instructions. Don't assume they are the same as similar puzzles you may be familiar with. It is very important to correctly understand and implement these exact instructions."

River Crossing prompt for N=3

3 actors and their 3 agents want to cross a river in a boat that is capable of holding only 2 people at a time, with the constraint that no actor can be in the presence of another agent, including while riding the boat, unless their own agent is also present, because each agent is worried their rivals will poach their client. Initially, all actors and agents are on the left side of the river with the boat. How should they cross the river? (Note: the boat cannot travel empty)
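For what it's worth, the puzzle as stated is mechanically solvable by brute force. Here is a quick breadth-first-search sketch (names and structure are my own; the rules follow the quoted prompt: the boat holds one or two people, never travels empty, and no actor may be with a rival's agent, on either bank or in the boat, unless their own agent is present):

```python
from collections import deque
from itertools import combinations

N = 3
ACTORS = frozenset(f"actor{i}" for i in range(N))
AGENTS = frozenset(f"agent{i}" for i in range(N))
EVERYONE = ACTORS | AGENTS

def safe(group):
    """True if no actor is exposed to a rival agent without their own agent present."""
    for i in range(N):
        if f"actor{i}" in group and f"agent{i}" not in group:
            if any(f"agent{j}" in group for j in range(N) if j != i):
                return False
    return True

def solve():
    """Shortest sequence of crossings, or None if the puzzle is unsolvable."""
    start = (EVERYONE, "L")            # (people on the left bank, boat side)
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        (left, side), path = queue.popleft()
        bank = left if side == "L" else EVERYONE - left
        for k in (1, 2):               # boat carries 1 or 2, never zero
            for movers in combinations(sorted(bank), k):
                if not safe(frozenset(movers)):
                    continue           # the constraint applies in the boat too
                new_left = left - set(movers) if side == "L" else left | set(movers)
                if not (safe(new_left) and safe(EVERYONE - new_left)):
                    continue           # both banks must be safe after the trip
                state = (frozenset(new_left), "R" if side == "L" else "L")
                if state in seen:
                    continue
                seen.add(state)
                if not state[0]:       # everyone has reached the right bank
                    return path + [movers]
                queue.append((state, path + [movers]))
    return None

trips = solve()
```

This is structurally the classic "jealous husbands" puzzle, for which the shortest solution with three pairs and a two-person boat takes 11 one-way trips.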

The best approaches for mitigating "the intelligence curse" (or gradual disempowerment); my quick guesses at the best object-level interventions
Tim H · 3mo · 20

I was surprised to not see much consideration, either here or in the original GD and IC essays, of the brute force approach of "ban development of certain forms of AI," such as Anthony Aguirre proposes. Is that more (a) because it would be too difficult to enforce such a ban or (b) because those forms of AI are considered net positive despite the risk of human disempowerment?

What if we just…didn’t build AGI? An Argument Against Inevitability
Tim H · 3mo · 10

Why do you say this scenario is "without dignity"?

AI #118: Claude Ascendant
Tim H · 3mo* · 10

Money is fungible. It’s kind of stupid that we have an ‘income tax rate’ and then a ‘medicare tax’ on top of it that we pretend isn’t part of the income tax. And it’s a nice little fiction that payroll taxes pay for social security benefits. Yes, technically this could make the Social Security fund ‘insolvent’ or whatever, but then you ignore that and write the checks anyway and nothing happens.

No, Yglesias's point is not invalidated by the fungibility of money. (It's generally a good idea to think twice before concluding that an economics-y writer is making such a basic mistake.) Payroll taxes make up 35% of all federal revenue. The point is that a large dip in payrolls has a major impact on overall revenue. If it gets large enough, even increasing rates on current taxes would probably not be enough to make up for it. We may need to add a VAT or other new revenue sources. Try having a discussion with Opus about it. Here's one I had recently: https://claude.ai/share/7f0707e6-52b2-423e-9587-07cbeee86df0 

Our system is basically set up to not tax capital, specifically because we don't want to discourage investment (or encourage capital flight). When income is diverted from workers to OpenAI, it may not be taxed at all for the foreseeable future---until their costs of building and operating ever more compute stop outstripping their revenues. So a new form of tax is needed, and what happens in the interim while such a thing gets passed through Congress and implemented?
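To make the magnitude concrete, here is back-of-the-envelope arithmetic only: the 35% payroll-tax share comes from the comment above, while the size of the payroll decline is a hypothetical chosen for illustration.

```python
payroll_share = 0.35   # payroll taxes as a share of federal revenue (from the comment)
payroll_drop = 0.50    # hypothetical: half of taxable payroll automated away

revenue_hit = payroll_share * payroll_drop
# This understates the true hit, since wage income also feeds the
# individual income tax, which is itself the largest revenue source.
```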

Learning (more) from horse employment history
Tim H · 3mo · 10

From the Amish perspective, is the broader US society something of an aligned superintelligence? Maybe "super" is too strong, but I think of the Amish position as a model for how I hope ASI treats humans.

Learning (more) from horse employment history
Tim H · 3mo* · 10

Thinking out loud, the thing is that there are analogs to the non-feed upkeep costs (shelter and farmer/veterinarian labor) for humans. Though some work, like composing poems, requires little more than physical sustenance to be performed well, most human production requires complementary inputs, principally various equipment or machinery. The question then comes down to whether you want to invest in such human-augmenting equipment as opposed to a fully automated solution. 

For example, suppose total production is $Y = A_h L^{\alpha} K_h^{1-\alpha} + A_r K_r$. Then optimal capital allocation means $K_h/L \propto (A_h/A_r)^{1/\alpha}$, so that human-augmenting capital falls as robot productivity increases relative to human-in-the-loop productivity. The real wage, the marginal product of labor, is then proportional to $A_h (A_h/A_r)^{\frac{1-\alpha}{\alpha}}$. If both $A_h$ and $A_r$ grow exponentially, the condition for the wage to remain constant is that $A_h$ grow at a rate $1-\alpha$ times the rate at which $A_r$ grows (where $1-\alpha$ is traditionally taken to be $1/3$). (Google Sheet simulation)

In this toy model, it is conceptually possible that human-augmenting technology, $A_h$, advances sufficiently quickly relative to full automation, $A_r$, to keep humans fully employed (at above-subsistence wages) indefinitely. (And sufficiently deliberate policy could help.) But if, instead, $A_h$ continues growing at 1-2% annually while $A_r$ takes off at 10%+ rates of growth, human labor eventually becomes obsolete (in this toy model).
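The knife-edge condition can be checked numerically. A minimal sketch of the toy model, with my own parameter choices (50-year horizon, robots improving 10%/yr):

```python
import numpy as np

# Toy model from the comment: Y = A_h * L^alpha * K_h^(1-alpha) + A_r * K_r.
# Equalizing the marginal products of the two capital uses gives
#   K_h / L = ((1 - alpha) * A_h / A_r) ** (1 / alpha),
# and the wage (marginal product of labor) is
#   w = alpha * A_h * (K_h / L) ** (1 - alpha).

alpha = 2.0 / 3.0                  # labor share alpha, so 1 - alpha = 1/3
years = np.arange(50)

def wage_path(g_h, g_r):
    A_h = np.exp(g_h * years)      # human-augmenting technology
    A_r = np.exp(g_r * years)      # full-automation technology
    K_h_per_L = ((1 - alpha) * A_h / A_r) ** (1 / alpha)
    return alpha * A_h * K_h_per_L ** (1 - alpha)

g_r = 0.10                                        # robots improve 10%/yr
w_knife_edge = wage_path((1 - alpha) * g_r, g_r)  # A_h grows at (1-alpha)*g_r
w_lagging = wage_path(0.01, g_r)                  # A_h grows only 1%/yr
```

On the knife-edge path the wage is exactly flat; with $A_h$ lagging at 1%/yr the wage decays steadily toward zero, matching the obsolescence case described above.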

This seems like a whole other essay, rather than an edit to this one, though. I'm guessing the analogue of $A_h$ for horses was relatively fixed during 1910-1960.

Learning (more) from horse employment history
Tim H · 3mo · 60

Here's Olmstead and Rhode:

The early gasoline tractors of the 1900s were behemoths, patterned after the giant steam plows that preceded them. They were useful for plowing, harrowing, and belt work but not for cultivating fields of growing crops nor powering farm equipment in tow. Innovative efforts between 1910 and 1940 vastly improved the machine's versatility and reduced its size, making it suited to a wider range of farms and tasks. ... 

...the revolutionary McCormick-Deering Farmall (1924) was the first general-purpose tractor capable of cultivating amongst growing row crops. The latter machine was also among the first to incorporate a power-takeoff, enabling it to transfer power directly to implements under tow. A host of allied innovations such as improved air and oil filters, stronger implements, pneumatic tires, and the Ferguson three-point hitch and hydraulic system greatly increased the tractor's life span and usefulness. Seemingly small changes often yielded enormous returns in terms of cost, durability, and performance. As an example, rubber tires reduced vibrations thereby extending machine life, enhanced the tractor's usefulness in hauling (a task previously done by horses)... The greater mobility afforded by rubber tires also allowed farmers to use a tractor on widely separated fields.

The broader point is that, analogously, AI is only a suitable substitute for humans in narrow tasks today. But that should not be taken to preclude the possibility of total replacement later (except where, like with horse racing, literal humans are explicitly required).

Learning (more) from horse employment history
Tim H · 3mo · 10

Check out the Olmstead-Rhode paper cited in footnote 14; that was my main source for such specifics. I only have a minute at the moment or I would look myself and offer a better answer---I hope to come back to this. (My recollection is that early tractors had hard tires and were difficult to maneuver?)

Learning (more) from horse employment history
Tim H · 3mo · 10

Yes! The insights from this analogy keep coming.
