Josh You

data analyst at Epoch AI

@justjoshinyou13 on twitter

Comments
We are likely in an AI overhang, and this is bad.
Josh You · 23d · 70

I'd flip it around and ask whether Gabriel thinks the best models from 6, 12, or 18 months ago could be performing at today's level with maximum elicitation.

Max Harms's Shortform
Josh You · 24d · 10

I think the linked tweet is possibly just misinterpreting what the authors meant by "transistor operations"? My reading is that "1000" binds to "operations"; the actual number of transistors in each operation is unspecified. That's how they get the 10,000x number: if a CPU runs at 1 GHz and neurons run at 100 Hz, then even if it takes 1000 clock cycles to do the work of a neuron, the CPU can still do it 10,000x faster.
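Spelling out that arithmetic (just a restatement of the numbers above, with the 1000-cycles-per-neuron-operation figure treated as an assumption):

```python
cpu_clock_hz = 1e9            # 1 GHz CPU clock
neuron_rate_hz = 100          # ~100 Hz neuron firing rate
cycles_per_neuron_op = 1_000  # assumed clock cycles to do the work of one neuron

# Neuron-equivalent operations the CPU can perform per second, serially
cpu_neuron_ops_per_sec = cpu_clock_hz / cycles_per_neuron_op  # 1e6

serial_speedup = cpu_neuron_ops_per_sec / neuron_rate_hz
print(f"{serial_speedup:,.0f}x")  # 10,000x
```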

Hmm, I see it. I thought it was making a distinct argument from the one Ege was responding to here, but if you're right, it's the same one.

Then the claim is that an AI run on some (potentially large) cluster of GPUs can think far faster than any human in serial speed. You do lose the rough equivalency between transistors and neurons: a GPU, which is roughly equal to a person in resource costs, happens to have about the same number of transistors as a human brain has neurons. It's potentially a big deal that AI has a much faster maximum serial speed than humans, but it's far from clear that such an AI can outwit human society.

Trends in Economic Inputs to AI
Josh You · 1mo · 20

OpenAI can probably achieve Meta/Google-style revenue just from monetizing free users, since they're already one of the biggest platforms in the world, with a clear path to increasing eyeballs through model progress, new modalities and use cases, and building up an app ecosystem (e.g. their widely rumored web browser). An anonymous OpenAI investor explains the basic logic:

The investor argues that the math for investing at the $500 billion valuation is straightforward: Hypothetically, if ChatGPT hits 2 billion users and monetizes at $5 per user per month—“half the rate of things like Google or Facebook”—that’s $120 billion in annual revenue.
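The quoted figure is straightforward arithmetic (my restatement of the investor's hypothetical numbers, not a forecast):

```python
users = 2e9                 # hypothetical 2 billion ChatGPT users
dollars_per_user_month = 5  # "$5 per user per month"

annual_revenue = users * dollars_per_user_month * 12
print(f"${annual_revenue / 1e9:.0f}B per year")  # $120B per year
```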

However, this might take a long time to fully realize, perhaps like 10 years? 

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Josh You · 1mo · 92

Google DeepMind uses Nvidia very sparingly, if at all. AlphaFold 3 was trained using A100s, but that's the only recent use of Nvidia by GDM I've heard of. I think Google proper, outside GDM, primarily uses TPUs over GPUs for internal workloads, but I'm less sure about that.

Google does buy a lot of Nvidia chips for its cloud division, to rent out to other companies.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Josh You · 1mo · 82
  • xAI and Meta still use Nvidia. Almost every non-frontier-lab, non-Chinese AI chip consumer uses Nvidia.
  • And Alphabet, Amazon, and Broadcom, the companies that design TPUs and Trainium, have the 4th, 5th, and 7th biggest market caps in the world.

I think it's possible that the market is underpricing how big a deal Anthropic and Google DeepMind, and other frontier labs that might follow in their footsteps, are for overall AI chip demand. But it's not super obvious.

Bogdan Ionut Cirstea's Shortform
Josh You · 1mo · 60

I'm saying it would be challenging for Nvidia to preserve its high share of AI compute production in the first place while trying to execute this strategy. Nvidia is fabless, and its dominance will erode if labs/hyperscalers/Broadcom create satisfactory designs and are willing to place sufficiently large orders with TSMC.

Bogdan Ionut Cirstea's Shortform
Josh You · 1mo · 40

Nvidia already has an AI cloud division that is not negligible but is small compared to the big players. But they appear not even to own their own chips: they lease them from Oracle.

Bogdan Ionut Cirstea's Shortform
Josh You · 1mo* · 92

I am skeptical of this because they can't just scale up data centers on a dime. And signaling that they are trying to become the new biggest hyperscaler would be risky for their existing sales: big tech and frontier labs will go even harder for custom chips than they are now.

To make this happen, Nvidia would probably need to partner with neoclouds like CoreWeave that have weaker affiliations with frontier labs. Nvidia is actively incubating neoclouds and does have very strong relationships here, to be sure, but the neoclouds still have fewer data centers and less technical expertise than the more established hyperscalers.

And I think algorithms and talent are very important.

ryan_greenblatt's Shortform
Josh You · 1mo · 10

Personally, it will be impossible for me to ignore the part of me that wonders "is this AGI/ASI stuff actually, for real, coming, or will it turn out to be fake?" Studying median timelines bleeds into the question of whether AGI within my natural lifespan is 90% likely or 99.5% likely, and vice versa. So I will continue thinking very carefully about evidence of AGI progress.

My AGI timeline updates from GPT-5 (and 2025 so far)
Josh You · 2mo · 151

But I've been increasingly starting to wonder if software engineering might not be surprisingly easy to automate when the right data/environments are used at much larger scale

I've had similar thoughts: I think there's still low-hanging fruit in RL, and in scaffolding and further scaling of inference compute. But my general take is that the recent faster trend of doubling every ~4 months is already the result of picking the low-hanging RL fruit for coding and SWE, and fast inference scaling. So this kind of thing will probably lead to a continuation of the fast trend, not another acceleration.
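To illustrate what a continuation of that fast trend would look like, here's a minimal sketch assuming a ~4-month doubling time and an illustrative (made-up) starting time horizon of 2 hours, not a measured value:

```python
DOUBLING_TIME_MONTHS = 4   # assumed recent doubling time for the time-horizon metric
START_HORIZON_HOURS = 2.0  # illustrative starting point, not a measured value

def horizon_hours(months_from_now: float) -> float:
    """Projected time horizon if the ~4-month doubling simply continues."""
    return START_HORIZON_HOURS * 2 ** (months_from_now / DOUBLING_TIME_MONTHS)

for months in (0, 6, 12, 24):
    print(f"{months:>2} months: ~{horizon_hours(months):.0f} hours")
# 0 months: ~2h, 6 months: ~6h, 12 months: ~16h, 24 months: ~128h
```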

Another source of shorter timelines, depending on what timeline you mean, is the uncertainty from translating time horizon to real-world AI research productivity. Maybe models with an 80% time horizon of 1 month or less are already enough for a huge acceleration of AI R&D, with the right scaffold/unhobbling/bureaucracy that can take advantage of lots of parallel small experiments or other work, or with good complementarities between AI and human labor.
