LESSWRONG

aog

Sequences: Center for AI Safety Blog Posts, AI Safety Newsletter

Comments (sorted by newest)

Slowdown After 2028: Compute, RLVR Uncertainty, MoE Data Wall
aog · 5d

Knowing the TAM would clearly be useful for deciding whether to continue investing in compute scaling, but estimating the TAM ahead of time is very speculative, whereas the revenue from yesterday's investments can be observed before deciding whether to invest today for more revenue tomorrow. So I think investment decisions will be driven in part by revenue, and people trying to forecast future investment should also forecast future revenue; that way we can check whether the revenue forecasts are on track and what that implies for the investment forecasts. 

I haven't done the revenue analysis myself, but I'd love to read something good on the revenue needed to justify different datacenter investments, and whether the companies are on track to hit that revenue. 

Slowdown After 2028: Compute, RLVR Uncertainty, MoE Data Wall
aog · 5d

"But by 2030 we would get to $770bn, which probably can't actually happen if AI doesn't cross enough capability thresholds by then."

What level of revenue, and what revenue growth rate, do you think would be needed to justify this investment? Has there been any good analysis of this question? 
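To make the question concrete, here's a toy back-of-the-envelope sketch. All the parameters (hardware lifetime, gross margin, required return) are my own illustrative assumptions, not numbers from the post:

```python
# Toy sketch: annual revenue needed to justify a given level of AI capex,
# under illustrative assumptions: 4-year hardware life (straight-line
# depreciation), 60% gross margin, 15% required return on invested capital.
# None of these figures come from the post; they are placeholders.

def required_annual_revenue(capex_bn, useful_life_years=4,
                            gross_margin=0.60, required_return=0.15):
    depreciation = capex_bn / useful_life_years   # capital recovered per year
    target_profit = capex_bn * required_return    # return investors would demand
    return (depreciation + target_profit) / gross_margin

# The ~$770bn figure quoted above, under these assumptions:
print(f"~${required_annual_revenue(770):.0f}bn/year of AI revenue")  # ~= $513bn/year
```

Even a rough calculation like this would make it easier to judge whether observed revenue growth is on track to support the projected buildout.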

Research Priorities for Hardware-Enabled Mechanisms (HEMs)
aog · 4mo

Thanks for the heads up. I’ve edited the title and introduction to better indicate that this content might be interesting to someone even if they’re not looking for funding. 

aog's Shortform
aog · 4mo

Yeah I think that’d be reasonable too. You could talk about these clusters at many different levels of granularity, and there are tons I haven’t named. 

aog's Shortform
aog · 4mo

If we can put aside for a moment the question of whether Matthew Barnett has good takes, I think it's worth noting that this reaction reminds me of how outsiders sometimes feel about effective altruism or rationalism:

"I guess I feel that his posts tend to be framed in a really strange way such that, even though there's often some really good research there, it's more likely to confuse the average reader than anything else and even if you can untangle the frames, I usually don't find worth it the time."

The root cause may be too much inferential distance: too many differences in basic worldview assumptions to easily have a productive conversation. The argument in any given post might rely on background assumptions that would take a long time to explain and debate, and it's hard to have a productive conversation with someone who doesn't share them. That's one of the reasons LessWrong encourages users to read foundational material on rationalism before commenting or posting. It's also why scalable oversight researchers like having places to talk to each other about the best approaches to LLM-assisted reward generation, without needing to justify each time whether that strategy is doomed from the start. And it's part of why I think it's useful to create scenes that operate on different worldview assumptions: it's worth working out the implications of specific beliefs without needing to justify those beliefs each time. 

Of course, this doesn't mean that Matthew Barnett has good takes. Maybe you find his posts confusing not because of inferential distance, but because they're illogical and wrong. Personally I think they're good, and I wouldn't have written this post if I didn't. But I haven't actually argued that here, and I don't really want to—that's better done in the comments on his posts. 

aog's Shortform
aog · 4mo*

Shoutout to Epoch for creating its own intellectual culture. 

Views on AGI seem suspiciously correlated to me, as if many people's views are determined more by diffusion through social networks and popular writing than by independent reasoning. This isn't unique to AGI: most people are not capable of coming up with useful worldviews on their own. Often, the development of interesting, coherent, novel worldviews benefits from an intellectual scene. 

What's an intellectual scene? It's not just an idea. Usually it has a set of complementary ideas, each of which makes more sense with the others in place. Often there’s a small number of key thinkers who come up with many new ideas, and a broader group of people who agree with the ideas, further develop them, and follow their implied call to action. Scenes benefit from shared physical and online spaces, though they can also exist in social networks without a central hub. Sometimes they professionalize, offering full-time opportunities to develop the ideas or act on them. Members of a scene are shielded from pressure to defer to others who do not share their background assumptions, and therefore feel freer to come up with new ideas that would seem unusual to outsiders but make sense within the scene's shared intellectual framework. These conditions seem to raise the likelihood of novel intellectual progress. 

There are many examples of intellectual scenes within AI risk, at varying levels of granularity and cohesion. I've been impressed with Davidad recently for putting forth a set of complementary ideas around Safeguarded AI and FlexHEGs, and creating opportunities for people who agree with his ideas to work on them. Perhaps the most influential scenes within AI risk are the MIRI / LessWrong / Conjecture / Control AI / Pause AI cluster, united by high p(doom) and a focus on pausing or stopping AI development, and the Constellation / Redwood / METR / Anthropic cluster, focused on prosaic technical safety techniques and working with AI labs to make the best of the current default trajectory. (Though by saying these clusters have some shared ideas / influences / spaces, I don't mean to deny that most people within those clusters disagree on many important questions.) Rationalism and effective altruism are their own scenes, as are the conservative legal movement, social justice, new atheism, progress studies, neoreaction, and neoliberalism. 

Epoch has its own scene, with a distinct set of thinkers, beliefs, and implied calls to action. Matthew Barnett has written the most about these ideas publicly, so I'd encourage you to read his posts on these topics, though my understanding is that many of these ideas were developed with Tamay, Ege, Jaime, and others. Key ideas include long timelines, slow takeoff, eventual explosive growth, optimism about alignment, concerns about overregulation, concerns about hawkishness towards China, arguments for the likelihood of AI sentience and the desirability of AI rights, debates about which futures are desirable, and so on. These ideas motivate much of Epoch's work, as well as Mechanize. Importantly, the people in this scene don't seem to mind much that many others (including me) disagree with them. 

I'd like to see more intellectual scenes that seriously think about AGI and its implications. There are surely holes in our existing frameworks, and they can be hard to spot for people operating within those frameworks. Creating new spaces with different sets of shared assumptions seems like it could help. 

Daniel Kokotajlo's Shortform
aog · 6mo

Curious what you think of arguments (1, 2) that AIs should be legally allowed to own property and participate in our economic system, thus giving misaligned AIs an alternative prosocial path to achieving their goals. 

Daniel Kokotajlo's Shortform
aog · 6mo

How do we know it was 3x? (If true, I agree with your analysis) 

Daniel Kokotajlo's Shortform
aog · 6mo

Do you take Grok 3 as an update on the importance of hardware scaling? If xAI used 5-10x more compute than any other model (which seems likely, though not certain), then the fact that Grok 3 wasn't discontinuously better than other models seems like evidence against the importance of hardware scaling. 
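Just to put rough numbers on what a 5-10x compute multiplier would be expected to buy, here's a purely illustrative sketch assuming a hypothetical power-law relationship between pretraining compute and loss (the exponent is a placeholder I chose, not a measured value):

```python
# Illustrative only: hypothetical power law relating a compute multiple C
# to pretraining loss, L(C) = A * C**(-alpha). The constants are placeholders
# chosen for illustration, not estimates from any paper or from this thread.

A, alpha = 1.0, 0.05  # placeholder constants

def loss(compute_multiplier):
    return A * compute_multiplier ** (-alpha)

baseline = loss(1)
for m in (5, 10):
    drop = 100 * (1 - loss(m) / baseline)
    print(f"{m}x compute -> loss falls ~{drop:.0f}% under these assumptions")
```

Under a curve like this, a 5-10x jump only predicts an incremental improvement; the sketch is just meant to frame how large a gap one might have expected.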

nikola's Shortform
aog · 7mo*

I’m surprised they list bias and disinformation. Maybe this is a galaxy-brained attempt to discredit AI safety by making it appear left-coded, but I doubt it. It seems more likely that x-risk-focused people left the company while traditional AI ethics people stuck around and rewrote the website.

Posts

Digital sentience funding opportunities: Support for applied work and research (21 karma, 3mo, 0 comments)
Research Priorities for Hardware-Enabled Mechanisms (HEMs) (17 karma, 4mo, 2 comments)
aog's Shortform (6 karma, 4mo, 21 comments)
Benchmarking LLM Agents on Kaggle Competitions (15 karma, 1y, 4 comments)
Adversarial Robustness Could Help Prevent Catastrophic Misuse (30 karma, Ω, 2y, 18 comments)
Unsupervised Methods for Concept Discovery in AlphaZero (9 karma, 2y, 0 comments)
MLSN: #10 Adversarial Attacks Against Language and Vision Models, Improving LLM Honesty, and Tracing the Influence of LLM Training Data (15 karma, 2y, 1 comment)
Hoodwinked: Evaluating Deception Capabilities in Large Language Models (25 karma, 2y, 3 comments)
Learning Transformer Programs [Linkpost] (7 karma, 2y, 0 comments)
Full Automation is Unlikely and Unnecessary for Explosive Growth (28 karma, 2y, 3 comments)