Jsevillamol
Sequences

Trends in Machine Learning

Comments (sorted by newest)

5 · Jsevillamol's Shortform · Ω · 4y · 2
Grok 4 Various Things
Jsevillamol · 2mo · 250

> including the new Tier 4 questions


Quick comment: this is not correct. As of this time, we have not evaluated Grok 4 on FrontierMath Tier 4 questions. Our preliminary evaluation was conducted only on Tier 1-3 questions.

Empirical Evidence Against "The Longest Training Run"
Jsevillamol · 2mo · 30

Someone referred me back to this post for comment, so I want to share a couple of updates on how we think about training run lengths at Epoch.

First, we now have better data. Across notable models, we have seen training run lengths get longer by around 30% per year over the last decade. Naively extrapolated, that implies roughly 3x longer training runs by the end of the decade. Recent large training runs often take up to 90 days (e.g. Llama 3), so this would naively lead to nine-month training runs by the end of the decade.
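
To make the arithmetic concrete, here is a minimal sketch in Python. The 90-day baseline and ~30%/year growth are the figures quoted above; the four-year horizon to the end of the decade is my own assumption.

```python
# Naive extrapolation of training run lengths (a sketch, not Epoch's actual model).
current_run_days = 90       # recent large runs, e.g. roughly Llama 3
growth_per_year = 1.30      # ~30%/year lengthening across notable models
years_ahead = 4             # assumed horizon, ~2025 -> end of decade

projected_days = current_run_days * growth_per_year ** years_ahead
print(f"{projected_days:.0f} days ≈ {projected_days / 30:.1f} months")
# 90 * 1.3**4 ≈ 257 days ≈ 8.6 months, i.e. roughly nine-month training runs
```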

Second, I still believe the argument given in the original post is coherent and makes for a compelling upper bound, after accounting for uncertainty on the relevant trends.

This is not the only consideration that goes into deciding how long to train for. In practice, my understanding is that developers are mostly weighing the improvement they see during training against the costs of a delayed release in terms of attention and market share. But I still expect the upper bound of ~a year to be roughly binding, at least while hardware and algorithmic improvements continue progressing as fast as they have in recent years.
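
To give a flavour of why an upper bound exists at all, here is a toy version of the trade-off (a simplification I am sketching here, not the exact model from the original post, and the 3x/year progress rate is an assumption): with a fixed release date, training longer means starting earlier on less efficient hardware and algorithms, so effective compute is roughly run length times an efficiency factor that decays the earlier you start.

```python
import numpy as np

# Toy model: effective compute ~ L * exp(-g * L), where L is run length in years
# and g is the (assumed) combined hardware + algorithmic progress rate.
growth_factor_per_year = 3.0              # assumed ~3x/year effective-compute progress
g = np.log(growth_factor_per_year)

lengths = np.linspace(0.05, 3.0, 1000)             # candidate run lengths, in years
effective_compute = lengths * np.exp(-g * lengths)

best_length = lengths[np.argmax(effective_compute)]
print(f"Optimal run length ≈ {best_length:.2f} years (analytically 1/ln(3) ≈ {1 / g:.2f})")
# Faster progress pushes the optimum down; ~3x/year progress caps runs near one year.
```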

jacquesthibs's Shortform
Jsevillamol · 5mo · 100

For clarity, at the moment of writing I felt that was a valid concern.

Currently this is no longer compelling to me personally, though I think at least some of our stakeholders would be concerned if we published work that significantly sped up AI capabilities and investment, and that is a perspective we keep in mind when deciding what to work on.

I never thought that just because something speeds up capabilities it is automatically something we shouldn't work on. We are willing to make trade-offs here in service of our core mission of improving the public understanding of the trajectory of AI. And in general we make a strong presumption in favour of freedom of knowledge.

Jonathan Claybrough's Shortform
Jsevillamol · 7mo* · 345

I'm talking from a personal perspective here as Epoch director.

  • I personally take AI risks seriously, and I think they are worth investigating and preparing for.
  • I co-started Epoch AI to get evidence and clarity on AI and its risks and this is still a large motivation for me.
  • I have drifted towards a more skeptical position on risk in the last two years. This is due to a combination of seeing the societal reaction to AI, participating in several risk evaluation processes, and AI unfolding more gradually than I expected 10 years ago.
  • Currently I am more worried about concentration in AI development, and about whether unimproved humans will retain wealth over the very long term, than I am about a violent AI takeover.
  • I also selfishly care about AI development happening fast enough that my parents, friends and I could benefit from it, and I am willing to accept a certain but not unbounded amount of risk from speeding up development. I'd currently be in favour of slightly faster development, especially if it could happen in a less distributed way. I feel very nervous about this, however, as I see my beliefs as brittle.

 

I'm also going to risk sharing more internal stuff without coordinating on it, erring on the side of oversharing. There is a chance that other management at Epoch won't endorse these takes.

  • At the management level, we are choosing not to talk about risks or work on risk measurement publicly. If I try to verbalize why, it comes down to a combination of factors: we have different beliefs on AI risk, which makes communicating from a consensus view difficult; we believe that talking about risk would alienate us from stakeholders skeptical of AI risk; and the evidence about risk is below the bar we are comfortable writing about.
  • My sense is that OP is funding us primarily to gather evidence relevant to their personal models. E.g. two senior people at OP particularly praised our algorithmic progress paper because it directly informs their models. They do also care about us producing legible evidence on key topics for policy, such as the software singularity or post-training enhancements. We have had complete editorial control, and I feel confident in rejecting topic suggestions from OP staff when they don't match my vision of what we should be writing about (and have done so in the past).
  • In terms of overall beliefs, we have a mixture of people who are very worried about risk and people who are skeptical of it. I think the more charismatic and outspoken people at Epoch err towards being more skeptical of risks, but no one at Epoch is dismissive of them.
  • Some stakeholders I've talked to have expressed the view that they wish for Epoch AI to gain influence and then communicate publicly about AI risk. I don't feel comfortable with that strategy; one should expect Epoch AI to keep a similar level of communication about risk as we gain influence. We might be willing to talk more about risks if we gather more evidence of risk, or if we build more sophisticated tools to talk about it, but this isn't the niche we are filling or that you should expect us to fill.
  • The podcast is actually a good example here. We talk toward the end about the share of the economy owned by biological humans becoming smaller over time, which is an abstraction we have studied and have moderate confidence in. This is compatible with an AI takeover scenario, but also with a peaceful transition to an AI-dominated society. This is the kind of communication about risks you can expect from Epoch: relying more on abstractions we have studied than on stories we don't have confidence in.
  • The overall theory of change of Epoch AI is that having reliable evidence on AI will help raise the standards of conversation and decision-making elsewhere. To be maximally clear, we are willing to make some tradeoffs in service of that mission, such as publishing work like FrontierMath and our distributed training paper that plausibly speeds up AI development.
Liability regimes for AI
Jsevillamol · 1y · 72

The ability to pay liability is important to factor in, and this illustrates it well. For the largest prosaic catastrophes this might well be the dominant consideration.
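
To make the ability-to-pay point concrete, here is a tiny numeric sketch with made-up numbers (not figures from the post):

```python
# Toy illustration with assumed numbers: liability only internalises harm up to
# what the developer can actually pay, so for the largest harms it stops binding.
harm_billion = 50.0    # hypothetical damages from a large catastrophe ($bn)
assets_billion = 5.0   # hypothetical developer assets available to pay ($bn)

internalised = min(harm_billion, assets_billion)
print(f"Internalised ${internalised:.0f}bn of ${harm_billion:.0f}bn in harm "
      f"({internalised / harm_billion:.0%})")  # -> 10%; the rest is externalised
```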

For smaller risks, I suspect that in practice mitigation, transaction and prosecution costs are what dominate the calculus of who should bear the liability, both in AI and more generally.

Towards more cooperative AI safety strategies
Jsevillamol · 1y · 70

What's the FATE community? Fair AI and Tech Ethics?

Parameter counts in Machine Learning
Jsevillamol · 1y · Ω120

We have conveniently just updated our database if anyone wants to investigate this further!
https://epochai.org/data/notable-ai-models

We might be missing some key feature of AI takeoff; it'll probably seem like "we could've seen this coming"
Jsevillamol · 1y · 154

Here is a "predictable surprise" I don't see discussed often: given the advantages of scale and centralisation for training, it does not seem crazy to me that some major AI developers will pool resources in the future and jointly train large AI systems.

Bayesian inference without priors
Jsevillamol · 1y · 156

I've been tempted to do this sometime, but I fear the prior is performing one very important role you are not making explicit: defining the universe of possible hypotheses you consider.

In turn, defining that universe of hypotheses determines what Bayesian updates look like. Here is a problem that arises when you ignore this: https://www.lesswrong.com/posts/R28ppqby8zftndDAM/a-bayesian-aggregation-paradox
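
As a toy illustration of the point (my own example, not taken from the linked post): the same data produces very different posteriors depending on which hypotheses the prior puts any mass on at all.

```python
# Uniform prior over whichever hypotheses are allowed; the "prior" choice that
# matters most here is the hypothesis space itself.
def posterior(hypotheses, likelihood, data):
    weights = {h: likelihood(h, data) for h in hypotheses}
    total = sum(weights.values())
    return {h: round(w / total, 3) for h, w in weights.items()}

def binom_likelihood(p, data):
    heads, flips = data
    return p ** heads * (1 - p) ** (flips - heads)

data = (7, 10)                    # observe 7 heads in 10 flips
universe_a = [0.3, 0.5, 0.7]      # includes a heads-biased coin
universe_b = [0.3, 0.5]           # the heads-biased hypothesis is simply absent

print(posterior(universe_a, binom_likelihood, data))  # most mass on p = 0.7
print(posterior(universe_b, binom_likelihood, data))  # forced to favour p = 0.5
```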

Revisiting algorithmic progress
Jsevillamol · 2y · 20

shrug 

I think this is true to an extent, but a more systematic analysis needs to back this up.

For instance, I recall quantization techniques working much better after a certain scale (though I can't seem to find the reference...). It also seems important to validate that techniques to increase performance apply at large scales. Finally, note that the frontier of scale is growing very fast, so even if these discoveries were made with relatively modest compute compared to the frontier, this is still a tremendous amount of compute!

Wikitag Contributions

Scoring Rules · 4y · (+164/-40)
Writing (communication method) · 4y · (-43)
Posts (sorted by new)

71 · Power laws in Speedrunning and Machine Learning · 2y · 1
35 · Announcing Epoch’s dashboard of key trends and figures in Machine Learning · Ω · 2y · 7
16 · Epoch Impact Report 2022 · 3y · 0
35 · Literature review of TAI timelines · 3y · 7
12 · Injecting some numbers into the AGI debate - by Boaz Barak · 3y · 0
21 · AI Forecasting Research Ideas · 3y · 2
35 · Some research ideas in forecasting · 3y · 2
71 · The longest training run · Ω · 3y · 12
76 · A time-invariant version of Laplace's rule · 3y · 13
97 · Announcing Epoch: A research organization investigating the road to Transformative AI · Ω · 3y · 2