Jacob_Hilton

Comments

How Well Does RL Scale?
Jacob_Hilton · 6d

Nice observation, and I agree with your calculation that linear episode length growth would account for a worse scaling exponent by a factor of 2 (or more generally, episode length growing with exponent k would account for a worse scaling exponent by a factor of k+1).
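
To spell out the arithmetic behind the factor of k+1 (a minimal sketch, under the reading that episode length grows as a power of the number of episodes): if episode length L grows with the number of episodes N as L ∝ N^k, then cumulative compute is C ∝ sum of L over episodes ∝ N^(k+1), so N ∝ C^(1/(k+1)). If performance improves as a power law N^α in episodes, it improves as C^(α/(k+1)) in compute, i.e., the compute-scaling exponent is worse by a factor of k+1, and linear growth (k = 1) gives the factor of 2.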

Note also that this suggests a potential remedy, namely controlling episode length, but there is less incentive to apply this when data is more of a constraint than compute.

How Well Does RL Scale?
Jacob_Hilton · 7d

Thanks for this insightful analysis!

"But it fits with the extreme information inefficiency of RL training, which (compared to next-token-prediction) receives less than a ten-thousandth as much information to learn from per FLOP of training compute."

If I am interpreting this correctly, there is a subtle mathematical error here: if RL requires a constant factor of 10,000 more compute than pretraining, this only shifts the graph of performance against log(compute); it doesn't change its slope. For RL to have a shallower slope, the information efficiency would have to decrease more quickly over the course of training for RL than for pretraining.

I think there are a few potential reasons why information efficiency might decrease more quickly over the course of training for RL than for pretraining, but it is not so clear-cut:

  • Increased accuracy: you get fewer bits of information from a more biased coin flip than a fairer one, so information efficiency decreases as you approach 100% accuracy (see the numerical sketch after this list). But it's not clear whether this applies more to pretraining or to RL. Note also that in both cases the effect can potentially be alleviated by a curriculum.
  • Longer episodes: assuming RL just has a single binary reward at the end of each episode, information density decreases as episodes get longer. Since harder tasks require longer chains of thought, this one seems to clearly count against RL.
  • Overfitting: if there is a mismatch between the training distribution used for RL and the distribution used to benchmark the model, one might expect the density of information relevant to the benchmark to decrease as the model overfits to the training distribution. I think this one also counts against RL right now, but can be alleviated by improving data quality and quantity.
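
As a rough numerical illustration of the first two bullets (an illustrative sketch only; the success probabilities and episode lengths are made up for illustration, not measured):

```python
import math

def bits_per_episode(p):
    """Shannon entropy (in bits) of a single binary reward with success probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Illustrative (not empirical) success probabilities and episode lengths in tokens.
for p, episode_len in [(0.5, 1_000), (0.9, 1_000), (0.99, 1_000), (0.9, 10_000)]:
    density = bits_per_episode(p) / episode_len  # bits of supervision per generated token
    print(f"p={p:.2f}, episode length={episode_len:6d}: {density:.2e} bits/token")
```

Both effects show up directly: pushing p towards 1 and lengthening episodes each reduce the bits of supervision per token of rollout.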

In particular, I think the fact that overfitting can be mitigated with better data cuts against your empirical observations. Since, as you correctly note, RL compute started from a very small base, it was initially much cheaper to scale up compute than to scale up data. But as RL compute becomes more expensive, it will become comparatively more cost-effective to scale up data. Once spending on both is being scaled up at a similar rate (as is economically inevitable as long as spending continues to increase), we should expect to see some regression towards the pretraining slope in my opinion.

Overall, I think the effect you spotted is real (due to things like episode length), but ultimately won't turn out to be as extreme as you estimated here. Quantitatively, I would guess that RL will look more like a power of 1.5-2 worse than pretraining rather than a power of 3 worse, and there could be certain training regimes (e.g. fixed episode length) where they are closer than that.

Jacob_Hilton's Shortform
Jacob_Hilton · 8d

It also makes the quantitative prediction that a doubling in compute (or compute efficiency) leads to a 2/3 win probability, or around 120 Elo points. (Credit to the Hex paper for this observation.) Under 18-month doublings (per one version of Moore's law), this would be around 800 Elo points per decade, which looks like a bit of an overestimate but similar to the fastest observed rate of progress.
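
For the record, the arithmetic (a quick check rather than anything new): a 2/3 win probability corresponds to an Elo gap Δ satisfying 1/(1 + 10^(−Δ/400)) = 2/3, i.e., 10^(−Δ/400) = 1/2, so Δ = 400·log10(2) ≈ 120. At one doubling per 18 months, a decade contains 120/18 ≈ 6.7 doublings, or roughly 6.7 × 120 ≈ 800 Elo points.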

Jacob_Hilton's Shortform
Jacob_Hilton · 8d

The gradient of σ(−βx) at x = 0 is −β/4, which corresponds to a maximally negative slope of −(ln(2)/4)·β ≈ −17% × β per doubling, where β ≈ 1 is the rightmost column in my table.
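
Spelled out (nothing beyond the chain rule): d/dx σ(−βx) = −β·σ(−βx)(1 − σ(−βx)), which at x = 0 equals −β·(1/2)(1/2) = −β/4 per nat of log task duration. One doubling is ln(2) nats, so the steepest slope is −(ln(2)/4)·β ≈ −0.17·β per doubling.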

Jacob_Hilton's Shortform
Jacob_Hilton · 8d

Yes, unless I messed up, METR's code runs a logistic regression of success against log2(task duration), so my model predicts a raw fitted coefficient (the second column in the table) close to -ln(2) ≈ -0.69.

Jacob_Hilton's Shortform
Jacob_Hilton · 9d

Task duration as a Bradley–Terry score: an alternative to the constant hazard rate model

@Toby_Ord writes about the constant hazard rate model for task duration: a long task can be thought of as a sequence of many short subtasks of fixed difficulty, each of which must be completed to complete the overall task. This explains the approximately sigmoidal relationship between log(task horizon length) and the probability that a given model successfully completes the overall task.

I think this is a useful conceptual framing that explains the data reasonably well. But there is at least one alternative that explains the data about as well, which is to think of the task duration as being similar to a Bradley–Terry score, i.e., an exponential of an Elo rating.

The underlying intuition is that, in addition to having a larger number of subtasks, a longer task also has a higher probability of having a particularly hard subtask. We can crudely approximate the difficulty of a long task by the difficulty of its hardest subtask.

Concretely, consider any fixed random number distribution (e.g. uniform over [0,1]), representing the difficulty of a subtask. Assign to each task T a positive integer n_T, and to each model M a positive integer n_M. To decide whether M successfully completes T, we draw n_T random numbers from our distribution for the task, and n_M random numbers for the model. We then say that the task is completed if the model's largest number exceeds the task's largest number. Thus the probability of completion is

n_M / (n_M + n_T) = σ(log n_M − log n_T),

where σ(x) = 1/(1 + e^(−x)) is the sigmoid function. This explains the sigmoidal relationship observed in Figure 5 of METR's paper.
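
A quick simulation of the toy model (an illustrative sketch, not the code used for the table below) matches this formula:

```python
import random

def completion_prob(n_M, n_T, trials=200_000, seed=0):
    """Monte Carlo estimate of P(model's largest draw exceeds the task's largest draw)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        model_best = max(rng.random() for _ in range(n_M))
        task_best = max(rng.random() for _ in range(n_T))
        wins += model_best > task_best
    return wins / trials

for n_M, n_T in [(1, 1), (3, 1), (10, 30), (100, 50)]:
    print(f"n_M={n_M:3d}, n_T={n_T:3d}: "
          f"simulated={completion_prob(n_M, n_T):.3f}, predicted={n_M / (n_M + n_T):.3f}")
```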

Toby's model produces an exponential relationship, which is similar to, but slightly different from, a sigmoid on a log scale. He argues that his relationship is preferred because it has only one free parameter instead of two. However, our model allows us to determine what one of the parameters of the sigmoidal relationship should be, by assuming that n_T is proportional to the task duration. This predicts that the (negated) coefficient of the sigmoidal relationship should be around 1, assuming the natural log is applied to the task duration. At the very least, for a fixed task distribution, the coefficients should be similar for different models.

We can test this prediction by running the code used to produce Figure 5 to get the coefficient and intercept of the logistic regression.[1] Since the code applies a base-2 logarithm to the task duration, we can negate and divide the coefficient by the natural log of 2 to get the appropriately-scaled coefficient for our purposes:

Agent | Coefficient | Intercept | Coefficient / (-log(2))
Claude 3 Opus | -0.55 | 1.48 | 0.80
Claude 3.5 Sonnet (New) | -0.52 | 2.55 | 0.76
Claude 3.5 Sonnet (Old) | -0.55 | 2.31 | 0.80
Claude 3.7 Sonnet | -0.70 | 4.13 | 1.01
GPT-2 | -0.49 | -2.29 | 0.71
GPT-4 0125 | -0.64 | 1.55 | 0.92
GPT-4 0314 | -0.56 | 1.36 | 0.81
GPT-4 1106 | -0.54 | 1.68 | 0.78
GPT-4 Turbo | -0.66 | 1.79 | 0.95
GPT-4o | -0.57 | 1.82 | 0.82
davinci-002 (GPT-3) | -0.65 | -1.79 | 0.94
gpt-3.5-turbo-instruct | -0.78 | -0.56 | 1.13
human | -0.39 | 2.55 | 0.56
o1 | -0.51 | 2.70 | 0.74
o1-preview | -0.61 | 2.73 | 0.88

Even as the intercept varies considerably, the coefficient (divided by -log(2)) is relatively consistent and generally close to 1. It tends to be a little lower than 1, which is what you would expect if task duration measurements were noisy approximations to the value of n_T, since this would flatten the slope of the sigmoid.

In reality, neither the constant hazard rate model nor the Bradley–Terry model is perfect. The constant hazard rate model fails to account for the fact that models can recover from small errors, while the Bradley–Terry model fails to account for the fact that models can fail because of subtasks that are easier than the hardest subtask.

The Bradley–Terry model has the advantage that it specifically explains why we might expect the relationship to be sigmoidal rather than approximately sigmoidal, and shows why we may need an extra free parameter to account for noisy measurements of task duration. It is also more analogous to the scaling behavior of reinforcement learning observed previously in other settings, such as in Hex and Dota 2, where TrueSkill/Elo rating scales as a power law in compute. See in particular the toy model described in the Hex paper, which inspired the description I gave here:

The way in which performance scales with compute is that an agent with twice as much compute as its opponent can win roughly 2/3 of the time. This behaviour is strikingly similar to that of a toy model where each player chooses as many random numbers as they have compute, and the player with the highest number wins.

The ideal model would probably combine both aspects – that longer tasks have both more subtasks and harder subtasks. But this would have the downside of introducing more free parameters, and the data is likely to be too noisy to fit these in the near future. Overall, sticking to fitting two-parameter sigmoids is probably the way to go for now.

  1. After installing the eval-analysis-public repo, I obtained these numbers by running the following command: mkdir data/wrangled/logistic_fits; python -m src.wrangle.logistic --fig-name headline --runs-file data/external/all_runs.jsonl --output-logistic-fits-file data/wrangled/logistic_fits/headline.csv --release-dates data/external/release_dates.yaml --bootstrap-file data/wrangled/bootstrap/headline.csv. Thanks to Nate Rush for help with this.

Jacob_Hilton's Shortform
Jacob_Hilton · 1mo

Agree about recent results not being driven by formalization, but I'd also guess having ground truth (e.g. numeric answers or reference solutions) remains pretty important, which doesn't scale to the superhuman regime.

Agree that evidence from humans means reaching superhuman capability through purely informal proof is possible in principle. But ML is less robust than humans by default, and AI is already more proficient with formal proof systems than most mathematicians. So informal-to-formal seems like a natural consequence of increased tool use. Not confident in this of course.

I expect easy-to-check software engineering tasks (and tasks that are conceptually similar to easy-to-check tasks) to be pretty close to math, and harder-to-check/fuzzier tasks to lag. Most tasks in the broad economy seem like they fall in the latter category. The economy will likely adapt to make lots of tasks better suited to AI, but that process may be slower than the capability lag anyway. AI R&D might be a different story, but I will leave that to another discussion.

Jacob_Hilton's Shortform
Jacob_Hilton · 1mo

Superhuman math AI will plausibly arrive significantly before broad automation

I think it's plausible that for several years in the late 2020s/early 2030s, we will have AI that is vastly superhuman at formal domains including math, but still underperforms humans at most white-collar jobs (and so world GDP growth remains below 10%/year, say – still enough room for AI to be extraordinarily productive compared to today).

Of course, if there were to be an intelligence explosion on that timescale, then superhuman math AI would be unsurprising. My main point is that superhuman math AI still seems plausible even disregarding feedback loops from automation of AI R&D. On the flip side, a major catastrophe and/or coordinated slowdown could prevent both superhuman math AI and broad automation. Since both of these possibilities are widely discussed elsewhere, I will disregard both AI R&D feedback loops and catastrophe for the purposes of this forecast. (I think this is a very salient possibility on the relevant timescale, but won't justify that here.)

My basic reasons for thinking vastly superhuman math AI is a serious possibility in the next 4–8 years (even absent AI R&D feedback loops and/or catastrophe):

  • Performance in formal domains is verifiable: math problems can be designed to have a unique correct answer, and formal proofs are either valid or invalid. Historically, in domains with cheap, automated supervision signals, only a relatively small amount of research effort has been required to produce superhuman AI (e.g., in board games and video games). There are often other bottlenecks than supervision, most notably exploration and curricula, but these tend to be more surmountable.
  • Recent historical progress in math has been extraordinarily fast: in the last 4 years, AI has gone from struggling with grade school math to achieving an IMO gold medal, with progress at times exceeding almost all forecasters' reasonable expectations. Indeed, much of this progress seems to have been driven by the ability to automatically supervise math, with reasoning models being trained using RL on a substantial amount of math data.
  • Superhuman math AI looks within reach without enormous expense: reaching superhuman ability in a domain requires verifying solutions beyond a human's ability to produce them, and so a static dataset produced by humans isn't enough. (In fact, a temporary slowdown in math progress in the near future seems possible because of this, although I wouldn't bet on it.) But the following two ingredients (plus sufficient scale) seem sufficient for superhuman math AI, and within reach:

    • Automatic problem generation: the ability to generate a diverse enough set of problems such that both (a) most realistic math of interest to humans is within distribution and (b) problem difficulty is granular enough to provide a good curriculum. Current LLMs with careful prompting/fine-tuning may be enough for this.
    • Reliable informal-to-formal translation: solution verifiers need to be robust enough to avoid too much reward hacking, which probably requires natural language problems and solutions to be formalized to some degree (a variety of arrangements seem possible here, but it's hard to see how something purely informal can provide sufficiently scalable supervision, and it's hard to see how something purely formal can capture mathematicians' intuitions about what problems are interesting). This is basically a coding problem, and doesn't seem too far beyond the capabilities of current LLMs. Present-day formalization efforts by humans are challenging, but in large part because of their laboriousness, which AI is excellent at dealing with.

    Note I'm not claiming that there will be discontinuous progress once these ingredients "click into place". Instead, I expect math progress to continue on a fast but relatively continuous trajectory (perhaps with local breakthroughs/temporary slowdowns on the order of a year or two). The above two ingredients don't seem especially responsible for current math capabilities, but could become increasingly relevant as we move towards and into the superhuman regime.

By contrast, some reasons to be skeptical that AI will be automating more than a few percent of the economy by 2033 (still absent AI R&D feedback loops and/or catastrophe):

  • Progress in domains in which performance is hard to verify has been slower: by comparison with the dramatic progress in math, the ability of an AI to manage a small business enterprise is relatively unimpressive. In domains with a mixture of formal and informal problem specifications, such as coding, progress has been similarly fast to math, or perhaps a little slower (as measured by horizon length), but my qualitative impression is that it has been driven by progress on easy-to-verify tasks, with some transfer to hard-to-verify tasks. I expect to continue to see domains lag behind based on the extent to which performance is easy to verify.
  • Possible need for expensive long-horizon data: in domains with fuzzy, informal problem specifications, or requiring expensive or long-horizon feedback from the real world, we will continue to see improvements, since there will be transfer both from pretraining scaling and from more RL on verifiable tasks. But for tasks where this progress is slow despite the task being economically important, it will eventually be worth it to collect expensive long-horizon feedback. However, it might take several years to scale up the necessary infrastructure for this, unlike some clear routes to superhuman math AI, for which all the necessary infrastructure is essentially already in place. This makes a 2–5+ year lag seem quite plausible.
  • Naive revenue extrapolation: one way to get a handle on the potential timescale until broad automation is to extrapolate AI company revenues, which are on the order of tens of billions of dollars per year today, around 0.01% of world GDP. Even using OpenAI's own projections (despite their incentives to make overestimates), which forecast that revenue will grow by a factor of 10 over the next 4 years, and extrapolating them an additional 4 years into the future, gives an estimate of around 1% of world GDP by 2033. AI companies won't capture all the economic value they create, but on the other hand this is a very bullish forecast by ordinary standards.

What would a world with vastly superhuman math AI, but relatively little broad automation, look like? Some possibilities:

  • Radical effect on formal sciences: by "vastly superhuman math AI", I mean something like: you can give an AI a math problem, and it will respond within e.g. a couple of hours with a formal proof or disproof, as long as a human mathematician could have found an informal version of the proof in say 10 years. (Even though I just argued for the plausibility of this, it seems completely wild to comprehend, spelled out explicitly.) I think this would completely upend the formal sciences (math, theoretical computer science and theoretical physics) to say the least. Progress on open problems would be widespread but highly variable, since their difficulty likely ranges from "just out of reach to current mathematicians" to "impossible".
  • Noticeable speed-up of applied sciences: it's not clear that such a dramatic speed-up in the formal sciences would have correspondingly dramatic consequences for the rest of the world, given how abstract much of it is. Cryptography, formal verification and programming languages might be the most consequential areas, followed by areas like experimental physics and computational chemistry. However, in most of the experimental sciences, formal results are not the main bottleneck, so speed-ups would be more dependent on progress on coding, fuzzier tasks, robotics, and so on. Math-heavy theoretical AI alignment research would be significantly sped up, but may still face philosophical hurdles.
  • Broader economy: it's worth emphasizing that even if world GDP growth remains below 10%/year, that still leaves plenty of room for AI to feel "crazy", labor markets to be dramatically affected by ordinary standards, political discussion to be dominated by AI, etc. Note also that this period may be fairly short-lived (e.g., a few years).

Such a scenario is probably poor as an all-things-considered conditional forecast, since I've deliberately focused on a very specific technological change, but it hopefully adds some useful color to my prediction.

Finally, some thoughts on whether pursuing superhuman math AI specifically is a beneficial research direction:

  • Possibility for transfer: there is a significant possibility that math reasoning ability transfers to other capabilities; indeed, we may already be seeing this in today's reasoning models (though I haven't looked at ablation results). That being said, moving into the superhuman regime and digging into specialist areas, math ability will increasingly be driven by carefully-tuned specialist intuitions, especially if pursuing something like the informal-to-formal approach laid out above. Moreover, specialized math ability seems to have limited transfer in humans, and transfer in ML is generally considerably worse than in humans. Overall, this doesn't seem like a dominant consideration.
  • Research pay-offs: a different kind of "transfer" is that pursuit of superhuman math AI would likely lead to general ML research discoveries, clout, PR etc., making it easier to develop other AI capabilities. I think this is an important consideration, and probably the main reason that AI companies have prioritized math capabilities so far, together with tractability. However, pursuing superhuman math AI doesn't seem that different from other capabilities research in this regard, so the question of how good it is in this respect is mostly screened off by how good you think it is to work on capabilities in general (which could itself depend on the context/company).
  • Differential progress: the kinds of scientific progress that superhuman math AI would enable look more defense-oriented than average (e.g., formal verification), and I think the possibility of speeding up theoretical AI alignment research is significant (I work in this area and math AI is already helping).
  • Replaceability: there are strong incentives for AI companies and for individual researchers to pursue superhuman math AI anyway (e.g., the research pay-offs discussed above), which reduces the size (in either direction) of the marginal impact of an individual choosing to work in the area.

Overall, pursuing superhuman math AI seems mildly preferable to working on other capabilities, but not that dissimilar in its effects. It wouldn't be my first choice for most people with the relevant skillset, unless they were committed to working on capabilities anyway.

Jacob_Hilton's Shortform
Jacob_Hilton · 3mo

I'm happy to talk about a theoretical HCAST suite with no bugs and infinitely many tasks of arbitrarily long time horizon, for the sake of argument (even though it is a little tricky to reason about and measuring human performance would be impractical).

I think the notion of an "infinite time horizon" system is a poor abstraction, because it implicitly assumes 100% reliability. Almost any practical, complex system has a small probability of error, even if this probability is too small to measure in practice. Once you stop using this abstraction, the argument doesn't seem to hold up: surely a system that has 99% reliability at million-year tasks has lower than 99% reliability at 10 million-year tasks? This seems true even if a 10 million-year task is nothing more than 10 consecutive million-year tasks, and that seems strictly easier than an average 10 million-year task.
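
(To make the intuition concrete: if success on the 10 consecutive million-year subtasks were independent, 99% reliability on each would give only 0.99^10 ≈ 90% reliability on the combined task, and even with strongly correlated successes the combined reliability can be at most 99%.)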

Jacob_Hilton's Shortform
Jacob_Hilton · 3mo

Against superexponential fits to current time horizon measurements

I think it is unreasonable to put non-trivial weight (e.g. > 5%) on a superexponential fit to METR's 50% time horizon measurements, or similar recently-collected measurements.

To be precise about what I am claiming and what I am not claiming:

  • I am not claiming that these measurements will never exhibit a superexponential trend. In fact, I think a superexponential trend is fairly likely eventually, due to feedback loops from AI speeding up AI R&D. I am claiming that current measurements provide almost no information about such an eventuality, and naively applying a superexponential fit gives a poor forecast.
  • I am not claiming that it is very unlikely for the trend to be faster in the near future than in the near past. I think a good forecast would use an exponential fit, but with wide error bars on the slope of the fit. After all, there are very few datapoints, they are not independent of each other, and there is measurement noise. I am claiming that extrapolating the rate at which the trend is getting faster is unreasonable.
  • My understanding is that AI 2027's forecast is heavily driven by putting substantial weight on such a superexponential fit, in which case my claim may call into question the reliability of this forecast. However, I have not dug into AI 2027's forecast, and am happy to be corrected on this point. My primary concern is with the specific claim I am making rather than how it relates to any particular aggregated forecast.

Note that my argument has significant overlap with this critique of AI 2027, but is focused on what I think is a key crux rather than being a general critique. There has also been some more recent discussion of superexponential fits since the GPT-5 release here, although my points are based on METR's original data. I make no claims of originality and apologize if I missed similar points being made elsewhere.

The argument

METR's data (see Figure 1) exhibits a steeper exponential trend over the last year or so (which I'll call the "1-year trend") than over the last 5 years or so (which I'll call the "5-year trend"). A superexponential fit would extrapolate this to an increasingly steep trend over time. Here is why I think such an extrapolation is unwarranted:

  • There is a straightforward explanation for the 1-year trend that we should expect to be temporary. The most recent datapoints are all reasoning models trained with RL. This is a new technique that scales with compute, and so we should expect there to be rapid initial improvements as compute is scaled from a low starting point. But this compute growth must eventually slow down to the rate at which older methods are growing in compute, once the total cost becomes comparable. This should lead to a leveling off of the 1-year trend to something closer to the 5-year trend, all else being equal.
    • Of course, there could be another new technique that scales with compute, leading to another (potentially overlapping) "bump". But the shape of the current "bump" tells us nothing about the frequency of such advances, so it is an inappropriate basis for such an extrapolation. A better basis for such an extrapolation would be the 5-year trend, which may include past "bumps".
  • Superexponential explanations for the 1-year trend are uncompelling. I have seen two arguments for why we might expect the 1-year trend to be the start of a superexponential trend, and they are both uncompelling to me.
    1. Feedback from AI speeding up AI R&D. I don't think this effect is nearly big enough to have a substantial effect on this graph yet. The trend is most likely being driven by infrastructure scaling and new AI research ideas, neither of which AI seems to be substantially contributing to. Even in areas where AI is contributing more, such as software engineering, METR's uplift study suggests the gains are currently minimal at best.
    2. AI developing meta-skills. From this post:
      "If we take this seriously, we might expect progress in horizon length to be superexponential, as AIs start to figure out the meta-skills that let humans do projects of arbitrary length. That is, we would expect that it requires more new skills to go from a horizon of one second to one day, than it does to go from one year to one hundred thousand years; even though these are similar order-of-magnitude increases, we expect it to be easier to cross the latter gap."
      It is a little hard to argue against this, since it is somewhat vague. But I am unconvinced there is such a thing as a "meta-skill that lets humans do projects of arbitrary length". It seems plausible to me that a project that takes ten million human-years is meaningfully harder than 10 projects that each take a million human-years, due to the need to synthesize the 10 highly intricate million-year sub-projects. To me the argument seems very similar to the following, which is not borne out:
      "We might expect progress in chess ability to be superexponential, as AIs start to figure out the meta-skills (such as tactical ability) required to fully understand how chess pieces can interact. That is, we would expect it to require more new skills to go from an ELO of 2400 to 2500, than it does to go from an ELO of 3400 to 3500."
      At the very least, this argument deserves to be spelled out more carefully if it is to be given much weight.
  • Theoretical considerations favor an exponential fit (added in edit). Theoretically, it should take around twice as much compute to train an AI system with twice the horizon length, since feedback is twice as sparse. (This point was made in the Biological anchors report and is spelled out in more depth in this paper.) Hence exponential compute scaling would imply an exponential fit (see the short sketch after this list). Algorithmic progress matters too, but that has historically followed an exponential trend of improved compute efficiency. Of course, algorithmic progress can be lumpy, so we shouldn't expect an exponential fit to be perfect.
  • Temporary explanations for the 1-year trend are more likely on priors. The time horizon metric has a huge variety of contributing factors, from the inputs to AI development to the details of the task distribution. For any such complex metric, the trend is likely to bounce around based on idiosyncratic factors, which can easily be disrupted and are unlikely to have a directional bias. (To get a quick sense of this, you could browse through some of the graphs on AI Impacts' Discontinuous growth investigation, or even METR's measurements in other domains for something more directly relevant.) So even if I wasn't able to identify the specific idiosyncratic factor that I think is responsible for the 1-year trend, I would expect there to be one.
  • The measurements look more consistent with an exponential fit. I am only eyeballing this, but a straight line fit is reasonably good, and a superexponential fit doesn't jump out as a privileged alternative. Given the complexity penalty of the additional parameters, a superexponential fit seems unjustified based on the data alone. This is not surprising given the small number of datapoints, many of which are based on similar models and are therefore dependent. (Edit: looks like METR's analysis (Appendix D.1) supports this conclusion, but I'm happy to be corrected here if there is a more careful analysis.)
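
To make the theoretical point explicit (a minimal sketch, treating algorithmic progress as a steady exponential efficiency gain): if the horizon length H attainable with training compute C scales roughly linearly, H ∝ E·C for an algorithmic-efficiency factor E, and both grow exponentially in calendar time, C(t) ∝ e^(g·t) and E(t) ∝ e^(h·t), then log H(t) = (g + h)·t + const. That is a straight line of log(horizon) against time, i.e., an exponential fit, whose slope changes only if the growth rate of compute or of algorithmic efficiency changes.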

What do I predict?

In the spirit of sticking my neck out rather than merely criticizing, I will make the following series of point forecasts which I expect to outperform a superexponential fit: just follow an exponential trend, with an appropriate weighting based on recency. If you want to forecast 1 year out, use data from the last year. If you want to forecast 5 years out, use data from the last 5 years. (No doubt it's better to use a decay rather than a cutoff, but you get the idea.) I obviously have very wide error bars on this, but probably not wide enough to include the superexponential fit more than a few years out.
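
For concreteness, here is a minimal sketch of the kind of recency-weighted exponential fit I have in mind (illustrative only: the function, the choice to tie the weighting half-life to the forecast horizon, and the parameter names are just one way to implement the idea above):

```python
import numpy as np

def forecast_horizon(dates, horizons, years_ahead, halflife=None):
    """Fit log(time horizon) ~ intercept + slope * date with exponential recency
    weights, then extrapolate `years_ahead` into the future.

    dates: model release dates in years (e.g. 2024.25)
    horizons: measured 50% time horizons (any consistent unit)
    halflife: recency half-life in years; by default tied to the forecast horizon,
              so forecasting further out leans on a longer stretch of past data.
    """
    dates = np.asarray(dates, dtype=float)
    horizons = np.asarray(horizons, dtype=float)
    if halflife is None:
        halflife = years_ahead
    weights = 0.5 ** ((dates.max() - dates) / halflife)  # newer points count more
    slope, intercept = np.polyfit(dates, np.log(horizons), deg=1, w=weights)
    return float(np.exp(intercept + slope * (dates.max() + years_ahead)))
```

Plugging in METR's published (release date, 50% time horizon) pairs would give the point forecasts; the half-life plays the role of the decay mentioned above.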

As an important caveat, I'm not making a claim about the real-world impact of an AI that achieves a certain time horizon measurement. That is much harder to predict than the measurement itself, since you can't just follow straight lines on graphs.

Posts

  • Jacob_Hilton's Shortform (6mo)
  • A bird's eye view of ARC's research (1y)
  • Backdoors as an analogy for deceptive alignment (1y)
  • Formal verification, heuristic explanations and surprise accounting (1y)
  • ARC is hiring theoretical researchers (2y)
  • The effect of horizon length on scaling laws (3y)
  • Scaling Laws for Reward Model Overoptimization (3y)
  • Common misconceptions about OpenAI (3y)
  • How much alignment data will we need in the long run? (3y)
  • Deep learning curriculum for large language model alignment (3y)