Nathan Helm-Burger

AI alignment researcher, ML engineer. Master's in Neuroscience.

I believe that cheap and broadly competent AGI is attainable and will be built soon. This leads me to have timelines of around 2024-2027. Here's an interview I gave recently about my current research agenda. I think the best path forward to alignment is through safe, contained testing on models designed from the ground up for alignability, trained on censored data (simulations with no mention of humans or computer technology). I think that current mainstream ML technology is close to a threshold of competence beyond which it will be capable of recursive self-improvement, that this automated process will mine neuroscience for insights, and that it will quickly become far more effective and efficient. It would be quite bad for humanity if this happened in an uncontrolled, uncensored, un-sandboxed situation, so I am trying to warn the world about this possibility.

See my prediction markets here:

 https://manifold.markets/NathanHelmBurger/will-gpt5-be-capable-of-recursive-s?r=TmF0aGFuSGVsbUJ1cmdlcg 

I also think that current AI models pose misuse risks, which may continue to get worse as models get more capable, and that this could potentially result in catastrophic suffering if we fail to regulate this.

I now work for SecureBio on AI-Evals.

Relevant quotes:

"There is a powerful effect to making a goal into someone’s full-time job: it becomes their identity. Safety engineering became its own subdiscipline, and these engineers saw it as their professional duty to reduce injury rates. They bristled at the suggestion that accidents were largely unavoidable, coming to suspect the opposite: that almost all accidents were avoidable, given the right tools, environment, and training." https://www.lesswrong.com/posts/DQKgYhEYP86PLW7tZ/how-factories-were-made-safe 

 

"The prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense. A great deal of new political thinking will be necessary if utter disaster is to be averted." - Bertrand Russel, The Bomb and Civilization 1945.08.18

 

"For progress, there is no cure. Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration. The only safety possible is relative, and it lies in an intelligent exercise of day-to-day judgment." - John von Neumann

 

"I believe that the creation of greater than human intelligence will occur during the next thirty years.  (Charles Platt has pointed out the AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a         relative-time ambiguity, let me more specific: I'll be surprised if this event occurs before 2005 or after 2030.)" - Vernor Vinge, Singularity

Comments

Came here to say basically the same thing Zach Stein-Perlman said in his comment, "This isn't taking AGI seriously." The definition of AGI, in my mind, is that it will be an actor at least on par with humans. And it is highly likely to be radically superhuman in at least a few important ways (e.g. superhuman speed, rapid replication, knowledge-merging).

I think it's highly unlikely that there is anything but a brief period (<5 years) where there is superhuman AGI but humanity is fully in control. We will quickly move to a regime where there are independent digital entities, and then to one where the independent digital entities have more power than all of humanity and its controlled AGIs put together. Either these new digital superbeings love us and we flourish (including potentially being uplifted into their modality via BCI, uploading, and intelligence enhancement), or we perish.

Also, both human and deep neural net agents are somewhat stochastic, so they may be randomly intermittently exploitable.

I've been thinking about this, especially since Rohin has been bringing it up frequently in recent months.

I think there are potentially win-win alignment-and-capabilities advances which can be sought. I think having a purity-based "keep-my-own-hands-clean" mentality around avoiding anything that helps capabilities is a failure mode of AI safety researchers.

Win-win solutions are much more likely to actually get deployed, and thus have higher expected value.

Some ask, "What should the US government have done instead?"

Here's an answer I like to that question, from max_paperclips:

https://x.com/max_paperclips/status/1909085803978035357

https://x.com/max_paperclips/status/1907946171290775844

As for the Llama 4 models... it's true that it's too soon to be sure, but the pattern sure looks like they are on trend with the previous Llama versions 2 and 3. I've been working with 2 and 3 a bunch: evals, fine-tuning, and various experimentation. Currently I'm working with the 70B Llama 3 R1 distill plus the 32B Qwen R1 distill. The 32B Qwen R1 is so much better it's ridiculous. So yeah, it's possible that Llama 4 will be a departure from trend, but I doubt it.

Contrast this with the Gemini trend. Google started back at 1.0 with disproportionately weak models given the engineering and compute they had available. My guess is that this was related to poor internal coordination, and the merger of DeepMind with Google Brain probably contributed. But if you look at the progression from 1.0 to 1.5 to 2.0, there's a clear trend of improving more per month than other groups were. Thus, I was unsurprised when 2.5 turned out to be a leading frontier model. The Llama team has shown no such catch-up trend, so Llama 4 turning out to be as strong as they claim would surprise me a lot.

Yes, that's what I'm arguing: really massive gains in algorithmic efficiency, plus gains in decentralized training, peak capability, and continual learning, though not necessarily all at once. Maybe just enough that you then feel confident scraping together additional resources to pour into your ongoing continual training: renting GPUs from datacenters all around the world (smaller providers like Vast.ai, Runpod, and Lambda Labs, plus marginal amounts from larger providers like AWS and GCP, all rented in the name of a variety of shell companies). The more compute you put in, the better it works; the more money you are able to earn (or convince investors or governments to give you) with the model-so-far, the more compute you can afford to rent....

Not necessarily exactly this story, just something in this direction.
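As a toy illustration of the flywheel I'm gesturing at, here's a minimal sketch; all of the numbers (GPU-hour price, revenue rate, monthly efficiency gain) are made-up assumptions for illustration, not forecasts:

```python
# Toy sketch of the compute -> capability -> money -> compute flywheel described above.
# Every number here is a made-up assumption for illustration, not a forecast.

def simulate_flywheel(months: int = 12,
                      budget: float = 1_000_000.0,        # dollars available to rent GPUs in month 1
                      gpu_hour_price: float = 2.00,       # dollars per rented GPU-hour
                      revenue_per_eff_hour: float = 2.10, # dollars earned/raised per *effective* GPU-hour
                      monthly_efficiency_gain: float = 1.05):  # continual algorithmic improvement
    efficiency = 1.0
    for month in range(1, months + 1):
        gpu_hours = budget / gpu_hour_price              # rent across many small providers
        effective_hours = gpu_hours * efficiency         # better algorithms stretch the same rented hours further
        revenue = revenue_per_eff_hour * effective_hours # stronger model-so-far brings in more money
        print(f"month {month:2d}: rented {gpu_hours:>12,.0f} GPU-h -> next budget ${revenue:>14,.0f}")
        budget = revenue                                 # plow it all back into next month's training
        efficiency *= monthly_efficiency_gain

simulate_flywheel()
```

Even a marginal return per rented GPU-hour compounds once algorithmic efficiency keeps improving month over month.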

By the way, I don't mean to imply that Meta AI doesn't have talented AI researchers working there. The problem is more that the competent minority are so diluted and hampered by bureaucratic parasites that they can't do their jobs properly.

Why does ai-2027 predict China falling behind? Because the next level of compute beyond the current one is going to be hard for DeepSeek to muster. In other words, it predicts that DeepSeek will be behind in 2026 because of hardware deficits in late 2025. If things moved more slowly, and the critical strategic point hit in 2030 instead of 2027, I think it's likely China would have closed the compute gap by then.

I agree with this take, but I think it misses some key alternative possibilities. The failure of the compute-rich Llama models to compete with the compute-poorer but talent- and drive-rich Alibaba and DeepSeek shows that even a substantial compute lead can be squandered. Given that there is a lot of room for algorithmic improvement (as proven by the efficiency of the human brain), determined engineering plus a willingness to experiment, rather than doubling down on currently working tech (as Anthropic, Google DeepMind, and OpenAI seem likely to do), may yield enough of a breakthrough to hit the regime of recursive self-improvement before or around the same time as the compute-rich companies. Once that point is hit, a lead can be gained and maintained through reckless acceleration....

Adopt new things as quickly as the latest model predicts they will work, without pausing for cautious review, and you can move a lot faster than a company proceeding cautiously.

How much faster?

How much compute advantage does the recklessness compensate for?

How reckless will the underdogs be?

These are all open questions in my mind, with large error bars. This is what I think ai-2027 misses in their analysis.

I just want to comment that I think Minsky's Society of Mind is a better overall model of agency than predictive coding. I think predictive coding does a great job of describing the portions of the brain responsible for perceiving and predicting the environment. It also does pretty well at predicting and refining the effects of one's actions on the environment. It doesn't do well at all with describing the remaining key piece: goal setting based on expected-value predictions by competing subagents.

I think there's a fair amount of neuroscience evidence pointing towards human planning processes being made up of subagents arguing for different plans. These subagents are themselves made up of dynamically fluctuating teams of sub-sub-agents, organized according to certain physical parameters of the cortex. So the subagents are kind of like competing political parties that can fracture or merge dynamically to adapt to different contexts.

Also, it's important to keep in mind that the subagents don't just receive maximum reward for being accurate; they receive higher reward when things turn out unexpectedly better than predicted. This slightly complicated surprise-enhanced-reward mechanism is common across mammals and birds and was discovered by behaviorists quite a while back (see reinforcement schedules, which optimize unpredictability to maximize behavior change; also see surprisal and dopamine). So no, not 100% predictive coding, despite that claim persistently being made by the most enthusiastic predictive coding adherents. They argue for it, but I think their arguments try to turn a system that 90% agrees with them into one that 100% agrees with them by adding a bunch of confusing epicycles that don't match the data well.
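To make the distinction concrete, here's a minimal toy sketch (my own illustration, not drawn from any particular neuroscience model or codebase) contrasting a pure prediction-accuracy score with a surprise-enhanced reward signal; the function names and the `surprise_bonus` parameter are hypothetical:

```python
# Toy contrast between "reward for being accurate" and a dopamine-style
# reward-prediction-error signal that is boosted by positive surprise.
# Illustrative sketch only, not a model of any specific brain circuit.

def accuracy_reward(predicted: float, outcome: float) -> float:
    """Pure prediction-accuracy objective: best score when prediction matches outcome."""
    return -abs(outcome - predicted)  # 0 is the maximum, reached only by perfect prediction

def surprise_enhanced_reward(predicted: float, outcome: float, surprise_bonus: float = 0.5) -> float:
    """Reward-prediction-error-style signal: outcomes better than predicted earn extra reward."""
    rpe = outcome - predicted               # reward prediction error (delta)
    bonus = surprise_bonus * max(rpe, 0.0)  # only better-than-expected outcomes get the boost
    return outcome + bonus

if __name__ == "__main__":
    # A subagent that predicted a mediocre outcome but got a great one:
    print(accuracy_reward(predicted=2.0, outcome=5.0))           # -3.0 (penalized for inaccuracy)
    print(surprise_enhanced_reward(predicted=2.0, outcome=5.0))  #  6.5 (extra reward for the upside surprise)

    # A subagent that predicted perfectly:
    print(accuracy_reward(predicted=5.0, outcome=5.0))           #  0.0 (maximal accuracy score)
    print(surprise_enhanced_reward(predicted=5.0, outcome=5.0))  #  5.0 (no surprise, no bonus)
```

The point of the contrast: under the first objective, perfect prediction is the best possible result; under the second, a pleasant surprise beats a perfectly anticipated one.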
