307th

Comments (sorted by newest)

Anthropic's leading researchers acted as moderate accelerationists
307th · 13d · 41

Wow we have a lot of the same thinking!

I've also felt like people who think we're doomed are spending a lot of their effort sabotaging one of our best bets in the case that we are not doomed, with no clear path to victory in the case where they are correct (how would Anthropic slowing down lead to a global stop?).

And yeah I'm also concerned about competition between DeepMind/Anthropic/SSI/OpenAI - in theory they should all be aligned with each other but as far as I can see they aren't acting like it.

As an aside, I think the extreme pro-slowdown view is something of a vocal minority. I met some Pause AI organizers IRL and brought up the points I made in my original comment, expecting pushback, but they agreed, saying they were focused on neutrally enforced slowdowns, e.g. government action.

Anthropic's leading researchers acted as moderate accelerationists
307th · 15d · 30

My point was that even though we already have an extremely reliable recipe for getting an LLM to understand grammar and syntax, we are nowhere near a theoretical guarantee of that. The ask for a theoretical guarantee seems impossible to me, even for much easier things that we already know modern AI can do.

When someone asks for an alignment guarantee I'd like them to demonstrate what they mean by showing a guarantee for some simpler thing - like a syntax guarantee for LLMs. I'm not familiar with SLT but I'll believe it when I see it.

Anthropic's leading researchers acted as moderate accelerationists
307th · 15d · 10

> Just a note here that I'm appreciating our conversation :)  We clearly have very different views right now on what is strategically needed but digging your considered and considerate responses.

Thank you! Same here :)

> How do you account for the problem here that Nvidia's and downstream suppliers' investment in GPU hardware innovation and production capacity also went up as a result of the post-ChatGPT race (to the bottom) between tech companies on developing and releasing their LLM versions?

> I frankly don't know how to model this somewhat soundly. It's damn complex.

I think it's definitely true that AI-specific compute is further along than it would be if the LLM boom hadn't happened. I think the underlying relationship is unaffected, though - earlier LLM development means faster timelines but slower takeoff.

Personally I think slower takeoff is more important than slower timelines, because it means we get more time to work with and understand these proto-AGI systems. On the other hand, for people who see alignment as more of a theoretical problem unrelated to any specific AI system, slower timelines are good because they give theory people more time to work, and takeoff speeds are relatively unimportant.

But I do think the latter view is very misguided. I can imagine a setup for training an LLM in a way that makes it both generally intelligent and aligned; I can't imagine a recipe for alignment that works outside of any particular AI paradigm, or that invents its own paradigm while simultaneously aligning it. I think the reason a lot of theory-pilled people, such as those at MIRI, become doomers is that they try to make that general recipe and predictably fail.

> This is not a very particular view – in terms of the possible lines of reasoning and/or people with epistemically diverse worldviews that end up arriving at this conclusion. I'd be happy to discuss the reasoning I'm working from, in the time that you have.

I think I'd like to have a discussion about whether practical alignment can work at some point, but I think it's a bit outside the scope of the current convo. (I'm referring to the two groups here as 'practical' and 'theoretical' as a rough way to divide things up).

Above and beyond the argument over whether practical or theoretical alignment can work, I think there should be some norm where both sides give the other some credit. In practice I doubt we'll convince each other, but we should still be able to cooperate to some degree.

E.g. for myself I think theoretical approaches that are unrelated to the current AI paradigm are totally doomed, but I support theoretical approaches getting funding because who knows, maybe they're right and I'm wrong.

And on the other side, given that having people at frontier AI labs who care about AI risk is absolutely vital for practical alignment, I take anti-frontier-lab rhetoric as breaking a truce between the two groups in a way that makes AI risk worse. Even if this approach seems doomed to you, if you put some probability on being wrong about that, the cost-benefit analysis should still come out robustly positive for AI-risk-aware people working at frontier labs (including on capabilities).

This is a bit outside the scope of your essay since you focused on leaders at Anthropic who it's definitely fair to say have advanced timelines by some significant amount. But for the marginal worker at a frontier lab who might be discouraged from joining due to X-risk concerns, I think the impact on timelines is very small and the possible impact on AI risk is relatively much larger.

Anthropic's leading researchers acted as moderate accelerationists
307th · 16d · 30

> They made a point at the time of expressing concern about AI risk. But what was the difference they made here?

I think you're right that releasing GPT-3 clearly accelerated timelines with no direct safety benefit, although I think there are indirect safety benefits of AI-risk-aware companies leading the frontier.

You could credibly accuse me of shifting the goalposts here, but in GPT-3 and GPT-4's case I think the sooner they came out the better. Part of the reason the counterfactual world where OpenAI/Anthropic/DeepMind had never been founded and LLMs had never been scaled up seems so bad to me is that not only would none of the leading AI companies care about AI risk, but once LLMs did eventually get scaled up, everything would happen much faster because Moore's law would be further along.

> It does not hinge though on just that view. There are people with very different worldviews (e.g. Yudkowsky, me, Gebru) who strongly disagree on fundamental points – yet still concluded that trying to catch up on 'safety' with current AI companies competing to release increasingly unscoped and complex models used to increasingly automate tasks is not tractable in practice.

Gebru thinks there is no existential risk from AI so I don't really think she counts here. I think your response somewhat confirms my point - maybe people vary on how optimistic they are about alternative theoretical approaches, but the common thread is strong pessimism about the pragmatic alignment work frontier labs are best positioned to do.


> I'm noticing that you are starting from the assumption that it is a tractably solvable problem – particularly by "people who work closely with cutting edge AI and who are using the modern deep learning paradigm".

> A question worth looking into: how can we know whether the long-term problem is actually solvable? Is there a sound basis for believing that there is any algorithm we could build in that would actually keep controlling a continuously learning and self-manufacturing 'AGI' to not cause the extinction of humans (over at least hundreds of years, above some soundly guaranteeable and acceptably high probability floor)?

I agree you won't get such a guarantee, just like we don't have a guarantee that an LLM will learn grammar or syntax. What we can get is something that works reliably in practice. The reason I think it's possible is that a corrigible and non-murderous AGI is a coherent target that we can aim at and that AIs already understand. That doesn't mean we're guaranteed success, mind you, but it seems pretty clearly possible to me.

Anthropic's leading researchers acted as moderate accelerationists
307th · 16d · 50

> I actually doubt there are other general learning techniques out there in math space at all, because I think we're already just doing "approximation of bayesian updating on circuits"

Interesting perspective! I think I agree with this in practice although not in theory (I imagine there are some other ways to make it work, I just think they're very impractical compared to deep learning).

> I don't think I can make reliably true claims about anthropic's effects with the amount of information I have, but their effects seem suspiciously business-success-seeking to me, in a way that seems like it isn't prepared to overcome the financial incentives I think are what mostly kill us anyway.

Part of my frustration is that I agree there are tons of difficult pressures on people at frontier AI companies, and I think sometimes they bow to these pressures. They hedge about AI risk, they shortchange safety efforts, they unnecessarily encourage race dynamics. I view them as being in a vitally important and very difficult position where some mistakes are inevitable, and I view this as just another type of mistake that should be watched for and fixed.

But instead, these mistakes are used as just another rock to throw - any time they do something wrong, real or imagined, people use this as a black mark against them that proves they're corrupt or evil. I think that's both untrue and profoundly unhelpful.

Anthropic's leading researchers acted as moderate accelerationists
307th · 17d · 6330

I am in the camp that thinks it is very good for people concerned about AI risk to be working at the frontier of development. I think it's good to criticize and pressure frontier labs that care, but I really wish that criticism didn't come with the unhelpful and untrue assertion that it would be better if Anthropic hadn't been founded or supported.

The problem, as I argued in this post, is that people massively overweight the cost of accelerating timelines and seem willing to make tremendous sacrifices just to slow things down a small amount. If you advocate that people concerned about AI risk avoid working on AI capabilities, the first-order effect is to filter AI capability researchers so that they care less about AI risk. Slowing progress down is a smaller, second-order effect. But many people seem to take it for granted that completely ceding frontier AI work to people who don't care about AI risk would be preferable because it would slow down timelines! This seems insane to me. How much time would possibly need to be saved for that to be worth it?

To try to get to our crux: I've found that caring significantly about accelerating timelines seems to hinge on a very particular view of alignment where pragmatic approaches by frontier labs are very unlikely to succeed, whereas some alternative theoretical work that is unrelated to modern AI has a high chance of success. I think we can see that here:

> • I skip details of technical safety agendas because these carry little to no weight. As far as I see, there was no groundbreaking safety progress at or before Anthropic that can justify the speed-up that their researchers caused. I also think their minimum necessary aim is intractable (controlling ‘AGI’ enough, in time or ever, to stay safe[4]).

I have the opposite view - successful alignment work is most likely to come out of people who work closely with cutting edge AI and who are using the modern deep learning paradigm. Because of this I think it's great that so many leading AI companies care about AI risk, and I think we would be in a far worse spot if we were in a counterfactual world where OpenAI/DeepMind/Anthropic had never been founded and LLMs had (somehow) not been scaled up yet.

So You Want to Work at a Frontier AI Lab
307th · 3mo · 40

By "it will look like normal deep learning work" I don't mean it will be exactly the same as mainstream capabilities work - e.g. RLHF was both "normal deep learning work" and also notably different from all other RL at the time. Same goes for constitutional AI.

What seems promising to me is paying close attention to how we're training the models and how they behave, thinking about their psychology and how the training influences that psychology, reasoning about how that will change in the next generation.

 

> It seems odd and unlikely to me that the same kind of work (normal deep learning) that looks like it causes a series of major problems (power-seeking, black boxes, emergent goals) when you do a moderate amount of it would wind up solving all of those same problems when you do a lot of it, but I'm not enough of a technical expert to be sure that that's wrong.

What are we comparing deep learning to here? Black box - 100% granted. 

But for the other problems - power-seeking and emergent goals - I think they will be a problem with any AI system, and in fact they are much milder in deep learning than I would have expected. Deep learning is basically short-sighted and interpolative rather than extrapolative, which means that when you train it on some set of goals, it by default pursues those goals in a sensible, short-sighted way. If you train it on poorly formed goals, you can still get bad behaviour, and as it gets smarter we'll have more issues, but LLMs are a very good base to start from - they're highly capable, understand natural language, and aren't power-seeking.

 

In contrast, the doomed theoretical approaches I have in mind are things like provably safe AI. With these approaches you have two problems: 1) a whole new way of doing AI, which won't work, and 2) the theoretical advantage - that if you can precisely specify your alignment target, the system will optimize for it - is in fact a terrible disadvantage, since you won't be able to precisely specify your alignment target.

> Because there are independent, non-technical reasons for people to want to believe that normal deep learning will solve alignment (it means they get to take fun, high-pay, high-status jobs at AI developers without feeling guilty about it)

This is what I mean about selective cynicism! I've heard the exact same argument about theoretical alignment work - "mainstream deep learning is very competitive and hard; alignment work means you get a fun nonprofit research job" - and I don't find it convincing in either case.

So You Want to Work at a Frontier AI Lab
307th · 3mo · 30

> In order to do useful superalignment research, I suspect you sometimes need to warn about or at least openly discuss the serious threats that are posed by increasingly advanced AI, but the business model of frontier labs depends on pretending that none of those threats are actually serious.

I think this is overly cynical. Demis Hassabis, Sam Altman, and Dario Amodei all signed the statement on AI risk:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

They don't talk about it all the time but if someone wants to discuss the serious threats internally, there is plenty of external precedent for them to do so.

> frontier labs are only pretending to try to solve alignment 

This is probably the main driver of our disagreement. I think hands-off theoretical approaches are pretty much guaranteed to fail, and that successful alignment will look like normal deep learning work. I'd guess you feel the opposite (correct me if I'm wrong), which would explain why it looks to you like they aren't really trying and it looks to me like they are.

So You Want to Work at a Frontier AI Lab
307th · 3mo · 40

I think if you do concede that superalignment is tractable at a frontier lab, it is pretty clear that joining and working on alignment has far greater benefits than any cost from the speedup it causes. You could construct probabilities under which that's not true; I just don't think those probabilities would be realistic.

I also think that people who argue against working at a frontier lab are burying the lede. The argument is often phrased as a common-sense proposition that anyone who believes in the possibility of X-risk should agree with. Then you get into the discussion and it turns out that the entire argument is premised on extremely controversial priors that most people who believe in X-risk from AI do not share. I don't mind debating those priors, but it seems like a different conversation - rather than "don't work at a frontier lab", your headline should be "frontier labs will fail at alignment while nonprofits can succeed, here's why".

So You Want to Work at a Frontier AI Lab
307th · 3mo · -3-10

The frontier labs have certainly succeeded at aligning their models. LLMs have achieved a level of alignment people wouldn't have dreamed of 10 years ago.

Now labs are running into issues with the reasoning models, but this doesn't at all seem insurmountable.

Posts

I Would Have Solved Alignment, But I Was Worried That Would Advance Timelines · 125 · 2y · 33