LESSWRONG

Swimmer963 (Miranda Dixon-Luinenburg)

I started posting on Less Wrong in 2011, learned about effective altruism, and four years later landed in the Bay Area. I was an ICU nurse in my past life, did several years of EA direct work in operations roles, and in 2022 spent a year writing for Vox Future Perfect. 

You can find my fiction here: https://archiveofourown.org/users/Swimmer963

Comments (sorted by newest)
My understanding of Anthropic strategy
Swimmer963 (Miranda Dixon-Luinenburg) · 2y · 52

I do think it's fair to consider the work on GPT-3 a failure of judgement and a bad sign about Dario's commitment to alignment, even though at the time (also based on LinkedIn) it sounds like he was still leading other teams focused on safety research. 

(I've separately heard rumors that Dario and the others left because of disagreements with OpenAI leadership over how much to prioritize safety, and maybe partly related to how OpenAI handled the GPT-3 release, but this is definitely in the domain of hearsay and I don't think anything has been shared publicly about it.) 

My understanding of Anthropic strategy
Swimmer963 (Miranda Dixon-Luinenburg) · 2y · 40

Edited first line, which hopefully clarifies this better. 

My understanding of Anthropic strategy
Swimmer963 (Miranda Dixon-Luinenburg) · 2y · 40

It's deliberate that this post covers mostly specifics I learned from Anthropic staff; further speculation will come in a separate later post. I wanted to draw a really clear distinction between "these are things that were said to me about Anthropic by people who have context" (which is, for the most part, people in favor of Anthropic's strategy) and my own personal interpretation and opinion on whether Anthropic's work is net positive, which is filtered through my worldview and which I think most people at Anthropic would disagree with. 

Part two is more critical, so I want to write it with a lot of effort and care; I expect to put it up in a week or two. 

My understanding of Anthropic strategy
Swimmer963 (Miranda Dixon-Luinenburg) · 2y · 71

My sense is that it's been somewhere in between – on some occasions staff have brought up doubts, and the team did delay a decision until they were addressed, but it's hard to judge how much the end result was a different decision from what would have been made otherwise, versus just happening later. 

The sense I've gotten of the culture is compatible with (current) Anthropic being a company that would change its entire strategic direction if staff started coming in with credible arguments that "what if we shouldn't be advancing capabilities?", but I think this hasn't yet been put to the test – people who choose to work at Anthropic are going to be selected for agreeing with the premises behind the Anthropic strategy – and it's hard to know for sure how it would go. 

My understanding of Anthropic strategy
Swimmer963 (Miranda Dixon-Luinenburg) · 2y · 144

Your summary seems fine! 

Why do you need to do all of this on current models? I can see arguments for this, for instance, perhaps certain behaviors emerge in large models that aren’t present in smaller ones.

I think that Anthropic's current work on RL from AI Feedback (RLAIF) and Constitutional AI relies on behaviors that large models exhibit but smaller models don't? (But it'd be neat if someone more knowledgeable than me wanted to chime in on this!) 

My current best understanding is that running state-of-the-art models is expensive in terms of infrastructure and compute, the next generation of models will get even more expensive to train and run, and Anthropic doesn't have (and doesn't expect to realistically be able to get) enough philanthropic funding to work on the current best models, let alone future ones – so they need investment and revenue streams.

There's also a consideration that Anthropic wants to have influence in AI governance/policy spaces, where it helps to have a reputation/credibility as one of the major stakeholders in AI work.

What DALL-E 2 can and cannot do
Swimmer963 (Miranda Dixon-Luinenburg) · 3y · 10

W h a t  that's wild, wow, I would absolutely not have predicted DALL-E could do that! (I'm curious whether it replicates in other instances.) 

What DALL-E 2 can and cannot do
Swimmer963 (Miranda Dixon-Luinenburg) · 3y · 20

Tragically DALL-E still cannot spell, but here you go:

What DALL-E 2 can and cannot do
Swimmer963 (Miranda Dixon-Luinenburg) · 3y · 20

"A group of happy people does Circling and Authentic Relating in a park"

What DALL-E 2 can and cannot do
Swimmer963 (Miranda Dixon-Luinenburg) · 3y · 20

"A Rube Goldberg machine made out of candy, Sigma 85mm f/1.4 high quality photograph"

Wikitag Contributions

Lawful Intelligence · 5y
Likelihood ratio · 5y
Anti-Epistemology · 5y
Expected value · 5y
Crocker's Rules · 5y
History of Less Wrong · 5y
Less Wrong/2006 Articles/Summaries · 5y
Posts

166 · My understanding of Anthropic strategy · 2y · 31
353 · What DALL-E 2 can and cannot do · 3y · 303
122 · Book review: The Checklist Manifesto · 4y · 13
31 · Should doctors be neutral? · 4y · 0
159 · Curing insanity with malaria · 4y · 8
112 · What is operations? · 6y · 9
17 · Swimmer963's Shortform · 6y · 18
71 · Reclaiming Eddie Willers · 6y · 20
53 · Micro feedback loops and learning · 6y · 13
25 · Examples of growth mindset or practice in fiction · 10y · 33