ryan_greenblatt

I'm the chief scientist at Redwood Research.

15 · ryan_greenblatt's Shortform · Ω · 2y · 323

Comments (sorted by newest)

p.b.'s Shortform
ryan_greenblatt · 1h

I'd guess SWE-bench Verified has an error rate of around 5% or 10%. They didn't have humans baseline the tasks; they just looked at them to see whether they seemed possible.

Wouldn't you expect things to look logistic substantially before full saturation?

p.b.'s Shortform
ryan_greenblatt · 8h

Wouldn't you expect this if we're close to saturating SWE-bench (and some of the tasks are impossible)? Like, you eventually cap out at the max performance for SWE-bench, and this doesn't correspond to an infinite time horizon on literally SWE-bench (you need to include more, longer tasks).
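
(A rough sketch of this saturation effect, to make it concrete: a logistic success curve in log task length with a ceiling from impossible tasks. The ~10% unsolvable fraction echoes the error-rate guess above; the horizon and slope values are purely illustrative, not fitted to anything.)

```python
import numpy as np

# Toy model: per-task success follows a logistic in log(task length), but a
# fraction of benchmark tasks is effectively impossible, capping measured
# performance below 100%. All numbers are illustrative.
ceiling = 0.90   # assume ~10% of tasks are unsolvable/mislabeled
horizon = 2.0    # task length (hours) at which success is 50% of the ceiling
slope = 1.5      # steepness of the logistic in log-length space

def p_success(task_hours):
    logistic = 1.0 / (1.0 + np.exp(-slope * (np.log(horizon) - np.log(task_hours))))
    return ceiling * logistic

for h in [0.1, 0.5, 2.0, 8.0, 32.0]:
    print(f"{h:5.1f}h tasks: {p_success(h):.0%} measured success")

# As a model nears the ceiling, a 50%-success time-horizon fit to this benchmark
# alone blows up, even though nothing about the model's ability on much longer
# tasks has actually been measured.
```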

Eric Neyman's Shortform
ryan_greenblatt · 1d

I agree that probably more work should go into this space. I think it is substantially less tractable than reducing takeover risk in aggregate, but much more neglected right now. I think work in this space has the capacity to be much more zero-sum among existing actors (avoiding AI takeover is only zero-sum with respect to the relevant AIs) and thus can be dodgier.

Patrick Spencer's Shortform
ryan_greenblatt · 4d

Seems relevant post-AGI/ASI (when human labor is totally obsolete and AIs have massively increased energy output), maybe around the same point as when you're starting to build stuff like Dyson swarms or other massive space-based projects. But yeah, IMO probably irrelevant in the current regime (for the next >30 years without AGI/ASI), and current human work in this direction probably doesn't transfer.

I think the case in favor of space-based datacenters is that the energy efficiency of space-based solar looks better: you can have perfect sun 100% of the time and you don't have an atmosphere in the way. But this probably isn't a big enough factor to matter in realistic regimes without insane amounts of automation etc.

Patrick Spencer's Shortform
ryan_greenblatt · 4d (edited)

In addition to getting more energy from a given area, you can also get that energy 100% of the time (no issues with night or clouds). But yeah, I agree, and I don't see how you get 50x efficiency even if transport to space (and assembly/maintenance in space) were free.
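
(Rough back-of-envelope numbers, purely illustrative and not from the thread: comparing average power per square meter of panel in orbit vs. on the ground gives something like a mid-single-digit multiple, not 50x, before even counting launch, assembly, maintenance, and power-transmission losses.)

```python
# Back-of-envelope comparison of average solar power per m^2 of panel.
# All figures are rough, illustrative values, not precise engineering numbers.
space_irradiance = 1361        # W/m^2, solar constant above the atmosphere
space_duty_cycle = 1.0         # ~continuous sun in a suitable orbit

ground_peak = 1000             # W/m^2, typical peak insolation at the surface
ground_capacity_factor = 0.20  # rough utility-scale figure (night, clouds, angle)

space_avg = space_irradiance * space_duty_cycle    # ~1360 W/m^2 average
ground_avg = ground_peak * ground_capacity_factor  # ~200 W/m^2 average

print(f"space / ground average power ratio: {space_avg / ground_avg:.1f}x")
# ~7x under these assumptions, i.e. nowhere near 50x.
```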

Mikhail Samin's Shortform
ryan_greenblatt · 4d

If your theory of change is convincing Anthropic employees or prospective Anthropic employees that they should do something else, I think your current approach isn't going to work. I think you'd probably need to engage much more seriously with people who think that Anthropic is net-positive and argue against their perspective.

Possibly, you should try to have less of a thesis and just document bad things you think Anthropic has done and ways that Anthropic/Anthropic leadership has misled employees (to appease them). This might make your output more useful in practice.

I think it's relatively common for people I encounter to think both:

  • Anthropic leadership is engaged in somewhat scummy appeasement of safety-motivated employees in ways that are misleading or based on kinda obviously motivated reasoning. (This results in safety-motivated employees having a misleading picture of what the organization is doing and why, and of what people expect to happen.)
  • Anthropic is strongly net positive despite this, and working on capabilities there is among the best things you can do.

An underlying part of this view is typically that moderate improvements in effort spent on prosaic safety measures substantially reduce risk. I think you probably strongly disagree with this, and this might be a major crux.

Personally, I agree with what Zach said. I think working on capabilities[1] at Anthropic is probably somewhat net positive, but it would only be the best thing to work on if you had a very strong comparative advantage relative to all the other useful stuff (e.g. safety research). So probably most altruistic people with views similar to mine should do something else. I currently don't feel very confident that capabilities at Anthropic is net positive, and I could imagine swinging towards thinking it is net negative based on additional evidence.


  1. Putting aside strongly differential specific capabilities work. ↩︎

Sonnet 4.5's eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals
ryan_greenblatt · 4d

This seems non-crazy, but my current guess would be that the training for 4.5 (relative to 4) made both verbalized and unverbalized eval awareness more likely. E.g., I don't have much confidence that 4.5 isn't aware it is in an eval even if it doesn't mention this in the CoT (especially if it sometimes mentions this for pretty similar inputs, or sometimes mentions this for this very input). Like, it's good that we know eval awareness is an issue in 4.5, but it's hard to be confident you've avoided this issue in some eval by looking for verbalized eval awareness!

(If we somehow became confident, via some experiments/science, that 4.5 is only eval-aware when it verbalizes this, that could change my perspective.)

If you aggressively optimize for finding training environments that train against misalignment and which don't increase verbalized eval awareness, it does seem plausible that you find environments that make unverbalized eval awareness more likely. I'd propose also not doing this and instead trying to roughly understand what makes eval awareness increase and avoiding this in general.

ryan_greenblatt's Shortform
ryan_greenblatt · 5d

Also, can models now be prompted to trick probes? (My understanding is this doesn't work for relatively small open source models, but maybe SOTA models can now do this?)

shortplav
ryan_greenblatt · 5d

Isn't there already a kinda reasonable solution via something like UDASSA? See e.g. here (and this response to Joe's objections here).

nikola's Shortform
ryan_greenblatt · 5d

Copying over most of what I wrote about this on X/Twitter:

It's great that Anthropic did a detailed risk report on sabotage risk (for Opus 4) and solicited an independent review from METR.

I hope other AI companies do similar analysis+reporting+transparency about risk with this level of rigor and care.

[...]

I think this sort of moderate-access third party review combined with a detailed (and thoughtful) risk report can probably provide a reasonably accurate picture of the current situation with respect to risk (if we assume that AI companies and their employees don't brazenly lie).

That said, it's not yet clear how well this sort of process will work when risk is large (or at least plausibly large) and thus there are higher levels of pressure. Selecting a bad/biased third-party reviewer for this process seems like a particularly large threat.

As far as I can tell, Anthropic did a pretty good job with this risk report (at least procedurally), but I haven't yet read the report in detail.

Posts

136 · What's up with Anthropic predicting AGI by early 2027? · 1d · 4
122 · Sonnet 4.5's eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals · Ω · 5d · 19
88 · Is 90% of code at Anthropic being written by AIs? · 13d · 13
32 · Reducing risk from scheming by studying trained-in scheming behavior · Ω · 19d · 0
41 · Iterated Development and Study of Schemers (IDSS) · Ω · 25d · 1
125 · Plans A, B, C, and D for misalignment risk · Ω · 1mo · 68
244 · Reasons to sell frontier lab equity to donate now rather than later · 1mo · 33
56 · Notes on fatalities from AI takeover · 1mo · 61
47 · Focus transparency on risk reports, not safety cases · Ω · 1mo · 3
40 · Prospects for studying actual schemers · Ω · 2mo · 2

Wikitag Contributions

Anthropic (org) · 10 months ago · (+17/-146)
Frontier AI Companies · a year ago
Frontier AI Companies · a year ago · (+119/-44)
Deceptive Alignment · 2 years ago · (+15/-10)
Deceptive Alignment · 2 years ago · (+53)
Vote Strength · 2 years ago · (+35)
Holden Karnofsky · 3 years ago · (+151/-7)
Squiggle Maximizer (formerly "Paperclip maximizer") · 3 years ago · (+316/-20)