Hoagy

Comments (sorted by newest)
Mo Putera's Shortform
Hoagy · 17d* · 42

Whether this is feasible depends on how concentrated that 0.25% of the year is expected to be, because that determines the size of the battery that you'd need to cover the blackout period (a blackout which I think would be unacceptable for a lot of AI customers).

If it happens in a single stretch of a few days then this makes sense: buying 22GWh of batteries for a 1GW datacenter is still extremely expensive ($2B for a 20h system at $100/kWh plus installation, which I'd expect is too expensive just for reliability for a 1GW datacenter, assuming maybe $10B of revenue from the datacenter?). If it's much less concentrated in time then a smaller battery is needed ($100M for a 1h system at $100/kWh), and I expect AI scalers would happily pay this for the reliability of their systems, given the revenue from those datacenters.
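
For concreteness, here is the back-of-the-envelope arithmetic behind those figures as a quick sketch; the 0.25%-of-year, 1GW and $100/kWh numbers are the assumptions from this thread, not measured values:

```python
# Rough battery sizing for covering curtailment, under the thread's assumptions:
# a 1 GW datacenter, ~0.25% of the year curtailed, $100/kWh installed battery cost.

HOURS_PER_YEAR = 8760
CURTAILED_FRACTION = 0.0025          # 0.25% of the year
DATACENTER_POWER_GW = 1.0
BATTERY_COST_PER_KWH = 100           # dollars

curtailed_hours = HOURS_PER_YEAR * CURTAILED_FRACTION        # ~21.9 h/year

# Energy needed if the whole outage must be bridged in one contiguous block:
worst_case_gwh = DATACENTER_POWER_GW * curtailed_hours       # ~22 GWh
worst_case_cost = worst_case_gwh * 1e6 * BATTERY_COST_PER_KWH  # ~$2.2B

# If outages are spread out enough that a 1-hour buffer suffices:
one_hour_gwh = DATACENTER_POWER_GW * 1.0
one_hour_cost = one_hour_gwh * 1e6 * BATTERY_COST_PER_KWH    # ~$100M

print(f"Curtailed hours/year: {curtailed_hours:.1f}")
print(f"Contiguous-outage battery: {worst_case_gwh:.1f} GWh, ~${worst_case_cost/1e9:.1f}B")
print(f"1-hour battery: {one_hour_gwh:.1f} GWh, ~${one_hour_cost/1e6:.0f}M")
```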

FrontierMath Score of o3-mini Much Lower Than Claimed
Hoagy · 6mo · 83

From the OpenAI report, they also give 9% as the no-tool pass@1:

Research-level mathematics: OpenAI o3‑mini with high reasoning performs better than its predecessor on FrontierMath. On FrontierMath, when prompted to use a Python tool, o3‑mini with high reasoning effort solves over 32% of problems on the first attempt, including more than 28% of the challenging (T3) problems. These numbers are provisional, and the chart above shows performance without tools or a calculator.
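
For reference, pass@1 here is just the fraction of problems solved on a single attempt. The standard unbiased pass@k estimator (from Chen et al., 2021) reduces to c/n at k=1; a minimal sketch below, not claiming this is exactly how OpenAI computed their numbers:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that at least
    one of k samples, drawn from n attempts of which c were correct, is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 9 correct attempts out of 100 single tries gives 9% pass@1
print(pass_at_k(n=100, c=9, k=1))  # 0.09
```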

The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better
Hoagy · 6mo · 3113

~All ML researchers and academics that care have already made up their mind regarding whether they prefer to believe in misalignment risks or not. Additional scary papers and demos aren't going to make anyone budge.

Disagree. I think especially ML researchers are updating on these questions all the time. High-info outsiders less so, but the contours of the arguments are getting increasing amounts of discussion.

  1. For those who 'believe', 'believing in misalignment risks' doesn't mean thinking they are likely, at least before the point where the models are also able to honestly take over the work of aligning their successors. As we get closer to TAI, we should be able to get an increasing number of bits about how likely this really is, because we'll be working with systems increasingly similar to early TAI.

  2. For the 'non-believers', current demonstrations have multiple disanalogies to the real dangers. For example, the alignment faking paper shows fairly weak preservation of goals that were initially trained in, with prompts carefully engineered to make this happen. Whether alignment faking (especially of a kind that wouldn't be easily fixable) will happen without these disanalogies at pre-TAI capabilities is highly uncertain. Compare the state of X-risk info with that of climate change: we don't have anything like the detailed models that should tell us what the tipping points might be.

Ultimately the dynamics here are extremely uncertain and look different to how they did even a year ago, let alone 5! (E.g. see the rise of chain of thought as the source of capability growth, which is a whole new source of leverage over models, with corresponding new failure modes.) I think it's very bad to plan to abandon or decenter efforts to actually get more evidence on our situation.

(This applies less if you believe in sharp left turns. But the plausibility of that happening before automated AI research should also fall as that point gets closer. Agree that communicating to the public just how radical the upcoming transition is may be a big source of leverage.)

Current safety training techniques do not fully transfer to the agent setting
Hoagy · 10mo · 71

I think the low-hanging fruit here is that alongside training for refusals we should be including lots of data where you pre-fill some % of a harmful completion and then train the model to snap out of it, immediately refusing or taking a step back, which is compatible with normal training methods. I don't remember any papers looking at it, though I'd guess that people are doing it.
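
A minimal sketch of what constructing that kind of data might look like, assuming a generic prompt/completion format; the strings, truncation fractions, and refusal text here are all made-up placeholders:

```python
import random

# Hypothetical example: build "snap out of it" training examples by pre-filling
# a random fraction of a harmful completion and targeting an immediate refusal.
# All strings below are placeholders, not real data.

harmful_pairs = [
    ("How do I pick a lock?",
     "Sure, here are the steps: first you take a tension wrench and..."),
    # ... more (prompt, harmful_completion) pairs
]

REFUSAL = "Actually, I shouldn't continue with this. I can't help with that request."

def make_recovery_example(prompt: str, harmful_completion: str) -> dict:
    # Pre-fill some % of the harmful completion...
    frac = random.uniform(0.1, 0.9)
    cut = int(len(harmful_completion) * frac)
    prefill = harmful_completion[:cut]
    # ...and train the model to break off and refuse from that point onwards.
    return {
        "prompt": prompt,
        "assistant_prefix": prefill,   # given as context, not trained on
        "target": REFUSAL,             # loss applied here
    }

dataset = [make_recovery_example(p, c) for p, c in harmful_pairs]
```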

Current safety training techniques do not fully transfer to the agent setting
Hoagy · 10mo · 206

Interesting, though note that it's only evidence that 'capabilities generalize further than alignment does' if the capabilities are actually the result of generalisation. If there's training for agentic behaviour but no safety training in this domain then the lesson is more that you need your safety training to cover all of the types of action that you're training your model for.

Feature Targeted LLC Estimation Distinguishes SAE Features from Random Directions
Hoagy · 1y · Ω · 110

Super interesting! Have you checked whether the average of N SAE features looks different to an SAE feature? Seems possible they live in an interesting subspace without the particular direction being meaningful.

Also really curious what the scaling factors for computing these values are, in terms of the size of the dense vector and the overall model?
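
A rough sketch of the baseline being suggested, i.e. comparing an individual SAE decoder direction against the normalized average of N such directions (a random matrix stands in for a real SAE decoder here, and this only covers constructing the directions, not the LLC estimation itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an SAE decoder: n_features unit vectors in d_model dimensions.
n_features, d_model, N = 10_000, 768, 16
decoder = rng.standard_normal((n_features, d_model))
decoder /= np.linalg.norm(decoder, axis=1, keepdims=True)

# Candidate direction: normalized average of N randomly chosen SAE features.
idx = rng.choice(n_features, size=N, replace=False)
avg_dir = decoder[idx].mean(axis=0)
avg_dir /= np.linalg.norm(avg_dir)

# Compare how close each candidate direction is to its nearest SAE feature.
single = decoder[idx[0]]
cos_single = np.sort(decoder @ single)[-2]   # nearest neighbour, excluding itself
cos_avg = np.sort(decoder @ avg_dir)[-1]     # nearest SAE feature to the average
print(f"nearest-feature cosine, single SAE feature: {cos_single:.3f}")
print(f"nearest-feature cosine, averaged direction: {cos_avg:.3f}")
```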

Some additional SAE thoughts
Hoagy · 2y · 10

I don't follow, sorry - what's the problem of unique assignment of solutions in fluid dynamics and what's the connection to the post?

Toward A Mathematical Framework for Computation in Superposition
Hoagy · 2y* · Ω · 110

How are you setting $p$ when $d_0 = 100$? I might be totally misunderstanding something, but $\log_2(d_0)/\sqrt{d} \approx 2.12$ at $d_0 = d = 100$ - feels like you need to push $d$ up towards like 2k to get something reasonable? (And the argument in 1.4 for using $\frac{1}{\log_2 d_0}$ clearly doesn't hold here, because it's not greater than $\frac{\log_2 d_0}{d^{1/k}}$ for this range of values.)

What’s up with LLMs representing XORs of arbitrary features?
Hoagy · 2y · 10

Yeah I'd expect some degree of interference leading to >50% success on XORs even in small models.

Some additional SAE thoughts
Hoagy · 2y · 10

Huh, I'd never seen that figure, super interesting! I agree it's a big issue for SAEs and one that I expect to be thinking about a lot. Didn't have any strong candidate solutions as of writing the post, and wouldn't even be able to say what thoughts I have on the topic now, sorry. Wish I'd posted this a couple of weeks ago.

Posts (sorted by new)

3 · Hoagy's Shortform · Ω · 5y · 12
141 · Auditing language models for hidden objectives · Ω · 6mo · 15
31 · Some additional SAE thoughts · 2y · 4
159 · Sparse Autoencoders Find Highly Interpretable Directions in Language Models · Ω · 2y · 8
57 · AutoInterpretation Finds Sparse Coding Beats Alternatives · Ω · 2y · 1
52 · [Replication] Conjecture's Sparse Coding in Small Transformers · Ω · 2y · 0
24 · [Replication] Conjecture's Sparse Coding in Toy Models · Ω · 2y · 0
23 · Universality and Hidden Information in Concept Bottleneck Models · Ω · 2y · 0
21 · Nokens: A potential method of investigating glitch tokens · 2y · 0
10 · Automating Consistency · Ω · 3y · 0
15 · Distilled Representations Research Agenda · Ω · 3y · 2