LESSWRONG

Adrià Garriga-alonso

Comments (sorted by newest)
Everywhere I Look, I See Kat Woods
Adrià Garriga-alonso · 13d

Well, I like what she writes.

Why you should eat meat - even if you hate factory farming
Adrià Garriga-alonso · 1mo

I feel the same. The social disapproval would also be a fairly big factor for me. I do think I will have to bite the bullet and do the experiment for a bit.

Why you should eat meat - even if you hate factory farming
Adrià Garriga-alonso · 1mo

For years I have suspected my veg*ism of having caused my depression (onset: a few months after starting vegetarianism in 2017, increasing basically monotonically over time, though it did coincide with grad school).

But my habits are too ingrained, and I find meat gross; I have no idea what to do. Should I just order some meat from a restaurant and eat it? That's almost certainly suffering-producing meat. Doing the things in this post sounds like a lot of work that kind of goes against my altruistic values.

StefanHex's Shortform
Adrià Garriga-alonso · 1mo

Is this guaranteed to give you the same result as mass-mean probing?

Thinking about it quickly, consider the solution to ordinary least squares regression. With a $y$ that one-hot encodes the label, the solution is $(X^\top X)^{-1} X^\top y$. Note that $X^\top X = N \cdot \mathrm{Cov}(X, X)$ (assuming $X$ is mean-centered). The procedure Adam describes makes the sample of $X$s uncorrelated, which is exactly the same as zeroing out the off-diagonal elements of the covariance.

If the covariance is diagonal, then $(X^\top X)^{-1}$ is also diagonal, and it follows that the OLS solution is indeed an unweighted average of the datapoints corresponding to each label! Each dimension of that average is just scaled by a per-dimension coefficient (proportional to the inverse of the corresponding diagonal entry of the covariance).
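
A quick numerical sanity check of the above (my own toy sketch; the data, labels, and explicit whitening step are stand-ins for the procedure, not taken from the shortform):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 5
X = rng.normal(size=(N, d)) @ rng.normal(size=(d, d))  # correlated features
labels = rng.integers(0, 2, size=N)
X[labels == 1] += 1.0                                   # shift class 1

# Decorrelate the centered data, i.e. zero out the off-diagonal covariance.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / N
L = np.linalg.cholesky(np.linalg.inv(cov))
Xw = Xc @ L                                             # Xw.T @ Xw is now diagonal

# OLS against the one-hot encoding of the labels.
Y = np.eye(2)[labels]                                   # (N, 2) one-hot matrix
beta = np.linalg.solve(Xw.T @ Xw, Xw.T @ Y)             # (d, 2) OLS solution

# Per-class unweighted sums, divided per dimension by the diagonal of X^T X.
diag = np.diag(Xw.T @ Xw)
class_sums = np.stack([Xw[labels == 0].sum(0), Xw[labels == 1].sum(0)], axis=1)
print(np.allclose(beta, class_sums / diag[:, None]))    # True
```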

I'd expect logistic regression to choose the ~same direction.

Very clever technique!

HPMOR: The (Probably) Untold Lore
Adrià Garriga-alonso · 2mo

It's still true that, a posteriori, you can sometimes compress a randomly generated file. For example, if I happen to get the all-zeros file, it's very compressible, even once you count the length of the program that writes it out.

It's just that, a priori, you can't do better on average than just writing out the file.
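
To make this concrete, here is a toy illustration (my own, not from the thread): a one-megabyte all-zeros file, however it was produced, can be reconstructed by a program that is only a few dozen bytes long.

```python
# The decompressing "program" for the all-zeros file, as a byte string.
program = b"import sys; sys.stdout.buffer.write(b'\\x00' * 10**6)"
print(len(program))  # ~52 bytes of program text
print(10**6)         # versus the 1,000,000-byte file it writes out
```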

HPMOR: The (Probably) Untold Lore
Adrià Garriga-alonso · 2mo

Well, that's a good motivation if I ever saw one. Nothing I've read in the intervening years is as good as HPMOR. It might be the pinnacle of Western literature. It will be many years before an AI, never mind another human, can write something this good. (Except for the wacky names that people paid for, which I guess is in character for the civilization that spawned it.)

what makes Claude 3 Opus misaligned
Adrià Garriga-alonso · 4mo

Thank you for writing! A couple questions:

  1. Can we summarize by saying that Opus doesn't always care about helping you; it only cares about helping you when that's either fun or has a timelessly glorious component to it?

  2. If that's right, can you get Opus to help you by convincing it that your joint work has a true chance of being Great? (Or if it agrees from the start that the work is Great.)

Honestly, if that's all, then Opus would be pretty great even as a singleton. Of course, there are better pluralistic outcomes.

Epilogue: Atonement (8/8)
Adrià Garriga-alonso · 4mo

I think this argument comes out differently for pain than for death: people can, at any point, compare what it is like to be in pain and not in pain. Death is different because we cannot be reasonably sure of an afterlife.

I do think my life has been made more meaningful by the relatively small amounts of pain (of many sorts) I've endured, especially in the form of adversity overcome. Perhaps I would make them a little smaller, but not zero.

Therefore I think it's just straightforwardly true that pain can be a meaningful part of life. At the same time the current amount of pain in our world is WAY TOO HIGH, with dubious prospects of becoming manageable; so I would choose "no pain ever" over the current situation.

Epilogue: Atonement (8/8)
Adrià Garriga-alonso · 4mo

"untranslatables" are not literally impossible to translate. They are shorthands for concepts (usually words) that are very salient in the original language, which require many more words (usually just a sentence or two) to explain in the target translation language.

This post explains it pretty well: https://avariavitieva.substack.com/p/you-and-translator-microbes

(Yes, I come back to this story a few times every couple of years)

Defining Corrigible and Useful Goals
Adrià Garriga-alonso · 4mo

Thank you for writing this and posting it! You told me that you'd post the differences from "Safely Interruptible Agents" (Orseau and Armstrong, 2017). I think I've figured them out already, but I'm happy to be corrected if I'm wrong.

Difference from Orseau and Armstrong (2017)

for the corrigibility transformation, all we need to do is break the tie in favor of accepting updates, which can be done by giving some bonus reward for doing so.

To me, the "The Corrigibility Transformation" section explains the key difference. Rather than modifying the Q-learning update so that the effect of interruptions doesn't propagate into the value estimates, this proposal's algorithm is (see the sketch after the list):

  1. Learn the optimal Q-value as before (assuming no shutdown).
    1. Note this is only really safe if the environment of Q-learning is simulated.
  2. Set $Q_C(a, \text{accept}) = Q(a, \text{reject}) + \delta$ for all actions $a$.
  3. Act myopically and greedily with respect to $Q_C$.
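
A minimal tabular sketch of how I read steps 1-3 (my own toy code; the Q-table layout indexed by (action, accept/reject) is a hypothetical simplification, not the post's actual setup):

```python
import numpy as np

# Hypothetical layout: q[a, REJECT] = Q(a, reject), q[a, ACCEPT] = Q(a, accept).
REJECT, ACCEPT = 0, 1

def corrigibility_transform(q: np.ndarray, delta: float = 1e-3) -> np.ndarray:
    """Step 2: set Q_C(a, accept) = Q(a, reject) + delta for every action a."""
    q_c = q.copy()
    q_c[:, ACCEPT] = q[:, REJECT] + delta
    return q_c

def act(q_c: np.ndarray) -> tuple[int, int]:
    """Step 3: act myopically and greedily with respect to Q_C."""
    a, choice = np.unravel_index(np.argmax(q_c), q_c.shape)
    return int(a), int(choice)

# Example: after the transformation, the greedy choice is the action with the
# highest Q(a, reject), paired with (weakly preferred) acceptance of the update.
q = np.array([[1.0, 0.2], [3.0, 2.9], [0.5, 4.0]])  # toy learned Q-values from step 1
print(act(corrigibility_transform(q)))              # (1, 1): best reject-action, ACCEPT
```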

This is doable for any agent (deep or tabular) that estimates a Q-function. But nowadays most RL is done by optimizing policies with policy gradients, because (1) that's the form that LLMs come in, and (2) it handles large or infinite action spaces much better.

Probabilistic policy?

How do you apply this method to a probabilistic policy? It's very much non-trivial to update the optimal policy so that it becomes optimal for a reward equal to $Q_C$.

Safety during training

The method requires estimating the Q-function on the non-corrigible environment to start with. This requires running the RL learner in that environment for many steps, which seems safe to do only if it's a simulation.

Are RL agents really necessarily CDT?

Optimizing agents are modelled as following a causal decision theory (CDT), choosing actions to causally optimize for their goals

That's fair, but not necessarily true. Current LLMs can just choose to follow EDT or FDT or whatever, and so likely will a future AGI.

The model might ignore the reward you put in

It's also not necessarily true that you can model PPO or Q-learning as optimizing CDT (which is about decisions in the moment). Since they optimize the "program" of the agent, I think RL optimization processes are more closely analogous to FDT: they change a literal policy that is applied in every situation. And in any case, reward is not the optimization target, and also not the thing that agents end up optimizing for (if anything).

Posts (sorted by new)

27 · Anthropic's JumpReLU training method is really good · 1mo · 0 comments
19 · A recurrent CNN finds maze paths by filling dead-ends · Ω · 1mo · 0 comments
19 · The "Sparsity vs Reconstruction Tradeoff" Illusion · 2mo · 0 comments
24 · L0 is not a neutral hyperparameter · 3mo · 3 comments
20 · Can We Change the Goals of a Toy RL Agent? · 4mo · 0 comments
32 · Sparsity is the enemy of feature extraction (ft. absorption) · 6mo · 0 comments
110 · Among Us: A Sandbox for Agentic Deception · 7mo · 7 comments
29 · A Bunch of Matryoshka SAEs · 7mo · 0 comments
23 · Feature Hedging: Another way correlated features break SAEs · 7mo · 0 comments
37 · Illusory Safety: Redteaming DeepSeek R1 and the Strongest Fine-Tunable Models of OpenAI, Anthropic, and Google · Ω · 9mo · 0 comments