LESSWRONG

Arjun Pitchanathan

Currently a MATS scholar working with Vanessa Kosoy on the learning-theoretic agenda for AI Alignment.

Comments

"Buckle up bucko, this ain't over till it's over."
Arjun Pitchanathan · 3d

Thanks! (I knew enough about Avatar to know what you wrote in your last paragraph, but the rest is new to me.)

"Buckle up bucko, this ain't over till it's over."
Arjun Pitchanathan · 3d

Got any more exposition on what you mean by the different elements in this context?

evhub's Shortform
Arjun Pitchanathan · 16d

If we live in a simulation, should our reality be treated as "fictional" or "non-fictional"?

evhub's Shortform
Arjun Pitchanathan · 16d

What is the actual difference between a "fictional" and a "non-fictional" scenario here? I'm not convinced that it's a failure of general intelligence to disagree with us on this. (It's certainly a failure of alignment.)

Aristotelian Optimization: The Economics of Cameralism
Arjun Pitchanathan · 1mo

I have not read the post, but am confused as to why it is at -3 karma. Would some of the downvoters care to explain their reasoning?

$500 Bounty Problem: Are (Approximately) Deterministic Natural Latents All You Need?
Arjun Pitchanathan · 1mo

Epistemic status: Quick dump of something that might be useful to someone. o3 and Opus 4 independently agree on the numerical calculations for the bolded result below, but I didn't check the calculations myself in any detail.

When we say "roughly", e.g. 2ϵ or 3ϵ would be fine; it may be a judgement call on our part if the bound is much larger than that. 

Let X∼Ber(p). With probability r, set Z:=X, and otherwise draw Z∼Ber(p) independently. Let Y∼Ber(1/2), independent of (X,Z). Let A=X⊕Y and B=Y⊕Z. We will investigate latents for (A,B).

Set Λ:=Y, and note that the stochastic error is ϵ:=I(A;Y|B), because Y induces perfect conditional independence and A and B are symmetric. Now compute the deterministic errors of Λ:=Y, Λ:=0, and Λ:=A, which are equal to H(Y|A), I(A;B), and H(A|B) respectively.

Then it turns out that with p:=0.9, r:=0.44, all of these latents have deterministic error greater than 5ϵ, if you believe this Claude Opus 4 artifact (full chat here, corroboration by o3 here). Conditional on there not being some other kind of latent that gets a better deterministic error, and on the calculations being correct, I would expect that a bit more fiddling around could produce much better bounds, say 10ϵ or more, since I think I've explored very little of the search space.

E.g., one could create more As and Bs by adding more Ys, or more Xs and Zs. Or one could pick the probabilities p, r from some discrete set of possibilities instead of keeping them fixed.
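
If anyone wants to double-check the numbers without trusting the LLM outputs, here is a minimal brute-force sketch in Python (my own quick re-implementation of the construction as stated above, not the code from the artifact; the helper names are made up). It enumerates the eight outcomes of (X,Y,Z) and computes ϵ=I(A;Y|B) along with the three deterministic errors:

```python
import itertools
import math

def entropy(dist):
    """Shannon entropy in bits of a dict mapping outcomes to probabilities."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

def joint(p, r):
    """Joint distribution over (x, y, z, a, b) for the construction above."""
    dist = {}
    for x, y, z in itertools.product([0, 1], repeat=3):
        px = p if x == 1 else 1 - p
        py = 0.5  # Y ~ Ber(1/2), independent of X and Z
        # Z copies X with probability r, else is an independent Ber(p) draw
        pz = r * (z == x) + (1 - r) * (p if z == 1 else 1 - p)
        a, b = x ^ y, y ^ z
        dist[(x, y, z, a, b)] = dist.get((x, y, z, a, b), 0) + px * py * pz
    return dist

def marginal(dist, idx):
    """Marginalize onto the coordinates listed in idx."""
    out = {}
    for key, q in dist.items():
        k = tuple(key[i] for i in idx)
        out[k] = out.get(k, 0) + q
    return out

# coordinate order: x=0, y=1, z=2, a=3, b=4
d = joint(p=0.9, r=0.44)
H = lambda idx: entropy(marginal(d, idx))

eps   = H([3, 4]) + H([1, 4]) - H([1, 3, 4]) - H([4])  # I(A;Y|B), stochastic error
det_Y = H([1, 3]) - H([3])                              # H(Y|A), error of latent Y
det_0 = H([3]) + H([4]) - H([3, 4])                     # I(A;B), error of latent 0
det_A = H([3, 4]) - H([4])                              # H(A|B), error of latent A

print(f"eps = {eps:.4f}")
for name, v in [("H(Y|A)", det_Y), ("I(A;B)", det_0), ("H(A|B)", det_A)]:
    print(f"{name} = {v:.4f}, ratio to eps = {v / eps:.2f}")
```

Tweaking p and r in the call to joint is then an easy way to poke at the search space mentioned above.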

Natural Abstractions: Key Claims, Theorems, and Critiques
Arjun Pitchanathan · 2mo

Yes, thanks!

Natural Abstractions: Key Claims, Theorems, and Critiques
Arjun Pitchanathan · 2mo

representation T of a variable X for variable Y

Hm, I don't understand what Y is supposed to be here.

Which things were you surprised to learn are metaphors?
Arjun Pitchanathan · 2mo

Isn't it the case that when you sing a high note, you feel something higher in your mouth/larynx/whatever, and when you sing a low note, you feel something lower? It seems difficult to tell whether I actually need to do that or have just conditioned myself to because of the metaphor.

Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI
Arjun Pitchanathan · 3mo

If you're reading the text in a two-dimensional visual display, you're giving yourself an advantage over the LLM. You should really be reading it in a one-dimensional format with newline symbols.

(Disclosure: I only skimmed your CoT for a few seconds.)
