LESSWRONG

gsastry

Comments
Thoughts on the impact of RLHF research
gsastry · 3y · 10

> (While I appreciate many of the investigations in this paper and think it is good to improve our understanding, I don’t think they let us tell what’s up with risk.) This could be the subject of a much longer post and maybe will be discussed in the comments.

Do you mean that they don't tell us what's up with the difference in risks between the measured techniques, or that they don't tell us much about AI risk in general? (I'd at least benefit from learning more about your views here.)

Geometric Rationality is Not VNM Rational
gsastry · 3y · 40

See also: https://www.lesswrong.com/posts/qij9v3YqPfyur2PbX/indexical-uncertainty-and-the-axiom-of-independence for an argument against independence

Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4]
gsastry · 3y · 10

Agreed on (1) and (2). I'm still interested in the counterfactual value of theoretical research in security. One reason is that the "reasoning style" of ELK seems quite similar to that of cryptography, and at least with cryptography we have some track record alongside the development of computer security.

Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4]
gsastry · 3y · 30

> The military and information assurance communities, which are used to dealing with highly adversarial environments, do not search for solutions that render all failures an impossibility.
>
> In information security, practitioners do not look for airtight guarantees of security, but instead try to increase security iteratively as much as possible. Even RSA, the centerpiece of internet encryption, is not provably completely unbreakable (perhaps a superintelligence could find a way to efficiently factor large numbers).

I take your point, and I like the analogy to computer security. But cryptography does seem to have a good record of producing innovations that stem from aiming at rigorous guarantees under certain assumptions, and it has been extremely valuable in improving the state of computer security.

Do you think that this claim is overstated, or just that we should additionally rely on other approaches?
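As an aside on the RSA example in the quote: the sense in which its security rests on factoring can be shown with a toy "textbook" RSA sketch. This is my own illustration, not from the post; the primes and exponents below are tiny hypothetical values, and real deployments use enormous primes plus padding schemes on top.

```python
# Toy "textbook" RSA, purely to illustrate that the scheme's security
# reduces to the hardness of factoring n = p * q. Not secure as written.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1  # inverse exists only when a and m are coprime
    return x % m

p, q = 61, 53            # secret primes (toy sizes)
n = p * q                # public modulus, 3233
phi = (p - 1) * (q - 1)  # totient: computable only if you can factor n
e = 17                   # public exponent
d = modinv(e, phi)       # private exponent, 2753

m = 42
c = pow(m, e, n)          # encrypt with the public key (e, n)
assert pow(c, d, n) == m  # decrypt with the private key d recovers m

# An attacker who can efficiently factor n recovers p and q, hence phi
# and d — that is the hypothetical break alluded to in the quote.
```

The point of the sketch is that everything secret flows from the factorization of `n`, which is why "efficiently factor large numbers" is equivalent to breaking the scheme.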

Argument, intuition, and recursion
gsastry · 5y · 10

Can you recommend some other posts in that reference class?

Comment on decision theory
gsastry · 7y · Ω3100

I agree with both your claims, but maybe with less confidence than you (I also agree with DanielFilan's point below).

Here are two places I can imagine MIRI's intuitions here coming from, and I'm interested in your thoughts on them:

(1) The "idealized reasoner is analogous to a Carnot engine" argument. It seems like you think advanced AI systems will be importantly disanalogous to this idea, and that's not obvious to me.

(2) 'We might care about expected utility maximization / theoretical rationality because there is an important sense in which you are less capable / dumber / irrational if e.g. you are susceptible to money pumps. So advanced agents, since they are advanced, will act closer to ideal agents.'

(I don't have much time to comment, so sorry if the above is confusing.)
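To make the money-pump intuition in (2) concrete, here is a minimal sketch (my own illustration, not from the post, with hypothetical items and fees): an agent with cyclic preferences A > B > C > A will pay a small fee for each trade it strictly prefers, and a trader can cycle it back to its original holdings while draining money.

```python
# Money-pump sketch against intransitive (cyclic) preferences.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def accepts_trade(held, offered):
    # The agent pays a fee whenever it strictly prefers the offered item.
    return (offered, held) in prefers

def run_pump(start="A", rounds=9, fee=1):
    # The trader always offers exactly the item the agent currently prefers.
    cycle = {"A": "C", "C": "B", "B": "A"}
    held, paid = start, 0
    for _ in range(rounds):
        offer = cycle[held]
        if accepts_trade(held, offer):
            held, paid = offer, paid + fee
    return held, paid

held, paid = run_pump()
# After three full cycles the agent holds its original item A but has
# paid 9 units of fee: strictly worse off by its own lights, which is
# the sense in which intransitivity makes an agent exploitable.
```

This is the standard dominance argument: an agent immune to such pumps behaves (in this respect) like an expected-utility maximizer, which is one way to cash out "advanced agents will act closer to ideal agents."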

Comment on decision theory
gsastry · 7y · Ω490

I'm not sure what it means for this work to "not apply" to particular systems. It seems like the claim is that decision theory is a way to understand AI systems in general and reason about what they will do, just as we use other theoretical tools to understand current ML systems. Can you spell this out a bit more? (Note that I'm also not really sure what it means for decision theory to apply to all AI systems: I can imagine kludgy systems where it seems really hard in some sense to understand their behavior with decision theory, but I'm not confident at all)

Mechanistic Transparency for Machine Learning
gsastry · 7y · 30

I'm not sure if this will be helpful or if you've already explored this connection, but the field of abstract interpretation tries to understand the semantics of a computer program without fully executing it. The theme of "trying to understand what a program will do by just examining its source code" is also present in program analysis. If we can understand neural networks as typed functional programs, maybe there's something worth thinking about here.

Mythic Mode
gsastry · 8y · 150

Like some other commenters, I also highly recommend Impro if this post resonates with you.

Readers who are very interested in a more conceptual analysis of what decision making "is" in the narrative framework may want to check out Tempo (by Venkatesh Rao, who writes at Ribbonfarm). Rao takes as axiomatic the memetically derived idea that all our choices are between life scripts that end in our death, and looks at how to make these choices. It's more of an analytical book on strategy (with exercises) than a poetic exemplar of Mythic Mode, but it seems very related to me. In particular, I think it helps with a core question of Mythic Mode: how do you get useful work out of this narrative way of thinking without being led astray? I don't claim to have an answer, but reading Tempo has certainly been useful for this question.

Idea: Monthly Community Thread
gsastry · 8y · 10

I'm still confused about where to post the stuff that I would have posted in the old LW's Open Threads. For example, "What are the best pieces of writing/advice on dealing with 'shoulds'?" is one thing I'd want to post in an Open Thread. I have various other little questions/requests like this.
