Comments

RGRGRG · 5mo

To confirm: the weights you share, such as 0.26 and 0.23, are each individual entries in the W matrix for
y = Wx?
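
For concreteness, here is the minimal picture I have in mind (the shapes and values below are made up, just to check I'm reading you correctly):

```python
import numpy as np

# Made-up 2x3 example: each scalar entry W[i, j] is the kind of value in question (e.g. 0.26, 0.23).
W = np.array([[0.26, 0.23, -0.10],
              [0.05, -0.31, 0.17]])
x = np.array([1.0, 0.0, 0.0])  # example input
y = W @ x                      # y = Wx; here y[0] == 0.26 and y[1] == 0.05
print(y)
```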

RGRGRG · 6mo

This is a casual thought and by no means something I've thought hard about: I'm curious whether b is a lagging indicator - that is, whether the real magic is going on in the weights, and once the weights go through this change, b catches up to it.

Another speculative thought: let's say we are moving from 4* -> 5* and |W_3| is the new W that is taking on high magnitude. Does this occur because W_3 somehow has enough internal individual weights to jointly look at its two (new) neighbors' W_i's roughly equally?

Does the cosine similarity and/or dot product of this new W_3 with its neighbors grow during the 4* -> 5* transition (and does this occur prior to the change in b)?
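
To make that concrete, something like the following is the measurement I have in mind (the checkpoint data and row indices here are placeholders, not from your setup):

```python
import numpy as np

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Placeholder stand-in for W matrices saved at checkpoints across the 4* -> 5* transition.
checkpoints = [np.random.randn(8, 32) for _ in range(10)]

NEW_ROW, NEIGHBORS = 3, (2, 4)  # indices are illustrative, not taken from the post

for step, W in enumerate(checkpoints):
    sims = [round(cos_sim(W[NEW_ROW], W[n]), 3) for n in NEIGHBORS]
    dots = [round(float(W[NEW_ROW] @ W[n]), 3) for n in NEIGHBORS]
    print(step, sims, dots)
```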

RGRGRG · 6mo

Question about the gif - to me it looks like the phase transition is more like:

4++- to unstable 5+- to 4+- to 5-
(Unstable 5+- seems to have similar loss to 4+-.)

Why do we not count the large red bar as a "-"?

RGRGRG · 7mo

Do you expect similar results (besides the fact that it would take longer to train / cost more) without using LoRA?

RGRGRG · 7mo

If I were to be accepted for this cycle, would I be expected to attend any events in Europe?  To be clear, I could attend all events in and around Berkeley.

RGRGRG · 7mo

What city/country is PIBBSS based out of / where will the retreats be?  (Asking as a Bay Area American without a valid passport).

RGRGRG · 7mo

For any potential funders reading this:  I'd be open to starting an interpretability lab and would love to chat.  I've been full-time on MI for about 4 months - here is some of my work: https://www.lesswrong.com/posts/vGCWzxP8ccAfqsrS3/thoughts-about-the-mechanistic-interpretability-challenge-2

I have a few PhD friends who are working in software jobs they don't like and who would be interested in joining me for a year or longer if funding were in place (even just for the trial period Marius proposes).

My very quick take is that interpretability has yet to understand small language models, and that this is a valuable direction to focus on next. (More details here: https://www.lesswrong.com/posts/ESaTDKcvGdDPT57RW/seeking-feedback-on-my-mechanistic-interpretability-research )

For any potential co-founders reading this: I have applied to a few incubators and VC funds, without any success. I think some applications would be improved if I had a co-founder. If you are potentially interested in co-founding an interpretability startup and you live in the Bay Area, I'd love to meet for coffee, see if we have a shared vision, and potentially apply to some of these incubators together.

RGRGRG · 8mo

I really like your ambitious MI section and I think you hit on a few interesting questions I've come across elsewhere:

Two researchers interpreted a 1-layer transformer network and then I interpreted it differently - there isn't a great way to compare our explanations (or really know how similar vs different our explanations are).

With papers like the Hydra effect demonstrating that similar knowledge can be spread throughout a network, it's not clear whether or how we should analyze impact - can/should we jointly ablate multiple units across different heads at once? (See the rough sketch after these questions.)

I'm personally unsure how to split my time between interpreting small networks vs larger ones.  Should I focus 100% on interpreting 1-2 layer TinyStories LMs or is looking into 16+ layer LLMs valuable at this time?
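
For the joint-ablation question above, here is a rough sketch of the kind of experiment I mean, using TransformerLens hooks on GPT-2 small (my tooling choice; the particular heads are arbitrary examples, not a claim about where any knowledge lives):

```python
# Sketch: jointly zero-ablate multiple attention heads in different layers at once.
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2-small")
tokens = model.to_tokens("The Eiffel Tower is located in")

HEADS_TO_ABLATE = [(9, 6), (10, 7)]  # (layer, head) pairs - illustrative only

def make_zero_hook(head_idx):
    def zero_head(z, hook):
        # z: [batch, pos, head_index, d_head]; zero out one head's output
        z[:, :, head_idx, :] = 0.0
        return z
    return zero_head

fwd_hooks = [(utils.get_act_name("z", layer), make_zero_hook(head))
             for layer, head in HEADS_TO_ABLATE]

clean_logits = model(tokens)
ablated_logits = model.run_with_hooks(tokens, fwd_hooks=fwd_hooks)
# Compare clean vs. jointly-ablated logits to measure the combined effect.
```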

Answer by RGRGRG · Sep 08, 2023

Most weekdays, I set myself the goal of doing twelve focused blocks of 24 minutes of object-level work (my variant on the Pomodoro technique). Once I complete these blocks, I can do whatever I want - whether that be stopping work for the rest of the day, more object-level work, meta work, or anything else.

If you try something like this, I'd recommend setting a goal of doing 6(?) such blocks and then letting yourself do as much or as little meta as you want, then gradually working up to 10-13 blocks.
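
If it helps, the mechanics are as simple as something like this (the block length and count are just my numbers; use whatever works for you):

```python
import time

BLOCK_MINUTES = 24   # my block length; adjust to taste
TARGET_BLOCKS = 6    # start around here and work up toward 10-13

for block in range(1, TARGET_BLOCKS + 1):
    print(f"Block {block}/{TARGET_BLOCKS}: focus for {BLOCK_MINUTES} minutes.")
    time.sleep(BLOCK_MINUTES * 60)
    print(f"Block {block} done.")
    if block < TARGET_BLOCKS:
        input("Take a break, then press Enter to start the next block...")

print("All blocks complete - the rest of the day is yours.")
```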

RGRGRG · 8mo

Over the last 3 months, I've spent some time thinking about mech interp as a for-profit service. I've pitched to one VC firm, interviewed for a few incubators/accelerators (including Y Combinator), sent out some pitch documents, co-founder dated a few potential co-founders, and chatted with potential users and some AI founders.

There are a few issues:

First, as you mention, I'm not sure if mech interp is yet ready to understand models. I recently interpreted a 1-layer model trained on a binary classification function (https://www.lesswrong.com/posts/vGCWzxP8ccAfqsrS3/thoughts-about-the-mechanistic-interpretability-challenge-2) and am currently working on understanding a 1-layer language model (TinyStories-1Layer-21M). TinyStories is (much?) harder than the binary classification network (which took 24 focused days of solo research). This isn't to say I or someone else won't have an idea of how 1-layer models work a few months from now. But even once that happens, we would still want to interpret multi-layer models before being ready to interpret models that are running in production.

Second, outsiders can observe that mech interp might not be far enough along to build a product around.  The feedback I received from the VC firm and YC was that my ideas weren't far enough along.

Third, I personally have not yet been able to find someone I'm excited to be co-founders with. Some people have different visions in terms of safety (some just don't care at all). Others who I do share a vision with, I don't match with for other reasons.

Fourth, I'm not certain that I've yet found that ideal first customer. Some people seem to think interpretability is a nice-to-have, but frequently with language models, if you get a bad output, you can just run it again (keeping a human in the loop). To be clear, I haven't given up on finding that ideal customer; it could be something like government, or that customer might not exist until AI models do something really bad.

Fifth, I'm unsure if I actually want to run a company. I love doing interp research and think I am quite good at it (having, among other things, a software background, a PhD in Robotics, and a lot of practice solving puzzles). I consider myself a 10x+ engineer. At least right now, it seems like I can add more value by doing independent research than by running a company.

For me, the first issue is the main one. Once interp is farther along, I'm open to putting more time into thinking about the other issues. If anyone reading this is potentially interested in chatting, feel free to DM me.
