eca · 1y

Really appreciate the response :)

Totally acknowledge the limitations you outlined.

I was aiming to construct an example which would illustrate how the loss metric would break in a black-box setting (where X and Y are too gnarly to visualize). In that case you have no clue that your model implements sin(x), so I don’t see how that could be the goal. In the black-box setting you do get access to the distance between scrubbed y and y_true (loss), and to the distance between scrubbed_y and original_y (my proposal; let’s call it output distance). When you look at loss, it is possible for causal scrubbing to yield an explanation of the model’s performance which, from my perspective, is an obviously bad one, in that it causes the function implemented by the scrubbed model to be radically different from the original.

If that is one of the classes of “importantly false or incomplete hypotheses”, then why not check the predicted ys against each other and favor hypotheses that have both close outputs and low loss?

(I think these converge to the same thing as the original model’s loss goes to zero, but before that point, driving output distance to zero is the only way to get a function equivalent to the original network, I claim.)
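
To make the comparison concrete, here is a minimal sketch of the two quantities, assuming `model` and `scrubbed` are callables for the original and scrubbed networks and using MSE purely as an example; all names here are hypothetical, not from the post:

```python
import numpy as np

def scrubbed_loss(scrubbed, xs, ys_true):
    # The usual check: loss of the scrubbed model against the labels (MSE here).
    return np.mean((scrubbed(xs) - ys_true) ** 2)

def output_distance(model, scrubbed, xs):
    # My proposal: distance between the scrubbed outputs and the original
    # model's outputs at the same inputs, which never references the labels.
    return np.mean((scrubbed(xs) - model(xs)) ** 2)
```

A hypothesis can score well on the first while scoring badly on the second, which is the failure mode I’m worried about.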

eca · 1y

Great stuff! Excited to see this extended and applied. I hope to dive deeper into this series and your follow-up work.

Came to the appendix for 2.2 on metrics, and I still feel curious about the metric choice.

I’m trying to figure out why this is wrong: “loss is not a good basis for a primary metric, even though it’s worth looking at and intuitive, because it hides potentially large and important changes to the X -> Y mapping learned by the network that have equivalent loss. Instead, we should just measure how yscrubbed_i has changed from yhat_i (the original model’s output) at each xi we care about.” I think I might have heard people call this a “function space” view (it’s been a while since I read that stuff), but that wording is confusing given your notation f.
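
Spelled out (notation mine, not from the post), with $\hat{y}_i$ the original model’s output and $y^{\text{scrub}}_i$ the scrubbed model’s output on the same $x_i$, the quantity I have in mind is something like

$$D = \frac{1}{n}\sum_{i=1}^{n} d\!\left(y^{\text{scrub}}_i,\ \hat{y}_i\right),$$

as opposed to the loss-based check $\frac{1}{n}\sum_i \ell\!\left(y^{\text{scrub}}_i,\ y_i\right)$, which only compares against the labels $y_i$.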

Dumb regression example. Suppose my training dataset is scalar (x, y) pairs that almost all fall along y = sin(x). I fit a humungo network N, and when I plot N(x) for all my xs I see a great approximation of sin(x). I pick a weird subset of my data where, instead of y = sin(x), this data is all y = 0 (as far as I can tell this is allowed? I don’t recall restrictions on the training distribution having to match), and use it to compute my MSE loss during scrubbing. I find a hypothesis that recovers 100% of performance! But when I plot it, it looks like cos(x), which (unless I’m tired) has the same expected MSE against those y = 0 labels.
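
(A quick numeric sanity check on my own arithmetic, assuming x is uniform over whole periods; cos(x) is standing in for the scrubbed model here:)

```python
import numpy as np

# Over whole periods, sin(x) and cos(x) both have expected squared error 1/2
# against the y = 0 labels, so the loss can't tell them apart, while the
# output distance between them is about 1.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 2 * np.pi, 100_000)
ys_label = np.zeros_like(xs)                          # the weird y = 0 subset

mse_original = np.mean((np.sin(xs) - ys_label) ** 2)  # ~0.5, original model
mse_scrubbed = np.mean((np.cos(xs) - ys_label) ** 2)  # ~0.5, "scrubbed" model
out_dist = np.mean((np.cos(xs) - np.sin(xs)) ** 2)    # ~1.0, output distance

print(mse_original, mse_scrubbed, out_dist)
```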

I probably want to know if my subnetwork is actually computing a very different y for the same exact x, right? Even if it happens to have a low, equal, or even better loss?

(I see several other benefits of comparing the original model’s output against the scrubbed model’s output directly, for instance allowing application to data which is drawn from your target distribution but not labelled.)

Even if this is correct, I doubt it matters much right now compared to the other immediate priorities for this work, but I’d hope someone is thinking about it, and/or that I can become less confused about why the loss metric is justified.