Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc
nthr · 3y · 100

I'm confused about which deep-learning vs. non-deep-learning comparisons you have in mind. Do you have any concrete examples where a deep-learning model and a non-deep-learning model address the same task, and where the non-deep-learning model might seem more interpretable at first blush, but where you argue it isn't?

Your non-deep-learning example is a small causal diagram, which might be a "model," but what does it… do? (Maybe I'm just ignorant here.) Meanwhile, the deep-learning example you allude to is a neural network that more concretely does something: it's an image-recognition algorithm for trees. So I'm not sure how to compare the interpretability of these two examples; they seem like totally different creatures.

I would have thought you wanted to compare, say, a deep-learning image-recognition algorithm with some more old-fashioned, non-deep-learning image-recognition algorithm. Maybe the old-fashioned algorithm looks for trees by finding long vertical edges with brown interiors. It probably doesn't work very well, but it does seem more interpretable, right?
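For concreteness, the kind of old-fashioned detector I'm imagining is something like the toy sketch below. The features and thresholds are invented purely for illustration (not from any real system), but the point is that every step has a human-legible meaning:

```python
import numpy as np

def looks_like_tree(img):
    """Toy hand-written 'tree detector': long vertical edges plus brownish interiors.

    img: HxWx3 uint8 RGB array. All thresholds are made-up illustrative numbers.
    """
    gray = img.mean(axis=2)
    # Horizontal intensity differences pick out vertical edges.
    vert_edges = np.abs(np.diff(gray, axis=1)) > 30
    # A column where most rows contain an edge pixel counts as a "long vertical edge".
    long_edge_cols = vert_edges.sum(axis=0) > 0.5 * img.shape[0]
    # "Brownish" pixels: noticeably more red than blue, some green, not too bright.
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    brown = (r > b + 20) & (g > b) & (r < 200)
    # Call it a tree if there is at least one long vertical edge
    # and a decent fraction of the image is brownish.
    return bool(long_edge_cols.any() and brown.mean() > 0.1)
```

Each condition here is directly inspectable, which is the sense in which this sort of algorithm feels more interpretable, even if it performs badly.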
