Interpretability (ML & AI)

A prominent subfield of interpretability of neural networks is mechanistic interpretability, which attempts to understand how neural networks perform their tasks, for example by finding circuits in transformer models. This can be contrasted with subfields of interpretability that seek to attribute a model's output to parts of a specific input, such as identifying which pixels in an input image caused a computer vision model to output the classification "horse".
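As a concrete instance of the attribution approach described above, here is a minimal gradient-times-input saliency sketch. It assumes a differentiable PyTorch image classifier; the untrained resnet18 and the random placeholder image are illustrative stand-ins, and gradient-times-input is only one of several attribution methods.

```python
# Minimal gradient-times-input saliency sketch (illustrative, not canonical).
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # untrained stand-in; load trained weights in practice
model.eval()

# Placeholder for a preprocessed image batch of shape (1, 3, 224, 224).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
target = logits.argmax(dim=1).item()  # e.g. the index of the "horse" class

# Backpropagate the target logit to get a gradient for every input pixel.
logits[0, target].backward()

# Gradient * input scores each pixel's influence on the chosen output;
# summing over the three color channels yields a 224x224 heatmap.
saliency = (image.grad * image).abs().sum(dim=1).squeeze(0)
```

With trained weights and a real image, the resulting heatmap is exactly the kind of per-pixel attribution the paragraph above describes.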

Transparency and interpretability refer to the degree to which the decision processes and inner workings of AI and machine learning systems can be understood by humans or other outside observers.

Present-day machine learning systems are typically not very transparent or interpretable. You can use a model's output, but the model can't tell you why it made that output. This makes it hard to determine the cause of biases in ML models.
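To make this transparency gap concrete, here is a small sketch assuming scikit-learn and its bundled breast-cancer dataset (both illustrative choices): a logistic regression can point to the per-feature weights behind its decisions, while a neural network trained on the same data offers no comparably direct account of why it produced a given output.

```python
# Sketch: an interpretable model vs. an opaque one (illustrative choices).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target

# A logistic regression can "tell you why": each learned coefficient is a
# feature's direct, inspectable contribution to the decision.
linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
linear.fit(X, y)
coefs = linear.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:5]
for name, weight in top:
    print(f"{name}: {weight:+.2f}")

# An MLP has weights too (accessible via its coefs_ attribute), but they do
# not map to any human-readable reason for an individual prediction.
mlp = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))
mlp.fit(X, y)
```

Reading weights off a linear model is the simplest case; recovering any analogous account from a deep network is exactly what the interpretability methods above aim at.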