Crossposted from my Medium blog.
Recently, I became interested in the topic of Explainable Artificial Intelligence. I was mostly inspired by the Machine Learning Street Talk episode with Christoph Molnar. I decided I needed to learn the basics of the field, and I found this 13-lecture course available for free on YouTube: Reliable and Interpretable Artificial Intelligence, originally held in the Autumn Semester of 2020 at ETH Zurich by Martin Vechev.
In this review I will try to answer the question: is this course worth your time?
I really enjoyed the course and consider it a good investment in my situation (working full time but wanting to transition to AI research with a focus on AI safety). Each lecture was insightful, and every point was very thoroughly explained with multiple rephrasings and examples. I was able to watch some parts of it at increased playback speed, occasionally slowing down when I felt I was losing the thread. According to my personal time-tracking app, I logged 15h in total for watching the course content and around 3h for creating and reviewing Anki cards.
I expected operations like propagating generalized convex polyhedra through multiple layers of a neural net to require quite sophisticated mathematics; it turns out basic linear algebra is enough to understand the concept.
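To illustrate the point, here is a minimal sketch (my own, not taken from the course) of propagating the simplest convex shape, an axis-aligned box of inputs, through an affine layer followed by a ReLU. Nothing beyond matrix products and absolute values is needed; the function names and the toy numbers are my assumptions:

```python
import numpy as np

def propagate_box_linear(l, u, W, b):
    """Propagate the box [l, u] through the affine layer W x + b.

    Standard center/radius trick: the center moves like any point,
    and the radius can only be stretched by |W|.
    """
    center = (l + u) / 2
    radius = (u - l) / 2
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def propagate_box_relu(l, u):
    """ReLU is monotone, so it maps the box endpoint-wise."""
    return np.maximum(l, 0), np.maximum(u, 0)

# Toy example: a 2-neuron input box (input +/- epsilon) pushed through one layer.
W = np.array([[1.0, -1.0], [0.5, 2.0]])
b = np.array([0.0, -1.0])
l, u = np.array([-0.1, 0.9]), np.array([0.1, 1.1])
l, u = propagate_box_linear(l, u, W, b)
l, u = propagate_box_relu(l, u)
print(l, u)  # sound lower/upper bounds on the layer's outputs
```

Chaining these two functions over all layers yields certified bounds on the network's outputs; the fancier relaxations discussed in the course, such as zonotopes or DeepPoly, track tighter shapes using the same kind of arithmetic.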
Although the course assumes the student knows the basic ML concepts, a quick recap is usually provided whenever a given concept is needed, for example for the ReLU activation or the CNN architecture.
To sum up, the course is perfectly approachable for a machine learning enthusiast who is still working on a good overview of the basics.
The course covers the popular topics of adversarial examples and robustness, as well as smoothing classifiers and visualization techniques for computer vision systems.
I think certification methods based on various convex relaxation techniques might be overrepresented in the curriculum, possibly because this is the particular research interest of the lecturer, Martin Vechev. If your goal is to get a good overview of machine learning interpretability in a time-effective manner, you might want to consider skipping lectures 5–7.
However, the important topics of geometric robustness (i.e., defense against adversarial examples based on geometric operations on the input, such as rotation or translation, rather than L2 perturbations) and certified defense (training the network not only to be robust but to be provably robust) might have been difficult to grasp without a proper, comprehensive introduction to more complicated convex relaxation techniques like DeepPoly, so entirely skipping those lectures might not be a smart move after all.
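For a taste of what "certified defense" means in practice, here is a rough sketch (my simplification of IBP-style certified training, not code from the course) of how propagated output bounds, like those from the snippet above, can be turned into a training loss; the helper names and toy values are mine:

```python
import numpy as np

def worst_case_logits(logit_l, logit_u, true_class):
    """Adversary's best logits inside the certified bounds:
    the true class at its lower bound, all others at their upper bound."""
    z = logit_u.copy()
    z[true_class] = logit_l[true_class]
    return z

def certified_loss(logit_l, logit_u, true_class):
    """Cross-entropy on the worst-case logits. Minimizing this pushes
    the network to be provably robust, not just empirically robust."""
    z = worst_case_logits(logit_l, logit_u, true_class)
    z = z - z.max()  # for numerical stability
    log_softmax = z - np.log(np.exp(z).sum())
    return -log_softmax[true_class]

# Assuming logit_l, logit_u came from propagating the input box
# through the whole network (see the previous snippet):
logit_l, logit_u = np.array([1.0, -0.5]), np.array([2.0, 0.5])
print(certified_loss(logit_l, logit_u, true_class=0))
```

If the loss is low even on the worst-case logits, no perturbation inside the input box can flip the prediction, which is exactly the "provably robust" property the lectures build up to.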
For reference, below I provide a more detailed syllabus:
Watching the whole lecture series was definitely a good investment in getting started in Interpretable Machine Learning. The knowledge I gained allows me to keep up with seminar talks like this one. I feel confident that I could pick up a random new article in the field and more or less understand the content, although I haven't really tried that yet.
As a next step to get familiar with the field, I'm thinking of systematically going through the very promising book by Christoph Molnar (including all the practical examples, if I have time).
I didn't find any interesting courses on this topic on Coursera, but I'm considering taking the one covering GANs, mostly because it has graded practical exercises in PyTorch, if I understand the description correctly.
Do you have any positive experiences with publicly available AI-related courses or books (both paid and free)?