I've been studying tensor decompositions and approximate tensor formats for half a year. Since learning about tensor networks, I've noticed that drawing them helps me figure out how to implement some linear operations on tensors.
Once I used this to figure out how to implement the backward method of a simple neural network layer (nothing novel, it was for the sake of learning how deep learning frameworks work). Another time I needed to figure out how to implement the forward method of a Conv2d layer whose weight tensor is stored in CP format. After drawing its output as a tensor network diagram, it was clear that I could simply apply a sequence of three Conv2d layers: pointwise, depthwise, pointwise.
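To make that concrete, here is a small sketch of my own (names, shapes, and the rank R are illustrative, not from the post). If the k×k weight tensor is stored in CP format with the spatial dimensions merged, W[o, i, s] ≈ Σ_r A[o, r]·B[i, r]·S[s, r], and we view convolution as a contraction over im2col patches, the CP factors split the big contraction into exactly those three cheap steps:

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, k, rank = 4, 6, 3, 2
batch, n_pos = 2, 5              # batch size, number of output spatial positions

A = rng.normal(size=(c_out, rank))    # output-channel factor
B = rng.normal(size=(c_in, rank))     # input-channel factor
S = rng.normal(size=(k * k, rank))    # merged spatial factor

# Full kernel reconstructed from the CP factors: W[o, i, s] = sum_r A B S
W = np.einsum('or,ir,sr->ois', A, B, S)

# im2col patches of the input: (batch, c_in, k*k, positions)
x = rng.normal(size=(batch, c_in, k * k, n_pos))

# One big convolution with the full kernel
y_full = np.einsum('ois,bisp->bop', W, x)

# The same result as three cheap convolutions:
u = np.einsum('ir,bisp->brsp', B, x)   # pointwise: mix c_in channels down to rank
v = np.einsum('sr,brsp->brp', S, u)    # depthwise: spatial contraction per rank-channel
y = np.einsum('or,brp->bop', A, v)     # pointwise: mix rank back up to c_out

assert np.allclose(y_full, y)
```

Each step is a valid Conv2d on its own: the first and last are 1×1 convolutions, and the middle one is a k×k convolution with `groups=rank`, which is where the cost saving comes from.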
I am not saying that you should learn tensor networks; it's probably a lot of buck for not much bang unless you want to work with tensor decompositions and formats.
Inspired by reading Hazard's Shortform Feed -- which I really enjoy -- itself based on Raemon's Shortform feed, I'm making my own. There be thoughts here. Hopefully, this will also get me posting more.