ML-related math trick: I find it easier to imagine a 4D tensor, say of dimensions B×F×M×N, as a big B×F matrix whose entries are nested M×N matrices. The nice thing about this, at least for me, is that it makes it easier to imagine applying operations over all the B×F nested matrices in parallel, which is something I've had to think about a number of times doing ML-related programming, e.g. trying to figure out how to write code that applies a 1D convolution-like operation to an entire batch in parallel.
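As a small sketch of this mental model (assuming NumPy; the shapes and kernel are just illustrative), broadcasting treats the leading B×F axes as the "big matrix" and applies an operation to every nested M×N block at once. The second part does a 1D correlation (convolution without the kernel flip) along the last axis for the whole batch in one shot:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

B, F, M, N = 2, 3, 4, 5
x = np.random.rand(B, F, M, N)  # a B×F grid of nested M×N matrices

# Matrix ops broadcast over the leading B×F axes: every nested
# M×N matrix gets multiplied by w in parallel.
w = np.random.rand(N, N)
y = x @ w
assert y.shape == (B, F, M, N)

# A 1D correlation along the last axis, applied to the entire batch at
# once: build sliding windows, then contract each window with the kernel.
kernel = np.array([1.0, 0.0, -1.0])
windows = sliding_window_view(x, len(kernel), axis=-1)  # B×F×M×(N-2)×3
conv = windows @ kernel                                 # B×F×M×(N-2)
```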
Unfortunately I don't know any quantum stuff; I learned these methods for machine learning purposes.
A monograph by Cichocki et al. (part 1, part 2) gives an overview of how tensor decompositions, tensor formats, and tensor networks can be used in machine learning and signal processing. I think it omits some applications, including accelerating and compressing neural networks by applying tensor decompositions to the weights of layers (which also sometimes improves accuracy, probably by reducing overfitting).
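A toy sketch of the weight-compression idea in its simplest (matrix) form, not taken from the monograph: a truncated SVD factors one dense layer's weight matrix into two thin matrices, replacing one matmul with two cheaper ones. Tensor decompositions such as tensor-train generalize this to higher-order weight tensors. The shapes and rank here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))  # dense layer weight matrix
rank = 16                            # illustrative rank choice

# Truncated SVD: W ≈ A @ Bt, with A: 256×16 and Bt: 16×128.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]
Bt = Vt[:rank]

# The layer y = W @ x becomes two cheaper layers: y ≈ A @ (Bt @ x).
params_before = W.size               # 256 * 128 = 32768
params_after = A.size + Bt.size      # 256*16 + 16*128 = 6144
```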
Tensor Decompositions and Applications by Kolda and Bader, 2009
In light of reading Hazard's Shortform Feed -- which I really enjoy -- based on Raemon's Shortform feed, I'm making my own. There be thoughts here. Hopefully, this will also get me posting more.