I reviewed this paper. It is a good paper, but, at least in my view, more hype than substance. TL;DR:
I make 4 major critiques of the paper:
1. MLPs (Multi-Layer Perceptrons) have learnable activation functions as well. Whether this counts depends on what you call an activation function.
2. The content of the paper does not justify the name Kolmogorov-Arnold networks (KANs). The Kolmogorov-Arnold theorem is basically unused in the paper. A better name would have been spline-activation deep neural networks.
3. KANs are MLPs with spline-b...
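To illustrate point 1: ordinary MLPs can already have learnable activation functions, e.g. PReLU (He et al., 2015), whose negative-side slope is a trained parameter. A minimal NumPy sketch (the toy data and learning rate here are my own illustration, not from the paper):

```python
import numpy as np

# Parametric ReLU: a standard MLP activation with a learnable slope `a`.
def prelu(x, a):
    return np.where(x > 0, x, a * x)

def prelu_grad_a(x, a):
    # d(prelu)/da is x on the negative part, 0 elsewhere
    return np.where(x > 0, 0.0, x)

# Toy fit: recover the slope of the data-generating activation
# by gradient descent on mean squared error.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
target = prelu(x, 0.3)          # data generated with slope 0.3

a, lr = 1.0, 0.1                # start from the wrong slope
for _ in range(200):
    err = prelu(x, a) - target
    a -= lr * np.mean(2 * err * prelu_grad_a(x, a))

print(round(a, 2))              # slope learned back to ~0.3
```

The point is only that "the activation function has trainable parameters" is not by itself what distinguishes KANs from MLPs; the difference lies in where the learnable nonlinearities sit (on edges vs. on nodes) and in the spline parameterization.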