Hi all, I am a bit new to the deep learning field. I use CNNs and MLPs to perform side-channel attacks: I train models on power or electromagnetic signals leaking from small devices, which means my training data are very noisy.

The observation that brought me here was that when I increased the training sample size (from 50,000 to 450,000, in steps of 50,000), my model's performance did not improve, and sometimes even decreased (we measure performance with a domain-specific metric, not accuracy or other common ML metrics). At first glance I thought this could be a case of "sample-wise non-monotonicity", which would mean that if I chose a deep model big enough for my sample size, I could escape the non-monotonicity. Unfortunately that was not the case: even with a model that has five times more parameters than my largest training set has samples, performance still did not improve as the training sample size grew. A sketch of the sweep I ran is below.

Now I suspect the cause may be that my data is so noisy. The Nakkiran et al. paper shows, more or less, the impact of label noise, but I am interested to hear your opinion about noise in the inputs themselves.
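For concreteness, here is a minimal sketch of the kind of sweep I am describing, with a toy MLP and synthetic Gaussian-noise "traces" standing in for my real data; the architecture, noise model, and the use of accuracy as the metric are all illustrative placeholders, not my actual setup:

```python
# Minimal sketch: sweep training set size and watch the test metric.
# Everything here (MLP shape, noise level, accuracy metric) is a stand-in.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_FEATURES, N_CLASSES = 100, 256        # hypothetical trace length / key-byte classes
SIZES = range(50_000, 450_001, 50_000)  # the sample sizes I tested

W = torch.randn(N_CLASSES, N_FEATURES)  # fixed class -> leakage-pattern map

def make_noisy_data(n, noise_std=5.0):
    """Synthetic stand-in: a weak class signal buried in heavy Gaussian noise."""
    y = torch.randint(0, N_CLASSES, (n,))
    x = nn.functional.one_hot(y, N_CLASSES).float() @ W
    return x + noise_std * torch.randn(n, N_FEATURES), y

x_test, y_test = make_noisy_data(10_000)

for n in SIZES:
    x, y = make_noisy_data(n)
    model = nn.Sequential(nn.Linear(N_FEATURES, 512), nn.ReLU(),
                          nn.Linear(512, N_CLASSES))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(5):                     # short training, for illustration only
        for i in range(0, n, 1024):
            opt.zero_grad()
            loss = loss_fn(model(x[i:i+1024]), y[i:i+1024])
            loss.backward()
            opt.step()
    with torch.no_grad():
        acc = (model(x_test).argmax(1) == y_test).float().mean().item()
    print(f"n={n}: test accuracy {acc:.3f}")   # stand-in for my actual metric
```

In my real experiments the curve printed by a loop like this stays flat or dips as n grows, which is what prompted the question.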