>Some day soon "self-driving" will refer to "driving by yourself", as opposed to "autonomous driving".

Interestingly enough, that's what the term meant the first time it appeared in popular culture, in the film Demolition Man (1993).

We have no idea how to make a useful, agent-like general AI that wouldn't want to disable its off switch or otherwise prevent people from using it.

Global crackdown on the tech industry?

>The aliens sent their message using a continuous transmission channel, like the frequency shift of a pulsar relative to its average or something like that. NASA measured this continuous value and stored the result as floating point data.


Then it makes no sense for them to publish it in binary without mentioning the encoding, or without making the encoding part of the puzzle to begin with.

Your result is virtually identical to that of the top-ranking unambiguously permutation-invariant method (an MLP with 256-128-100 units). HOG+SVM does even better, but it's unclear to me whether that meets your criteria.

Could you be more precise about what kinds of algorithms you consider it fair to compare against, and why?

The issue with MNIST is that everything works on MNIST, even algorithms that utterly fail on a marginally more complicated task. It's a solved problem, and the fact that this algorithm solves it tells you nothing about it.

If the code is too rigid or too slow to be tested on larger or different tasks, I suggest F-MNIST (Fashion-MNIST), which uses the exact same data format and has the same number of categories and the same number of data points, but is known to be far more indicative of the true performance of modern machine learning approaches.
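Since Fashion-MNIST reuses MNIST's IDX file format, any working MNIST loader should handle it unchanged. A minimal sketch of an IDX image-file parser, based on the format description published with MNIST (the function name is mine, not from any library):

```python
import struct

def read_idx_images(buf):
    """Parse an IDX image file (the format shared by MNIST and Fashion-MNIST).

    Header: four big-endian uint32s -- magic number (0x00000803 for
    unsigned-byte 3-D image data), image count, rows, cols -- followed by
    raw pixel bytes, one image after another.
    """
    magic, n, rows, cols = struct.unpack(">IIII", buf[:16])
    assert magic == 0x00000803, "not an IDX image file"
    pixels = buf[16:]
    size = rows * cols
    return [pixels[i * size:(i + 1) * size] for i in range(n)]
```

Swapping datasets is then just a matter of pointing the same parser at the Fashion-MNIST files.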

Squared error has been used instead of absolute error in many diverse optimization problems in part because its derivative is proportional to the magnitude of the error, whereas the derivative of the absolute error is constant in magnitude. When you're trying to solve a smooth optimization problem with gradient methods, you generally benefit from loss functions whose gradient tends towards zero along with the error.
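The contrast is easy to see by writing out the two derivatives (a pure-Python sketch; the function names are mine):

```python
def squared_error_grad(pred, target):
    # d/dpred (pred - target)^2 = 2 * (pred - target): shrinks as the error shrinks
    return 2 * (pred - target)

def absolute_error_grad(pred, target):
    # d/dpred |pred - target| = sign(pred - target): constant magnitude
    return 1.0 if pred > target else -1.0 if pred < target else 0.0

for err in (4.0, 1.0, 0.1):
    print(err, squared_error_grad(err, 0.0), absolute_error_grad(err, 0.0))
```

A gradient step on absolute error keeps the same size no matter how close you are to the optimum, so it oscillates around it, while the squared-error step naturally slows down as the error vanishes.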
