ZankerH

Comments

A Simple Introduction to Neural Networks

Squared error has been used instead of absolute error in many diverse optimization problems in part because its derivative is proportional to the magnitude of the error, whereas the derivative of the absolute error is constant. When you're trying to solve a smooth optimization problem with gradient methods, you generally benefit from loss functions whose gradient is smooth and tends toward zero along with the error.
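
To make that concrete: for a scalar error e, d(e^2)/de = 2e shrinks along with the error, while d|e|/de = +/-1 everywhere it is defined. A quick sketch in Python:

# Illustration: gradients of squared vs. absolute error for a scalar error e
def grad_squared_error(e):
    return 2 * e                   # proportional to the error; vanishes as e -> 0

def grad_absolute_error(e):
    if e == 0:
        return 0.0                 # subgradient at the kink
    return 1.0 if e > 0 else -1.0  # constant magnitude no matter how small e is

for e in (8.0, 1.0, 0.1):
    print(e, grad_squared_error(e), grad_absolute_error(e))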

Becoming stronger together

Sounds like you need to work on that time preference. Have you considered setting up an accountability system or self-blackmailing to make sure you're not having too much fun?

This is why anti-semitism exists.

Open thread, June 26 - July 2, 2017

Yes, with the possible exception of moral patients with a reasonable likelihood of becoming moral agents in the future.

Open thread, June 26 - July 2, 2017

Meat tastes nice, and I don't view animals as moral agents.

Open thread, June 5 - June 11, 2017

Define "optimal". Optimizing for the utility function of min(my effort), I could misuse more company resources to run random search on.

Open thread, June 5 - June 11, 2017

In which case, the best I can do is 10 lines:

MakeIntVar A
Inc A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
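
For reference, the same program in Python (assuming MakeIntVar starts A at 0), with the running value in the comments; it evaluates to 2314:

a = 0        # MakeIntVar A
a += 1       # Inc A   -> 1
a += 1       # Inc A   -> 2
a = a + a    # A=A+A   -> 4
a = a * a    # A=A*A   -> 16
a += 1       # Inc A   -> 17
a = a + a    # A=A+A   -> 34
a = a * a    # A=A*A   -> 1156
a += 1       # Inc A   -> 1157
a = a + a    # A=A+A   -> 2314
print(a)     # 2314
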
Open thread, June 5 - June 11, 2017

Well, that does complicate things quite a bit. I threw those lines out of my algorithm generator and the frequency of valid programs generated dropped by ~4 orders of magnitude.
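
Roughly, the search looks like this (a simplified sketch, not the full generator; it assumes MakeIntVar initialises to 0, Inc adds 1, Shl shifts left, and takes 2314, the value the posted solutions evaluate to, as the target):

import random

TARGET = 2314   # assumed target: the value the posted solutions evaluate to

# Assumed semantics: MakeIntVar A -> A = 0, Inc A -> A += 1,
# Shl A, k -> A <<= k, A=A+A -> double, A=A*A -> square
OPS = [
    ("Inc A",    lambda a: a + 1),
    ("Shl A, 1", lambda a: a << 1),
    ("Shl A, 5", lambda a: a << 5),
    ("A=A+A",    lambda a: a + a),
    ("A=A*A",    lambda a: a * a),
]

def evaluate(program):
    a = 0                        # MakeIntVar A
    for _, op in program:
        a = op(a)
        if a > TARGET * TARGET:  # prune runaway values
            return None
    return a

def random_search(max_len=10, tries=1_000_000):
    best = None
    for _ in range(tries):
        prog = [random.choice(OPS) for _ in range(random.randint(1, max_len))]
        if evaluate(prog) == TARGET and (best is None or len(prog) < len(best)):
            best = prog
    return best

solution = random_search()
if solution is not None:
    print("MakeIntVar A")
    for text, _ in solution:
        print(text)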

Open thread, June 5 - June 11, 2017

Preliminary solution based on random search

MakeIntVar A
Inc A
Shl A, 5
Inc A
Inc A
A=A*A
Inc A
Shl A, 1

I've hit on a bunch of similar solutions, but 2 * (1 + 34^2) seems to be the common thread.
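
(Assuming MakeIntVar starts A at 0, the program above traces 1 -> 32 -> 33 -> 34 -> 1156 -> 1157 -> 2314, and 2 * (1 + 34^2) = 2 * 1157 = 2314, the same value the 10-line version produces.)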

Open thread, June 5 - June 11, 2017

Define "shortest". Least lines? Smallest file size? Least (characters * nats/char)?
