

Well, this is nice to see! Perhaps a little late, but still good news...

I wouldn't touch this stuff with someone else's bargepole. It looks like it takes the willpower out of starvation, and as the saying goes, you can starve yourself thin, but you can't starve yourself healthy.

I could be convinced, by many years of safety data and a well understood causal mechanism for both obesity and the action of these drugs, that that's wrong and that they really are a panacea. But I am certainly not currently convinced!

The question that needs answering about obesity is 'why on earth are people with enormous excess fat reserves feeling hungry?'. It's like having a car with the boot full of petrol in jerry cans but the 'fuel low' light is blinking. 

depends on facts about physics and psychology


It does, and a superintelligence will understand those facts better than we do.

My basic argument is that there are probably mathematical limits on how fast it is possible to learn.


Doubtless there are! And limits to how much it is possible to learn from given data.

But I think they're surprisingly high, compared to how fast humans and other animals can do it. 

There are theoretical limits to how fast you can multiply numbers, given a certain amount of processor power, but that doesn't mean that I'd back the entirety of human civilization to beat a ZX81 in a multiplication contest.
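To make the multiplication point concrete, here is a throwaway Python sketch (the specific numbers are purely illustrative): even an interpreted language on commodity hardware multiplies thousand-digit numbers near-instantly, a contest no team of humans could come close to winning.

```python
import time

# Two 1000-digit numbers (an illustrative choice).
a = 10**1000 - 1
b = 10**1000 + 1

start = time.perf_counter()
product = a * b
elapsed = time.perf_counter() - start

# (10^1000 - 1)(10^1000 + 1) = 10^2000 - 1: a 2000-digit number, all nines.
assert product == 10**2000 - 1
print(f"multiplied two 1000-digit numbers in {elapsed * 1e6:.1f} microseconds")
```

The existence of an asymptotic lower bound on multiplication tells you almost nothing about where machines sit relative to humans; the same could hold for limits on learning.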

What you need to explain is why learning algorithms are a 'different sort of thing' to multiplication algorithms. 

Maybe our brains are specialized to learning the sorts of things that came in handy when we were animals. 

But I'd be a bit surprised if they were specialized to abstract reasoning or making scientific inferences.

None of RL’s successes, even huge ones like AlphaGo (which beat the world champion at Go) or its successors, came easily. For one thing, the training process was very unstable and very sensitive to slight mistakes. The networks had to be designed with inductive biases specifically tuned to each problem.

And the end result was that there was no generalization. Every problem required you to rethink your approach from scratch. And an AI that mastered one task wouldn’t necessarily learn another one any faster.


I had the distinct impression that AlphaZero (the version of AlphaGo where they removed all the tweaks) could be left alone for an afternoon with the rules of almost any game in the same class as go, chess, shogi, checkers, noughts-and-crosses, connect four, othello, etc., and teach itself up to superhuman performance.

In the case of chess, that involved rediscovering something like 400 years of human chess theory and becoming the strongest player in history, stronger than all previous hand-constructed chess programs.

In the case of go, I am told that it not only rediscovered the whole 2000-year history of go theory, but added previously undiscovered strategies. "Like getting a textbook from the future" is a quote I have heard.

That strikes me as neither slow nor ungeneral.

And there was enough information in the AlphaZero paper that it was replicated and improved on by the LeelaChessZero open-source project, so I don't think there can have been that many special tweaks needed?

This is great. Strong upvote!

Are you claiming that a physically plausible superintelligence couldn't infer the physical laws from a video, or that AIXI couldn't?

Those seem to be different claims and I wonder which of the two you're aiming at?

For example, you might be much smarter than me and a meteorologist, but you'd find it hard to predict the weather a year from now better than I could, if it's a single-shot contest.

Sure, but I'd presumably be quite a lot better at predicting the weather in two days' time.
