I agree that it would be dangerous.

What I'm arguing is that dividing by resource consumption is an odd way to define intelligence. For example, under this definition, is a mouse more intelligent than an ant? Clearly a mouse has much more optimisation power, but it also has a vastly larger brain. So once you divide out the resource difference, maybe ants are more intelligent than mice? It's not at all clear. That this could even be a possibility runs strongly counter to the everyday meaning of intelligence, as well as definitions given by psychologists (as Tim Tyler pointed out above).
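A toy calculation makes the oddity concrete (a minimal sketch with purely made-up numbers, not measurements of anything):

```python
# Illustrative only: invented values for optimisation power and brain "resources".
# "Intelligence" here is taken to be the ratio, as in the definition being criticised.
agents = {
    # name: (optimisation_power, resources) -- both in arbitrary units
    "mouse": (1000.0, 10_000.0),  # far more raw optimisation power, but a much larger brain
    "ant":   (1.0,    2.0),       # little optimisation power, but a minuscule brain
}

for name, (power, resources) in agents.items():
    print(f"{name}: power/resources = {power / resources:.3f}")

# With these invented numbers the ant's ratio (0.500) beats the mouse's (0.100),
# so the efficiency definition would rank the ant as "more intelligent".
```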

Right, but the problem with this counterexample is that it isn't actually possible. A counterexample that could occur would be much more convincing.

Personally, if a GLUT could cure cancer, cure aging, prove mind-blowing mathematical results, write an award-winning romance novel, take over the world, and expand out to take over the universe... I'd be happy considering it to be extremely intelligent.

Sure, if you had an infinitely big and fast computer. Of course, even then you still wouldn't know what to put in the table. But if we're in infinite theory land, then why not just run AIXI on your infinite computer?
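For readers who haven't met it, AIXI is the incomputable agent that at each step picks the action maximising expected future reward under the universal mixture; roughly, in Hutter's notation,

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q \,:\, U(q, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where $U$ is a universal monotone Turing machine, $\ell(q)$ is the length of program $q$, and $m$ is the horizon. The sum over all consistent programs is what makes an infinite computer the only place it could literally be run.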

Back in reality, the lookup table approach isn't going to get anywhere. For example, if you use a video camera as the input stream, then after just one frame of data your table would already need something like 256^1,000,000 entries. The observable universe only has about 10^80 particles.
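A quick back-of-the-envelope check of those numbers (assuming the 256^1,000,000 figure corresponds to a roughly one-megapixel frame with 256 intensity levels per pixel):

```python
import math

pixels = 1_000_000  # assume a ~1 megapixel frame
levels = 256        # 8-bit intensity value per pixel

# The number of distinct frames is levels**pixels; work in log10 to avoid a gigantic integer.
log10_entries = pixels * math.log10(levels)  # ~2.4 million digits

print(f"Table entries after one frame: ~10^{log10_entries:,.0f}")
print("Particles in the observable universe: ~10^80")
```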

Machine learning and AI algorithms typically display the opposite of this, i.e. sub-linear scaling. In many cases there are hard mathematical results that show that this cannot be improved to linear, let alone super-linear.
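As a rough illustration of what sub-linear scaling means here (a sketch under the common c/sqrt(n) error-bound assumption, not a statement about any particular algorithm):

```python
import math

# Many estimation and learning error bounds scale like c / sqrt(n), so the "power"
# gained (crudely taken here as 1/error) grows sub-linearly in the resources n.
c = 1.0
for n in [1_000, 2_000, 4_000, 8_000]:
    error = c / math.sqrt(n)
    power = 1.0 / error  # crude stand-in for optimisation power
    print(f"n={n:>5}: error={error:.4f}, power={power:.1f}")

# Doubling n multiplies "power" by only sqrt(2) ~ 1.41 in this model:
# returns diminish rather than compound.
```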

This suggests that if a singularity were to occur, we might be faced with an intelligence implosion rather than an explosion.

If I had a moderately powerful AI and figured out that I could double its optimisation power by tripling its resources, my improved AI would actually be less intelligent? What if I repeat this process a number of times? I could end up with an AI that had enough optimisation power to take over the world, and yet its intelligence would be extremely low.
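The arithmetic of that scenario is easy to spell out (purely hypothetical starting values):

```python
# Start with a hypothetical AI and repeatedly apply the upgrade
# "double the optimisation power by tripling the resources".
power, resources = 100.0, 100.0  # arbitrary units

for step in range(6):
    ratio = power / resources  # "intelligence" under the efficiency definition
    print(f"step {step}: power={power:,.0f}, resources={resources:,.0f}, ratio={ratio:.3f}")
    power *= 2
    resources *= 3

# Optimisation power grows without bound (doubling each step) while the efficiency
# ratio shrinks towards zero (multiplied by 2/3 each step): an ever more capable
# but, by this definition, ever less intelligent AI.
```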

It's not clear to me from this description whether the SI predictor is also conditioned. Anyway, if the universal prior is not conditioned, then the convergence is easy, as the uniform distribution has very low complexity. If it is conditioned, then you will no doubt have observed many processes well modelled by a uniform distribution during your life -- flipping a coin is a good example. So the estimated probability of encountering a uniform distribution in a new situation won't be all that low.

Indeed, with so much data SI will have built a model of language, and how this maps to mathematics and distributions, and in particular there is a good chance it will have seen a description of quantum mechanics. So if it's also been provided with information that these will be quantum coin flips, it should predict basically perfectly, including modelling the probability that you're lying or have simply set up the experiment wrong.

This is a tad confused.

A very simple measure on the binary strings is the uniform measure and so Solomonoff Induction will converge on it with high probability. This is easiest to think about from the Solomonoff-Levin definition of the universal prior where you take a mixture distribution of the measures according to their complexity -- thus a simple thing like a uniform prior gets a very high prior probability under the universal distribution. This is different from the sequence of bits itself being complex due to the bits being random. The confusing thing is when you define it the other way by sampling programs, and it's not at all obvious that things work out the same... indeed it's quite surprising I'd say.
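Concretely, the Solomonoff-Levin form being referred to is the mixture (glossing over the usual details about enumerable semimeasures and exactly which complexity is used):

$$\xi(x) \;=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)}\, \nu(x) \;\ge\; 2^{-K(\lambda)}\, \lambda(x),$$

where $\mathcal{M}$ is the class of enumerable semimeasures and $\lambda(x) = 2^{-\ell(x)}$ is the uniform measure. Because $K(\lambda)$ is small, $\xi$ underestimates $\lambda$ by at most a small constant factor, which is what gives fast convergence on uniformly random sequences even though each individual random string is incompressible.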

I'd suggest reading the second chapter of "Machine Super Intelligence"; I think it's clearer there than in my old master's thesis, as I do more explaining and give fewer proofs.

The way it works for me is this:

First I come up with a sketch of the proof and try to formalise it and find holes in it. This is fairly creative and free and fun. After a while I go away feeling great that I might have proven the result.

The next day or so, fear starts to creep in, and I go back to the proof with a fresh mind and try to break it in as many ways as possible. What is motivating me is that I know that if I show somebody this half-baked proof, it's quite likely that they will point out a major flaw in it. That would be really embarrassing. Thus, I imagine that it's somebody else's proof and my job is to show why it's broken.

After a while of trying to break it, I'll then show it to somebody kind who won't laugh at me if it's wrong, but who is pretty careful at checking these things. Then another person... slowly my fear of having screwed up lifts. Then I'm ready to submit it for publication.

So in short: I'm motivated to get proofs right (I have yet to have a published proof corrected, not counting blog posts) out of a fear of looking bad. What motivates me to publish at all is the feeling of satisfaction that I draw from the achievement. In my moderate experience of mathematicians, they often seem to have similar emotional forces at work.

Glial cells are actually at about a 1:1 ratio with neurons. A few years ago a researcher wanted to cite something to back up the usual 9:1 figure, but after asking everybody for several months, nobody knew where the figure came from. So they did a study themselves, counted, and found it to be about 1:1. I don't have the reference on me; it was a talk I went to about a year ago (I work at a neuroscience research institute).

I have asked a number of neuroscientists about the importance of glia and have always received the same answer: the evidence that they are functionally important is still "very weak". They might be wrong, but given that some of these guys could give hour-long lectures on exactly why they think this, and know the few works that claim otherwise... I'm inclined to believe them.

Here's my method: (+8 for me)

I have a 45-minute sand glass timer and a simple abacus on my desk. Each row on the abacus corresponds to one type of activity that I could be doing, e.g. writing, studying, coding, emails and surfing, ... First, I decide what type of activity I'd like to do and then start the 45-minute sand glass. I then do that kind of activity until the sand runs out, at which point I count it on my abacus and take at least a 5-minute break. There are no rules about what I have to do; I do whatever I want. But I always do it in focused 45-minute units.

If you try this, do it exactly as I describe, at least to start with, as there are reasons for each of the elements. Let me explain some of them. Firstly the use of a physical timer and abacus. Having them sitting on your desk in view makes them a lot more effective than using something like a digital timer and spreadsheet on your computer. When you look up you see the sand running out. When you take a break you see a colourful physical bar graph of your time allocation -- it's there looking at you.

45 minutes is important because it's long enough to get a reasonable amount done if you work in a focused way, but it's short enough not to be discouraging, unlike an hour. Even with something I don't particularly want to do, sitting down and doing just 45 minutes of it is a bearable concept, knowing that at the end I'll have a break and can then do something else if I want to. Also, if you look at human mental performance, it doesn't make much sense trying to do more than 45 minutes of hard work at a time. Better to have a break for 5 to 15 minutes and then start again. Since I think breaks of up to 15 minutes are essential, each unit comes to roughly an hour, so at the end of the week the total number of units counted on my abacus is my total number of at-work-activity hours for the week.

Having no rule about what you have to do is also important. If you put rules in place, you will start avoiding using the system. The only constraint is that when you start a unit of 45 minutes you have to go through with it. But you're free not to start one if you don't want to. You might then think that you'd always just do the kind of work that you like doing, rather than units of the stuff you avoid but should be doing. Interestingly, no: often I find that the reverse starts to happen, even though I'm not really aiming for that. The reason is the principle that what you measure and keep in mind, you naturally tend to control. Thus you don't actually need any rules; in fact they are harmful, as they make you dislike and avoid the system.

Another force at work is that momentum often builds enthusiasm. Thus you think that you'll just do 45 minutes on some project due in a week that you'd rather not be doing at all, and after that unit of time you actually feel like doing another one just to finish some part of it off.

So yeah, the only real rule is that when the sand glass is running you have to stay hard at work on the task, which isn't too bad as it's only so many minutes more before you're taking a break and once again free.

UPDATE: So it seems that what I'm doing is a variant on the "Pomodoro technique" (and probably quite a few others). The differences are that I prefer 45 minutes, which I think is a better chunk of time to get things moving, and that I like the physical aspect of a sand glass timer and an abacus. I should perhaps add that when I was doing intense memorisation study before an exam I'd use a 20-minutes-on, 10-to-20-minutes-off cycle, as that matches human memory performance better. But for general tasks 45 minutes seems good to me.
