Donald Hobson

MMath Cambridge. Currently studying postgrad at Edinburgh.




I think the marginal value of OpenAI competence is now negative. At this point they have basically no chance of succeeding at alignment, and further incompetence makes it more likely that the company won't produce anything dangerous. Making any AGI at all requires competence, talent, and an environment that isn't a political cesspool.

You can make  work out, if you are prepared to make your mathematics even more deranged. 

So let's look at 

Think of the  not as  but as some infinitesimal  times some unknown function .

If that function is  then we get  which is finite, so multiplied by  it becomes infinitesimal. 

If  then we get  and as we know  because 

So this case is the same as before. 

But for  we get , which doesn't converge. The infinite largeness of this sum cancels with the infinitesimally small size of  (up to an arbitrary finite constant). 
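The formulas were lost from this copy, so here is a numerical sketch of the three cases as I read them, with ε as a small number rather than a true infinitesimal. The specific functions (2⁻ⁿ, 1/n, 1) and the cutoff C/ε are my illustrative guesses, not the original's:

```python
def regulated_sum(f, eps, C=1.0):
    """Compute eps * sum of f(n) for n = 1 .. C/eps.

    eps plays the role of the infinitesimal; C is an arbitrary
    finite constant setting the cutoff.
    """
    N = int(C / eps)
    return eps * sum(f(n) for n in range(1, N + 1))

eps = 1e-6
print(regulated_sum(lambda n: 2.0 ** -n, eps))  # convergent sum -> infinitesimal (~1e-6)
print(regulated_sum(lambda n: 1.0 / n, eps))    # log divergence -> still infinitesimal (~1.4e-5)
print(regulated_sum(lambda n: 1.0, eps))        # linear divergence -> finite (~C = 1.0)
```

The last case is the one where the infinite size of the sum cancels the infinitesimal size of ε, leaving a finite answer that depends on the arbitrary constant C.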


Great. Now let's apply the same reasoning to 

. First note that this is infinite; it's , so undefined. Can we make it finite? Well, think of  as actually being , and in this case take 

For the final term, the smallness of epsilon counteracts having to sum to infinity. For the first and middle term, the sum is 

Which is  


So we have 

The first term is negligible. So 

Note that the  can be ignored, because we have  for arbitrary (finite) C as before. 

Now  is big, but it's probably less infinite than  somehow. Let's just group it into the  and hope for the best. 
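For reference, if the sum in question is 1 + 2 + 3 + ⋯ (the formulas were lost from this copy, so this is a guess), the standard smoothed-sum computation with an exponential regulator e^{-nε} (my choice of smoothing, not necessarily the one used above) comes out as:

```latex
\sum_{n=1}^{\infty} n\, e^{-n\varepsilon}
  = \frac{e^{-\varepsilon}}{\left(1 - e^{-\varepsilon}\right)^{2}}
  = \frac{1}{\varepsilon^{2}} - \frac{1}{12} + O\!\left(\varepsilon^{2}\right)
```

Discarding the divergent 1/ε² term leaves the famous −1/12.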

>advanced ancient technology is such a popular theme


Well, one reason is that it's a good way to produce plot-relevant artefacts. It's hard to have dramatic battles over some object when a factory is churning out more of them.

True. But for that you need there to exist another mind almost identical to yours except for that one thing. 

In the question "how much of my memories can I delete while retaining my thread of subjective experience?" I don't expect there to be an objective answer. 

The point is, if all the robots are a true blank slate, then none of them is you, because your entire personality has just been forgotten.

Who knows what "meditation" is really doing under the hood.

Let's set up a clearer example.

Suppose you are an uploaded mind, running on a damaged robot body. 

You write a script that deletes your mind, runs a bunch of no-ops, and then reboots a fresh blank baby mind with no knowledge of the world.

You run the script, and then you die. That's it. The computer running no-ops "merges" with all the other computers running no-ops. If the baby mind learns enough to answer the question before checking whether its hardware is broken, then it considers itself to have a small probability of having broken hardware. And then it learns the bad news.


Basically, I think forgetting like that without just deleting your mind isn't something that really happens. I also feel like, when arbitrary mind modifications are on the table, "what will I experience in the future" returns Undefined. 

Toy example: imagine creating loads of near-copies of yourself, with various changes to memories and personality. Which copy do you expect to wake up as? Equally likely to be any of them? Well, just make some of the changes larger and larger, until some of them delete your mind entirely and replace it with something else.

Because the way you have set it up, it sounds like it would be possible to move your thread of subjective experience into any arbitrary program. 

In many important tasks in the modern economy, it isn't possible to replace one expert with any number of average humans. A large fraction of average humans aren't experts.

A large fraction of human brains are stacking shelves or driving cars or playing computer games or relaxing etc. Given a list of important tasks in the computer supply chain, most humans, most of the time, are simply not making any attempt at all to solve them. 

And of course a few percent of the modern economy is actively trying to blow each other up. 

You can play the same game in the other direction. Given a cold source, you can run your chips hot, and use a steam engine to recapture some of the heat. 

The Landauer limit still applies. 
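For scale, the Landauer bound itself is simple to compute: k·T·ln 2 joules per bit erased, assuming room temperature. This doesn't engage with the steam-engine point; it just puts a number on the limit being invoked:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # assumed: room temperature, kelvin

landauer_joules_per_bit = k_B * T * math.log(2)
print(landauer_joules_per_bit)  # ~2.87e-21 J per bit erased
```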

>But GPT4 isn't good at explicit matrix multiplication either.

So it is also very inefficient. 

Probably a software problem. 

Humans suck at arithmetic. Really suck. Comparing current GPUs to a human trying and failing to multiply 10-digit numbers in their head, we can conclude that something about humans, hardware or software, is incredibly inefficient.
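A rough Fermi estimate of the gap being claimed here. Every number below is an order-of-magnitude assumption for illustration, not a measurement:

```python
import math

# Back-of-the-envelope comparison of arithmetic throughput per joule.
gpu_flops_per_sec = 1e14     # assumed: a modern GPU, ~100 TFLOP/s
gpu_watts = 300              # assumed: typical GPU board power

human_mults_per_sec = 1 / 60  # assumed: ~1 minute per 10-digit multiplication
human_watts = 20              # assumed: brain power budget

gpu_ops_per_joule = gpu_flops_per_sec / gpu_watts
human_ops_per_joule = human_mults_per_sec / human_watts

ratio = gpu_ops_per_joule / human_ops_per_joule
print(f"GPU advantage: roughly 10^{round(math.log10(ratio))} ops per joule")
```

On these assumptions the gap is around fifteen orders of magnitude, which is the sense in which "something about humans is incredibly inefficient" at explicit arithmetic.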

Almost all humans have roughly the same-sized brain.

So even if Einstein's brain was operating at 100% efficiency, the brain of the average human is operating at a lot less.

I.e. intelligence is easy; it just takes enormous amounts of compute for training.

Making a technology work at all is generally easier than making it efficient. 

Current scaling laws seem entirely consistent with us having found an inefficient algorithm that works at all. 

ChatGPT, for example, uses billions of floating-point operations to do basic arithmetic mostly correctly. So it's clear that the likes of ChatGPT are also inefficient.

Now you can claim that ChatGPT and humans are mostly efficient, but just suddenly drop 10 orders of magnitude when confronted with a multiplication. But really? They're pushing right up against the fundamental limits for everything except one of the most basic computational operations?
