Donald Hobson

MMath Cambridge. Currently studying postgrad at Edinburgh.


Is it likely possible to find better RL algorithms, assisted by mediocre answers, then use RL algorithms to design heterogeneous cognitive architectures?

 

Given that humans on their own haven't yet found these better architectures, humans + imitative AI doesn't seem like it would find the problem trivial. 

And it's not totally clear that these "better RL" algorithms exist. Especially if you are looking at variations of existing RL, not the space of all possible algorithms. Like maybe something pretty fundamentally new is needed. 

There are lots of ways to design all sorts of complicated architectures. The question is how well they work. 

I mean this stuff might turn out to work.  Or something else might work. I'm not claiming the opposite world isn't plausible. But this is at least a plausible point to get stuck at. 

 

If you can do this and it works, the RSI continues with diminishing returns each generation as you approach an asymptote limited by compute and data.

Seems like there are 2 asymptotes here. 

One is crazy-smart superintelligence. The other is still fairly dumb in a lot of ways, not smart enough to make any big improvements. If you have a simple evolutionary algorithm and a test suite, it could recursively self-improve, tweaking its own mutation rate and child count and other hyperparameters. But it's not going to invent gradient-based methods, just do some parameter tuning on a fairly dumb evolutionary algorithm. 
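To make that second asymptote concrete, here is a minimal toy sketch (the fitness function and hyperparameters are things I made up for illustration, not anyone's actual system): an evolutionary algorithm that "recursively self-improves" only in the weak sense of tuning its own mutation rate.

```python
import random

# Toy sketch: an evolutionary algorithm that "self-improves" only by tuning
# its own mutation rate. It can parameter-tune itself, but nothing in this
# loop will ever turn it into gradient descent.

def fitness(x):
    # Hypothetical test suite: maximise -(x - 3)^2, i.e. get x close to 3.
    return -(x - 3.0) ** 2

def evolve(generations=100, pop_size=10):
    mutation_rate = 1.0  # deliberately bad starting hyperparameter
    population = [random.uniform(-10, 10) for _ in range(pop_size)]

    for _ in range(generations):
        # Ordinary evolutionary step: mutate everyone, keep the best.
        children = [x + random.gauss(0, mutation_rate) for x in population]
        population = sorted(population + children, key=fitness, reverse=True)[:pop_size]

        # "Self-improvement", toy version: try a perturbed mutation rate and
        # keep it if the best child it produces is at least as good.
        trial_rate = max(1e-3, mutation_rate + random.gauss(0, 0.1))
        trial_children = [x + random.gauss(0, trial_rate) for x in population]
        if fitness(max(trial_children, key=fitness)) >= fitness(population[0]):
            mutation_rate = trial_rate

    return population[0], mutation_rate

if __name__ == "__main__":
    best, rate = evolve()
    print(f"best x = {best:.3f}, self-tuned mutation rate = {rate:.3f}")
```

Run it and the mutation rate drifts toward something sensible, but the algorithm stays a fairly dumb evolutionary algorithm throughout.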

 

Since robots build compute and collect data, it makes your rate of ASI improvement limited ultimately by your robot production. (Humans stand in as temporary robots until they aren't meaningfully contributing to the total)

This is kind of true. But by the time there are no big algorithmic wins left, we are in the crazy smart, post singularity regime. 

RSI

Is a thing that happens. But it needs quite a lot of intelligence to start. Quite possibly more intelligence than needed to automate most of the economy.

A lot of newcomers may outperform LLM experts as they find better RL algorithms from automated searching.

Possibly. Possibly not. Do these better algorithms exist? Can automated search find them? What kind of automated search is being used? It depends.

Let’s try this again. If we have AI that can automate most jobs within 3 years, then at minimum we hypercharge the economy, hypercharge investment and competition in the AI space, and dramatically expand the supply while lowering the cost of all associated labor and work. The idea that AI capabilities would get to ‘can automate most jobs,’ the exact point at which it dramatically accelerates progress because most jobs includes most of the things that improve AI, and then stall for a long period, is not strictly impossible, I can get there if I first write the conclusion at the bottom of the page and then squint and work backwards, but it is a very bizarre kind of wishful thinking. It supposes a many orders of magnitude difficulty spike exactly at the point where the unthinkable would otherwise happen.

 

Some points.

1) A hypercharged ultracompetitive field suddenly awash with money, full of non-experts turning their hand to AI, and with ubiquitous access to GPT levels of semi-sensible mediocre answers. That seems like almost the perfect storm for Goodharting science: a field awash with autogenerated crud papers that Goodhart the metrics. And as we know, sufficiently intense optimization on a proxy will often make the real goal actively less likely to be achieved. With sufficient paper-mill competition, real progress might become rather hard.

2) Suppose the AI requires 10x more data than a human to reach equivalent performance, because it has worse priors and so generalizes less far. (Which matches current models and their crazy huge amounts of training data.) For most of the economy, we can find that data: record a large number of doctors doing operations or whatever. But for a small range of philosophy/research related tasks, data is scarce and there is no large library of similar problems to learn on. 

3) A lot of our best models are fundamentally based around imitating humans. Getting smarter requires RL-type algorithms instead of prediction-type algorithms. These algorithms seem to be harder; at any rate, they are currently less used.

 

This isn't a conclusive reason to definitely expect a stall. But these are multiple disjunctive lines of plausible reasoning. 
 

So how much does the regulatory issue matter?

 

One extra regulation here is building codes insisting all houses have kitchens. If people could buy/rent places without kitchens for the appropriate lower price, eating out would make more sense. 

Regulation forces people to own/rent kitchens, whether or not they want to use them. 

Part of the question is, why isn't there somewhere I can buy school dinner quality food at school dinner prices? 

lower the learning rate when the sim is less confident the real world estimation is correct

Adversarial examples can make an image classifier confidently wrong.
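For anyone who wants to see the mechanism, here is a standard fast-gradient-sign sketch (assuming some trained PyTorch classifier `model`, an input tensor `image` scaled to [0, 1], and its integer class `label`; all three names are placeholders of mine, not anything from the post):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: a tiny perturbation, usually invisible to a
    human, that often flips the prediction while the model stays confident."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in whichever direction increases the loss.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
    probs = F.softmax(model(adversarial), dim=1)
    confidence, prediction = probs.max(dim=1)
    return adversarial, prediction, confidence
```

The perturbation is bounded by `epsilon` per pixel, yet the returned `confidence` on the wrong class is often very high.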

 

Because it's what humans want AI for, and due to the relationships between the variables, it is possible we will not ever get uncontrollable superintelligence before first building a lot of robots, ICs, collecting revenue, and so on.  

 

You are talking about robots, and a fairly specific narrow "take the screws out" AI. 

Quite a few humans seem to want AI for generating anime waifus. And that is also a fairly narrow kind of AI. 

Your "log(compute)" term came from a comparison which was just taking more samples. This doesn't sound like an efficient way to use more compute. 

Someone, using a pretty crude algorithmic approach, managed to get a little more performance for a lot more compute. 
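I don't know exactly how that comparison was set up, but here is a toy illustration (pure assumption on my part, not their actual method) of why "spend more compute by drawing more samples and keeping the best" naturally produces a log-shaped curve:

```python
import math
import random

# If each extra unit of compute is just another independent sample, and you
# keep the best one, returns diminish fast. With exponentially distributed
# scores, the expected best of n samples is the harmonic number H_n ~ ln(n),
# i.e. literally logarithmic in the number of samples.

def best_of_n(n, trials=2000):
    return sum(max(random.expovariate(1.0) for _ in range(n))
               for _ in range(trials)) / trials

for n in (1, 10, 100, 1000):
    print(f"samples={n:5d}  best score = {best_of_n(n):.2f}  ln(n)+0.577 = {math.log(n) + 0.577:.2f}")
```

Each 10x increase in sample count buys roughly the same fixed improvement, which is exactly the diminishing-returns pattern a log(compute) term describes.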

If we have the technical capacity to get into the red zone, and enough chips to make getting there easy, then hanging out in the orange zone, coordinating civilization not to make any AI too powerful, when there are huge incentives to ramp the power up and no one is quite sure where the serious dangers kick in...

That is, at least, an impressive civilization-wide balancing act, and one I don't think we have the competence to pull off. 

It should not be possible for the ASI to know when the task is real vs sim.  (which you can do by having an image generator convert real frames to a descriptor, and then regenerate them so they have the simulation artifacts...)

This is something you want, not a description of how to get it, and it's rather tricky to achieve. The convert-to-descriptor-and-regenerate trick is useful, but it sure isn't automatic success either. If there are patterns in reality that the ASI understands but the simulator doesn't, then the ASI can use those patterns.

I.e. if the ASI understands seasons and the simulator doesn't, then scorching sunshine one day and snow the next suggests it's in the simulation, while consistent seasons suggest reality. 

And if the simulation knows all patterns that the ASI does, the simulator itself is now worryingly intelligent. 

robots are doing repetitive tasks that can be clearly defined.

If the task is maximally repetitive, then the robot can just follow the same path over and over. 

If it's nearly that repetitive, the robot still doesn't need to be that smart.
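For concreteness, the level of "not that smart" I have in mind is record-and-replay. A minimal sketch, where `arm` and its methods are hypothetical stand-ins for whatever real robot API is in use:

```python
def record(arm, steps):
    """Record joint positions while a human guides the arm through the task once."""
    return [arm.read_joint_positions() for _ in range(steps)]

def replay(arm, trajectory, cycles):
    """Repeat the recorded trajectory verbatim for every subsequent part."""
    for _ in range(cycles):
        for joint_positions in trajectory:
            arm.move_to(joint_positions)
```

No world model, no learning, no intelligence; it just repeats the recorded joint positions, and that is already enough for genuinely repetitive tasks.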

I think you are trying to get a very smart AI to be so tied down and caged up that it can do a task without going rogue. But the task is so simple that current dumb robots can often do it. 

For example : "remove the part from the CNC machine and place it on the output table".

Economics test again: minimum-wage workers are easily up to a task like that, but most engineering jobs pay more than minimum wage, which suggests most engineering in practice requires more skill than that. 

I mean yes, engineers do need to take parts out of the CNC machine. But they also need to be able to fix that CNC machine when a part snaps off inside it and starts getting jammed in the workings. Or notice that the toolhead is loose, and tighten and recalibrate it. And the latter takes up more time in practice.

 

The techniques you are describing seem to be the next level in fairly dumb automation: the stuff that some places are already doing (like Boston Dynamics robot-dog-level hardware and software), but expanded to the whole economy. I agree that you can get a moderate amount of economic growth out of that. 

I don't see you talking about any tasks that require superhuman intelligence.

Response to the rest of your post.

By the way, these comment boxes have built-in maths support.

Press Ctrl M for full line or Ctrl 4 for inline

You might notice you get better and better at the game until you start using solutions that are not possible in the game, but just exploit glitches in the game engine.  If an ASI is doing this, its improvement becomes negative once it hits the edges of the sim and starts training on false information.  This is why you need neural sims, as they can continue to learn and add complexity to the sim suite.

Neural sims probably have glitches too. Adversarial examples exist.

Note the log here: this comes from intuition. In words, the justification is that immediately when a robot does a novel task, there will be lots of mistakes and rapid learning. But then the mistakes take increasingly larger lengths of time and task iterations to find; it's a logistic growth curve approaching an asymptote for perfect policy.

This sounds iffy. Like you are eyeballing and curve fitting, when this should be something that falls out of a broader world model. 

Every now and then, you get a new tool. Like suppose your medical bot makes 2 kinds of mistakes: ones that instantly kill, and ones that mutate DNA. It quickly learns not to do the first one, and slowly learns not to do the second as its patients die of cancer years later. Except one day it gets a gene sequencer. Now it can detect all those mutations quickly. 

 

I find it interesting that most of this post is talking about the hardware. 

Isn't this supposed to be about AI? Are you expecting a regime where

  1. Most of the world's compute is going into AI.
  2. Chip production increases by A LOT (at least 10x) within this regime.
  3. Most of the AI progress in this regime is about throwing more compute at it. 

 

 

...everything in the entire industrial chain you must duplicate or the logistic growth bottlenecks on the weak link.  

Everything is automated.  Humans are in there for maintenance and recipe improvement.

Ok. And there is our weak link. All our robots are going to be sitting around broken. Because the bottleneck is human repair people. 

It is possible to automate things. But what you seem to be describing here is the process of economic growth in general. 

Each specific step in each specific process is something that needs automating. 

You can't just tell the robot "automate the production of rubber gloves". You need humans to do a lot of work designing a robot that picks out the gloves and puts them on the hand-shaped metal molds so the rubber can cure. 

Yes economic growth exists. It's not that fast. It really isn't clear how AI fits into your discussion of robots.

First of all, SORA.

I sensed you were highly skeptical of my "neural sim" variable until 2 days ago.

No. Not really. I wasn't claiming that things like SORA couldn't exist. I am claiming that it's hard to turn them towards the task of, say, engineering a bridge.

Current SORA is totally useless for this. You ask it for a bridge, and it gives you some random bridge-looking thing over some body of water. SORA isn't doing the calculations to tell if the bridge would actually hold up. But let's say a future, much smarter version of SORA did do the calculations. A human looking at the video wouldn't know what grade of steel SORA was imagining. I mean existing SORA probably isn't thinking of a particular grade of steel, but this smarter version would have picked a grade and used that as part of its design. But it doesn't tell the human that; the knowledge is hidden in its weights.

Ok, suppose you could get it to show a big pile of detailed architectural plans, and then a bridge, all with super-smart neural modeling that does the calculations. Then you get something that ideally is about as good as looking at the specs of a random real-world bridge. Plenty of random real-world bridges exist, and I presume bridge builders look at their specs. Still not that useful. Each bridge has different geology, budget, height requirements etc.

 

Ok, well suppose you could start by putting all that information in somehow, and then sampling from designs that fit the existing geology, roads etc. 

Then you get several problems.

The first is that this is sampling plausible specs, not good specs. Maybe it shows a few pictures at the end to show the bridge not immediately collapsing. But not immediately collapsing is a low bar for a bridge. If the Super-SORA chose a type of paint that was highly toxic to local fish, it wouldn't tell you. If the bridge had a 10% chance of collapsing, it's randomly sampling a plausible timeline, so 90% of the time it shows you the bridge not collapsing. If it only generates 10 minutes of footage, you don't know what might be going on in its sim while you weren't watching. If it generates 100 years of footage from every possible angle, it's likely to record predictions of any problems, but good luck finding the needle in the haystack. Like imagine this AI has just given you 100 years of footage. How do you skim through it without missing stuff?
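To put numbers on the 10% example (the collapse probability is just the hypothetical figure from the paragraph above):

```python
import random

# A generator that samples one plausible future per query will usually show
# you a fine-looking bridge, so a handful of generated videos tells you very
# little about the actual risk.
COLLAPSE_PROBABILITY = 0.10  # hypothetical figure from the example above

def sample_timeline():
    # One "video": does the bridge happen to collapse in this sampled future?
    return random.random() < COLLAPSE_PROBABILITY

videos_watched = 10
collapses_seen = sum(sample_timeline() for _ in range(videos_watched))
print(f"collapses seen in {videos_watched} sampled videos: {collapses_seen}")
print(f"chance that all {videos_watched} videos look fine: {0.9 ** videos_watched:.0%}")  # about 35%
```

So even after watching ten independently sampled futures, there is roughly a one-in-three chance you never see the failure mode at all.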

Another problem is that SORA is sampling in the statistical sense. Suppose you haven't done the geology survey yet. SORA will guess at some plausible rock composition. This could lead to you building half the bridge, and then finding that the real rock composition is different.

You need a system that can tell you "I don't know fact X, go find it out for me". 

If the predictions are too good, well, the world it's predicting contains Super-SORA. This could lead to all sorts of strange self-fulfilling prophecy problems. 

OK, so maybe this is a cool new way to look at certain aspects of GPT ontology... but why this primordial ontological role for the penis? I imagine Freud would have something to say about this. Perhaps I'll run a GPT4 Freud simulacrum and find out (potentially) what.

 

My guess is that humans tend to use a lot of vague euphemisms when talking about sex and genitalia. 

In a lot of contexts, "Are they doing it?" would refer to sex, because humans often prefer to keep some level of plausible deniability.

Which leaves the model with some belief that vagueness implies sexual content. 

In more "slow takeoff" scenarios. Your approach can probably be used to build something that is fairly useful at moderate intelligence.  So for a few years in the middle of the red curve, you can get your factories built for cheap. Then it hits the really steep part, and it all fails. 

I think the "slow" and "fast" models only disagree in how much time we spend in the orange zone before we reach the red zone. Is it enough time to actually build the robots?

I assign fairly significant probabilities to both "slow" and "fast" models. 

I added the below. I believe most of your objections are simply wrong because this method

If you are mostly learning from imitating humans, and only using a small amount of RL to adjust the policy, that is yet another thing.

I thought you were talking about a design built mainly around RL.

If it's imitating humans, you get a fair bit of safety, but it will be about as smart as humans. It's not trying to win, it's trying to do what we would do. 

A neural or hybrid sim. It came from predicting future frames from real robotics data.

Ok. So you take a big neural network, and train it to predict the next camera frame. No Geiger counter in the training data? None in the prediction. Your neural sim may well be keeping track of the radiation levels internally, but it's not saying what they are. If the AI's plan starts by placing buckets over all the cameras, you have no idea how good the rest of the plan is. You are staring at the predicted inside of a bucket.

nothing special, design it like a warehouse.

Except there is something special. There always is. Maybe this substation really better not produce any EMP effects, because sensitive electronics are next door, so the whole building needs a Faraday cage built into the walls. Maybe the location it's being built at is known for its heavy snow, so you'd better give it a steeply sloping roof. Oh, and you need to leave space here for the cryocooler pipes. Oh, and you can't bring big trucks in round this side, because the fuel refinement facility is already there. Oh, and the company we bought cement from last time has gone bust; find a new company to buy cement from, and make sure it's good quality. Oh, and there might be a population of bats living nearby, so don't use any tools that produce lots of ultrasound. 

It cannot desync because the starting state is always the present frame.

Let's say someone spills coffee in a laptop. It breaks. Now to fix it, some parts need replacing. But which parts? That depends on exactly where the coffee dribbled inside it, which is not something that can be predicted. You must handle the uncertainty: test parts to see if they work, look for damage marks. 

 

I think this system, as you are describing it now, is something that might kind of work. I mean the first 10 times it will totally screw up. But we are talking about a semi-smart but not that smart AI trained on a huge number of engineering examples. With time it could become mostly pretty competent, with humans patching it every time it screws up.

One problem is that you seem to be working on a "specifications" model, where people first write flawless specifications and then build things to those specs. In practice there is a fair bit of adjusting. The specs for the parts, as written beforehand, aren't flawless; at best they are roughly correct. The people actually building the thing are talking to each other, trying things out IRL, and adjusting the systems so they actually work together. 

"ok I finished the prototype stellarator, you saw every step. Build another, ask for help when needed"

And the AI does exactly the same thing again. Including manufacturing the components that turned out not to be needed, and stuffing them in a cupboard in the corner. Including using the cables that are 2x as thick as needed because the right grade of cable wasn't available the first time. 

"Ok I want a stellarator.". You were talking about 1000x labor savings. And deciding which of the many and various fusion designs to work on is more than 0.1% of the task by itself. I mean you can just pick out of a hat, but that's making things needlessly hard for yourself.
