I wrote this late at night, so to clarify and expand a little bit:

- "Work on more than one time scale" is actually an interesting idea to dwell on for a second. When a person is trying to solve a problem, they will often pace back and forth, or talk, etc. They don't have to do everything in one pass: somehow the complex computation which lets them see and move around works on a very fast time scale, while other problem solving goes on simultaneously and only starts to affect motor outputs later on. That's interesting. The spinal cord doing processing independently of the brain, which I mentioned, is evident in this older series of (rather horrible) experiments with cats: https://www.jstor.org/stable/24945006
- On the 'smaller models with lower latency': we already see models like Mistral-7B outperforming 30B-parameter models because of improvements in data, architecture, and training. I expect this trend to continue. If the largest models are capable of operating a robot out of the box, I think you could take their outputs and use them to train (or otherwise distill) a smaller, more manageable model specialised for the task.
- On the 'LLMs could do the parts with higher latency': just yesterday I saw somebody do something like this with GPT-4V, where they periodically uploaded a photograph of what was in front of them and got GPT-4V to output instructions on how to find the supermarket (walk further forward, turn right, etc.). It kind of worked, and it's the sort of thing I was picturing here: leaving much more responsive systems to handle the low latency work, like balance, gripping, etc. A rough sketch of that loop is below.
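Here's roughly what that demo's loop looks like in code. This is a minimal sketch, assuming the OpenAI Python SDK (v1) and the GPT-4V-era vision model; `capture_frame()` is a hypothetical camera helper (here it just reads a JPEG off disk):

```python
# Sketch of the "periodic snapshot" navigation loop described above.
# Assumptions: OpenAI Python SDK v1; "gpt-4-vision-preview" as the
# vision model; capture_frame() is a hypothetical camera stand-in.
import base64
import time

from openai import OpenAI

client = OpenAI()

def capture_frame() -> bytes:
    # Stand-in for a camera grab; here it just reads a file on disk.
    with open("frame.jpg", "rb") as f:
        return f.read()

while True:
    b64 = base64.b64encode(capture_frame()).decode()
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "You are guiding a pedestrian to the supermarket. "
                         "Give one short instruction, e.g. 'walk forward' "
                         "or 'turn right'."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=30,
    )
    print(response.choices[0].message.content)
    time.sleep(5)  # high-latency guidance; balance etc. handled elsewhere
```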
I'm somewhat skeptical that running out of text data will meaningfully slow progress. Today's models are so sample-inefficient compared with human brains that I suspect there are significant jumps possible there. Also, as you say:

- Synthetic text data might well be possible, especially for domains where you can test the quality of the produced text externally, e.g. programming (a toy sketch of this is below).
- Reinforcement-learning-style virtual environments can also generate data (and not necessarily only physics-based environments either -- they could be more like playing games or using a computer).
- Multimodal inputs give us a lot more data too, and I think we've only really scratched the surface of multimodal transformers today.
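For the programming case, the external check can be as simple as running the generated code against a test and keeping only the samples that pass. A toy sketch, where `generate_solution()` is a hypothetical stand-in for sampling from a code model:

```python
# Toy sketch of externally verified synthetic data for programming:
# keep a generated solution only if it passes a test, so data quality
# doesn't rest on trusting the generator.
from typing import Callable, Optional, Tuple

def generate_solution(prompt: str) -> str:
    # Hypothetical: sample code from a model. Canned here for the demo.
    return "def add(a, b):\n    return a + b\n"

def passes_test(code: str, test: Callable[[dict], bool]) -> bool:
    namespace: dict = {}
    try:
        exec(code, namespace)  # unsafe outside a sandbox; sketch only
        return test(namespace)
    except Exception:
        return False

def make_training_pair(prompt: str, test: Callable[[dict], bool],
                       attempts: int = 10) -> Optional[Tuple[str, str]]:
    # Rejection-sample until a verified (prompt, solution) pair appears.
    for _ in range(attempts):
        code = generate_solution(prompt)
        if passes_test(code, test):
            return prompt, code
    return None

pair = make_training_pair(
    "Write a Python function add(a, b) returning a + b.",
    lambda ns: "add" in ns and ns["add"](2, 3) == 5,
)
print(pair)
```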
I am honestly very surprised it became a front page post too! It totally is just speculation.
I tried to be super clear that these were just babbled guesses, and I was mainly just telling people to try to do the same, rather than trusting my starting point here.
The other thing that surprised me is that there haven't been too many comments saying "this part is off", or "you missed trend X!". I was kind of hoping for that!
Agree on lower-depth models being possible. A few other possibilities:
Smaller models with lower latency could be used, possibly distilled down from larger ones (a minimal distillation sketch follows this list).
Compute improvements might make it practical to run onboard (like with Tesla's self-driving hardware inside the chest of their android).
New architectures could work on more than one time scale -- kind of like humans do. E.g. when we walk, not all of the processing is done in the brain; the spinal cord can handle a tonne of it autonomously. (Will find a source tomorrow.)
LLM-type models could do the parts that can accept higher latency, leaving lower-level processes to handle themselves. Imagine, for a household cleaning robot, that an LLM-based agent puts out high-level thoughts like "Scan the room for dirty clothes. ... Fold them. ... Put them in the third drawer", and existing low-level systems actually carry out the instructions. That's an exaggerated example, but you get the idea: it doesn't have to replace the PID controller! A toy sketch of this split follows below.
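On the distillation point above: a minimal sketch of what "distilled down from larger ones" can mean in practice, assuming PyTorch and that teacher and student are causal LMs emitting logits of shape (batch, seq, vocab). Names and hyperparameters here are illustrative, not a real recipe:

```python
# Minimal logit-distillation sketch (assumes PyTorch). The student is
# trained to match the teacher's softened output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soften both distributions and pull the student toward the teacher."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student), scaled by t^2 as in Hinton et al. (2015).
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * t * t
```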
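And on the last two points, a toy sketch of the two-time-scale split. All names are hypothetical: a fast PID loop tracks a setpoint at ~100 Hz (the "spinal cord"), while a slow planner thread, standing in for an LLM call, changes that setpoint every couple of seconds. The fast loop never blocks on the slow one:

```python
# Toy two-time-scale control sketch: fast PID inner loop, slow planner.
import threading
import time

class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float, dt: float) -> float:
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

setpoint = 0.0  # shared state: written slowly, read fast

def slow_planner() -> None:
    # In a real system this would be an LLM deciding the next
    # high-level target; here it just alternates between two goals.
    global setpoint
    while True:
        time.sleep(2.0)  # LLM-scale latency
        setpoint = 1.0 if setpoint == 0.0 else 0.0

threading.Thread(target=slow_planner, daemon=True).start()

pid = PID(kp=2.0, ki=0.1, kd=0.05)
position, dt = 0.0, 0.01
for i in range(1000):  # ~10 seconds of a ~100 Hz inner loop
    control = pid.step(setpoint, position, dt)
    position += control * dt  # stand-in for actuator dynamics
    if i % 100 == 0:
        print(f"t={i * dt:4.1f}s  target={setpoint:.1f}  position={position:.2f}")
    time.sleep(dt)
```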
I am extremely worried about safety, but I don't know as much about it as I do about what's on the edge of consumer/engineering trends, so I think my predictions here would not be useful to share right now! The main way it relates to my guesses here is if regulation successfully slows down frontier development within a few years (which I would support).

I'm doing the ARENA course async online at the moment, and possibly moving into alignment research in the next year or two, so I'm hoping to be able to chat more intelligently on alignment soonish.
I broadly agree. I think AI tools are already speeding up development today, and on reflection I don't think AI becoming more capable than humans at modeling the natural world would actually be a discontinuous point on the ramp up to superintelligence. It would be a point where AI gets much harder to predict, though, which is probably why it was on my mind when I was trying to come up with predictions.
Thanks, fixed. I did mean 3.5 to 4, not 3 to 4.
Side note -- France isn't a great example for your point here ("France, for example, is a very old, well-established and liberal democracy."), because the Fifth Republic was only established in 1958. It's also notable for giving the president much stronger executive powers compared with the Fourth Republic!
In the spirit of doing low status things with high potential, I am working on a site to allow commissioning of fringe erotica, and I am looking to hire a second web developer.

The idea is to build a place where people with niche interests can post bounties for specific stories. In my time moonlighting as an erotic author, I've noticed a lack of good sites for freelance erotic writing work. I think the reason for this is that most people think porn is icky, so despite there being a huge market for extremely niche content, the platforms currently available are pretty abysmal. This is our opportunity.

We're currently in beta and can pay a junior-level wage, with senior-level equity. If you're a web developer who wants to join a fully remote startup, please reach out. As with my other startups, I began this project with the goal of generating wealth to put towards alignment research.
Thanks Chris, but I think you linked to the wrong thing there; I can't see your post in the last 3 years of your history either!