
This emphasis on generality makes deployment of future models a lot easier. We first build a GPT-4 ecosystem; when GPT-5 comes out, it will be easy to slot in (e.g. AutoGPT can run just as easily on GPT-5 as on GPT-4). The necessary adaptations are very small, so very fast deployment of future models is to be expected.
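As a minimal sketch of why that is (assuming the 2023-era openai Python client; the "gpt-5" model string is hypothetical), swapping models can be close to a one-line change:

```python
import openai  # assumes the pre-1.0 openai-python client (2023 era)

MODEL = "gpt-4"  # upgrading the whole stack could be as small as: MODEL = "gpt-5"

def ask(prompt: str) -> str:
    # Everything downstream (prompts, tools, parsing) is model-agnostic,
    # so a new model drops into the existing ecosystem unchanged.
    response = openai.ChatCompletion.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```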

Fine-tuning, whether it uses RL or not, is the proverbial "cherry on the cake": the pre-trained model captures more than 99.9% of the model's intelligence.

I am still amazed by the strength of general models. People use the no-free-lunch theorem to argue that we will probably end up with specialized AIs, because they will be better. Current practice seems to contradict this.

AI will probably displace a lot of cognitive workers in the near future, while physical labor might take a while to get below $25/hr.

  • For most tasks, human-level intelligence is not required.
  • Most highly valued jobs contain many tasks that do not require high intelligence.
  • Doing 95% of all tasks could come a lot sooner (10-15 years earlier) than doing 100%. See autonomous driving: getting to 95% safe versus 99.9999% safe is a big difference (see the sketch after this list).
  • Physical labor by robots will probably remain expensive for a long time (e.g. a robot plumber). A robot CEO will probably be cheaper in the future than a robot plumber.
  • Just take GPT-4, fine-tune it, and you can already automate a lot of cognitive labor.
  • Deployment of cognitive work automation (a software update) is much faster than deployment of physical robots.
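To make the 95% vs. 99.9999% point concrete, here is a toy calculation (my numbers, purely illustrative) of how per-task reliability compounds over a long chain of tasks:

```python
# Toy calculation: probability that a 100-step job completes with zero
# errors, assuming steps fail independently (a simplification).
for per_step in (0.95, 0.999999):
    p_success = per_step ** 100
    print(f"per-step reliability {per_step}: whole job succeeds with p = {p_success:.4f}")
# per-step 0.95     -> p ≈ 0.0059
# per-step 0.999999 -> p ≈ 0.9999
```

At 95% per step, almost every long job fails somewhere, which is why the last few nines of reliability are worth so much.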

I agree that AI might not replace swim instructors by 2030. It is in cognitive work that the big leaps will come.

An interesting development is synthetic data. This is also a sort of algorithmic improvement, because the data is generated by algorithms. For example, the "Let's Verify Step by Step" paper combines synthetic data with human labelling.

At first this seemed counterintuitive to me: the current model is being used to create data for the next model, which feels like bootstrapping. But it starts to make sense now. Better prompting (like CoT or ToT) is one way to get better data; alternatively, a second model can be trained to pick the best answer out of a thousand, which gets you data good enough to improve the model.
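A minimal sketch of that second idea, best-of-n sampling with a learned ranker (the `generate` and `score` functions here are hypothetical placeholders, not any specific API):

```python
from typing import Callable

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              n: int = 1000) -> str:
    # Sample n candidate answers from the current model...
    candidates = [generate(prompt) for _ in range(n)]
    # ...and let a second model (a reward/verifier model) pick the best.
    return max(candidates, key=lambda answer: score(prompt, answer))

# The winning (prompt, answer) pairs can then be used as synthetic
# training data for the next model iteration.
```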

Demis Hassabis said in his interview with Lex Fridman that they used synthetic data when developing AlphaFold. They had some AlphaFold outputs they had great confidence in; they fed those outputs back in as training data and the model improved (this gives you more high-confidence data, repeat).
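A hedged sketch of that loop as I understand it (my paraphrase of the self-distillation idea, not DeepMind's actual pipeline; `train` and `confidence` are hypothetical helpers):

```python
def self_distill(model, unlabeled_inputs, train, confidence,
                 threshold=0.9, rounds=3):
    # Iteratively grow the training set with the model's own
    # high-confidence predictions, then retrain on it.
    dataset = []
    for _ in range(rounds):
        for x in unlabeled_inputs:
            y = model.predict(x)
            if confidence(model, x, y) >= threshold:  # keep only confident outputs
                dataset.append((x, y))
        model = train(model, dataset)
    return model
```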

Specific Resources (Access to a DGX data center): Even if an AI had access to such resources, it would still need to understand how to use them effectively, which would require capabilities beyond what GPT-4 or a hypothetical GPT-5 have.

To my knowledge, resource management in data centers is already done by AIs. It is humans who cannot do this; the AI already can.

Algorithmic improvement has more FOOM potential. Hardware always has a lag. 

Hanson's probability of extinction is close to 100%; he just thinks it happens more slowly. He is optimistic about something most would call a dystopia (a very interesting technological race that conquers the stars before the grabby aliens do). The debate between Yudkowsky and Hanson is about whether we are dying fast or slow. From my perspective it is not really a doomer vs. non-doomer debate (still a very interesting debate btw; both have good arguments).

I do appreciate the Hansonian perspective. It is well thought out and coherent; I just would not call it optimistic (because of the extinction). I have no ready example of a coherent non-extinction view of the future. Does anybody have a good example of one?

If I understand you correctly, you mean the transfer between machine learning and human learning, which is an interesting topic.

When I learned about word2vec a few years ago, I was quite impressed. It felt a lot like how humans store information according to cognitive psychology. In cognitive psychology, a latent space or a word vector would be called a semantic representation. Semantic representations are mental representations of the meaning of words or concepts. They are thought to be stored in the brain as distributed representations, meaning that a concept is not represented by a single unit of activation, but rather by a pattern of activation across many units.
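A toy illustration of a distributed representation (made-up 4-dimensional vectors; real word2vec embeddings have hundreds of dimensions): meaning lives in the whole activation pattern, and similarity is measured across all units at once.

```python
import numpy as np

# Toy embeddings: each word is a pattern of activation across many
# units, not a single dedicated unit.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.1, 0.8, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.8, 0.2]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic word2vec analogy: king - man + woman lands near queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
print(max(vectors, key=lambda w: cosine(vectors[w], target)))  # queen
```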

That was sort of my "oh shit, this is going to be a thing" moment. I realized there are similarities between human and machine understanding. This is a way to build a world model.

Now I can really explore the differences between GPT-4 and PaLM 2. To learn how they think, I give them the same questions as my students, and when they make mistakes I guide them like I would guide a student. It is interesting to see that within a chat they can learn to improve with guidance.

What I find interesting is that their understanding is sometimes quite different, and yet there are also similarities. The answers, and the responses to guidance, are quite different from those of students, but similar enough to count as human-like.

Can this help us understand human learning? I think it can. Comparing human learning to machine learning makes the properties of human learning more salient (1+1=3). As an example, I studied economics and mathematics, and it often felt like I did three times the learning, because I learned not only mathematics and economics but also the similarities and differences between the two.

The above is a different perspective on your question than my previous answer. I would appreciate feedback on whether I am on the right track here. I am very interested in the topic independent of the perspective taken, so we could also explore different perspectives.

Answer by meijer1973, Jun 05, 2023

I work in education (roughly high school / AP macroeconomics level).

Possible implications:

  • Upskilling: faster learning through better information, more help, AI tutoring, etc.
  • Deskilling: students let the AI do the work (the learning, writing, homework, etc.).
  • Reskilling: developing new skill sets that are relevant to today's world.
  • Relevance: in a world where AI does the work, what is the relevance of education?

The last is the most important, I think. What is the place of education in today's world? What should a fifteen-year-old learn to be prepared for what is coming? I don't know, because I don't know what is coming.

One thing I do know: learning from a machine is a paradox. Yes, you can learn better and faster with the help of a machine. But if the machine can teach it to you, then the machine can probably do it itself. And why would we want to learn things that a machine can do? To learn the things a machine cannot do, we need humans. But that only works if there are things a machine cannot do.

The kid of fifteen will be 25 in ten years, and ten years is a lot. I do not know what to tell them, because I do not know. I'd love to hear more input on this.

Your model has some uncertainty, but you know the statistical distributions. For example, with probability 80% the world is in state X, with probability 20% it is in state Y.

Nice way of putting it. 
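As a toy instance of that setup (the 80/20 split is from the quote; the payoffs are invented for illustration), a known distribution makes expected values straightforward:

```python
# Known distribution over world states (from the quoted example).
state_probs = {"X": 0.8, "Y": 0.2}

# Hypothetical payoffs of some action in each state (made-up numbers).
payoffs = {"X": 10.0, "Y": -5.0}

expected = sum(p * payoffs[s] for s, p in state_probs.items())
print(expected)  # 0.8 * 10.0 + 0.2 * (-5.0) = 7.0
```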
