"How do you teach normies to use AI five years from now, for their own job?" Altman says basically people learn on their own.
This touches on a point particularly relevant to me. There is a difference between teaching someone to fish and giving them a fish, between helping a person do something and doing it for them. There is a huge financial opportunity right here in this question, and I do not understand the incentives at play that have been preventing everyone from jumping on it. Please help me see what I am missing.
Take dating apps. They make the match for the user. Ok, why doesn't at least one make AI personalities and train you to interact in the real world? Help you find your own weaknesses and overcome them. If I have an AI that can match two people, and it knows what the most common desires and red flags are, then it can generate a profile of a user: here are your strengths and weaknesses against the market, here is a training plan to minimize your weaknesses and maximize your strengths, here is information about your local environment and what your chosen partners are interested in. Then it can create a series of simulated experiences of approaching and interacting. Rinse, repeat, then go out in the real world and do it.
Same with any topic or skill. Instead of doing it, the AI can teach me to do it. Can give me simulated practice. Can take on multiple personas and let me interact with them so I learn to handle multiple types of challenges.
I have seen a very small number of people who have made debate bots for this purpose. But that's it. It seems like easy, obvious money, and no one is doing it, and I don't see why. It seems to be within current capabilities. What is the disincentive that I am missing?
Well, at least it is something. Thank you! I have been told about an AI homeschooling tool, but don't have the link yet, so haven't had a chance to explore it.
Very sad how little work on this area is happening.
Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was very clearly one of those. So here we go.
As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary.
If I am quoting directly I use quote marks, otherwise assume paraphrases.
The entire conversation takes place with an understanding, never stated explicitly, that no one is to mention existential risk or the fact that the world will likely transform. Both participants are happy to operate that way. I’m happy to engage in that conversation (while pointing out its absurdity in some places), but assume that every comment I make has an implicit ‘assuming normality’ qualification on it, even when I don’t say so explicitly.
On The Sam Altman Production Function
On Hiring Hardware People
On What GPT-6 Will Enable
Tyler isn’t going to let him off that easy. At this point, I don’t normally do this, but exact words seem important, so I’m going to quote the transcript.
On Government Backstops for AI Companies
A timely section title.
On Monetizing AI Services
On AI’s Future Understanding of Intangibles
On Chip-Building
On Sam’s Outlook on Health, Alien Life, and Conspiracy Theories
Ooh, fun stuff.
On Regulating AI Agents
On New Ways to Interface With AI
On How Normies Will Learn to Use AI
On AI’s Effect on the Price of Housing and Healthcare
On Reexamining Freedom of Speech
On Humanity’s Persuadability
I’m going to go full transcript here again, because it seems important to track the thinking: