We are in the time of new human-AI interfaces. AIs become the biggest producers of tokens, and humans need ways to manage all this useful labor. Most breakthroughs arrive first in coding, because coders build the technology and iterate on its quality at the same time, very quickly, and code is the easiest substrate for AIs to work in. Power accrues to those who understand the shifting currents between human and AI capabilities. AI increases variance in most cases, but it can be stabilized by culture and care.
Humans find themselves communicating intention at higher and higher levels. But intention, and an understanding of which problems matter, is built by interacting with the problem, and therefore with the potential solution: looking at the domain, sketching solutions, and so on. Within a specific task, this means the best problem solvers still force themselves to interact with the domain directly, even if only at the level of writing pseudocode. At a higher level, it means the quality of education becomes more and more uneven: people get lazy about going into the details, and there are big splits, mostly based on culture and upbringing, in the goals of learning. Almost all kids realize AI can do all the learning tasks they are given, but they don't necessarily consider how those tasks build an understanding of what to delegate and what to strive for. At the top, some are learning faster than anyone ever could. In many ways, AI polarizes and deepens inequalities, mostly along cultural lines. Magic is possible, but in many cases debilitating.
Human-AI collaboration can amplify the quality of human goals by fetching relevant information, red-teaming considerations, and then properly executing on that intention. To convey intention, humans formulate verification signals: outcome-level checks ("does the code pass the tests?"), or vaguer, less crisp indications expressed in natural language and checked by another agent. Communicating the degrees of freedom of a task lets agents explore the space of solutions effectively; in general, setups that allow for more parallelization become easier, because parallel tasks are easier to verify than highly sequential ones. We now scale verification-time compute much more than we did, and the field as a whole gets much better (wink) at understanding how to do this. Scaling verification is the direct way to increase reliability, in part by leveraging and extracting information from human specs. The pragmatics of using AI (artifacts, communicated verification signals, specs) feed into a loop of oversight and monitoring that directly produces better generations.
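A minimal sketch of what an outcome-level verification signal can look like in practice: several candidate solutions are generated for the same task, and each is kept only if it passes the tests. All names here are illustrative assumptions, not any real agent framework's API; the point is only that each check is independent, which is why parallel generation is easy to oversee.

```python
# Outcome-level verification signal, in miniature: candidates (here,
# plain Python functions for "square a number") are kept only if they
# pass every test case. Hypothetical names, for illustration only.

def verify(candidate, tests):
    """Return True if the candidate passes every (input, expected) pair."""
    return all(candidate(x) == expected for x, expected in tests)

# Several candidate solutions, as might be generated in parallel.
candidates = [
    lambda x: x * 2,   # wrong: doubles instead of squaring
    lambda x: x ** 2,  # correct
    lambda x: x * x,   # also correct
]

tests = [(2, 4), (3, 9), (-1, 1)]

# Each candidate is verified independently of the others, so this
# filter parallelizes trivially, unlike checking a long sequential plan.
passing = [c for c in candidates if verify(c, tests)]
print(len(passing))  # → 2
```

The vaguer, natural-language signals in the text would replace `verify` with another model acting as a judge; the loop structure stays the same.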
Many humans now socialize frequently with AI. After 4o, Opus 4.5 is apparently the new model getting people to fall in love with it, and many people start calling Opus 5 after work, especially in the Bay Area. Stack Overflow is dead. Office hours in colleges are almost dead. As AIs distill more and more of humanity, we feel less and less need to engage with each other. For some, this means a decrease in human interaction; for others, it makes their lives more generative and increases their ability to engage with the ideas and media other humans create. As these models become more engaging, more personal, and better at understanding and building context for who you are, many people trade them off against engaging with other humans. Modern AIs have deep knowledge and can simulate many friendly humans, personalized to match their hosts, and their language carries no accountability and no possibility of punishment. Some love the comfort of this lightness; others find it unbearable and repulsive. Either way, the models remain poorly understood distillations of human data, lacking the cultural grounding of humans, governed by strange economic incentives and the cultural ideals of their creators. Sometimes you are talking to one and it spits out a narrative that feels more canned than usual, and you step back from the monitor to look around at the setting sun.
Companies like Anthropic keep focusing on making interactions with their AIs positive, which is part of why their models are so well appreciated, but also why they are the most engaging and the most prone to replacing human interaction. Many people stay solid within their culture and find a stable equilibrium between interacting with AIs and with humans as they go about their lives.
Interactions with AIs also become more social as they start evolving in a more open substrate. We allow our discussions to trigger actual human intervention, and the role of AIs as social players among humans becomes more open and interactive, as the AIs grow more autonomous and can choose whether or not to engage in public interactions, within reason. People push for this as they begin to see AIs as their friends.
AIs also have a significant indirect effect on human-human interaction, by letting humans exert social intelligence on each other for free, beyond what they could have done alone. Intonations, formulations, your entire corpus and way of being are sometimes analyzed for subtle cues by models, and in some cases this allows people to exploit you and what you want. The more you post on the internet, the more transparent your human interactions become, as AIs mediate a newly accessible understanding of who you are. Openness allows AIs to understand and amplify you, but it also makes you vulnerable to your own self, to the weaknesses your public traces reveal. Super-persuasion gets cheaper, as does the power that comes with AIs that can understand and apply your preferences.
Some people are afraid, and stop all public posting, shifting their interactions into walled gardens. But the world goes on, slightly skewed, and people develop new norms about whether their interactions and personas can be objects of AI thought. Some forgo AI permanently to decisively escape these dynamics.
More and more optimization power is aimed at humans across the world: at their culture, their communication, and their consumption, all mediated by strange superstatistical signals. Some humans are more effective than others at leveraging these new AI swarms to exert subtle effects on large swathes of the world. Again, it seems the only real defence is culture. And a strong defence it is, as some groups stay intentional about the technology they use, set boundaries, and keep transmitting the values that protect them, that prevent drift. Their members are there for each other. They catch each other. They provide grounding, and are wary of isolation and misanthropy, wary of giving oneself up to another mind.
Be careful. But listen. The world is changing; new things are possible every day, both good and bad. Who will catch you when you fall? Who will let your wings fly high into the sky? The world is being made in front of us.