This is quite a place to make a pitch for a new kind of AI service. I thought Less Wrong was known as a haven of AI doom. Haven't you heard that AI is either going to kill us all, or else transform the world in some ineffable posthuman way?
Even if I put that aside for a moment, the idea of having an agent in charge of spending my money is alarming, perhaps because I have so little to begin with, and also because I expect it to want to spend my money, since that's what consumer society is about. You do say (statement 5) that the agent should "help me save", and frankly, the people at the lower levels of society probably need an agent that is primarily defensive: one that will not just do their financial planning, but will protect them from scams, exploitation, and bad habits. The problem is that this would put you at odds with the business model of a lot of people who subsist in this world by, e.g., convincing other people to buy things that they don't actually need.
Now, maybe social classes higher up the economic food chain can afford to have an agent that is not always in defensive mode. They have money to spend on things beyond survival, and they're determined to get out there and spend it, they just want to spend it as well as possible. This starts to sound like a problem in utility maximization, the agent's first task being to infer the utility function of the user.
Here we have entered conceptual territory familiar to Less Wrongers. The upshot is that if the agent is smart enough, it will deduce that what the user really wants is {inexpressible goal derived from transhuman extrapolation of human volitional impulses}, and it will cure death and take over the world in order to bring about a transhuman utopia; but if it extrapolates incorrectly, it will bring about a transhuman dystopia instead.
This may sound weird or whimsical if you really haven't encountered AI-safety alignment-lore before, but seriously, this is where AI naturally leads - beyond humanity. Even if we do get a world of millions of empowered consumers with friendly agents making their purchasing decisions, that is a transitional condition that will be swept aside by the godlike superintelligences that are the next logical stage in the evolution of AI (swept aside, unless those godlike superintelligences actually want their human wards to live in market societies).
Having expressed my cynicism about actually existing capitalism, and my transhumanist conviction that AIs will replace humans as the ones in charge, let me try to be a little positive. Some things are worth paying for! Some services available in the marketplace, maybe even most of them, do have genuine value! Being an entrepreneur can actually be a net positive for the world! And even if you were ultimately just in it for yourself, you do have a right to make your way in the world in that fashion!
You might wish to investigate Pattie Maes from MIT, and dig up her thesis on reflective agents. In the 1990s, she was a guru of the future agent-based society; I'm sure her thoughts and her career would have a few lessons for you. And if I really were an entrepreneur trying to design an AI agent intended to act as an economic surrogate for its human user, I might think about it from the perspective of George Reisman's Capitalism, a hybrid work of Objectivist philosophy and Austrian economics, written on the premise that capitalism done right really is the most virtuous economic system. It has nothing to say about Internet economics specifically, but it's a first-principles work, so if those principles are right, they should still be valid even when we're talking about a symbiotic economy of AIs and humans. (In fact, you could just feed the book to GPT-5 and ask it to write you a business plan.)
Thanks for the thoughtful comment.
(1) I agree that people higher up the economic ladder are more willing to let the agent spend money.
(2) But if you design the agent to be primarily defensive, I believe almost all classes of society would appreciate that. Why wouldn't I want the same thing at a better price (through a price drop or negotiation), a higher quality, or a better alignment with my values? You could incentivize the agent through a cut of the savings.
(3) If you are truly concerned about these agents departing from their given purpose, you can always require user approval before every purchase. So the agent says something like "Hey, here's something you might be interested in because of your {needs} and {something you stated you wanted}". You respond with yes or no, and for the final stage, you approve (Apple Pay + Face ID). The advantage of the tokenization tech behind Apple Pay is that it never shares your card details with the merchant (it shares payment tokens instead), and I would imagine it could be the same for these agents.
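As a rough sketch of the approval loop described above (all names here are hypothetical, and the tokenization function is just a stand-in for an Apple Pay-style flow where the merchant only ever receives a one-time token, never the card number):

```python
import secrets
from dataclasses import dataclass


@dataclass
class Suggestion:
    item: str
    price: float
    reason: str  # why the agent surfaced this item


def tokenize_card(card_number: str) -> str:
    """Stand-in for an Apple Pay-style payment token: the merchant
    never sees the real card number, only a one-time token."""
    return "tok_" + secrets.token_hex(8)


def propose_and_purchase(suggestion: Suggestion, approve, card_number: str):
    """The agent surfaces a suggestion; a human approval callback gates
    the final payment (the 'purposeful friction' step). Returns the
    payment token used, or None if the user declines."""
    prompt = f"{suggestion.item} (${suggestion.price:.2f}): {suggestion.reason}"
    if not approve(prompt):
        return None
    token = tokenize_card(card_number)
    # ...the agent would send `token` (not the card number) to the merchant...
    return token


# Example: a user policy that auto-approves anything under $25
s = Suggestion("running socks", 14.99, "matches your stated gear needs")
result = propose_and_purchase(s, lambda prompt: s.price < 25, "4242424242424242")
```

The point of the sketch is that discovery runs autonomously, but the payment path is gated by a single explicit human decision, and the merchant-facing side only ever handles a disposable token.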
(4) We do live in a world where people try to convince you to buy things you don't need. I think this model will partially collapse in the future. Soon, with personalized AI controlling almost all of the information we see, very few companies or products will truly be able to make something inherently viral and game the network of agents. So I actually think we're going to see a shift toward companies that make what people truly want, because otherwise the agents will never bring that product to their human users' attention. This would improve market efficiency!
I'm new to LessWrong, so I probably glossed over the superintelligence possibility more quickly than I should have. In the meantime, I think we should still build useful tools like these economic surrogate agents. People are using ChatGPT to buy (instead of Google or visiting 5 different platforms), and if you make that experience more tuned to the user, I think there's an opportunity. Any kind of superagent(s) might be our Last Interface. Will check out Pattie Maes!!
I personally would love to give my personal agent pocket money ($20-$30) and see what it can surprise me with[5].
Circumstances vary, but the last thing I need is MOAR STUFF arriving at my home out of the blue.
Q: Wouldn't this idea end up as an ad business once it matures?
Maturity might instead be businesses training their own AIs to advertise to the agents. The end game is people buying stuff and people selling stuff and nobody knowing why, but the AIs assure everyone they're making the best deals.
The endgame you have described is a strong possibility. I guess you might try to fight that through increased transparency and by ensuring alignment between agent behavior and user preferences. I do see that the core issues still stand: the possibility of collusion, hidden incentives, and preference manipulation.
ChatGPT is uncool culturally in my generation (I am 18). The relationship is one-way and reactive.
This is the first I've heard of this, and I'm interested to know more detail (I'm in my late 30s). Are all LLMs uncool? ChatGPT in particular? Are nearby things "cool"?
The average person doesn't care what LLM or technique is used in their favorite applications. It just so happens that ChatGPT came out first, and they stuck with it simply because they are unaware of other options. Those who know about the existence of Claude or Gemini constantly model-switch. "LLM" is likely a foreign term to most non-CS majors.
With respect to my generation, most people use ChatGPT for answers. In fact, I once had the chance to share a $200/mo ChatGPT Pro subscription with 15 other students. Throughout the 8 months, there were barely any questions that stemmed from true curiosity. The idea of "prompting" for an answer is a very transactional interaction (think of the people in high school who are "friends" with smart people just to get homework help). There is no persistence or relationship built between conversations. The only thing that could be considered cool about the tech is the "fast answers", but people start expecting more and get disappointed quickly when they try to one-shot a project.
From a personal perspective, I do think LLMs are cool. I just think they need to be wrapped in a thoughtful experience to be made more accessible to the masses (which is why not all LLM wrappers are bad). I can't wait for a future where the LLM opens up a conversation with something I may or may not find interesting.
Okay, so, like, teenagers use them, just in a kinda functional way where they're not, like, excited about it? Is it more like "google" than like "myspace/facebook/snapchat" when each of those things was cool?
Yes, that's a good way to put it. In the beginning, it was definitely something young people were excited about. But LLMs are a lot more like utilities now. You use them when you need them and move on with life.
Here is the question that I have been losing sleep over: "What would it take for personal AI or agents to find their consumer moment?"
About 95% of public startups are working on agents for B2B SaaS. So far, these agents are semi-reliable and perform best when an outcome is clearly definable and measurable, such as minutes or dollars saved. When it comes to building agents for consumers, though, we are still very early.
There are a lot of tensions to navigate when building the "perfect consumer agent". Output can quickly become nondeterministic in a human-AI conversation. Consumers also have a high bar for what an AI presence should feel like in their lives. They expect the personality to be realistic and not sycophantic. Context about users, such as their preferred brands, budget, and allergies, should be captured implicitly. Finally, the agent should be able to tell the user why it can or can't do something (e.g., "this website looks suspicious to buy from").
Let's do a thought exercise. I'm going to make a series of statements on what I think would need to happen for agents to become ready for mainstream adoption by consumers. Hint: the revolution begins with payments & purchases.
Ok. What a list.
Personally, I am excited for this future, and I'm confident that a version of what I have presented above will happen. Think about it. Your greatest strength and point of pride is knowing what you like and don't like. And if you are extra good, others trust your opinions and finds. So let the agent do the boring stuff (reorders, negotiations, returns, research). Your preferences, like the food you eat, the music you listen to, and the style you buy, will remain yours. Only now, these preferences are further amplified by your agent.
Let me know which parts of the list you agree or disagree with. If you want to help build around this question and push the category forward, please feel free to reach out. I truly believe in using AI to create something novel and delightful for me and my loved ones. I'm on X: @kavyavenkat4. Send hatemail to: kavyav500@gmail.com. Your grievances will be acknowledged.
Two more bonus sections for you all.
Downstream Implications:
For the counterarguments I anticipate, I've put together a brief FAQ:
Q: What about Perplexity Comet?
A: Comet seems awesome, and if you prompt it with a workflow to execute, it will do a decent job. I do think that on a mobile phone, you don't want to watch the agent go through every step in front of you (e-commerce search should happen in the background on its own time). Perplexity's distribution is also weak. It feels like a great power-user tool, but even then, the average user still chooses ChatGPT to compare items and get recommendations.
Q: Couldn't any of the foundational AI companies make their own version of this purchasing agent?
A: Absolutely, and they probably will. These companies don't know how to distribute and design for cultural fit though[6]. The bigger issue is that no matter what these companies promise you, they will use your data against you (selling to advertisers). I think we are ready for a different model. Once you know the preferences of enough people, you have all the leverage. Intelligence about collective demand will matter more in a future when agents do the research, negotiation, and ordering on your behalf.
Q: What is the monetization model?
A: This is a good question. You can easily set up tiers where each agent has a different set of capabilities. Maybe you have some agents subscribe to each other if you want to buy the same things as your favorite human influencer or curator. Affiliate revenue and commissions on transactions are another method of monetization. Finally, I think collective intelligence is something future businesses would pay for. Here is a sample insight: "50 people want this product in the color blue". The key here is that the demand is extracted directly from what consumers actually want, not manufactured by advertisers trying to drum up interest.
Q: Wouldn't this idea end up as an ad business once it matures?
A: See the second part of question 2.
Q: Why require the 'approval' by the user for each purchase?
A: Money is deeply personal and sensitive. Most want control over where every dollar goes[7]. Some may argue that this kills the "autonomous" nature of the agent. Discovery does happen autonomously. It is the final "yes" (or Apple Pay click) that requires the human input, and through this purposeful friction, you build trust anyway. Note that early adopters might be willing to allocate a part of their budget for serendipitous discovery and purchase by the agent[8].
This post is cross-posted from Substack.
Scott Belsky writes about how "personalization effects" are the new network effects. Valuable AI products shouldn't just collect context about you. They should use it to improve every turn of the conversation.
The word 'taste' is thrown around a lot. For this post, I define taste as this higher-dimension algorithm that helps you choose which of the options you like. It doesn't matter if your taste is better or worse than someone else's. The agent is designed to capture YOUR taste.
Humans are natural curators. Just look at how culturally relevant sites like Pinterest or Letterboxd and features like "Saved Reels" are.
Great essay by Daisy Alioto titled "The Future of Media is Bank". She argues that agents will soon decide for us which media we consume, with the media accessed by paying for it through a model that is more flexible and spontaneous than subscriptions.
Let me know if you are interested in the results of the "Surprise Me" experiment.
This might be a hot take, but ChatGPT's Cambrian success is carried by first-mover advantage. Operator exists, but I believe there is an opportunity for this payments-agent idea to be independently developed. OpenAI is stretched thin and is heavily invested in the AGI race. Experience and design seem like an afterthought.
This sentence was inspired by my desire to have an AI evaluate my spending decisions. I was going to build and sell it as a personal finance tool before realizing why the idea wouldn't work: (1) Most people don't try to budget or save, let alone pay for a tool like that (see why Mint.com failed). (2) People aren't rational when it comes to money. (3) Revealing statistical insights about your spending habits is not enough. You need to give users a reason to return multiple times a day. Numbers always have to be accompanied by context that is ideally proactive.
This is a reasonable assumption. The analogue is passive investment vehicles (index funds, ETFs or algo-trading).