LESSWRONG

kavya

X: @kavyavenkat4. I like AI, markets, and culture. Writing and reading on here is my equivalent of skipping stones.

Posts



Comments

1 · kavya's Shortform
4d
8
kavya's Shortform
kavya · 1h · 3 · 0

The aspect of your work to care about the most is replay value. How many times do people keep coming back? Number of replays, rereads, and repeat purchases are proxies for high resonance. On that note, I wish more writing platforms let you see in aggregate how many first-time readers visited again and how spaced out their visits were. If they can still look past the known plot and imperfections in your work, you're on to something. 

Reply
My AI Predictions for 2027
kavya · 16h · 1 · 0

But it's not the kind of thinking that leads to clever jokes, good business ideas, scientific breakthroughs, etc.

 

Could an LLM write a show like Seinfeld? This might actually be my test for whether I accept that LLMs are truly clever. Anyone who's watched it knows Seinfeld was great for two reasons: (1) it broke all the rules of a sitcom, and (2) it follows very relatable interactions between people over the most trivial issues and runs with them for seasons. There is no persistent plot or character arc. You have humans playing themselves. And yet it works.

Reply
My AI Predictions for 2027
kavya · 16h · 3 · 0

Benchmarks sound like a way to see how well LLMs do when up against reality, but I don't think they really are. Solving SAT problems or Math Olympiad problems only involves deep thinking if you haven't seen millions of math problems of a broadly similar nature.

 

Benchmarks are a best-case analysis of model capabilities. A lot of companies benchmax, but is that inherently bad? If the process is economically valuable and repetitive, I don't care how the LLM gets it done, even if it is memorizing the steps.

Reply
My AI Predictions for 2027
kavya · 16h · 3 · 0

Nobody has identified a step-by-step process to generate funny jokes, because such a process would probably be exponential in nature and take forever.

 

I tweeted about why I think AI isn't creative a few days ago. It seems like we have similar thoughts. A good idea comes from noticing a potential connection between ideas and recursively filling in the gaps through verbalizing/interacting with others. The compute for that right now is unjustified. 

Reply
kavya's Shortform
kavya · 3d · 3 · 0

Young people would benefit a lot more if they defaulted to forming and defending an opinion in real-time. I would rather say what I think and find out how wrong I am than keep waiting for more data. 

This thought came to me during a walk to class. A good professor of mine would show us a graph with a blurred-out title and ask for our initial observations, or what we thought it represented. Even that is intimidating for most students, because no one wants to say something stupid or too simple. The idea that you can't hold a conviction now and update your beliefs later needs to be unlearnt.

Reply
kavya's Shortform
kavya · 4d · 1 · 0

Yes, that is one aspect of creativity I'm referring to. But more than the immediate, associative leaps, I'm interested in their ability to sample concepts across very different domains and find connections, whether that happens deliberately or randomly. With humans, though, the ideas that plague our subconscious are tied to our persistent, internal questions.

Reply
kavya's Shortform
kavya · 4d · 6 · 1

My theory on why AI isn't creative is that it lacks a 'rumination mode'. Ideas can sit and passively connect in our minds for free. This is cool and valuable. LLMs don't have that luxury. Non-linear, non-goal-driven thinking is expensive and not effective yet.

Cross-posted from X 

Reply
The Future of AI Agents
kavya · 4d · 1 · 0

The endgame you have described is a strong possibility. I guess you might try to fight it through increased transparency and by ensuring alignment between agent behavior and user preferences. But I do see that the core issues still stand: the possibility of collusion, hidden incentives, and preference manipulation.

Reply
The Future of AI Agents
kavya · 4d · 3 · 0

Thanks for the thoughtful comment.

(1) I agree that as you move up to a higher income class, people are more willing to let the agent spend money.

(2) But if you design the agent to be primarily defensive, I believe almost all classes of society would appreciate it. Why wouldn't I want the same thing at a better price (through a drop or negotiation), higher quality, or better alignment with my values? You could incentivize the agent through a cut of the savings.

(3) If you are truly concerned about these agents departing from their given purpose, you can always require user approval for every purchase. The agent says something like, "Hey, here's something you might be interested in because of your {needs} and {something you stated you wanted}." You respond with yes or no, and at the final stage you approve the payment (Apple Pay + Face ID). The advantage of the tokenization tech behind Apple Pay is that it never shares your card details with the merchant (it passes payment tokens instead), and I would imagine it could work the same way for these agents.

(4) We do live in a world where people try to convince you to buy things you don't need. I do think this model will partly collapse in the future. Soon, with personalized AI controlling almost all of the information we see, very few companies/products will truly be able to create something inherently viral and game the network of agents. So I actually think we're going to see a shift toward companies that make what people truly want, because otherwise the agents will never bring the product to their human users' attention. This would improve market efficiency!

I’m new to LessWrong, so I probably glossed over the superintelligence possibility more quickly than I should have. In the meantime, I think we should still build useful tools like these economic surrogate agents. People are using ChatGPT to buy (instead of Googling or visiting five different platforms), and if you make that experience more tuned to the user, I think there’s an opportunity. Any kind of superagent(s) might be our Last Interface. Will check out Pattie Maes!!
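The approval-gated flow in (3) can be sketched in a few lines. This is a minimal illustration, not any real Apple Pay or agent API: `Proposal`, `issue_payment_token`, and `run_purchase` are all hypothetical names, and the tokenization step is a stand-in for whatever the device-side payment system actually does.

```python
# Hypothetical sketch of the approval gate in (3): the agent proposes,
# the user approves, and only an opaque one-time payment token (never
# raw card details) is released toward the merchant.

from dataclasses import dataclass
import secrets

@dataclass
class Proposal:
    item: str
    price: float
    reason: str  # ties the pitch to the user's stated needs

def issue_payment_token() -> str:
    # Stand-in for device-side tokenization (Apple Pay-style):
    # the merchant only ever sees this token, not the card number.
    return "tok_" + secrets.token_hex(8)

def run_purchase(proposal, approve):
    # approve() models the yes/no + Face ID step; card details
    # never enter this flow at all.
    if approve(proposal):
        return issue_payment_token()
    return None  # declined: nothing is spent, nothing is shared

token = run_purchase(
    Proposal("running shoes", 59.99, "you said you wanted to jog more"),
    approve=lambda p: p.price < 100,  # demo stand-in for a real prompt
)
```

The point of the shape is that the user sits between proposal and payment, so a defensive agent can only ever surface options, never spend on its own.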

Reply
The Future of AI Agents
kavya · 5d · 3 · 2

Yes, that's a good way to put it. In the beginning, it was definitely something young people were excited about. But LLMs are a lot more like utilities now: you use them when you need them and move on with life.

Reply
5 · The Future of AI Agents
5d
8