Somewhat randomly (or quite deliberately, if I'm being honest) I've been trying to create a more general algorithmic feed for a while, so I've got some context that might be interesting.
Most of it is in a GitHub repo of mine, as I've been working on different ways of looking at recommendation algorithms and thinking of ways to make them more useful to people.
Some of my thoughts on the codebase are best represented as part of the codebase, and I can't be bothered to go through it fully myself and write a long comment, so I created an LLM block below.
My actual human thoughts are something like: it seems kind of hard to create a nice algorithmic feed tool that works more universally for people? My initial attempts at training an algorithm for this didn't work. My next step is a RAG-style text-embedding setup, where the thought is basically that if I already have a bunch of preference data locally, it will be easy to build on that.
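For what it's worth, the embedding idea can be sketched in a few lines. Everything below is hypothetical: the `embed` function is a hashed bag-of-words stand-in for a real embedding model, and the function names and example strings are mine, not from the repo. It just shows the shape of "score candidates by similarity to liked items minus similarity to disliked ones":

```python
import hashlib
import math
from collections import Counter

def embed(text: str, dim: int = 256) -> list[float]:
    # Toy stand-in for a real embedding model: hashed bag-of-words,
    # L2-normalized. In practice, call any sentence-embedding API here.
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[idx] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def rank_by_preference(candidates: list[str], liked: list[str],
                       disliked: list[str]) -> list[str]:
    # Score each candidate by mean similarity to liked items minus
    # mean similarity to disliked items, then sort best-first.
    liked_vecs = [embed(t) for t in liked]
    disliked_vecs = [embed(t) for t in disliked]

    def score(item: str) -> float:
        v = embed(item)
        pos = sum(cosine(v, lv) for lv in liked_vecs) / max(len(liked_vecs), 1)
        neg = sum(cosine(v, dv) for dv in disliked_vecs) / max(len(disliked_vecs), 1)
        return pos - neg

    return sorted(candidates, key=score, reverse=True)
```

The appeal of this setup is exactly the point above: if the preference data already exists locally, there's nothing to train; new likes and dislikes just become more vectors.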
If that doesn't work, I've been cooking up a plan to use active inference and a brain-inspired knowledge architecture, both to explicitly parametrize exploration as a key value in the system itself and to be able to train it more efficiently over time.
So basically I agree with you on this post, but it seems like a tool where the natural incentives work against it, and it is at least not trivial to build something good. (I'm OK at SWE but not the best, and I'm just doing this in my free time, so don't over-update.)
LLM summary of prompted codebase knowledge
A few pieces of this are already further along than the framing suggests, and I think the interesting question is different from "who will build it."
On the "who" question: Bluesky's AT Protocol already ships a custom feed generator API — any feed on the network can be an LLM-curated one, and the switching cost for users is one tap. Paper Skygest (a curated academic-paper feed) has 50K+ weekly users on exactly that infrastructure. Matter and Readwise Reader are already doing LLM-curated long-form for personal reading. The startup you're predicting exists in several forms; the pieces just haven't been assembled into the specific "tell it what you want and it obeys" product yet, probably because that product is less defensible than it sounds.
The less-discussed problem: **declared preferences aren't actually the alignment target most people want.** If I tell a feed "no Trump news," I get a feed that mirrors my present self — which is fine for filtering, but it's not "aligned," it's just obedient. It optimizes against ragebait by replacing one reward-hacked loss function with another (my stated preferences, which are also gameable, just by me). The deeper misalignment in YouTube/X isn't that they ignore what I say I want; it's that nobody — including me — has a clean articulation of what attention *should* optimize for, especially in aggregate.
The more interesting design targets I see are things like:
- **Bridging objectives** (what Community Notes uses): rank content that gets positive engagement *across* ideological clusters, not within them. This is already deployed at scale and demonstrably reduces the ragebait equilibrium without needing personalization at all.
- **Epistemic-stance-aware ranking**: labelers or LLMs tagging claim/question/evidence/opinion, then ranking by curiosity/inquiry rather than engagement.
- **Slow feeds**: weekly digests, not real-time streams, where latency itself is part of the design (sidesteps the recency/outrage coupling).
These don't require users to "declare preferences" in the shallow sense. They require a *different objective function* at the infrastructure layer. And crucially, open protocols like ATProto make it possible to run that experiment without having to out-compete YouTube.
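To make the bridging idea concrete, here is a toy sketch. Community Notes itself uses matrix factorization over the rating matrix; the simplification below is mine, not theirs: just take the minimum per-cluster like-rate, so content ranks highly only if every cluster likes it.

```python
from collections import defaultdict

def bridging_score(votes: list[tuple[str, bool]]) -> float:
    # votes: (cluster_id, liked) pairs.
    # Return the *minimum* per-cluster like-rate: ragebait that one
    # cluster loves and another hates scores 0.0, while broadly-liked
    # content scores high. A crude stand-in for a bridging objective.
    by_cluster: dict[str, list[bool]] = defaultdict(list)
    for cluster, liked in votes:
        by_cluster[cluster].append(liked)
    if not by_cluster:
        return 0.0
    return min(sum(v) / len(v) for v in by_cluster.values())

# Ragebait: cluster A loves it, cluster B hates it -> score 0.0
ragebait = [("A", True), ("A", True), ("B", False), ("B", False)]
# Broadly liked: both clusters mostly approve -> score 0.5
solid = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False)]
```

Note that nothing here is personalized; the objective lives at the ranking layer, which is the point of the paragraph above.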
So I'd restate the prediction as: the disruption isn't "LLMs will obey what you tell them to show you," it's that the cost of running a feed with a non-trivial objective function dropped by ~100x, and the equilibrium where one algorithm optimizes everyone's attention for ad revenue is no longer a natural monopoly. The startup play is real; the deeper shift is that "algorithmic feed" stops being a thing one company does.
I wish! The problem is that the popular sources are BOTH misaligned AND limited-access. They don't allow users or 3p apps to curate the content differently. This is so deep in their economic model that it's likely that a significant change would kill the content.
I fear there's a real tension between what we (LessWrong readers: overthinking, systematizing, and non-conforming) want and what most people are able or willing to use. Why did Mastodon fail? It failed to get critical mass among non-geeks, but also because it was difficult to set up your feed/follows to get the right level of interaction. That exact crack (giving the user what they react to) is intentionally missing (in favor of giving the user what they ask for). And for most people, that made it less useful.
For now, this is the equilibrium. Your options for media feeds are:
- corporate-aligned ML-recommended cocaine personalized to you for the purpose of optimizing attention
- voting systems (reddit, Hacker News, LessWrong) which aren't personalized to you
- non-algorithmic manual curation (personal blogs, newspaper websites like ArsTechnica) which isn't personalized to you
I think you missed the best one available right now: non-algorithmic manual curation which you personalize to you by yourself.
For example, on LessWrong you can subscribe to posts by authors, which leads to a feed (in the notification pane) that is curated to you. It's how I got to this post. On YouTube, one needs to remove a lot of watch history to avoid algorithmic recommendations, but once you do, the "subscriptions" page remains, which is almost not algorithmically filtered. I personally use uBlock Origin with custom filters to block all algorithmic recommendations. For news, I subscribed to a handful of email newsletters and currently just one RSS feed, all in the same software (Thunderbird).
I wouldn't say I've perfectly removed algorithmic recommendations from my life (some are too trivial for me to bother finding out how to remove, or too hard to remove), but I got 80% of the way there and it makes a huge difference.
The flaw of this approach, however, is that the curation mostly relies on a single criterion ("who made it") or on existing curation lists.
What do you mean, "will soon"? Some of them, e.g. Facebook, are already enshittified beyond all repair.
I predict that LLMs are about to disrupt algorithmic media feeds, and that this will start with a startup that curates blogs for you.
Big Media is Misaligned
If you look at a list of the world's top 10 websites, half of them are media websites. Of these 5 media behemoths, 4 (YouTube, Facebook, Instagram, X/Twitter) are misaligned [1] media feeds. By "misaligned media feed", I mean a website where the primary user interface is: you go to the homepage, and a giant machine learning algorithm shows you what you are most likely to engage with.
These media behemoths are fundamentally misaligned with user values. Note that I wrote "user" and not "human". These algorithms are well aligned with their corporate owners' target metrics. For example, YouTube's recommendation algorithm is well aligned with its targets of clickthrough rate and watch time. The problem is that media websites' values are unaligned with user values. Consequently, big media peddles the media equivalent of crack cocaine. That's why it's so easy to find ragebait on X/Twitter, and why thumbnails are so important to YouTube videos.
For now, this is the equilibrium. Your options for media feeds are:
- corporate-aligned ML-recommended cocaine personalized to you for the purpose of optimizing attention
- voting systems (reddit, Hacker News, LessWrong) which aren't personalized to you
- non-algorithmic manual curation (personal blogs, newspaper websites like ArsTechnica) which isn't personalized to you
What all of these have in common is that none of them curate based on an individual user's declared preferences. That's about to change. Thanks to recent developments in LLMs, it is now possible to curate media feeds based on declared preferences. I should be able to tell YouTube "I don't want to see any more videos about Donald Trump news" and have that obeyed. Right now, YouTube does not have that feature.
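Mechanically, "curate by declared preference" could look something like the sketch below. The function names are mine, and the keyword matcher is a stand-in: a real product would make an LLM call per item (e.g. "does this headline violate the user's stated preference? yes/no") where `allows` is plugged in.

```python
from typing import Callable

def filter_feed(items: list[str], preference: str,
                allows: Callable[[str, str], bool]) -> list[str]:
    # Keep only items the classifier allows under the declared preference.
    # In a real system, `allows` would be an LLM call; it is pluggable
    # here so the sketch stays self-contained and runnable.
    return [item for item in items if allows(item, preference)]

def keyword_allows(item: str, preference: str) -> bool:
    # Naive stand-in classifier: block items mentioning the banned topic.
    banned = preference.lower().removeprefix("no ").strip()
    return banned not in item.lower()

feed = ["Trump announces new policy", "SpaceX launches Starship",
        "Trump rally recap"]
print(filter_feed(feed, "no Trump", keyword_allows))
# prints ['SpaceX launches Starship']
```

The LLM's advantage over the keyword version is exactly the hard part: "Trump news" should block a rally recap but probably not a history documentary, and that judgment call is what was previously too expensive to make per-item.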
So, who will implement this? In the long run, I believe that YouTube/Facebook/Instagram/etc. will implement a feature like this. But I believe that this domain will be pioneered by startups, due to a conflict of interest: a system where big media peddles cocaine is solidly in the short-term financial interests of big media. (Especially since LLM curation is surely more expensive to run than the current recommendation algorithms.) They will change eventually, but won't change until threatened. The only question is how soon all of this happens. Probably first an LLM-personalized media feed startup takes off, and then YouTube, Facebook, Instagram, and X/Twitter will scramble to catch up.
Where will this start? Probably with long-form writing and probably not with short-form video. Especially because there isn't a good way to get a personalized feed of blog articles right now. I can open up YouTube and get a feed of personalized video recommendations, but there's no [2] equivalent personalized feed for blogs. Personalized LLM media feeds won't be isolated to blogs, of course. That's just where I predict the revolution will start.
Misaligned cocaine media feeds will continue to exist, of course, just as literal crack cocaine exists, but I predict that in the future they will be considered vices, the way candy and soda are considered vices today. In the future, LLM-curated media will be considered the "good for you" option.
Imagine a world with no ragebait, no clickbait, and no <whatever else you don't like> in your media feeds. This world is already possible. The only question is how long until someone finds the time to vibe code an aligned recommendation algorithm.
The media behemoth that is not obviously misaligned to maximize attention is Reddit. ↩︎
Perhaps Substack's home page is an exception. I don't know as I don't use it. ↩︎