Slop, as I've generally seen the term used, refers specifically to AI-generated content, and you acknowledge as much in your article. But despite recognizing that this is how the term is generally used, you seem to argue broadly in favor of the idea that more people being able to produce more works will lead to good things. If those works were like the ones being made prior to, say, five years ago, when the influx of slop began, I might agree with you. If there were simply easier ways for smaller creators to get seen, or for people to get started with fewer resources, I would agree. But I don't think what's happening is people being able to enter the field with fewer resources. I think what's happening is that larger amounts of attention can now be captured with smaller amounts of effort. And that's the main worry when people are negative about slop: if it takes no more than a few minutes of effort to generate 100 videos that could potentially entertain 80 out of 100 people, then it's possible to make money and capture attention with far higher efficiency and far lower effort than before slop existed.
And that means low-effort content will make up a much higher volume of what everyone sees on the Internet, because it captures attention on those feeds as well as or better than things that took more effort. A rising tide of slop raises all boats, possibly, yes, but most of those boats are slop.
Anecdotal evidence: I recently found a channel that produced about 10 videos per day, all on the theme of 'Simpsons episode recaps', using 'screenshots' of 'episodes' that were really just AI-generated images, and nearly all of which involved Elon Musk visiting Springfield, some other plot involving Elon Musk, or some other popular figure visiting Springfield.
Some of these videos were truly horrendous: hilariously bad or outright unsettling, with absolutely incomprehensible imagery, nonsensical scripts that were obviously written by ChatGPT on a cheap plan, and so on.
But even though almost all of the probably over a thousand videos they had uploaded were doing extremely poorly, every once in a while one would hit thousands or hundreds of thousands of views, because the slop happened to be particularly engaging, or the script ended up just coherent enough to seem plausible and the images just convincing enough to seem real. That made it profitable for this person to spam YouTube with 15 slop videos a day on the off chance that one becomes a big hit. I don't think most people would consider this sort of channel beneficial to anyone, except those who want cheap entertainment and don't care about the validity of what they're seeing or how it was produced.
Also, on these platforms everyone generally has to create a huge glut of content in the hopes of getting one or two big videos, because you can simply get unlucky and not get more than a couple of eyeballs on your stuff even if you've made something truly good or worthwhile, and the algorithm particularly favors regular uploads. Which means that the increased volume from all the slop is making it harder for people to get seen, not easier, as you seem to suggest.
Excellent and interesting work.
This is excellent and related to a project I'm working on. I'm really happy to find this post, and I thank you for sharing it. If I make headway on my project (which is similar, but not at risk of eating your lunch, I assure you), I'll be happy to share it with you. Language learning could be so, so much easier than it currently is, and stuff like this is at the core of how it could be improved.
I think that the LLM editing makes this quite a bit more difficult to read and grok/internalize. This is one case where I wish I were reading something less 'polished', if the polish is Claude 'helpfully' giving things names like 'The Perfectionist's Pause'. It's a shame, because I really agree with the Adlerian view of these things and I think you're pointing at a lot of the right stuff here, but it's honestly kind of insufferable to read it through this lens.
But maybe that's just me using the 'Moral Fortress' and the 'Perfectionist's Pause'. (That isn't just me being cute or passive-aggressive; I think those are the two most likely mechanisms from your post that I might be applying here to discredit your work.)
My immediate impression is that this is intended to placate the growing crowd of ChatGPT users who decried the removal of 4o and 4.1, by giving them something very 'warm', as they describe it, in terms of affect and response. I don't know how to feel about this.
Sadly, stock Android doesn't seem to let you change Intensity for grayscale, only the colorblindness modes.
I think your title should be 'I Asked Claude What Paul Fussell Would Have To Say If I Asked Him For Writing Advice' because I feel quite clickbaited.
I had no idea who Paul Fussell was before reading your article, so my introduction to him was being misled into thinking you had actually asked some presumably impressive writer for writing advice, and that my time reading your article would be spent on words from a prestigious writer, not an LLM.
I understand that LessWrong has very lax rules specifically for titles, and I think that's fine and kind of interesting, because authors get to give their articles engaging titles. I don't think it's fine or kind of interesting when someone lies in their title to center a post around the output of a Claude prompt.
This seems HUGE for my personal productivity. If it works the way you've described, it's absolutely, unbelievably huge. You've described my exact problem with getting started on tasks of any level of complexity, and I'm very excited to see if your proposed solution works. Just reading it and imagining attempting it myself, I felt very strongly that this could be very effective. Thank you; I'll try to report back after I give it a shot.
It could also be your 'subscribe to my Substack!' at the end, maybe.
Yeah, I really think people should be clearer about when they are linking to their own paywalled Substack. Shouldn't this be a linkpost at the very least?