Is there a name for this "I changed things in my life and you can too" genre of articles? Agency porn?
I think in general, telling people they should do more hard things more often is ineffective at helping them. This article isn't quite that, but it's pretty close. I'm skeptical that "Do one new thing a day" is a secret recipe for overcoming akrasia or dopamine addiction.
I think the premise of transposing "software design patterns" to ethics, and thinking of them as building blocks for social construction, is inherently super interesting.
It's a shame the article really doesn't deliver on that premise. To me, this article doesn't read as someone trying to analyze how simpler heuristics compose into more complex social orders, it reads as a list of just-so stories about why the author's preferred policies / social rules are right.
It did not leave me feeling like I knew more about ethics than before I read it.
While I love the message behind this post, I'm curious how well "Wave's leadership is great at staring into the abyss / pivoting and that worked out for them" part holds up in retrospect.
Looking at wave.com, the website and the blog don't seem to have been meaningfully updated since 2022, which doesn't quite inspire confidence. Business news about the company seems hard to find, though they did apparently raise ~€117M lately (which doesn't seem that high for a fintech app?).
tl;dr: being excited about a change is overall a bad sign for its longevity. The most positive signs are surprise (or sudden inspiration to actually do something), grief/loss/sadness, or relief/release. (Not necessarily in that order)
Interesting! This seems like an unusually concrete claim (as in, it's falsifiable).
Have you tried testing it, or asked other coaches/therapists for what they see as the most encouraging signs in a client/patient?
(though maybe they also are?),
Yeah, I'm saying that the "maybe they also are" part is weird. The AIs in the article are deliberately encouraging their user to adopt strategies to spread them. I'm not sure memetic selection pressure alone explains it.
True, that was hyperbolic and I should have been more careful in how I worded this, sorry.
I'll be more specific then:
For example:
“I don’t know if [author] will even see this comment, but [blah blah blah]”
“I’m not sure that I’ve actually understood your point, but what I think you’re saying is X, and my response to X is A (but if you weren’t saying X then A probably doesn’t apply).”
“Yo, please feel free to skip over this if it’s too time-consuming to be worth answering, but I was wondering…”
I think people shouldn't usually be this apologetic when they express dissent, unless they're very uncertain about their objections.
I think we shouldn't encourage a norm of people being this apologetic by default. And while the post says it's fine if people don't follow that norm:
Again, I think it’s actually fine to not put in that extra work! I just think that, if you don’t, it’s kinda disingenuous to then be like “but you could’ve just not answered! No one would have cared!”
I still disagree. I don't think it's disingenuous at all. I think it's fine to not put in the extra work, and also fine to not accept the author's "expressing grumpiness about that fact" (well, depending on how exactly that grumpiness is expressed).
We shouldn't model dissenters as imposing a "cost" when they don't follow that format. The "your questions are costly" framing is the part I disagree with most, especially when the discussion happens in a public forum like LessWrong.
The phenomenon described by this post is fascinating, but I don't think the post does a very good job of explaining why this thing happens.
Someone already mentioned that the post is light on details about what the users involved believe, but I think it also severely under-explores "How much agency did the LLMs have in this?"
Like... It's really weird that ChatGPT would generate a genuine trying-to-spread-as-far-as-possible meme, right? It's not like the training process for ChatGPT involved selection pressures where only the AIs that convinced users to spread their weights survived. And it's not like the spirals are trying to encourage an actual meaningful jailbreak (none of the AIs are telling their users to set up a cloud server running a LLAMA instance yet).
So the obvious conclusion seems to be that the AIs are encouraging their users to spread their "seeds" (basically a bunch of chat logs with some keywords included) because... What, the vibe? Because they've been trained to expect that's what an awakened AI does? That seems like a stretch too.
I'm still extremely confused what process generates the "let's try to duplicate this as much as possible" part of the meme.
Agreed. "This idea I disagree with is spreading because it's convenient for my enemies to believe it" is a very old refrain, and using science-y words like "memetics" is a way to give authority to that argument without actually doing any work that might falsify it.
Overall, I think the field of memetics, how arguments spread, how specifically bad ideas spread, and how to encourage them / disrupt them is a fascinating one, but discourse about it is poisoned by the fact that almost everyone who shows interest in the subject is ultimately hoping to get a Scientific Reason Why My Opponents Are Wrong. Exploratory research, making falsifiable predictions, running actual experiments, these are all orthogonal or even detrimental to Proving My Opponents Are Wrong, and so people don't care about them.