I wasn't a heavy user either; it's just something I noticed in a few conversations with Sonnet 4.5, and then started noticing in writing that other people co-wrote with Claude. It wouldn't surprise me if Opus uses it even more, but I'm not really sure.
As a data point, I noticed the overuse of "genuinely" before the constitution was (at least publicly) added to Claude, so I think it was introduced somehow during training.
It refers to the existing norm. The author is saying that a recently developed norm is likely less load-bearing than a long-existing one, so the attempt to abolish it is less likely to be flawed.
Sometimes you are a bad friend in ways you don't realise; everyone has blind spots. Telling people before taking actions that affect them gives them a chance to push back before you do something you don't realise is harmful.
As someone who recently got too wrapped up in "just doing things" at the expense of a friend whose permission I did not ask, this post strongly resonates with me right now. Seems like a good word of caution.
On the other hand, things will probably be fine in the long term, and as @Bastiaan said, I probably have a more accurate sense of where the boundaries are than I would if I had just avoided doing things. So it's maybe a grey area. But you'll almost always be better off asking permission from the people you might be affecting, as long as you care about their opinion and they're acting in good faith.
My experience of a Goenka retreat was basically the exact same as yours, with the added complication that I got sick during the retreat, which was very much not fun but also didn't seem to stop my body from eventually dissolving into waves of vibration (so to speak).
It was cool and interesting, but it didn't seem much more than that, and enough of the retreat left me skeptical or put off that I didn't go back.
I suspect that "enlightenment" is probably a bundle of different things rather than one discrete thing, and maybe what it means depends on the culture and even on how an individual relates to the world. This is based on the heuristic that when you dig into the nature of mental states, they tend not to fall into neat categories that are the same from person to person.
However, there are people alive today who claim to be "awakened", who were certainly self-aware to begin with, and who still describe a dramatic change in their perception of the world. The descriptions tend to f...
I'm getting the impression that "consciousness" is inherently not well defined; that is, there is no singular thing we can point to that will meaningfully determine whether or not something is "conscious".
In this sense, consciousness might be a red herring. A similar but more concrete question worth asking: what behaviours would an AI agent have to exhibit for you to want it to be granted fundamental rights/autonomy? Or otherwise for it to be intrinsically unethical to create and run an instance of it?
That makes sense—everything in context. I wouldn't want to go around assuming that I can just tease anyone who is experiencing psychological distress, but I think I do have a sense of specific circumstances where it feels appropriate. And hey, I cannot remember the last time I looked like an asshole, so I'm probably overdue anyway.
Reading this has made something click for me, I think.
The other day a friend of mine had what he felt was an extremely embarrassing moment—although really it was not nearly as bad as he felt it was. I had this blog series in mind while we were assuring him it was fine. It didn't quite connect with him, and I knew it wouldn't, but I didn't really know what else to do, so I felt kind of awkward, even while worrying that my awkwardness would make things worse.
Part of it is that I was hiding information, in that I actually found the situa...
Another excellent post. This particular post has clarified the framework for me enough that I could imagine it impacting my interactions with people.
It seems like this is formalising things that people tend to gain a partial intuition for through social interaction.
How much of this is prescriptive vs descriptive? I could use music theory to explain why a song sounds good, but in most cases music theory works better as a post-facto explanation than as an instruction for how to write good music. Do you think this framework is useful for learning how to change people's expectations/beliefs/attention/etc., or is it a description of something that could be learnt just as well without the framework?
I very much enjoyed this short story.
Before I read the spoiler text at the end, I was confused about the postscript. While the main story has a very clear metaphor and intention, the postscript completely diverges from that and instead sets up the intro to a clichéd YA science-fantasy action novel; it could have been written by James Patterson. I wonder if modern LLMs would do any better.
Edit: I tried it with ChatGPT. It gave a more realistic opener that matches the text better, but it was too explicit in calling back to phrases directly used in the text, like someone trying to show off how much they remember. Plausibly this could be fixed with the right prompting.
I'm very much enjoying the series so far.
I find this topic very interesting, but it's hard for me to tell what these techniques would look like in practice. It might help to have more examples of what they look like. Or maybe it's the kind of thing I would need to experience personally to get it?
How do you know how much respect the other person is giving you, so that you can successfully bid for attention? Is this just a matter of experience?
Very interesting post. I'll be interested to see how this fits in with other psychological frameworks that have been posited on this forum, like Chipmonk's and Steven Byrnes'.
Some of what you've said so far resonates with me—I have had the experience of a particular instance of suffering dissolving pretty quickly once I noticed that the thing I was observing and the suffering I felt from it are distinct and can be separated from each other. Some of this seems unlike anything I've heard before (like the Attention-Respect-Security model) and I'm curious to s...
I think the compliment sandwich can be useful as a stepping stone to good communication. That said, I think of it as a narrow formulation of a more general (and less precisely defined) approach to conversation that I might point to with phrases like "work with people where they are at" and "be aware of the emotions that your words induce in other people". There was an article on LessWrong that I can't find, arguing that clear communication is worded to pre-emptively avoid likely misunderstandings and misconceptions. The idea I'm pointing to is like that, b...
I very much appreciate this post, because it strongly resonates with my own experience of laziness and willpower. Reading this post feels less like learning something new and more like an important reminder.
This is not quite accurate. You can't uniformly pick a random rational number from 0 to 1: there are countably many such numbers, and the probabilities you assign must sum to 1, but no assignment that gives every rational the same probability can sum to 1.
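Spelling out the counting argument: suppose each of the countably many rationals $q_1, q_2, \dots$ in $[0, 1]$ is assigned the same probability $p$. Then the total probability is

```latex
\sum_{n=1}^{\infty} p =
\begin{cases}
0 & \text{if } p = 0, \\
\infty & \text{if } p > 0,
\end{cases}
```

which can never equal 1, so no uniform distribution exists on a countably infinite set.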
You can have a uniform distribution on an uncountable set, such as the real numbers between 0 and 1, but since you can't physically pick out an arbitrary element of an uncountable set, this is a theoretical rather than a real-world issue.
As far as I know, any mathematical case in which something with probability 0 can happen does not actually occur in the real world in a way that we can observe.
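As a concrete illustration of the last point, here's a minimal Monte Carlo sketch of how probability-0 events behave under a uniform distribution on [0, 1). It uses Python's `random.random()` as a stand-in for a uniform draw; note that computer floats form a finite set, so this only approximates the continuous picture:

```python
import random

random.seed(0)  # fixed seed for reproducibility

N = 100_000
samples = [random.random() for _ in range(N)]

# An interval has positive measure: roughly 30% of draws land in [0.2, 0.5).
in_interval = sum(0.2 <= x < 0.5 for x in samples)

# A single point has measure zero: exact hits are essentially never observed.
exact_hits = sum(x == 0.5 for x in samples)

print(in_interval / N)  # close to 0.3
print(exact_hits)       # almost surely 0
```

Any particular real number has probability 0 of being drawn, yet *some* number is always drawn, which is the sense in which "probability 0" and "impossible" come apart in the continuous setting.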
On the other hand, the more you get accustomed to a pleasurable stimulus, the less pleasure you receive from it over time (hedonic adaptation). Since this happens to both positive and negative emotions, it seems to me that there is a kind of symmetry here. To me this suggests that decreasing prediction error results in more neutral emotional states rather than pleasant states.
I disagree that all prediction error equates to suffering. When you step into a warm shower you experience prediction error just as much as if you step into a cold shower, but I don't think the initial experience of a warm shower contains any discomfort for most people, whereas I expect the cold shower usually does.
Furthermore, far more prediction error is experienced in life than suffering. Simply going for a walk leads to a continuous stream of prediction error, most of which people feel pretty neutral about.
I've been particularly impressed by 3.1 Pro's ability to do math problems. I have three problems that I like to pose to AIs, in increasing order of difficulty (all requiring, or greatly aided by, postgraduate-level knowledge of mathematics).
Gemini 3.1 Pro and Opus 4.6 are the first models that could solve the first one, or even come close to a correct solution. Opus was unnecessarily verbose and appealed to some advanced mathematics jargon, while Gemini gave a much simpler, far more readable solution.
The second problem was eventually solved by Opus after a cou...