
I also wasn't a heavy user; it's just something I noticed in a few conversations with Sonnet 4.5, and then I started noticing it in writing that other people co-wrote with Claude. It wouldn't surprise me if Opus uses it even more, but I'm not really sure.
As a data point, I did notice the overuse of "genuinely" before the constitution was added to Claude (at least publicly). So I think it would have been introduced somehow during training.
It refers to the existing norm. The author is saying that a recently developed norm is likely less load-bearing than a long-existing one, so the attempt to abolish it is less likely to be flawed.
Sometimes you are a bad friend in ways that you don't realise; everyone has their blind spots. Telling people before taking actions that affect them gives you a chance to adjust before doing something you don't realise is harmful.
As someone who recently got too wrapped up in "just doing things" at the expense of a friend whose permission I did not ask, this post strongly resonates with me right now. Seems like a good word of caution.
On the other hand, things will probably be fine in the long term, and as @Bastiaan said, I probably have a more accurate sense of where the boundaries are than I would have if I had just avoided doing things. So it's maybe a grey area. But you'll probably always be better off asking permission from the people you might be affecting, as long as you care about their opinion and they're acting in good faith.
My experience of a Goenka retreat was basically the exact same as yours, with the added complication that I got sick during the retreat, which was very much not fun but also didn't seem to stop my body from eventually dissolving into waves of vibration (so to speak).
It was cool and interesting, but it didn't seem like much more than that, and enough of the retreat made me skeptical or put off that I didn't go again.
I suspect that "enlightenment" is probably a bundle of different things rather than one discrete thing, and maybe what it means depends on the culture and even how an individual relates to the world. This is based on the heuristic that when you dig into the nature of mental states, they tend to not fall into neat categories that are the same from person to person.
However, there are people alive today who claim to be "awakened", who were certainly self-aware, and who still describe a dramatic change in their perception of the world. The descriptions tend to fall along similar lines, and include:
I'm getting the impression that "consciousness" is inherently not well defined; that is, there is no singular thing we can point to that will meaningfully determine whether or not something is "conscious".
In this sense, consciousness might be a red herring. A similar but more concrete question worth asking: what behaviours would an AI agent have to exhibit for you to want it to be granted fundamental rights/autonomy? Or otherwise for it to be intrinsically unethical to create and run an instance of it?
That makes sense—everything in context. I wouldn't want to go around assuming that I can just tease anyone who is experiencing psychological distress, but I think I do have a sense of specific circumstances where it feels appropriate. And hey, I cannot remember the last time I looked like an asshole, so I'm probably overdue anyway.
I've been particularly impressed by 3.1 Pro's ability to do math problems. I have three problems that I like to pose to AIs, in increasing order of difficulty (all requiring, or greatly aided by, postgraduate-level knowledge of mathematics).
Gemini 3.1 Pro and Opus 4.6 are the first models that could solve the first one, or even come close to a correct solution. Opus was unnecessarily verbose and leaned on advanced mathematical jargon, while Gemini gave a much simpler, far more readable solution.
The second problem was eventually solved by Opus after a couple of false claims and some strong hints (and the final solution still had some inaccuracies), but Gemini just breezed...