It's interesting how, two years later, the "buy an expert's time" suggestion is almost outdated. There are still situations where it makes sense, but in the majority of cases any SOTA LLM will probably do a perfectly fine job of giving useful feedback on math or language-learning exercises.
Thanks for the post!
I guess a related pattern is the symmetric case where people talk past each other because both sides are afraid their arguments won't be heard, so they each focus on repeating their own arguments and nobody really listens (or maybe they do, but not in a way that convinces the other person their argument was really understood). So there, too, I agree with your advice: taking a step back and restating the other person's viewpoint seems like the best way out of this.
Some further examples:
It certainly depends on who's arguing. I agree that some sources online see this trade-off and, after some deliberation, end up on the side of not using flags, and I think that's perfectly fine. But this describes only a subset of cases; my impression is that very often (and certainly in the cases I experienced personally) it is not even acknowledged that usability, or anything else, might also be a concern that should inform the decision.
(I admit though that "perpetuates colonialism" is a spin that goes beyond "it's not a 1:1 mapping" and is more convincing to me)
This makes me wonder: how could an AI figure out whether it has conscious experience? I always used to assume that, from a first-person perspective, it's clear when you're conscious. But that is somewhat circular reasoning, as it assumes you have a "perspective" and are able to ponder the question in the first place. Now what does, say, a reasoning model do? If there is consciousness, how will it ever know? Does it have to solve the "easy" problem of consciousness first and then apply the answer to itself?
In no particular order, because interestingness is multi-dimensional and they are probably all, to some degree, on my personal Pareto frontier of interestingness:
Random thought: maybe (at least pre-reasoning-models) LLMs are RLHF'd to be "competent" in a way that makes them less curious and excitable, which greatly reduces their chance of coming up with (and recognizing) any real breakthroughs. I would expect, though, that for reasoning models such limitations will necessarily disappear and they'll become much more likely to produce novel insights. Still, scaffolding and the lack of context and agency can be a serious bottleneck.
Interestingly, the text-to-speech conversion of the "Text does not equal text" section is another very concrete example of this:
So what made you change your mind?