Apologies for the impoliteness, but... man, it sure sounds like you're searching for reasons to dismiss the study results. Which is a red flag when the study results basically say "your remembered experience is that AI sped you up, and your remembered experience is unambiguously wrong about that".
Like, look, when someone comes along with a nice clean study showing that your own brain is lying to you, that has got to be one of the worst possible times to go looking for reasons to dismiss the study.
Y'know, I got one of those same U-shaped Midea air conditioners, two or three years ago. Just a few weeks ago I got a notice that it was recalled. Poor water drainage, which tended to cause mold (and indeed I encountered that problem). Though the linked one says "updated model", which makes me suspect that it's deeply discounted because the market is flooded with recalled air conditioners which were modified to fix the problem.
... which sure does raise some questions about exactly what methodology led Wirecutter to make it a top pick.
Speaking for myself: I don't talk about this topic because my answers route through things which I do not want in the memetic mix, do not want to upweight in an LLM's training distribution, and do not want more people thinking about right now.
Agreed; I don't think it's actually that rare. The rare part is the common knowledge and normalization, which makes it so much easier to raise as a hypothesis in the heat of the moment.
If you want a post explaining the same concepts to a different audience, then go write a post explaining the same concepts to a different audience. I am well aware of the tradeoffs I chose here. I wrote the post for a specific purpose, and the tradeoffs chosen were correct for that purpose.
On the one hand, yeah, that's the dream.
On the other hand, focusing on people and groups and working together seems to be the #1 way that people lose track of wizard power in practice, and end up not having any. It's just so much easier to say to oneself "well, this seems hard, but maybe I can get other people in a group to do it", and to do that every time something nontrivial comes up, and for most people in the group to do it every time something comes up, until most of what the group actually does is play hot potato while constructing a narrative about how valuable it is for all these people to be working together.
I don't know the full sequence of things such a person needs to learn, but probably the symbol/referent confusion thing is one of the main pieces. The linked piece talks about it in the context of "corrigibility", but it's very much the same for "consciousness".
Yeah, Stephen's comment is indeed a mild update back in the happy direction.
I'm still digesting, but a tentative part of my model here is that it's similar to what typically happens to people in charge of large organizations. I.e., they accidentally create selection pressures which surround them with flunkies who show them what they want to see, and thereby lose the ability to see reality. And that's not something which just happens to crazies. For instance, this is my central model of why Putin invaded Ukraine.
None of that about AI relationships sounds particularly bad. Certainly that's not the sort of problem I'm mainly worried about here.
It sounds like both the study authors themselves and many of the commenters are trying to spin this study in the narrowest possible way for some reason, so I'm gonna go ahead and make the obvious claim: this result in fact generalizes pretty well. Beyond the most incompetent programmers working on the most standard cookie-cutter tasks with the least necessary context, AI is more likely to slow developers down than speed them up. When this happens, the developers themselves typically think they've been sped up, and their brains are lying to them.
And the obvious action-relevant takeaway is: if you think AI is speeding up your development, you should take a very close and very skeptical look at why you believe that.