> The base rate for acute psychosis is so high
Do you happen to know the number? Or is this a vibe claim?
Man, I have conflicting feelings about this post. This entire approach to speeches is... probably the right choice for someone with typical public speaking skills, but puts a ceiling on how good it can get.
For comparison, here is my general approach for basically all of my public speaking:
The whole strategy of "write speech, practice delivering that exact speech, then deliver it exactly as practiced" leaves no room to match the audience's energy on the fly. It rules out most forms of audience interaction as part of the talk, because real audience interaction introduces the possibility of surprises. When I watch other people use the "follow the plan" style, it feels like they're not engaging with the audience (because, well, they aren't).
And the entire concept of holding a written script while on-stage is complete anathema to engaging speaking, at least in the style I usually use. You're stuck at the podium, so basically half of good public speaking is immediately ruled out: you can't use most really expressive body language, and you can't use space and movement to communicate context switches or direct the audience's attention.
... but the flip side is that the style I usually rely on requires being completely comfortable on stage, and requires a deep understanding of the plan such that one can generalize off-distribution as surprises come up. It would be totally nonviable for lots of people.
> If you want to solve alignment and want to be efficient about it, it seems obvious that there are better strategies than researching the problem yourself: rather than spending 3+ years on a PhD (cognitive rationality), get 10 other people to work on the issue (winning rationality). That already 10x's your efficiency.
Alas, approximately every single person entering the field has either that idea, or the similar idea of getting thousands of AIs to work on the issue instead of researching it themselves. We have thus ended up with a field in which nearly everyone is hoping that somebody else is going to solve the hard parts, and the already-small set of people who are just directly trying to solve it has, if anything, shrunk somewhat.
It turns out that, no, hiring lots of other people is not actually how you win when the problem is hard.
It sounds like both the study authors themselves and many of the commenters are trying to spin this study in the narrowest possible way for some reason, so I'm gonna go ahead and make the obvious claim: this result in fact generalizes pretty well. Beyond the most incompetent programmers working on the most standard cookie-cutter tasks with the least necessary context, AI is more likely to slow developers down than speed them up. And when that happens, the developers themselves typically think they've been sped up; their brains are lying to them.
And the obvious action-relevant takeaway is: if you think AI is speeding up your development, you should take a very close and very skeptical look at why you believe that.
Apologies for the impoliteness, but... man, it sure sounds like you're searching for reasons to dismiss the study results. Which sure is a red flag when the study results basically say "your remembered experience is that AI sped you up, and your remembered experience is unambiguously wrong about that".
Like, look, when someone comes along with a nice clean study showing that your own brain is lying to you, that has got to be one of the worst possible times to go looking for reasons to dismiss the study.
Y'know, I got one of those same u-shaped Midea air conditioners, two or three years ago. Just a few weeks ago I got a notice that it was recalled. Poor water drainage, which tended to cause mold (and indeed I encountered that problem). Though the linked one says "updated model", which makes me suspect that it's deeply discounted because the market is flooded with recalled air conditioners which were modified to fix the problem.
... which sure does raise some questions about exactly what methodology led Wirecutter to make it a top pick.
Speaking for myself: I don't talk about this topic because my answers route through things which I do not want in the memetic mix, do not want to upweight in an LLM's training distribution, and do not want more people thinking about right now.
Agreed, I don't think it's actually that rare. The rare part is the common knowledge and normalization, which makes it so much easier to raise as a hypothesis in the heat of the moment.
If you want a post explaining the same concepts to a different audience, then go write a post explaining the same concepts to a different audience. I am well aware of the tradeoffs I chose here. I wrote the post for a specific purpose, and the tradeoffs chosen were correct for that purpose.
I'm not sure what it would even mean to teach something substantive about ML/AI to someone who lacks the basic concepts of programming. Like, if someone with zero programming experience and a median-high-school math background asked me how to learn more about ML/AI, I would say "you lack the foundations to achieve any substantive understanding at all; go do a programming 101 course and some calculus at a bare minimum".
For instance, I could imagine giving such a person a useful and accurate visual explanation of how modern ML works, but without some programming experience they're going to go around e.g. imagining ghosts in the machine, because that's a typical mistake people with zero programming experience make. And a typical ML expert trying to give an explain-like-I'm-5 overview wouldn't even think to address a confusion that basic. I'd guess there are quite a few gaps like that; inferential distances are not short.