Nitpick: IDK, but my impression is that in the Raptor example your point about a "simple core" substantively applies, but also that much (most?) of the visible complexity difference isn't about the design being much simpler than anyone could have imagined at the beginning, but rather about stuff like:
- We needed a bunch of sensors at the beginning; now we don't anymore.
- The design stabilized, so a bunch of parts (that previously wanted to be separate so that they could more easily be rearranged) can now be fused into one part, so more stuff is hidden on the inside / made more spatially compact / has fewer joints.
- We proved out the performance of various elements, so we need fewer redundancies.
This stuff has the flavor of optimization that usually should come after the core design is worked out.
> Here's some of how I think about it: plans are made of smaller plans, inside their steps to achieve 'em. And smaller plans have lesser plans, and so ad infinitum.
>
> Every plan has ends to cause, and steps to help it cause 'em.
> And every step's a little plan, and that's the way ad nauseum.
I've also found this useful in my software engineering career. However, there's a flipside: when something is both easy to do and easy to undo, the overhead induced by questioning the requirements sometimes leads to not doing enough experiments. It depends, per task, on (how hard it is to implement + how hard it will be to undo) / how hard it is to check why I want to implement it. If it's motivated by vague intuition or by a user request, AND it's low overhead to implement, maintain, and undo, then I often just do it and undo it later if I don't like it. (I've mostly been on teams where I was granted the autonomy to just do things based on my own taste.)
Also, sometimes the answer you get from the requirement-giver is "this is my area of expertise; you can reply with your own requirements and we can debate", which is face-saving / partially-guess-culture speak for "you don't get to question that requirement in full depth, because if you do you'll optimize in a domain which is too hard to get you up to speed on right now, according to me, the person you're asking; it's my job to optimize in that domain, and the domain in question has significant complexity I can't explain to you efficiently". E.g., one often gets this from lawyers, hardware engineers, etc. - people whom OOP would tell you to just push through. I think he's specifically asking people to throw away the safety margin of "someone else has a reason to ask for this" and try to know everything, and sometimes that's reasonable - quite often having that ethos has been extremely helpful for me - but I think there's a lot to be lost from having to download a full copy of a requirement's motivation into one's head in order to act on it.
(Both of my comments are trying to get at a reason I think LessWrong is systematically underperforming what it should be, due to what appears to me to be a culture problem in the team. I'm motivated by irritation at a category of community feedback going unapplied. That the issue I'm describing seems to apply to the companies run by OOP seems to back up my concern - a tendency to underuse data gained from sources that are unable to explain themselves in the ontology the team uses internally. This is probably less than half of the variance in explaining lack of responsiveness; probably the top item on the list is just tech debt, same as any org that takes a while to apply user feedback.)
Man, oh man, does this point out one of the biggest problems with working with AI for software engineering. A competent human will question my assumptions, and help me see from a different perspective. Right now, all of the LLMs just sort of run with whatever I tell them, like a well-read and fast-typing intern.
I know that this is an active area of development, and I hope that we find solutions to this problem for all of the reasons you described above.
In most requests I make to Claude Code, I explicitly write "don't start yet, ask clarifying questions first" and then answer any questions it comes up with.
I've done similarly with my CLAUDE.md, including a line close to what you've written, but also the line "Please search for and consider other solutions beyond those I've suggested before writing any code." It seems likely to me that there are better ways to communicate this notion to Claude, though, and I've done less experimentation than I would like.
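For concreteness, here's a rough sketch of the kind of CLAUDE.md guidance I mean; the two quoted lines are the ones from this thread, and the surrounding structure is illustrative rather than a copy of anyone's actual file:

```markdown
<!-- CLAUDE.md: project guidance file that Claude Code reads before acting -->

## Before writing any code
- Don't start yet; ask clarifying questions first.
- Question the requirements: if a request seems unnecessary, over-specified,
  or in tension with the stated goal, say so before implementing it.
- Please search for and consider other solutions beyond those I've suggested
  before writing any code.
```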
I find that as soon as you start to implement this advice, you start to run into significant problems.
In my experience, it's often actually quite hard to talk about the requirements.
To give two examples:
- How are you, as an engineer, supposed to question business assumptions?
- How are you supposed to question the requirements for something as ambiguous as the developer experience (DX) of the API you're supposed to work on, where everyone may have a different opinion?
Context: Every Sunday I write a mini-essay about an operating principle of Lightcone Infrastructure that I want to remind my team about. I've been doing this for about 3 months, so we have about 12 mini essays. This is the first in a sequence I will add to daily with slightly polished versions of these essays.
The first principle, and the one that stands before everything else, is to question the requirements.
Here's how Musk describes that principle:
Here's some of how I think about it: plans are made of smaller plans, inside their steps to achieve 'em. And smaller plans have lesser plans, and so ad infinitum.[1]

Every plan has ends to cause, and steps to help it cause 'em.
And every step's a little plan, and that's the way ad nauseum.
But it's hard to get your subplans right up front.
There are a bunch of reasons for this: sometimes something looks easy from 1,000 feet but turns out to be a lot gnarlier at 10 feet. Sometimes one of your subproblems looks like it has a standard solution, but that standard solution is built to solve a bunch of issues you don't care about. The point of questioning the requirements is to de-silo your subplan, look at a larger scope, and use that to figure out a better plan.
Most work done in most organizations is work that is best never done at all. The core of most solutions turns out to be much simpler and more elegant than anyone who worked on the first version was able to imagine:
Why do we have requirements at all? Of course we have plans and subplans, but those don't constitute "requirements". As will be a recurring theme in these principles, the need for things like "requirements", and the associated risks, rears its head when a project needs more than one person to complete, and as such when tasks need to be split up across many individuals, and often (shudder) hierarchies of individuals.
The best way I have found to set up a delegee to understand the goal of a task and to question the associated requirements is to communicate two things at the same time:
1. Abstractly communicating goals is usually a doomed endeavor, so start with a concrete solution sketch. Then, taking the solution sketch as a starting point, explain what problems it is trying to solve. No plan survives contact with the enemy, but the initial plan is still usually the best tool you have for conveying your eventual goal.
2. As a delegee, question the requirements. Throw away the solution sketch you were given, and think about how you would go about this problem from first principles, in the context of its broader goal.
In conversation, this looks like the following:
Most requirements come in the form of uncommunicated assumptions, so the above conversational pattern often plays out at different levels of abstraction:
Tomorrow: "Do not hand off what you cannot pick up"
[1] Credit for this mini-poem and the subsequent paragraph goes to Rafe.