Speaking very roughly, our best tools for figuring out the truth are inference and empiricism. By inference I mean using things like math, logic, and theory in general to conclude new facts from things we assume to be true. By empiricism I mean looking at the world, doing experiments, etc.
Inference tends to work particularly well when you're highly confident in your premises. Empiricism tends to work particularly well in domains of high uncertainty.
Nothing prevents you from combining the two – for example, my basic applied thought framework is to "run towards uncertainty" – that is, have a theory, identify the points of highest uncertainty in the theory, figure out the smallest experiment/action to resolve that uncertainty, do it. Basically the scientific method. This is what I call "Risk Driven Development" in the context of programming.
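The loop above can be sketched in a few lines. This is a hypothetical illustration of the idea, not code from any real system – all names, confidence numbers, and the threshold are made up:

```python
# Hypothetical sketch of the "run towards uncertainty" loop.
# Assumptions map to confidence levels in [0, 1]; all values illustrative.

def riskiest(assumptions):
    """Return the assumption we are least confident in."""
    return min(assumptions, key=assumptions.get)

def run_towards_uncertainty(assumptions, experiment, threshold=0.9):
    """Repeatedly run the smallest experiment that resolves the point of
    highest uncertainty, until every assumption clears the threshold."""
    while assumptions[riskiest(assumptions)] < threshold:
        target = riskiest(assumptions)
        assumptions[target] = experiment(target)  # cheap test, update belief
    return assumptions

# Toy usage: each "experiment" simply resolves its assumption fully.
beliefs = {"users want feature X": 0.3, "API can handle load": 0.7}
resolved = run_towards_uncertainty(beliefs, experiment=lambda a: 1.0)
```

The only real content is the ordering: always attack the least-certain assumption first, with the cheapest test that can settle it.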
People from highly theoretical degrees tend to struggle with high-uncertainty domains after graduating from school because their go-to tool is pure inference, and inference without empiricism fails in the real world because your assumptions are never 100% true, not even close. (They generally learn empiricism with practice.)
The failure modes of high empiricism without theory are much more subtle. Pure empiricism pretty much always works decently well. Failures look more like "didn't invent general relativity" – theory tends to gather a small number of large victories. Less commonly, theory lets you avoid a mistake the first time you do something, or more generally learn from fewer examples.
One major point of contention among programmers is how much value you gain from using abstractions that are 95% true vs. 100% true. Programmers who are really good at inference gain a huge advantage from 100% true abstractions. Programmers who aren't gain a 5% advantage, and thus see it as a huge cost with little benefit. The vast majority of functional programming advocates you meet will be people whose preferred method is inference.
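A classic concrete instance of a "95% true" abstraction (my example, not the post's): floating-point numbers as a model of the reals. Real addition is associative; float addition usually is, but not always:

```python
# Floats are a "95% true" abstraction of the reals: addition is
# associative almost all the time, and then one day it isn't.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
associative = (left == right)  # False for these particular values
```

A programmer leaning on pure inference from the "reals" abstraction will eventually be bitten here; a 100%-true abstraction (say, exact rationals) costs more but never lies.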
Someone who is strong at inference and weak at empiricism entering a high-uncertainty domain like flirting will often be given advice like "don't think too much." This doesn't usually work: the advice-giver has an "empiricism button", so to speak, that they can activate in that situation. The pure theorist does not, so simply turning off theory doesn't teach them empiricism. A more effective approach, at least in my experience, is to develop theories around effectively interacting in those situations, noticing points of high uncertainty and testing them.
More generally, some high-E, low-I people form an implicit (or explicit) belief that theory is actively counterproductive. They will see lots of apparently confirming examples of this, because theory is rarely useful quickly.
Theory is more or less low-status in domains like business, which means that even when successful people attribute their success to theory, those memes will not spread. A great author on theory applied to business is Eliyahu Goldratt.
A common example of theory being useful is when you find a technique that works in one case, and realize it can be significantly generalized. E.g. Agile basically comes from Lean + Theory of Constraints, which were originally developed for factories. Limiting work in progress is so helpful in so many domains it's almost like cheating.
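The theory behind why WIP limits generalize so well is largely Little's law from queueing theory: average cycle time equals average work in progress divided by average throughput. A toy calculation with made-up numbers:

```python
# Little's law: cycle_time = WIP / throughput (long-run averages).
# All numbers below are hypothetical.
wip = 20             # items currently in progress
throughput = 5       # items completed per week
cycle_time = wip / throughput  # weeks each item spends in the system
# Halving WIP (with throughput unchanged) halves the average cycle time,
# regardless of whether the "items" are widgets, tickets, or features.
```

The domain-independence of that equation is exactly why the same trick works in factories, software teams, and beyond.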
I disagree that "pure empiricism usually works pretty well". It's more that a failure of theory looks different: to someone who lacks the necessary theory, a problem simply looks intractable or anti-inductive or just plain confusing.
Economics is a great source of examples: consider rent control. To someone with no knowledge of rent control, the fact that it creates a shortage of housing units is surprising; to someone who's been through econ 101, it's obvious. To someone without the theoretical knowledge, it might not even be obvious that rent control has anything to do with the shortage at all – to the pure empiricist, the world is full of random surprises popping up all the time, and a housing shortage is just one more such surprise. Why would it have anything to do with rent control?
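The econ-101 reasoning is just a price ceiling applied to linear supply and demand. A toy model (all numbers hypothetical) makes the shortage fall out mechanically:

```python
# Toy linear supply/demand model of a rental market; every number here
# is invented for illustration.
def demand(price):   # renters want fewer units as rent rises
    return 1000 - 10 * price

def supply(price):   # landlords offer more units as rent rises
    return 10 * price

market_price = 50    # where demand(50) == supply(50) == 500 units
ceiling = 30         # rent control pins the price below market
shortage = demand(ceiling) - supply(ceiling)  # 700 wanted, 300 offered
```

The theorist doesn't need to wait for the shortage to show up in the data; the model says a binding ceiling produces one every time.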
This leads into a broader point: theory tells us which questions to ask in the first place. Economics theory says "housing shortage? Check for rent control!", whereas pure empiricism would just check every conceivable factor to see what correlates with housing shortages. It's the same principle as privileging the hypothesis: a large amount of evidence is needed just to distinguish a hypothesis from entropy in the first place. Theory can provide that evidence: it doesn't always give the right answers, but it gives us enough evidence to pick a hypothesis.
Looking at things is an attribute of multiple very different approaches. Flirting seems very different from e.g. the process of writing The Voyage of the Beagle. In the former, experience is part of a feedback process. In the latter, experience is part of a process of finding out and cataloguing what there is.
If you approach flirting as though you were going to write a report home about it, I expect this will take you out of the experience in a way that will not work out well for you (even though if you do it the usual way, you still might write up your thoughts afterwards). In the other direction, it's easy to imagine an alternate-universe Darles Charwin who visited the Galapagos Islands and instead of cataloguing the finches, simply followed the reward signal they represented and ate them. The type of looking at the world thus displayed would crucially differ from what Charles Darwin actually did.
I'm having trouble parsing this comment. Is there a missing "different" in the second sentence?
Pure empiricism doesn't exist. There is always the inference that comes with trimming the hypothesis space, and the ontological commitments that entails.
I'm unfamiliar with Lean and Agile. Any chance you could point me towards a formal introduction on these?
The author was referencing Lean product development and Agile software development.
The E and I in "high-E, low-I" are empiricism and inference?