
Finding a good formulation for a problem is often most of the work of solving it.

I agree with this intuitively, and I feel I have seen the principle at work in problems I have tried to solve myself. However, when I try to convince others of the idea, I struggle to find examples they can connect with or find compelling.

I suspect that programmers find this idea appealing because we routinely work with formal systems, and all of us know the experience of making a minor change in perspective and seeing an impossible problem turn into an easy one. So I'm most interested in examples that have nothing to do with code, examples that a lay audience would be able to grasp.

I would be particularly interested in examples from the history of science or medicine, if anyone can think of some. Scott and Scurvy is the only example I currently know of, and while interesting, does not seem like a perfect fit.

Much appreciated!


Answers sorted by top scoring

johnswentworth

Jul 02, 2019


Here's a programming example which I expect non-programmers will understand. Everyday programming involves a lot of taking data from one place in one format, and moving it to another place in another format. A company I worked for had to do even more of this than usual, and also wanted to track all those data flows and transformations. So I sat down and had a long think about how to make it easier to transform data from one format to another.

Turns out, this sort of problem can be expressed very neatly as high-school algebra with json-like data structures. For instance, you have some data like [{'name':'john',...},{'name':'joe',...},...] and you want to extract a list of all the names. As an algebra problem, that means finding a list of solutions to [{'name': X}] = data. (Of course there are simpler ways of doing it for this simple example, but for more complicated examples with tens or even hundreds of variables, the algebra picture scales up much better.)
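A minimal sketch of how such pattern-matching algebra might work. The `Var` and `match` helpers here are invented for illustration, not the actual system from the answer:

```python
class Var:
    """A variable to solve for, like X in [{'name': X}] = data."""
    def __init__(self, name):
        self.name = name

def match(pattern, data, bindings=None):
    """Return every binding of variables that makes pattern equal data."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, Var):
        # A variable matches anything; record the binding.
        return [{**bindings, pattern.name: data}]
    if isinstance(pattern, list) and len(pattern) == 1:
        # A one-element list pattern matches each element of a list,
        # collecting all solutions.
        out = []
        for item in data:
            out.extend(match(pattern[0], item, bindings))
        return out
    if isinstance(pattern, dict):
        results = [bindings]
        for key, sub in pattern.items():
            if not isinstance(data, dict) or key not in data:
                return []
            results = [b2 for b in results for b2 in match(sub, data[key], b)]
        return results
    # Literals must match exactly.
    return [bindings] if pattern == data else []

data = [{'name': 'john', 'age': 31}, {'name': 'joe', 'age': 27}]
solutions = match([{'name': Var('X')}], data)
names = [s['X'] for s in solutions]
print(names)  # ['john', 'joe']
```

The point of the algebraic framing is that the same `match` call works unchanged whether the pattern has one variable or a hundred.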

Problem formulation is even more important in data analysis and/or machine learning problems. At one company I worked for, our product boiled down to recommendation. We had very fat tails of specific user tastes and corresponding items, so clustering-based approaches (i.e. find similar users, recommend things similar to things they like) gave pretty mediocre recommendations - too many users/items just weren't that similar to any major clusters, and we didn't have enough data to map out tiny clusters. Formulating the problem as pure content-based recommendation - i.e. recommending things for one user without using any information whatsoever about what "similar" users were interested in - turned out to work far better.
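A toy sketch of the pure content-based formulation described above. The catalog and feature names are invented for illustration, not the real product's data:

```python
def recommend(user_liked, catalog, top_n=2):
    """Score items by overlap with one user's own liked-item features,
    ignoring all information about other users."""
    # Build the user's taste profile from items they already liked.
    profile = set()
    for item in user_liked:
        profile |= catalog[item]
    # Score every other item by feature overlap (Jaccard similarity).
    scores = {}
    for item, feats in catalog.items():
        if item in user_liked:
            continue
        scores[item] = len(feats & profile) / len(feats | profile)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

catalog = {
    'a': {'jazz', 'vinyl'},
    'b': {'jazz', 'live'},
    'c': {'metal', 'live'},
    'd': {'vinyl', 'jazz', 'live'},
}
print(recommend({'a'}, catalog))  # ['d', 'b']
```

Because the score depends only on the user's own history and the item features, it works even for users in the fat tail who resemble no large cluster.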

Anyway, that's enough from my life. Some historical examples:

... etc. Practically any topic in applied math began with somebody finding a neat new formulation.

Information theory is a great example of this. You can find some nice articles around discussing its history. Basically, Shannon lived for years with the details that information theory would eventually abstract, and he felt there was some general way to describe it all, but it took him considerable effort to figure it out, and before the work was complete it was very much non-obvious that a good unifying abstraction existed or what it would look like. Now the short paper that introduced information theory remains one of the most widely read academic papers.

A common symptom of a missing reformulation is when people focus on a particular solution that is implicit in their mind, often without realizing it. I often have an interaction like this at work, which can be summed up as "What problem is this solution for?":

Technical lead: "What would it take to implement X?"

Me: "Why do you want to do X?"

TL, visibly frustrated: "To achieve Y"

Me: "What is the goal of having Y?"

TL, even more frustrated: "There is a customer request for Z, and Y is how we can implement it"

Me: "What problem is the customer trying to solve?"

TL, now exasperated: "I don't know for sure, but customer service asked for Z"

Me: "My guess is that what triggered a request for Z is that they have an issue with A, B or maybe C, and, given their limited understanding of our product, they think that Z will solve it. I am quite sure that there are alternative approaches to solving their issue, whatever it is, and Z is only one of them, likely not the best one. Let's figure out what they are struggling with, and I can suggest a range of approaches, then we can decide which of those make sense."

TL: "I need to provide an estimate to customer service so they can invoice the customer"

Me: "As soon as we figure out what we are implementing, definitely. Or do you want me to just blindly do X?"

TL: "Just give me the estimate for X." sometimes accompanied by "Let me run the reports and see what's going on"

Me: "N weeks of my time" [well padded because of the unknowns]

Occasionally, some time later, after some basic investigation: the real problem they are facing turns out to be P, which has multiple solutions. X is one of them, but it requires more work than X' or X'' and interferes with feature F for other customers. Let's run the latter two by the customer, with a cost and timeline for each, and see what happens.

In the above pattern there were multiple levels of confusing problems with solutions:

• The customer asked for Z without explaining or even understanding what ails them
• The customer service people didn't push back for clarification, and just assumed that Z is what needs to be done
• The TL decided that Y would solve Z and that X was a way to implement Y

This may or may not be related to the question you are asking, though. Here is a classic example from physics, after the null result of the Michelson-Morley experiment showed that the speed of light is constant: "What happens to the medium that light propagates in?" vs. "What if we postulate that light propagation does not need a medium?"

Try replacing the words 'formulation' and 'solving' with 'representation' and 'traversal'. If a search space seems intractably large, you either

1) haven't screened off major portions of the search space

2) have a traversal algorithm that takes too long either to generate the next move in the space or to check the node it is on

Changing formulation is usually about bounding the search space by choosing a representation that captures more knowledge about screening off irrelevant solutions. Any time we use metaphors we're engaged in a sort of hopeful reference-class forecast. We artificially bound the search space and hope that the area we've bounded has a solution in it. We improve performance by having 'taste' in metaphors, i.e. choosing metaphors whose visible moving parts seem similar to the moving parts of our problem domain, and hoping that these extend to the non-visible parts as well.
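As a concrete illustration of bounding a search space via representation (a standard textbook example, not from the answer): in 8-queens, encoding the board as a permutation of rows screens off every row and column conflict before traversal even starts, so only diagonals need checking.

```python
from itertools import permutations

def solve_8_queens():
    # Representation: position i holds the row of the queen in column i.
    # A permutation already rules out all same-row and same-column
    # conflicts; only diagonal conflicts remain to check.
    solutions = []
    for rows in permutations(range(8)):
        if all(abs(rows[i] - rows[j]) != j - i
               for i in range(8) for j in range(i + 1, 8)):
            solutions.append(rows)
    return solutions

# The permutation encoding shrinks the space from C(64, 8), roughly 4.4
# billion candidate boards, to 8! = 40320 before any searching happens.
print(len(solve_8_queens()))  # 92
```

Same problem, same traversal style, but the better representation makes the search trivially tractable.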