[ Question ]

What are your strategies for avoiding micro-mistakes?

by An1lam · 1 min read · 4th Oct 2019 · 11 comments

I've recently been spending more time doing things that involve algebra and/or symbol manipulation (after a long stretch of not doing these things by hand very often) and have noticed that small mistakes cost me a lot of time. Specifically, I can usually catch such mistakes by double-checking my work, but the cost of not being able to trust my initial results and having to redo steps is very high. High enough that I'm willing to spend time working to reduce the number of such mistakes I make, even if it means slowing down quite a bit or adopting some other costly process.

If you've either developed strategies for avoiding such mistakes or were good at avoiding them in the first place, what do you do?

Two notes on the type of answers I'm looking for:

1. I should note that one answer is just to use something like WolframAlpha or Mathematica, which I do. That said, I'm still interested in not having to rely on such tools for things in the general symbol-manipulation reference class, since I don't like depending on a computer being present to do these sorts of things.

2. I did do some looking around for work addressing this (found this, for example), but most of it suggested basic strategies that I already use, like being neat and checking your work.


4 Answers

A quote from the physicist John Archibald Wheeler:

Never make a calculation until you know the answer. Make an estimate before every calculation, try a simple physical argument (symmetry! invariance! conservation!) before every derivation, guess the answer to every paradox and puzzle.

When you get into more difficult math problems, outside the context of a classroom, it's very easy to push symbols around ad nauseam without making any forward progress. The counter to this is to figure out the intuitive answer before starting to push symbols around.

When you follow this strategy, the process of writing a proof or solving a problem mostly consists of repeatedly asking "what does my intuition say here, and how do I translate that into the language of math?" This also gives you built-in error checks along the way: if the math doesn't match what your intuition says, then something has gone wrong. Either there's a mistake in the math, a mistake in your intuition, or (most commonly) a piece was lost in the translation.

This, I think, may be too domain-specific to be answerable in any useful way. More broadly, though: when you run into errors, it's good to think the way pilots or sysadmins do when dealing with complex system failures - doing research is certainly a complex system, with many steps at which errors can be caught. What was the root cause, how did the error start propagating, and what could have been done throughout the stack to reduce it?

  1. constrain the results: Fermi estimates, informative priors, inequalities, and upper/lower bounds are all good for telling you in advance roughly what the result should be, or at least for building intuition about what to expect (see the first sketch after this list)

  2. implement it in code or in a theorem-checker; these are excellent for flushing out hidden assumptions or errors. As Pierce puts it, proving a theorem about your code uncovers many bugs - and it doesn't matter what theorem! (See the second sketch after this list.)

  3. solve it with alternative methods, particularly brute force: solvers like Mathematica/WolframAlpha are great just for telling you what the right answer is so you can check your work. In statistics/genetics, I often solve something with Monte Carlo (or ABC) or brute-force approaches like dynamic programming, and only then, after looking at the answers to build intuition (see #1), do I try to tackle an exact solution (see the third sketch after this list).

  4. test the results: unit-test critical values like 0 or the small integers, boundaries, and very large numbers; use property-based checking (a relative of 'metamorphic testing') like QuickCheck to establish that basic properties seem to hold - always being positive, monotonic, input the same length as the output, etc. (see the fourth sketch after this list)

  5. ensemble yourself: wait a while and sleep on it; try to 'rubber duck' it by explaining it aloud, which activates your adversarial reasoning skills; go through it in different modalities

  6. generalize the result, so you don't have to re-solve it: the most bug-free code or proof is the one you never write.

  7. When you run into an error, think about it: how could it have been prevented? If you read something like Site Reliability Engineering: How Google Runs Production Systems or other books about failure in complex systems, you may find useful inspiration. There is a lot of useful advice there: for example, a well-functioning system should have some degree of failure; keep an eye on 'toil' versus genuinely new work, and step back to shave some yaks when toil starts getting out of hand; gradually automate manual workflows, perhaps starting from checklists as skeletons.

    Do you need to shave some yaks? Are your tools bad? Is it time to invest in learning to use better programs or formalization methods?

    If you keep making an error, how can it be stopped?

    If it's a simple error of formula or manipulation, perhaps you could make a bunch of spaced repetition system flashcards with slight variants all stressing that particular error.

    Is it machine-checkable? For writing my essays, I flag as many errors or warning signs as I can using two scripts and additional tools like linkchecker.

    Can you write a checklist to remind yourself to check for particular errors or problems after finishing?

    Follow the 'rule of three': if you've done something 3 times, or argued the same thesis at length 3 times, etc., it may be time to look at it more closely, to automate it, or to write it up as a full-fledged essay. I find this useful for writing, because if something comes up 3 times, that suggests it's important and underserved, and also that you might save yourself time in the long run by writing about it now. (This is my theory for why memory works in a spaced-repetition sort of way: real-world facts seem to follow some sort of long-tailed or perhaps mixture distribution, where there are massed transient facts which can be safely forgotten, and long-term facts which pop up repeatedly at large spacings; so ignoring massed presentation but retaining facts which keep popping up after long intervals is more efficient than simply strengthening memories in proportion to the total number of exposures.)
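To make #1 concrete, here is a minimal Python sketch; the birthday problem is my own illustrative stand-in, not an example from the answer above. A union bound and a trivial lower bound pin the answer down before any exact calculation:

```python
from math import comb

# Before computing the exact probability that some two of n = 23 people
# share a birthday, constrain the answer from both sides:
n, days = 23, 365
upper = min(1.0, comb(n, 2) / days)  # union bound over all C(n, 2) pairs: ~0.693
lower = 1 / days                     # at least as likely as one fixed pair colliding
print(f"the exact answer must lie in [{lower:.4f}, {upper:.4f}]")
```

Any "exact" result that lands outside that interval is known to be wrong before you even start debugging.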
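For #2, a hedged sketch using the z3-solver Python bindings (the tool and the identity are my choices for illustration): stating even a small pencil-and-paper identity for a solver forces the hidden assumptions - what "absolute value" means here, whether the division is exact - into the open.

```python
# pip install z3-solver
from z3 import Ints, If, prove

x, y = Ints("x y")
abs_diff = If(x >= y, x - y, y - x)  # |x - y|, spelled out explicitly
max_xy = If(x >= y, x, y)

# The pencil-and-paper identity max(x, y) == (x + y + |x - y|) / 2.
# The numerator is always 2 * max(x, y), so the integer division is exact.
prove((x + y + abs_diff) / 2 == max_xy)  # prints "proved"
```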
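For #3, a sketch of checking an exact formula against brute-force simulation, again using the birthday problem as a stand-in for a real research question:

```python
import random

def collision_prob_exact(n, days=365):
    """P(at least two of n people share a birthday), by the product formula."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (days - k) / days
    return 1 - p_distinct

def collision_prob_mc(n, days=365, trials=200_000):
    """The same probability, estimated by brute-force Monte Carlo."""
    hits = 0
    for _ in range(trials):
        seen = set()
        for _ in range(n):
            d = random.randrange(days)
            if d in seen:
                hits += 1
                break
            seen.add(d)
    return hits / trials

exact, approx = collision_prob_exact(23), collision_prob_mc(23)
print(exact, approx)               # both should be near 0.507
assert abs(exact - approx) < 0.01  # Monte Carlo noise at 200k trials is ~0.001
```

Disagreement much beyond the simulation noise points at a bug in one of the two methods - which is exactly the point.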
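For #4, a property-based test in the QuickCheck style, written with Python's hypothesis library; the running-maximum function is a made-up example. The test states exactly the kinds of properties mentioned above - monotonicity and output length - and lets the library hunt for counterexamples:

```python
# pip install hypothesis
from hypothesis import given, strategies as st

def running_max(xs):
    """Prefix maxima of a list of numbers."""
    out, cur = [], float("-inf")
    for x in xs:
        cur = max(cur, x)
        out.append(cur)
    return out

@given(st.lists(st.integers()))
def test_running_max_properties(xs):
    ys = running_max(xs)
    assert len(ys) == len(xs)                       # output same length as input
    assert all(a <= b for a, b in zip(ys, ys[1:]))  # monotonic
    assert all(y >= x for x, y in zip(xs, ys))      # dominates the input

test_running_max_properties()  # hypothesis runs this against ~100 generated lists
```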

One mini-habit I have is to try to check my work in a different way from the way I produced it.

For example, if I'm copying down a large number (or string of characters, etc.), then when I double-check it, I read off the transcribed number backwards. I figure this way my brain is less likely to go "Yes yes, I've seen this already" and skip over any discrepancy.

And in general I look for ways to do the same kind of thing in other situations, such that checking is not just a repeat of the original process.

This applies to any sort of coding as well. Trivial mistakes compound and are difficult to find later. My general approach is unit testing: for each small section of work, do the calculation or process or sub-theorem or whatever in two different ways. Many times the check calculation can be an estimate, where you're just looking for "unsurprising" as the comparison rather than strict equality (see the sketch below).
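A minimal sketch of this two-ways habit (Stirling's approximation as the "estimate" leg is my own example): compute the same quantity exactly and by a crude second method, and only worry when the comparison is surprising.

```python
import math

def stirling(n):
    """Stirling's approximation to n!, serving as the cheap second method."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

n = 50
ratio = math.factorial(n) / stirling(n)
# An estimate can't match exactly, so check for "unsurprising" rather than equality:
assert 0.99 < ratio < 1.01, f"surprising disagreement: ratio = {ratio}"
print(f"50! matches Stirling's approximation to within {abs(ratio - 1):.2%}")
```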