Why it feels like everything is a trade-off
Epistemic status: A cute insight that explains why it might feel like you always have to make sacrifices along one metric to get improvements along another. Seems tautological once you understand it. Might be obvious to everyone.

Meta-epistemic status, or something: My first post. Testing the waters.

Tl;dr: Skip to the last paragraph.

Example of a trade-off

I'm a programmer. I'm also a design prude. I'm also lazy. This all means that I spend a lot of my time chasing several different metrics in my code:

1) How easy it is to read.
2) How long it takes to run.
3) How long it takes to write.
4) ...

These metrics are often at odds with one another. Just the other day I had to make a trade-off involving a function I'd written to evaluate a polynomial at a given point. Originally, it was written in a way that I felt was self-explanatory: it looped over the derivatives-at-zero of the polynomial, which were passed in as a list, and summed up the appropriate multiples of powers of x, i.e. a Taylor sum:

    from math import factorial

    def apply_polynomial(deriv, x):
        # Taylor sum: deriv[i] is the polynomial's i-th derivative at zero,
        # so the i-th term is deriv[i] * x**i / i!.
        total = 0
        for i in range(len(deriv)):
            total += deriv[i] * pow(x, i) / factorial(i)
        return total

It turned out that this function was a significant bottleneck: about 20% of the program's execution time was spent inside it. I was reasonably sure that the pow and factorial calls were the issue. I also knew that this function would only ever be called with cubics and lower-degree polynomials. So I rewrote the code as follows:

    def apply_cubic(deriv, x):
        # Unrolled Taylor sum for polynomials of degree three or less,
        # avoiding the pow and factorial calls entirely.
        total = 0
        n = len(deriv)
        if n > 0:
            total += deriv[0]
        if n > 1:
            total += deriv[1] * x
        if n > 2:
            square = x * x
            total += deriv[2] * square / 2
        if n > 3:
            cube = square * x
            total += deriv[3] * cube / 6
        return total

Sure enough, this improved the runtime significantly, clawing back nearly the whole 20%.
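Of course, the faster version only earns its keep if it still computes the same thing. A minimal sanity-check sketch, assuming both functions above are defined; the coefficient lists and sample points here are hypothetical values I picked for illustration:

    # Check that the unrolled version agrees with the general one
    # on every degree it claims to handle (empty through cubic).
    test_derivs = [[], [5.0], [1.0, 2.0], [1.0, 2.0, 3.0], [1.0, 2.0, 3.0, 4.0]]
    test_points = [-2.0, 0.0, 0.5, 1.7]

    for deriv in test_derivs:
        for x in test_points:
            assert abs(apply_cubic(deriv, x) - apply_polynomial(deriv, x)) < 1e-9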
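And if you want to see the speed difference for yourself, here is a rough benchmark sketch using Python's timeit module. This is not my original measurement setup; the inputs are made up, and the actual gain will depend on the surrounding program:

    import timeit

    # Hypothetical cubic and evaluation point.
    deriv = [1.0, 2.0, 3.0, 4.0]
    x = 1.7

    general = timeit.timeit(lambda: apply_polynomial(deriv, x), number=1_000_000)
    unrolled = timeit.timeit(lambda: apply_cubic(deriv, x), number=1_000_000)

    print(f"general Taylor sum: {general:.3f}s")
    print(f"unrolled cubic:     {unrolled:.3f}s")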