We agree that an individual can manage their goals better than an organization can - keeping an eye on the big picture, not ending up with lost purposes. But what are you proposing (are you proposing anything)? Should we simplify - get rid of large and complex organizations? It seems that even when you count all the waste of lost purposes, we're more productive with them, at least in many cases ("better off" is another question, but let's leave that aside, since it doesn't bear directly on this issue). Complexity and scale have advantages as well as this disadvantage; the mere existence, or even the extent, of lost purposes doesn't imply lower overall efficiency.

When I try to explain to a computer how to do what I want (programming), my first explanation always leads to a million lost purposes (bugs). I test it, find the bugs, and clarify my instructions, trying to make them less ambiguous - to leave less room for the computer to fulfil the letter of my law without the spirit. Eventually I reach a point where testing doesn't turn up many more problems, but of course I'll never eliminate all lost purposes: software of sufficient complexity, even the most vetted commercial software, still has loads of bugs (and even aside from outright bugs, times where it follows the letter rather than the spirit of what the programmer intended). So why do we bother? Because the formalized instructions are more leverageable than doing the work myself, and the computer carries them out faster - faster by enough orders of magnitude to make up for all the lost purposes and then some.
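To make the analogy concrete, here's a toy sketch (the scenario and names are invented purely for illustration) of a bug as a lost purpose - code that fulfils the letter of its specification while missing the spirit:

```python
# Hypothetical spec: "return the cheapest item in the catalog."
# This code fulfils that letter exactly.
def cheapest(items):
    return min(items, key=lambda item: item["price"])

catalog = [
    {"name": "widget", "price": 5, "in_stock": False},
    {"name": "gadget", "price": 9, "in_stock": True},
]

# The *intent* was "the cheapest item we can actually sell", but the
# instructions say nothing about stock - so the computer dutifully
# returns an item no one can buy. A lost purpose.
print(cheapest(catalog))  # {'name': 'widget', 'price': 5, 'in_stock': False}
```

The "clarify my instructions" step is exactly adding the missing intent (e.g. filtering to in-stock items) and testing again.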

Ultimately my takeaways are:

- We should create only the laws/organizations/programs that make us much better off, even taking into account all the pitfalls of lost purposes.

- We should design these laws/organizations/programs as carefully as possible, with a constant eye to how every intention we express can lead to a frustrated goal.

- We should test and reassess these laws/organizations/programs as often, and in as many ways, as we possibly can, to catch as many cases as possible where they fail to accomplish what we wanted, and thus to continually refine them (a sketch of the kind of test I mean follows this list). And we should take a mechanism's testability and refinability into account when deciding whether it's worthwhile. (Computer programs are particularly amenable to this kind of testing, which is why they generally end up doing about 80% of what they're supposed to; an education system is much harder to treat this way.)
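Continuing the hypothetical catalog example above, here is a minimal sketch of the kind of test I mean: it encodes the spirit of the instructions rather than their letter, so this particular lost purpose gets caught and the instructions refined.

```python
import unittest

# The refined instructions: encode the *purpose* (a sellable cheapest
# item), not just the letter (any minimum price).
def cheapest(items):
    sellable = [item for item in items if item["in_stock"]]
    return min(sellable, key=lambda item: item["price"])

class TestCheapest(unittest.TestCase):
    def test_cheapest_item_is_sellable(self):
        catalog = [
            {"name": "widget", "price": 5, "in_stock": False},
            {"name": "gadget", "price": 9, "in_stock": True},
        ]
        result = cheapest(catalog)
        # This assertion would have failed against the first version,
        # surfacing the lost purpose so the instructions could be refined.
        self.assertTrue(result["in_stock"])
        self.assertEqual(result["name"], "gadget")

if __name__ == "__main__":
    unittest.main()
```

Of course, the test itself is just another formalization of intent, and can harbor its own lost purposes - which is why the reassessment has to be continual.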

I think the concern of lost purposes is often underweighted by actual policymakers, but I also think it's unproductive to complain about lost purposes in the abstract, and misguided to design our lives around avoiding them entirely (rather than avoiding just the cases where their costs outweigh the benefits). Incorporating concerns about them, in the ways listed above, seems useful.