Reminds me of when I was 8 and our history teacher told us about some king of England being deposed by the common people. We were shocked and confused as to how this could happen - he was the king! If he commanded them to stop, they’d have to obey! How could they not do that?? (Our teacher found this hilarious.)
Great post. Three comments:
If it were the case that events in the future mattered less than events now (as is the case with money, because money sooner can earn interest), one could discount far-future events almost completely and thereby make the long-term effects of one’s actions more tractable. However, I understand time discounting doesn’t apply to ethics (though maybe this is disputed by some).
That said, I suspect discounting the future instead on the grounds of uncertainty (the further out you go, the harder it is to predict anything) - using, say, a discount rate per year, as with money, to model this - may be a useful heuristic. No doubt this is a topic discussed in the field.
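To illustrate the shape of such a heuristic - a toy sketch only, where the 5% per-year rate and the (1 - r)^t form are just assumptions for illustration, not a recommendation:

```python
# Toy "epistemic discount": weight an event t years out by (1 - r)**t,
# where r is a guessed per-year rate at which predictions lose reliability.
def discount_weight(years_out: float, annual_rate: float = 0.05) -> float:
    return (1 - annual_rate) ** years_out

for t in (1, 10, 50, 200):
    print(f"{t:>3} years out: weight {discount_weight(t):.3f}")
# ~0.95 at 1 year, ~0.60 at 10, ~0.08 at 50, ~0.00 at 200 -
# so far-future effects get down-weighted heavily, much as with financial discounting.
```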
Secondly, no doubt there is much to be said about what the natural social and temporal boundaries of people’s moral and other influence & plans are - eg family, friends, work, retirement, death (and the contents of their will). And about how these can change - eg if you gain or exercise power/influence, say by getting an important job, having children, or doing things with wider influence (eg donating to charity) - which can be for better or worse.
Thirdly, a minor observation: chess has an equivalent to the Go thing about a local sequence of moves ending in a stop sign, viz. an exchange of pieces - eg capturing a pawn in exchange for a pawn, or a much longer & more complicated sequence involving multiple pieces, but either way ending in a ‘quiet position’ where not very much is happening. Before AlphaZero, chess programs considering an exchange would look at all plausible ways it might play out, stopping each move sequence only when a quiet position was reached. In the absence of an exchange or other instability, they would stop a sequence after a ‘horizon’ of say 10 moves, and evaluate the resulting situation on the basis of the board position (eg what pieces there are and their mobility).
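For concreteness, here’s a minimal sketch of that pre-AlphaZero approach (a fixed search horizon plus a captures-only ‘quiescence’ search), written against the python-chess library with a crude material-only evaluation - illustrative only, not any particular engine’s actual code:

```python
import chess  # pip install python-chess

# Rough material values in pawns; crude, for illustration only.
VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
          chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> float:
    """Material balance from the point of view of the side to move."""
    score = 0
    for piece in board.piece_map().values():
        value = VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def quiescence(board: chess.Board, alpha: float, beta: float) -> float:
    """Follow captures only, so the search naturally stops at a 'quiet position'."""
    stand_pat = evaluate(board)  # the value of simply stopping here
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in list(board.legal_moves):
        if not board.is_capture(move):  # quiet moves don't extend the exchange
            continue
        board.push(move)
        score = -quiescence(board, -beta, -alpha)
        board.pop()
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def search(board: chess.Board, depth: int,
           alpha: float = -1e9, beta: float = 1e9) -> float:
    """Fixed-horizon negamax that hands off to quiescence() at the horizon."""
    if depth == 0:
        return quiescence(board, alpha, beta)  # resolve pending exchanges first
    best = -1e9  # (ignores mate/stalemate handling - a simplification)
    for move in list(board.legal_moves):
        board.push(move)
        best = max(best, -search(board, depth - 1, -beta, -alpha))
        board.pop()
        alpha = max(alpha, best)
        if alpha >= beta:  # alpha-beta cutoff
            break
    return best

print(search(chess.Board(), depth=2))  # evaluate the starting position to a 2-ply horizon
```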
FWIW ‘directionally correct’ includes ‘right but for the wrong reasons’, i.e. right only by fluke and hence irrelevant & ignorable - which isn’t what you want to include. Though it’s maybe not often used in that situation.
In London where I live, philosophy meetup groups are much better than this. A broader mix of people - few have philosophy degrees, few know any formal philosophy, some have no university degree, and many are recent immigrants, though admittedly almost everyone is middle class. Almost always good conversations, with decent reasoning, including people taking contrary and controversial stances, but respectfully discussed and never any heatedness or performative wokeness. Discussions in groups of 4-6 people work best. (The main bad dynamic is if you get someone who talks too much and dominates a conversation.)
How about ‘out-of-control superintelligence’? (Either because it’s uncontrollable or at least not controlled.) It carries the appropriately alarming connotations that it’s doing its own thing and that we can’t stop it (or aren’t doing so anyway).
Indeed building something you want, or that someone you know wants, is necessary, but not sufficient! I'd say it depends how much time you're going to spend creating it and whether you have broader commercial ambitions at the outset.
If you're creating something you're going to use yourself anyway, that could well justify creating it (if it won't take too long). Similarly if you're creating it for someone else (as a favour, or who will pay you appropriately). Or if you can create a minimum viable product quickly to try out on people.
Also, particularly in the realm of short software projects, there's a blurry line between creating something for fun/interest and doing so with serious commercial intentions, i.e. you could justify doing it speculatively without feeling you'd wasted your time if it goes nowhere.
But if you're going to take months (or years) full-time creating something with a view to commercializing it, i.e. make a serious effort, it is remiss not to do basic research and evaluation first, to find out whether there really is a market for your thing (e.g. who customers would be, potential market size, what customers currently do instead, whether you can actually improve on that enough, how hard that might be, what customers would be prepared to spend), whether your thing should do what you think it should (i.e. its features, or indeed whether you’d be better off creating something else entirely), etc. It's far cheaper to do basic research & planning than to spend months/years creating something speculatively and only then discover much/all of that was misguided.
“explicitly without concern for how exactly you are going to commercialize. Indeed, most successful companies figure out where their margins come from quite late into their lifecycle.”
The exact way you commercialize or get margins can of course change - but if you can’t figure out any way of making it work on paper, the chances of it succeeding in real life are slim.
(My LW article on this FWIW: Write a business plan already — LessWrong)
Good post. On one point: I think ‘landmines’ are useful in many fields, to warn against important beginners’ mistakes/misconceptions. Though (unless there are big safety reasons) this should indeed be secondary to positive advice.
Eg with a startup, don’t spend lots of time creating a product before writing a business plan. The plan should come first, or at least early on, because it’s how you decide whether to create the product! (Something I’ve written about on here)
Re being deficient in vitamins, it’s worth taking a supplement containing all 23 essential micronutrients (every few days), as almost no one gets 100% of the recommended daily amount of all of them, which is nearly impossible to achieve from a plausible diet anyway. I.e. you are probably somewhat deficient in something.
Broadly agree - I overstated my point; of course some people don’t have these concepts. But I think there is a big gap between having these concepts as theory (eg IVT in pure math) and applying them in practice to less obvious cases.
(Cf Wittgenstein thought that understanding a concept just was knowing how to apply it - you don’t fully understand it until you know how to use it.)
When scientists first realised an atomic bomb might be feasible (in the UK in 1939), and how important it would be, the UK defence science adviser reckoned there was only a 1 in 100,000 chance of successfully making one. Nonetheless the government thought that high enough to instigate secret experiments into it.
(Obliquely relevant to AI risk.)
https://en.wikipedia.org/wiki/Frisch–Peierls_memorandum