Sam Harris has a new book, The Moral Landscape, in which he makes a very simple argument, at least when you express it in the terms we tend to use on LW: he says that a reasonable definition of moral behavior can (theoretically) be derived from our utility functions. Essentially, he's promoting the idea of coherent extrapolated volition, but without all the talk of strong AI.
He also argues that, while there are all sorts of tricky corner cases where we disagree about what we want, those are less common than they seem. Human utility functions are actually pretty similar; the disagreements seem bigger because we think about them more. When France passes...
I looked into it and, yes, this looks basically correct, with one caveat: it's computationally very expensive to get those first stages to land on their own at a convenient, precisely chosen location. We've been doing propulsive landings for decades, e.g. with the Apollo moon landers and the Viking Mars landers, the latter of which had to be fully autonomous because of speed-of-light delays. Landing a tall, narrow first stage is harder because of its unwieldy shape, but inverted pendulum control problems are definitely...
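(For anyone who hasn't met the inverted pendulum framing: below is a minimal toy sketch of the idea, nothing like real flight software. It balances a rigid rod upright with a PD controller applying torque at the base, a crude stand-in for gimballed thrust; every parameter, gain, and name here is made up purely for illustration.)

```python
import numpy as np

# Toy "rocket as inverted pendulum" model (illustrative numbers only):
# a rigid rod pivoting about its base, gravity trying to topple it,
# and a control torque u at the pivot standing in for gimballed thrust.
g, L, m = 9.81, 10.0, 1000.0   # gravity (m/s^2), rod length (m), mass (kg)
I = m * L**2                   # moment of inertia about the pivot (point mass at the tip)

# PD gains. For the linearized dynamics theta'' = (g/L)*theta + u/I,
# the upright equilibrium is stable whenever Kp > m*g*L and Kd > 0.
Kp, Kd = 5.0 * m * g * L, 2.0 * m * L**2

def simulate(theta0, omega0, dt=0.01, steps=1000):
    """Euler-integrate the tilt angle theta (rad from vertical) under PD control."""
    theta, omega = theta0, omega0
    history = []
    for _ in range(steps):
        u = -Kp * theta - Kd * omega              # control torque
        alpha = (g / L) * np.sin(theta) + u / I   # angular acceleration
        omega += alpha * dt
        theta += omega * dt
        history.append(theta)
    return np.array(history)

if __name__ == "__main__":
    traj = simulate(theta0=0.2, omega0=0.0)   # start 0.2 rad off vertical
    print(f"final tilt: {traj[-1]:+.4f} rad (should be near zero)")
```

Run it and the tilt decays to roughly zero within a few simulated seconds. The hard part of a real landing isn't this attitude loop but everything the toy omits: translation to a precise target, aerodynamics, engine throttle limits, and tight propellant margins.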