Language isn’t just about efficiency; cultural aesthetic is a terminal value.
Not that I disagree with the conclusion, but these are good arguments against democracy, humanism, and especially the idea of natural law, not against creating a sentient AI.
C is Turing-complete, which means Gödel-complete, so yeah, the universe can be viewed as a C program.
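To unpack “Turing-complete” a little: the substance of the claim is just that C can simulate an arbitrary Turing machine. A minimal sketch of that (the toy machine below, a unary incrementer, is my own illustration):

```c
#include <stdio.h>
#include <string.h>

#define TAPE_LEN 64

int main(void) {
    char tape[TAPE_LEN];
    memset(tape, '_', TAPE_LEN);      /* '_' is the blank symbol */
    memcpy(tape, "111", 3);           /* input: the unary number 3 */

    int head = 0;
    int state = 0;                    /* state 0: scan right; state 1: halt */

    while (state != 1) {
        if (tape[head] == '1') {
            head++;                   /* skip over the 1s */
        } else {
            tape[head] = '1';         /* write a 1 on the first blank... */
            state = 1;                /* ...and halt */
        }
    }

    for (int i = 0; i < TAPE_LEN && tape[i] != '_'; i++)
        putchar(tape[i]);             /* prints 1111: unary 4 */
    putchar('\n');
    return 0;
}
```

Any machine fits the same mold: a bigger transition table, a longer tape; nothing about C itself has to change.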
Utopia: sexual mores straight out of a Spider Robinson novel. Sexual jealousy has been eliminated; no one is embarrassed about what turns them on; universal tolerance and respect; everyone is bisexual, poly, and a switch; total equality between the sexes; no one would look askance at sex in public any more than at eating in public, so long as the participants cleaned up after themselves.
Sounds like another flavour of dystopia to me...
I can’t decide anything in 2 minutes, so I’d just one-box it because I remember it as the correct solution to the original Newcomb’s problem — and hope for the best.
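For concreteness, here’s the evidential back-of-the-envelope calculation that makes one-boxing come out ahead; the 99% predictor accuracy is a figure I assumed for illustration, and the payoffs are the standard $1,000,000 / $1,000 setup. (The whole controversy, of course, is whether conditioning on the prediction like this is legitimate; a causal decision theorist would refuse.)

```c
#include <stdio.h>

int main(void) {
    double p = 0.99;                   /* assumed predictor accuracy */
    double big = 1000000.0;            /* opaque box, if predicted one-box */
    double small = 1000.0;             /* transparent box, always there */

    /* One-box: you get the big box iff the predictor saw it coming. */
    double ev_one = p * big;

    /* Two-box: you always get the small box, plus the big one only
       when the predictor was wrong about you. */
    double ev_two = small + (1.0 - p) * big;

    printf("EV(one-box) = $%.0f\n", ev_one);   /* $990000 */
    printf("EV(two-box) = $%.0f\n", ev_two);   /* $11000  */
    return 0;
}
```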
In the end, all that matters is choice. If you can always choose a safe and comforting option, nothing is scary (as long as you don’t have the itching sense of justice that makes you meddle in strangers’ business). If you can’t, it’s an authoritarian dystopia no matter how good it is on other accounts.
Benefit #2 seems superficial compared to good architecture, which is usually heavy. I’m not sure if it’s feasible to put, say, some Neoclassical or Georgian house on wheels. And even mostly-wooden Rivendell-like architecture wouldn’t be that light, unless it’s some unsatisfying plastic fake.
Also, I don’t think robotic cars would be enough to overcome the huge inherent space inefficiency of cars (see the rough arithmetic sketch below). The key to solving traffic jams is good public transport plus walkable cities.
Benefit #6 looks really promising though.
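On the space-inefficiency point, a rough sketch of the lane-throughput arithmetic; every number in it is a round figure assumed for illustration, not measured data:

```c
#include <stdio.h>

int main(void) {
    /* Effective road length one vehicle claims at city speed:
       vehicle length plus following distance. All figures assumed. */
    double car_footprint_m = 5.0 + 15.0;   /* car + gap */
    double bus_footprint_m = 12.0 + 15.0;  /* bus + gap */

    double car_occupants = 1.5;            /* assumed average */
    double bus_occupants = 40.0;           /* assumed average */

    printf("car: %.2f people per metre of lane\n",
           car_occupants / car_footprint_m);   /* ~0.08 */
    printf("bus: %.2f people per metre of lane\n",
           bus_occupants / bus_footprint_m);   /* ~1.48 */
    return 0;
}
```

On these made-up numbers the bus is over an order of magnitude denser, and robotic driving can only shrink the following gap, not the vehicle itself.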
Aesthetic preferences are a huge part of our personalities; who would agree to any enhancement that would destroy them? And as long as they’re present, a transhuman will be even more effective at making everything look, sound, smell, etc. beautiful, in some form or another (maybe in a simulation, if it’s detailed enough and if we decide there’s no difference), because a transhuman will be more effective at everything.
If you’re talking about the human body specifically, I don’t think a believable LMD (with artificial...
Ah, ok, makes sense.
I’m not questioning scope insensitivity in general here, but can someone explain to me why it matters how many birds they’re trying to save? Obviously, your contribution alone is not going to save them all (unless you’re rich and donating a lot of money), and, if you don’t know anything about how efficient those programs are, you may as well assume a fixed amount of money will save a fixed number of birds.
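Spelling out that last assumption: the implicit model is linear, with some fixed (unknown) cost per bird, so the headline population size drops out of your marginal impact entirely. A toy sketch, using the 2,000 / 20,000 / 200,000 figures from the classic study and an arbitrary placeholder cost:

```c
#include <stdio.h>

int main(void) {
    double cost_per_bird = 5.0;            /* arbitrary placeholder */
    double donation = 100.0;               /* your contribution */

    long totals[] = {2000, 20000, 200000}; /* the three headline sizes */
    for (int i = 0; i < 3; i++)
        printf("out of %ld birds, your $%.0f saves about %.0f\n",
               totals[i], donation, donation / cost_per_bird);
    return 0;
}
```

The per-total output is identical all three times, which is exactly why, under this model, the headline number shouldn’t move the donation.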
I like how, if you combine this post with You Only Need Faith in Two Things, they basically solve the problem of induction for all practical purposes. It’s impossible to unwind past some very basic, very weak assumptions, and they’re all you need to reinvent the entire Bayesian epistemology, so you might as well assume it’s correct.