cousin_it


"Apparatchik" in the USSR was some middle-aged Ivan Ivanovich who'd yell at you in his stuffy office for stepping out of line. His power came from the party apparatus. The power of Western activists is the opposite: it comes from civil society, from people freely associating with each other.

This rhetorical move, calling a Western thing by an obscure and poorly fitting Soviet name, is a favorite of Yarvin: "Let's talk about Google, my friends, but let's call it Gosplan for a moment. Humor me." In general I'd advise people to stay away from his nonsense, it's done enough harm already.

The objection I'm most interested in right now is the one about induced demand (that's not the right term, but let's roll with it). Like, let's say we build many cheap apartments in Manhattan. Then the first bidders for them will be rich people - from all over the world! - who would love to get a Manhattan apartment at a bargain price. The priced-out locals will stay just as priced out, shuffled to the back of the line, because there are plenty of rich people in the world willing to outbid them. Maybe if we build very many apartments, and not just in Manhattan but everywhere, the effect will eventually run out; but that would take very many apartments indeed.

The obvious fix is to put a thumb on the scale somehow, for example sell these cheap apartments only as primary residences. But then we lose the theoretical beauty of "just build more", and we really should figure out what mix of "just build more" and "put a thumb on the scale" is the most cost-efficient for achieving what we want.


Maybe you're pushing your proposal a bit much, but anyway as creative writing it's interesting to think about such scenarios. I had a sketch for a weird utopia story where just before the singularity, time stretches out for humans because they're being run at increasing clock speed, and the Earth's surface also becomes much larger and growing. So humanity becomes this huge, fast-running civilization living inside an AI (I called it "Quetzalcoatl", not sure why) and advising it how it should act in the external world.

My wife used to have a talking doll that said one phrase in a really annoying voice. Well, at some point the doll short-circuited or something, and started turning on at random times. In the middle of the night for example it would yell out its phrase and wake everyone up. So eventually my wife took the doll to the garbage dump. And on the way back she couldn't stop thinking about the doll sitting there in the garbage, occasionally yelling out its phrase: "Let's go home! I'm already hungry!" This isn't creative writing btw, this actually happened.

The thread about Tolkien reminded me of Andrew Hussie's writing process. Start by writing cool scenes, including any elements you like. A talking tree? Okay. Then worry about connecting it with the story. The talking tree comes from an ancient forest and so on. And if you're good, the finished story will feel like it always needed a talking tree.

I'd be really interested in a similar breakdown of JK Rowling's writing process, because she's another author with a limitless "toybox".


I think something like the Culture, with aligned superintelligent "ships" keeping humans as basically pets, wouldn't be too bad. The ships would try to have thriving human societies, but that doesn't mean granting all wishes - you don't grant all wishes of your cat after all. Also it would be nice if there was an option to increase intelligence, conditioned on increasing alignment at the same time, so you'd be able to move up the spectrum from human to ship.

Maybe tangential, but this reminded me of a fun fact about Hong Kong's metro: it's funded by land value. When they build a station, they get development rights to the land around it. Well, building the station obviously makes that land more valuable. So they end up putting stations where they'd be most useful, and fares can stay cheap because the metro company makes plenty of money from land. The end result is cheap, well-planned public transport that is profitable and doesn't take government money.

Not to pick on you specifically, but just as a general comment, I'm getting a bit worried about the rationalist book review pipeline. It seems it usually goes like this: someone writes a book with an interesting idea -> a rationalist (like Scott) writes a review of it, maybe not knowing much about the topic but being intrigued by the idea -> lots of other rationalists get the idea cached in their minds. So maybe it'd be better if book reviews were written by people who know a lot about the topic, and can evaluate the book in context.

Like, a while ago someone on LW asked people to recommend textbooks on various topics, but you couldn't recommend a textbook if it was the only one you'd read on the topic, you had to read at least two and then recommend one. That seems on the right track to me, and requiring more knowledge of the topic would be better still.


I think I can destroy this philosophy in two kicks.

Kick 1: pleasure is not one-dimensional. There are different parts of your brain that experience different pleasures, with no built-in way to compare between them.

When you retreat from kick 1 by saying "my decision-making provides a way to compare: the better pleasure is the one I'll choose when asked", here comes kick 2: your decision-making won't work for that. There are compulsive behaviors that people keep doing without getting much pleasure from them. And every decision has a possible component of that, however small.

You could say "I'll compare decisions based on how much pleasure they bring, excluding compulsiveness", but you can't do that due to kick 1 again. So the philosophy just collapses.

Good point. But I think the real game changer will be self-modification tech, not longevity tech. In that case we won't have a "slow adaptation" problem, but we'll have a "fast adaptation in weird directions" problem which is probably worse.
