cousin_it

https://vladimirslepnev.me

Comments (sorted by newest)

Christian homeschoolers in the year 3000
cousin_it · 2h

Hmm. You're describing a future where most humans are powerless, but keep being provided for. It seems to me that the most likely way to get such a future is if AIs (or human+AI organizations, or whatever) genuinely care about humans. But then they would also protect humans from super-optimized manipulation, no?

Or if that genuine care doesn't exist, and UBI is provided as "scraps" so to speak, then the fate of humans is sealed anyway. As soon as the entities in power find something more interesting to do with the resources, they'll cut welfare and that's it. After all, the energy upkeep of a human could be used for a ton of computation instead.

Interview with Eliezer Yudkowsky on Rationality and Systematic Misunderstanding of AI Alignment
cousin_it · 1d

I agree with others about the fawning. A more "hardball" question I'd ask is: why not the left? It feels like at some point a choice was made to build a libertarian-leaning techie community, and it backfired: rationalists and adjacent folks ended up playing a big role in building and investing in AI. Maybe a more left-leaning movement, focused on protest and the like, would make more sense now?

The Culture Novels as a Dystopia
cousin_it · 2d

Yeah, I also think humans-as-housecats is a pretty good scenario. But I'm not sure it's an optimum (even a local one). Consider this: the question "how can humans have true agency and other things they value when ASIs are around" is itself a question that intelligence can answer. As one extreme point, consider an ASI that precommits to not interfering in the affairs of humans, except to stop other ASIs. That's clearly not optimal on other dimensions; okay, so turn the dial until you get a pivotal act that's optimal on the mix of dimensions we care about.

Alignment as uploading with more steps
cousin_it · 3d

"A world of competing human emulations is a world I would actually want to live in"

I think there's a huge danger of people running private servers full of emulations and doing anything they want to them, undetectably. Desire for power over others is a very real thing, in some people at least. Maybe the government could prevent it through oversight; but in a modern democracy, a big factor of stability is that people could feasibly rise up and overthrow the government. Emulations on private servers wouldn't have that power, so I don't expect government to stably defend their rights: the protections will wash out over time, shifting toward the interests of those who can actually influence government. In short, emulation-world ends up very bad, and I don't want it.

The same argument would apply to our world if governments got armies of autonomous drones, for example. Whenever I imagine possible worlds, the distribution of power is the first thing I think about. It makes the problem more concrete: it's very hard to imagine a nice future world that actually works.

Sense-making about extreme power concentration
cousin_it · 5d

Why do you think all poor people will end up in these "wildlife preserves", and not somewhere else under the power of someone less altruistic? A future of large power differences is... a future of large power differences.

The Techno-Pessimist Lens
cousin_it · 5d

Ah I see, I misunderstood your point. You're right.

The Techno-Pessimist Lens
cousin_it · 5d

"I fail to see the relevance of including, in moral deliberation, the harm that animals inflict upon other animals"

It's wrong to make dogs fight => it's right to stop someone else from making dogs fight => it's right to spend some resources stopping Nature from making dogs fight. Or at least, I don't see where this logic fails.

Sense-making about extreme power concentration
cousin_it · 5d

If there's a small class of people with immense power over billions of have-nothings who can do nothing back, sure, some of the superpowerful will be more than zero altruistic. But others won't be, and overall I expect callousness and abuse of power to far outweigh altruism. Most people are pretty corruptible by power, especially power over a distinct outgroup, and pretty indifferent to abuses of power happening to that outgroup; all of history shows that. If anything, bigger differences in power will make it worse.

The Techno-Pessimist Lens
cousin_it · 5d

Good post, and a good discussion to have.

Humans have been exterminating each other since before agriculture, and hunted plenty of megafauna to extinction in that era too. Technology made us more efficient at these things, for sure, but even with colonialism, the world wars, and factory farming, human-caused suffering is still a tiny blip compared to the history of biological evolution. Annie Dillard, for example, describes horrifying parasitism in insects that has been happening under every leaf for millions of years.

That said, I agree that technological progress should lead toward solving suffering. If it leads to filling more planets with the same kind of biological life as today, with the same ever-present suffering, then I'd rather not have it. In the utopia drafts that I write from time to time (but don't post anywhere), most animal life has been replaced and even plants have been re-engineered not to smother each other.

You're also right that this kind of good outcome seems very hard to achieve. It needs not just coordination, but global coordination. Otherwise countries that race ahead in technology will win out and keep remaking the world in their selfish image, with factory farming 2.0 and all that.

peterbarnett's Shortform
cousin_it · 5d

Things I'm pretty sure about: that your possibility 1 is much more likely than 2. That extrapolation is more like resolving internal conflicts in a set of values, not making them change direction altogether. That the only way for a set of values to extrapolate to "good" is if its starting percentage of "good" is high enough to win out.

Things I believe, but with less confidence: that individual desires will often extrapolate to a pretty nasty kind of selfishness ("power corrupts"). That starting from culture also has lots of dangers (like the wokeness or religion that you're worried about), but a lot of it has been selected in a good direction for a long time, precisely to counteract the selfishness of individuals. So the starting percentage of good in culture might be higher.

Posts

cousin_it's Shortform [Ω] (2 karma, 6y, 28 comments)
Kinesthetic motor imagery (31 karma, 3mo, 0 comments)
An argument that consequentialism is incomplete (35 karma, 1y, 27 comments)
Population ethics and the value of variety (24 karma, 1y, 11 comments)
Book review: The Quincunx (43 karma, 1y, 12 comments)
A case for fairness-enforcing irrational behavior (16 karma, 1y, 3 comments)
I'm open for projects (sort of) (46 karma, 1y, 13 comments)
A short dialogue on comparability of values (27 karma, 2y, 7 comments)
Bounded surprise exam paradox (29 karma, 2y, 5 comments)
Aligned AI as a wrapper around an LLM (31 karma, 2y, 19 comments)
Are extrapolation-based AIs alignable? (24 karma, 2y, 15 comments)