cousin_it

https://vladimirslepnev.me

Comments

Sense-making about extreme power concentration
cousin_it · 1d

Why do you think all poor people will end up in these "wildlife preserves", and not somewhere else under the power of someone less altruistic? A future of large power differences is... a future of large power differences.

The Techno-Pessimist Lens
cousin_it · 1d

Ah I see, I misunderstood your point. You're right.

The Techno-Pessimist Lens
cousin_it · 1d

> I fail to see the relevance of including, in moral deliberation, the harm that animals inflict upon other animals

It's wrong to make dogs fight => it's right to stop someone else from making dogs fight => it's right to spend some resources stopping Nature from making dogs fight. Or at least, I don't see where this logic fails.

Sense-making about extreme power concentration
cousin_it · 1d

If there's a small class of people with immense power over billions of have-nothings that can do nothing back, sure, some of the superpowerful will be more than zero altruistic. But others won't be, and overall I expect callousness and abuse of power to much outweigh altruism. Most people are pretty corruptible by power, especially when it's power over a distinct outgroup, and pretty indifferent to abuses of power happening to the outgroup; all history shows that. Bigger differences in power will make it worse if anything.

The Techno-Pessimist Lens
cousin_it · 1d

Good post, and a good discussion to have.

Humans have been exterminating each other, and hunting megafauna to extinction, since before agriculture. Technology made us more efficient at these things, for sure, but even with colonialism, the world wars, and factory farming, human-caused suffering is still a tiny blip compared to the history of biological evolution. Annie Dillard, for example, describes horrifying parasitism in insects that has been happening under every leaf for millions of years.

That said, I agree that technological progress should lead toward solving suffering. If it leads to filling more planets with the same kind of biological life as today, with the same ever-present suffering, then I'd rather not have it. In the utopia drafts that I write from time to time (but don't post anywhere), most animal life has been replaced and even plants have been re-engineered not to smother each other.

You're also right that this kind of good outcome seems very hard to achieve. It needs not just coordination, but global coordination. Otherwise countries that race ahead in technology will win out and keep remaking the world in their selfish image, with factory farming 2.0 and all that.

peterbarnett's Shortform
cousin_it · 1d

Things I'm pretty sure about: that your possibility 1 is much more likely than 2. That extrapolation is more like resolving internal conflicts in a set of values, not making them change direction altogether. That the only way for a set of values to extrapolate to "good" is if its starting percentage of "good" is high enough to win out.

Things I believe, but with less confidence: that individual desires will often extrapolate to a pretty nasty kind of selfishness ("power corrupts"). That starting from culture also has lots of dangers (like the wokeness or religion that you're worried about), but a lot of it has been selected in a good direction for a long time, precisely to counteract the selfishness of individuals. So the starting percentage of good in culture might be higher.

Sense-making about extreme power concentration
cousin_it · 2d

Yeah. I think the main reason democracy exists at all is that people are necessary for skilled labor and for fighting wars. If that goes away, the result will be a world where money and power just discard most people. Why some people think "oh, we'll implement UBI if it comes to that", I have no idea. When it "comes to that", there won't be any force powerful enough to implement UBI and interested in doing so. My cynical view is that the promise of distributing AI benefits in the future is a distraction: look over there, while we take all power.

Review: E-bikes on Hills
cousin_it · 2d

Is that you in the photo? Why is your left hand a claw?

The Rise of Parasitic AI
cousin_it · 2d

Thank you for writing this! I have a question though. The post says "many cases" and so on. Can we get some estimates on how many people are affected now, and is it growing or decreasing?

peterbarnett's Shortform
cousin_it · 4d

I'm not very confident, but will try to explain where the intuition comes from.

Basically I think the idea of "good" might be completely cultural. As in, if you extrapolate what an individual wants, that's basically a world optimized for that individual's selfishness; then there is what groups can agree on by rational negotiation, which is a kind of group selfishness, cutting out everyone who's weak enough (so for example factory farming would be ok because animals can't fight back); and on top of that there is the abstract idea of "good", saying you shouldn't hurt the weak at all. And that idea is not necessitated by rational negotiation. It's just a cultural artifact that we ended up with, I'm not sure how.

So if you ask AI to optimize for what individuals want, and go through negotiations and such, there seems a high chance that the resulting world won't contain "good" at all, only what I called group selfishness. Even if we start with individuals who strongly believe in the cultural idea of good, they can still get corrupted by power. The only way to get "good" is to point AI at the cultural idea to begin with.

You are of course right that culture also contains a lot of nasty stuff. The only way to get something good out of it is with a bunch of extrapolation, philosophy, and yeah I don't know what else. It's not reliable. But the starting materials for "good" are contained only there. Hope that makes sense.

Also to your other question: how to train philosophical ability? I think yeah, there isn't any reliable reward signal, just as there wasn't for us. The way our philosophical ability seems to work is by learning heuristics and ways of reasoning from fields where verification is possible (like math, or everyday common sense) and applying them to philosophy. And it's very unreliable of course. So for AIs maybe this kind of carry-over to philosophy is also the best we can hope for.

Posts

cousin_it's Shortform (Ω) · 2 points · 6y · 28 comments
Kinesthetic motor imagery · 31 points · 3mo · 0 comments
An argument that consequentialism is incomplete · 35 points · 1y · 27 comments
Population ethics and the value of variety · 24 points · 1y · 11 comments
Book review: The Quincunx · 43 points · 1y · 12 comments
A case for fairness-enforcing irrational behavior · 16 points · 1y · 3 comments
I'm open for projects (sort of) · 46 points · 1y · 13 comments
A short dialogue on comparability of values · 27 points · 2y · 7 comments
Bounded surprise exam paradox · 29 points · 2y · 5 comments
Aligned AI as a wrapper around an LLM · 31 points · 2y · 19 comments
Are extrapolation-based AIs alignable? · 24 points · 2y · 15 comments