cousin_it

https://vladimirslepnev.me

Comments

I enjoyed most of IABIED
cousin_it · 2h

"Through making us more uncertain about the situation, this lowers our confidence that everyone will die."

This seems to rely on the assumption that "there's nowhere to go but up": that we're already pretty certain of doom, so wildcards in the future can only make us less certain. Unfortunately that doesn't work. Wildcards in the future can also increase s-risks, and there's no limit to how bad things can get: potentially much worse than extinction.

Alignment as uploading with more steps
cousin_it · 13h

My argument goes something like this: 1) throughout history, big differences in power have been a recipe for abuse; 2) uploading allows for bigger power differences than have ever existed before. It's a big concern to me, and I'm not sure we can "wing it"; it's better to have a plan now.

Interview with Eliezer Yudkowsky on Rationality and Systematic Misunderstanding of AI Alignment
cousin_it · 1d

Yeah. As an example of one of the "nuances" that would have to be thrown out, maybe an effective anti-AI movement would even have to be anti-AI-alignment. As in, it would tell young people "don't work on AI alignment".

Christian homeschoolers in the year 3000
cousin_it · 2d

Yeah. I guess AIs would need to protect humans from certain messages based not only on the content of the message, but also on how it was generated (e.g. using AI or not) and for what purpose (e.g. for manipulation or not). And sometimes humans need to be protected even from ideas they come up with themselves (e.g. delusions, or totalitarian ideologies). In general, I think human life in a world with smarter-than-human AIs requires deliberate "habitat preservation", which in turn requires AIs to make some judgment calls about what's good or bad for humans. There's just no way around that.

Christian homeschoolers in the year 3000
cousin_it · 3d

Hmm. You're describing a future where most humans are powerless, but keep being provided for. It seems to me that the most likely way to get such a future is if AIs (or human+AI organizations, or whatever) genuinely care about humans. But then they would also protect humans from super-optimized manipulation, no?

Or if that genuine care doesn't exist, and UBI is provided as "scraps" so to speak, then the fate of humans is sealed anyway. As soon as the entities in power find something more interesting to do with the resources, they'll cut welfare and that's it. After all, the energy upkeep of a human could be used for a ton of computation instead.

Interview with Eliezer Yudkowsky on Rationality and Systematic Misunderstanding of AI Alignment
cousin_it · 4d

I agree with others about the fawning. A more "hardball" question I'd ask is: why not the left? It feels like at some point a choice was made to build a libertarian-leaning techie community, and it backfired: rationalists and adjacent folks ended up playing a big role in building and investing in AI. Maybe a more left-leaning movement focused on protest and the like would make more sense now?

The Culture Novels as a Dystopia
cousin_it · 5d

Yeah, I also think humans-as-housecats is a pretty good scenario. But I'm not sure it's an optimum (even a local one). Consider this: the question "how can humans have true agency and other things they value when ASIs are around?" is itself a question that intelligence can answer. As one extreme point, consider an ASI that precommits to not interfering in the affairs of humans, except to stop other ASIs. That's clearly not optimal on other dimensions; okay, then turn the dial until you get a pivotal act that's optimal on the mix of dimensions we care about.

Alignment as uploading with more steps
cousin_it · 6d

"A world of competing human emulations is a world I would actually want to live in"

I think there's a huge danger of people running private servers full of emulations and doing anything they want to them, undetectably. Desire for power over others is a very real thing, in some people at least. Maybe the government could prevent this through oversight; but in a modern democracy, a big factor in stability is that people could rise up and feasibly overthrow the government. Emulations on private servers wouldn't have that power, so I don't expect governments to stably defend their rights. That protection will wash out over time, shifting toward the interests of those who can actually influence government. In short, this leads to emulation-world being very bad, and I don't want it.

The same arguments would apply to our world if governments got armies of autonomous drones, for example. Whenever I imagine possible worlds, the distribution of power is the first thing I think about. It makes the problem more real: it's very hard to imagine a nice future world that works.

Sense-making about extreme power concentration
cousin_it · 8d

Why do you think all poor people will end up in these "wildlife preserves", and not somewhere else under the power of someone less altruistic? A future of large power differences is... a future of large power differences.

The Techno-Pessimist Lens
cousin_it · 8d

Ah I see, I misunderstood your point. You're right.

Posts

Kinesthetic motor imagery (31 points, 3mo, 0 comments)
An argument that consequentialism is incomplete (35 points, 1y, 27 comments)
Population ethics and the value of variety (24 points, 1y, 11 comments)
Book review: The Quincunx (43 points, 1y, 12 comments)
A case for fairness-enforcing irrational behavior (16 points, 1y, 3 comments)
I'm open for projects (sort of) (46 points, 1y, 13 comments)
A short dialogue on comparability of values (27 points, 2y, 7 comments)
Bounded surprise exam paradox (29 points, 2y, 5 comments)
Aligned AI as a wrapper around an LLM (31 points, 2y, 19 comments)
Are extrapolation-based AIs alignable? (24 points, 2y, 15 comments)
cousin_it's Shortform (2 points, 6y, 28 comments)