LESSWRONG

Walker Vargas

Posts

Sorted by New

Wikitag Contributions

Comments

Sorted by
Newest
2 · Walker Vargas's Shortform · 6mo · 1
No wikitag contributions to display.
How likely is AGI to force us all to be happy forever? (much like in the Three Worlds Collide novel)
Walker Vargas · 6mo · 10

In the ending where humanity gets modified, some people commit suicide. The captain thinks it doesn't make sense to choose complete erasure over modification.

Walker Vargas's Shortform
Walker Vargas · 6mo · 20

If sperm whales were sapient and had their own languages, how recently would humans have noticed? We wouldn't be able to hear much of their speech. Without advanced tool use or agriculture on their part, I think it would be rather hard for us to notice. I don't think this would have been discovered any earlier than the 20th century. Do we know that we aren't in this situation?

Multinational corporations as optimizers: a case for reaching across the aisle
Walker Vargas · 2y · 10

Do they think it's a hardware/cost issue? Or do they think that "true" intelligence is beyond our abilities?

Multinational corporations as optimizers: a case for reaching across the aisle
Walker Vargas · 2y · 50

This is also a plausible route for spreading awareness of AI safety issues to the left. The downside is that it might make AI safety a "leftist" issue if a conservative analogy is not introduced at the same time.

Bids To Defer On Value Judgements
Walker Vargas · 2y · 10

I think of it as deferring to future me vs. deferring to someone else.

riceissa's Shortform
Walker Vargas · 2y · 10

Another consideration is how much money someone has on hand. If someone only makes $1,000 a month, they may choose $25 shoes that will last a year over $100 shoes that will last five years. Essentially, it is the complementary idea to economies of scale.

A case for gamete personhood (reductio ad absurdum)
Walker Vargas · 2y · 10

Personhood is a legal category and an assumed moral category that policies can point to. Usually, the rules being argued about concern the acceptability of killing something. The category is used differently depending on the moral framework, but it is usually assumed to point at the same objects. Therefore, disagreements are interpreted as mistakes.

Personally, I have my doubts that there is an exact point in development at which a human becomes a person. If there is, it might be weeks after birth.

Criticism of Eliezer's irrational moral beliefs
Walker Vargas · 2y · 10

If I remember right, it was in the context of there not being any universally compelling arguments. A paperclip maximizer would just ignore the tablet; it doesn't care what the "right" thing is. Humans probably don't care about the cosmic tablet either. That sort of thing isn't what "morality" refers to. The argument is more of a trick to get people to recognize that than a formal argument.

Criticism of Eliezer's irrational moral beliefs
Walker Vargas · 2y · 12

I think the point is that people try to point to things like God's will in order to appear to have a source of authority. Eliezer is trying to lead them to conclude that any such tablet being authoritative just by nature is absurd, and only seems right because they expect the tablet to agree with them. Another method is to ask why the tablet says what it does: if God's decrees aren't arbitrary and there are good reasons behind them, why not just follow those reasons directly?

What if we Align the AI and nobody cares?
Walker Vargas · 2y · 32

While I see a lot of concern about the big one, I think the whole AI environment being unaligned is the more likely, but no better, outcome: a society doing really well by some metrics that just happen to be the wrong ones. I'm thinking of the idea of freedom of contract that was popular at the beginning of the 20th century, and how hard it was to dig ourselves out of that hole.

4 · I'm looking for alternative funding strategies for cryonics. [Question] · 6y · 4