Comments

deluks917's Shortform

I remember Yudkowsky asking for a realistic explanation of why the Empire in Star Wars is stuck in an equilibrium where it keeps building gigantic, destructible weapons.

What weird beliefs do you have?

Does this include extreme examples, such as pieces of information that permanently damage your mind when you're exposed to them, or antimemes?

Have you made any changes to your personal life because of this?

Auctioning Off the Top Slot in Your Reading List

I predict that this will not become popular, mostly because of the ick factor most people have around monetary transactions between individuals.

However, the inverse strategy seems just as interesting (and more likely to work) to me.

What if AGI is near?

I want to clarify that "AGI go foom!" is not really a claim about how near the advent of AGI is, but about whether an AGI's development contains a discontinuity that sharply accelerates the growth of its intelligence over time.
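
To make the distinction concrete, here's a toy illustration of my own (not anything from the post): if capability grows superlinearly in current capability, say dI/dt = c·I^k with k > 1, the trajectory diverges in finite time (a "foom"), whereas k = 1 gives mere exponential growth and k < 1 something tamer still. A minimal sketch with made-up constants:

```python
import math

def grow(k, c=1.0, i0=1.0, dt=1e-3, t_max=5.0, cap=1e12):
    """Euler-integrate dI/dt = c * I**k until t_max or blowup."""
    i, t = i0, 0.0
    while t < t_max and math.isfinite(i) and i < cap:
        i += c * i**k * dt
        t += dt
    return t, i

for k in (0.5, 1.0, 1.5):
    t, i = grow(k)
    print(f"k={k}: I={i:.3g} at t={t:.2f}")
# k=0.5 and k=1.0 grow tamely; k=1.5 hits the cap shortly after t=2,
# the analytic finite-time singularity of dI/dt = I**1.5 with I(0)=1.
```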

Book Review: The Secret Of Our Success

For completeness, here's the prediction under the naive theory, namely that intelligence is instrumentally useful and evolved because solving planning problems helps you survive:

[This comment is no longer endorsed by its author]
niplav's Shortform

Isn't life then a quine running on physics itself as a substrate?

I hadn't considered thinking of quines as two-place, but that's obvious in retrospect.
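
As an aside for readers unfamiliar with the term: a quine is a program whose output is exactly its own source code. A minimal Python example (the standard textbook construction, not from the thread):

```python
# The two lines below reproduce themselves exactly when run.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The "two-place" observation then reads naturally: "quine" isn't a property of a string alone but of a (program, interpreter) pair, and "life as a quine" fixes the interpreter to be physics.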

Why you should consider buying Bitcoin right now (Jan 2015) if you have high risk tolerance

Let the record show that, 6 years later, the price of bitcoin has increased 250-fold relative to its price at the time this article was written.
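
(A back-of-the-envelope annualization, using only the figures in the comment above:)

```python
# 250x over 6 years implies an annualized growth factor of 250**(1/6).
annual_factor = 250 ** (1 / 6)
print(f"~{annual_factor:.2f}x per year")  # ~2.51x, i.e. roughly +151% annualized
```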

niplav's Shortform

Life is quined matter.

niplav's Shortform

Right, my gripe with the argument is that these first two assumptions are almost always left unstated, and most of the time when people use the argument, they "trick" their audience into agreeing with the first assumption.

(for the record, I think the first premise is true)

niplav's Shortform

The child-in-a-pond thought experiment is weird, because people use it in ways it clearly doesn't work for (especially in arguing for effective altruism).

For example, it observes that you would act altruistically in a nearby situation (the drowning child), and then assumes that you ought to care about people far away as much as about people near you. People usually don't argue against this second step, but they very much could: the thought experiment offers no justification for that extension of the circle of moral concern; it just assumes it.

Similarly, it says nothing about how effectively you ought to use your resources, only that you probably ought to be more altruistic in a way that encompasses strangers.

But not only does this thought experiment not argue for the things people usually use it for, it's also not good for arguing that you ought to be more altruistic!

Underlying it is a theme that plays a role in many thought experiments in ethics: they appeal to game-theoretic intuition for useful social strategies, but say nothing of what these strategies are useful for.

Here, if people catch you standing idly by while a child drowns in a pond, you're probably going to be excluded from many communities, or even punished. And this schema occurs very often: unwilling organ donors, trolley problems, and violinists.

Bottom line: Don't use the drowning child argument to argue for effective altruism.
