wiresnips

Comments
Rationality Quotes November 2013

Agreed. Squicky dilemmas designed to showcase utilitarianism are not generally found in real life (as far as I know). And a human probably couldn't be trusted to make a sound judgement call even if one were found. Running on untrusted hardware and such.

Ah- and this is the point of the quote. Oh, I like that.

Rationality Quotes November 2013

Utilitarianism isn't a description of human moral processing, it's a proposal for how to improve it.

Rationality Quotes October 2013

That's not necessarily false, but it's a dangerous thing to say to yourself. Mostly when I find myself thinking it, I've just wasted a great deal of time, and I'm trying to convince myself that it wasn't really wasted. It's easy to tell myself, hard to verify, and more pleasant than thinking my time-investment was for nothing.

Post ridiculous munchkin ideas!

This is transformative. Thank you.

Rationality Quotes February 2012

Either both are true, or neither.

Sarah Connor and Existential Risk

Anyone smart enough to be dangerous is smart enough to be safe? I'm skeptical- folksy wisdom tells me that being smart doesn't protect you from being stupid.

But in general, yes- the threat becomes more and more tangible as the barrier to AI gets lower and the number of players increases. At the moment, it seems pretty intangible, but I haven't actually gone out and counted dangerously smart AI researchers- I might be surprised by how many there are.

To be clear, I was NOT trying to imply that we should actually right now form the Turing Police.

Sarah Connor and Existential Risk

Edited, in the interest of caution.

However, this is exactly the issue I'm trying to discuss. It looks as though, if we take the threat of uncaring AI seriously, this is a real problem and it demands a real solution. The only solution that I can see is morally abhorrent, and I'm trying to open a discussion looking for a better one. Any suggestions on how to do this would be appreciated.

[Altruist Support] How to determine your utility function

If we accept that what someone 'wants' can be distinct from their behaviour, then "what do I want?" and "what will I do?" are two different questions (unless you're perfectly rational). Presumably, a FAI scanning a brain could answer either question.

Rationality Quotes: April 2011

The question of which it is still stands, though. Procrastination is lazy, but getting drunk at work is irresponsible.
