nigerweiss
Comments

Steelmanning Young Earth Creationism
nigerweiss · 11y

It's going to be really hard to come up with any models that don't run profoundly afoul of the Occam prior.

[LINK] Slashdot interview with David Ettinger of the Cryonics Institute
nigerweiss · 12y

When someone asks a simple question about broad and controversial assertions, it is rude to respond by linking to outside resources tangentially related to the issue without providing (at minimum) a brief explanation of what those resources are intended to show.

Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96
nigerweiss · 12y

I don't speak Old English, unfortunately. Could someone who does please provide me with a rough translation of the provided passage?

[This comment is no longer endorsed by its author]
For FAI: Is "Molecular Nanotechnology" putting our best foot forward?
nigerweiss · 12y

It isn't the sort of bad argument that gets refuted. The best someone can do is point out that there's no guarantee that MNT is possible, in which case the response is 'Are you prepared to bet the human species on that? Besides, it doesn't actually matter, because [insert more sophisticated argument about optimization power here].' It doesn't hurt you, and with the overwhelming majority of semi-literate audiences, it helps.

For FAI: Is "Molecular Nanotechnology" putting our best foot forward?
nigerweiss · 12y

Of course there is. For starters, most of the good arguments are much more difficult to concisely explain, or invite more arguments from flawed intuitions. Remember, we're not trying to feel smug in our rational superiority here; we're trying to save the world.

For FAI: Is "Molecular Nanotechnology" putting our best foot forward?
nigerweiss · 12y

That's... not a strong criticism. There are compelling reasons not to believe that God is going to be a major force in steering the direction the future takes. The exact opposite is true for MNT - I'd bet at better-than-even odds that MNT will be a major factor in how things play out basically no matter what happens.

All we're doing is providing people with a plausible scenario that contradicts flawed intuitions that they might have, in an effort to get them to revisit those intuitions and reconsider them. There's nothing wrong with that. Would we need to do it if people were rational agents? No - but, as you may be aware, we definitely don't live in that universe.

For FAI: Is "Molecular Nanotechnology" putting our best foot forward?
nigerweiss · 12y

I don't have an issue bringing up MNT in these discussions, because our goal is to convince people that incautiously designed machine intelligence is a problem, and a major failure mode for people is that they say really stupid things like 'well, the machine won't be able to do anything on its own because it's just a computer - it'll need humanity, therefore, it'll never kill us all.' Even if MNT is impossible, the underlying danger is still real - but bringing up MNT provides people with an obvious intuitive path to the apocalypse. It isn't guaranteed to happen, but it's also not unlikely, and it's a powerful educational tool for showing people the sorts of things that strong AI may be capable of.

Mahatma Armstrong: CEVed to death.
nigerweiss · 12y

There's a deeper question here: ideally, we would like our CEV to make choices for us that aren't our choices. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.

One obvious way to solve the problem you raise is to treat 'modifying your current value approximation' as an object-level action by the AI, and one that requires it to compute your current EV - meaning that, if the logical consequences of the change (including all the future changes that the AI predicts will result from that change) don't look palatable to you, the AI won't make the first change. In other words, the AI will never assign you a value set that you find objectionable right now. This is safe in some sense, but not ideal. The profoundly racist will never accept a version of their values which, because of its exposure to more data and fewer cognitive biases, isn't racist. Ditto for the devoutly religious. This model of CEV doesn't offer the opportunity for growth.

It might be wise to compromise by locking the maximum number of edges in the graph between you and your EV to some small number, like two or three - a small enough number that value drift can't take you somewhere horrifying, but not so tightly bound up that things can never change. If your CEV says it's okay under this schema, then you can increase or decrease that number later.
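
To make the edge-capping idea concrete, here's a minimal Python sketch of the schema (my own illustration only; `revise`, `predict_consequences`, and `endorses` are hypothetical stand-ins for machinery a real implementation would actually need):

```python
# A minimal sketch of the bounded value-drift schema described above.
# Every name here is a hypothetical placeholder, not a real API.

MAX_DRIFT_EDGES = 2  # cap on revision steps between you and your EV

def approve_revision(current_values, proposed_values,
                     predict_consequences, endorses):
    """Treat 'modify the current value approximation' as an object-level
    action: adopt proposed_values only if their predicted downstream
    consequences (including later revisions) look palatable under the
    values held right now."""
    future = predict_consequences(proposed_values)
    return endorses(current_values, future)

def extrapolate(values, revise, predict_consequences, endorses,
                max_edges=MAX_DRIFT_EDGES):
    """Walk at most max_edges revision steps, each gated by the values
    held at that step, so drift stays on a short, endorsed path."""
    for _ in range(max_edges):
        proposal = revise(values)  # one candidate revision step
        if proposal is None or not approve_revision(
                values, proposal, predict_consequences, endorses):
            break  # the current values veto the change
        values = proposal
    return values
```

The key design choice is that every revision step is vetoed or approved by the values held at that step, so the path from you to your EV is short and endorsed at every edge.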

Rationality Quotes June 2013
nigerweiss · 12y

I've read some of Dennett's essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a 'noisy quorum' model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn't actually that surprising. It'd be hard to design a human-style system that didn't have a similar internal behavior that it could talk about.
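
To give a flavor of what I mean by a 'noisy quorum', here's a toy Python sketch (entirely my own construction, not anything from Dennett, and every parameter is made up): a percept is 'experienced' when enough noisy subprocesses vote for it, and the system's self-report is about that internal tally rather than about the raw stimulus.

```python
import random

# A toy 'noisy quorum' model of conscious report. Purely illustrative:
# the point is that the system's report is about its own internal
# broadcast, which from the inside looks like 'subjective experience'.

def noisy_votes(signal_strength, n_voters=100, noise=0.3):
    """Each subprocess sees the stimulus through independent noise and
    casts a binary vote on whether something is there."""
    return sum(
        random.random() < signal_strength + random.uniform(-noise, noise)
        for _ in range(n_voters)
    )

def percept_report(signal_strength, quorum=0.5, n_voters=100):
    """The self-report summarizes the vote tally (an internal state),
    not the raw stimulus itself."""
    votes = noisy_votes(signal_strength, n_voters)
    if votes >= quorum * n_voters:
        return f"I am experiencing the stimulus ({votes}/{n_voters} voters agree)"
    return f"I notice nothing ({votes}/{n_voters} voters agree)"

print(percept_report(0.9))  # strong stimulus: quorum reached
print(percept_report(0.1))  # weak stimulus: quorum usually not reached
```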

Who thinks quantum computing will be necessary for AI?
nigerweiss · 12y

Yeah, the glia seem to serve some pretty crucial functions as information-carriers and network support infrastructure - and if you don't track hormonal regulation properly, you're going to be in for a world of hurt. Still, I think the point stands.

Posts

-14 · How should negative externalities be handled? (Warning: politics) · 12y · 132 comments
17 · A Series of Increasingly Perverse and Destructive Games · 12y · 33 comments
5 · Inferring Values from Imperfect Optimizers · 13y · 20 comments
2 · AI "Boxing" and Utility Functions · 13y · 6 comments