PeterMcCluskey

Is this the post you're looking for?

I've got a Mercedes with Active Blind Spot Assist, which eliminates the need to worry about this.

I understand how we can avoid trusting an AI if we've got a specification that the proof checker understands.

Where I expect to need an AI is for generating the right specifications.

Note that effectively we are saying to trust the neural network.

I expect that we're going to have to rely on some neural networks regardless of how we approach AI. This paper guides us to be more strategic about how much reliance to place on which neural networks.

I initially dismissed Orthogonal, guessing that their worldview was too similar to MIRI's, and that they would give up or reach a dead end for reasons similar to those that have kept MIRI from making much progress.

Then the gears to ascension prodded me to take a closer look.

Now that I've read their more important posts, I'm more confused.

I still think Orthogonal has a pretty low chance of making a difference, but there's enough that's unique about their ideas to make them worth pursuing. I've donated $15k to Orthogonal.

Eliminating the profit motive would likely just mean that militaries develop dangerous AI a few years later than it would otherwise be developed.

I'm guessing that most people's main reason is that it looks easier to ban AI research than to sufficiently reduce the profit motive.

The belief in a universal, independent standard for altruism, morality, and right and wrong is deeply ingrained in societal norms.

That's true of the norms in WEIRD (Western, Educated, Industrialized, Rich, Democratic) cultures. It is far from universal.

I expect such acausal collaboration to be harder to develop than good calibration, and therefore less likely to happen at the stage I have in mind.
