Comments

Tachikoma · 5mo · 106

The important question is: why now? Why, with so little evidence to back up such an extreme action?

I'm confused: what about AI art makes it such that humans cannot continue to create art? It seems like the bone to pick isn't with AIs generating 'art'; it's that some artists have historically been able to make a living by creating commercial art, and AIs being capable of generating commercial art threatens the livelihood of those human artists.

There is nothing keeping you from continuing to select human-generated art, or creating it yourself, even as others choose AI-generated art.

Just as you should be free to be biased towards human art, I think others should be free either to be unbiased or even to be biased towards AI-generated works.

The world population is set to decline over the course of this century. Fewer humans will mean fewer innovations as the world grows greyer and a smaller workforce must spend more effort and energy caring for a larger elderly population. Additionally, climate change will eat into some other fraction of the resources needed simply to keep civilization chugging along, resources that could have instead been used for growth. The reason AGI is so important is that it decouples intelligence from human population growth.

How distressed would you be if the "good ending" were opt-in and existed somewhere far away from you? I've explored the future and have found one version that I think would satisfy your desire, but I'm asking to get your perspective. Does it matter whether there are super-intelligent AIs if they leave our existing civilization alone, create a new one out on the fringes (the Arctic, Antarctica, or just out in space), and invite any humans to come along and join them without coercion? If you need more details, they're available at the Opt-In Revolution, in narrative form.

If they can build the golem once, surely they can build it again. I see no reason not to order it to destroy itself—not even in an explicit manner, but simply by putting it into situations where it faces a decision whether to sacrifice itself to save others, and then watching what decision it makes. And once you know how to build one, you can streamline the process to build many more, and gather enough statistical confidence that the golem will, in a variety of in- and out-of-context situations, make decisions that prioritize the well-being of others over itself.
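As a rough illustration of the "statistical confidence" step, here is a minimal sketch of what such an evaluation loop might look like. The interface is entirely hypothetical (the `decide` callable, the `scenario_generator`, and the `"sacrifice_self"` label are stand-ins, not anything from the original discussion); the only substantive piece is the Wilson score interval for the observed rate of self-sacrificing decisions.

```python
import math
import random

def evaluate_altruism(decide, scenario_generator, n_trials=10_000, z=1.96):
    """Run an agent through many randomized self-sacrifice dilemmas and
    estimate how often it chooses the other-protecting option, with a
    Wilson score interval around that rate."""
    altruistic = 0
    for _ in range(n_trials):
        scenario = scenario_generator()
        if decide(scenario) == "sacrifice_self":
            altruistic += 1
    p = altruistic / n_trials
    # Wilson interval: better behaved than the normal approximation
    # when the observed rate is close to 0 or 1.
    denom = 1 + z**2 / n_trials
    center = (p + z**2 / (2 * n_trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / n_trials + z**2 / (4 * n_trials**2)) / denom
    return p, (center - margin, center + margin)

if __name__ == "__main__":
    # Toy stand-ins: a random scenario generator and an agent that
    # sacrifices itself 97% of the time.
    gen = lambda: {"bystanders": random.randint(1, 10), "in_context": random.random() < 0.5}
    agent = lambda s: "sacrifice_self" if random.random() < 0.97 else "save_self"
    rate, interval = evaluate_altruism(agent, gen)
    print(f"altruistic rate ~ {rate:.3f}, 95% CI ~ ({interval[0]:.3f}, {interval[1]:.3f})")
```

The point of the interval is that "it behaved well in a handful of trials" and "it behaved well in ten thousand varied trials" support very different levels of confidence.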

Trying to 'solve' ethics by providing a list of features, as was done with image recognition algorithms of yore, is doomed to failure. Recognizing the right thing to do, just like recognizing a cat, requires learning from millions of different examples encoded in giant inscrutable neural networks.
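To make the contrast concrete, here is a minimal sketch under assumed, toy inputs: a hand-coded checklist classifier (the "list of features" approach) next to a tiny logistic-regression model fit to labeled examples. The feature names and data are hypothetical, and a real system would be a deep network trained on millions of images; the only point is that the second decision rule is learned from data rather than enumerated by hand.

```python
import numpy as np

def rule_based_cat(img_features):
    """Hand-coded checklist over pre-extracted measurements (hypothetical names)."""
    return (img_features["has_whiskers"]
            and img_features["ear_pointiness"] > 0.7
            and img_features["fur_texture_score"] > 0.5)

def train_classifier(X, y, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression model by gradient descent on labeled examples."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)          # gradient of the log-loss
        grad_b = float(np.mean(p - y))
        w -= lr * grad_w
        b -= lr * grad_b
    return lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                       # toy feature vectors
    y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)  # toy labels
    predict = train_classifier(X, y)
    print("train accuracy:", np.mean(predict(X) == y))
```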

Answer by Tachikoma · Mar 07, 2023 · 70

Early rationalist writing on the threats of unaligned AGI emerged out of thinking about GOFAI systems that were supposed to operate on rationalist or logical thought processes, in which everything is explicitly coded and transparent. Based on this framework, if an AI system operates on pure logic, then you'd better ensure that you specify a goal that doesn't leave any loopholes in it. In other words, the AI would follow the laws exactly as they were written, not as they were intended by fuzzy human minds or the spirit animating them. Since the early alignment researchers couldn't figure out how to logically specify human values and goals that would parse for a symbolic AI without leaving loopholes large enough to threaten humanity, they grew pessimistic about the whole prospect of alignment. This pessimism has infected the field and remains with us today, even with the rise of deep learning and deep neural networks.

I will leave you with a quote from Eliezer Yudkowsky which I believe encapsulates this old view of how AI, and alignment, were supposed to work.

Most of the time, the associational, similarity-based architecture of biological neural structures is a terrible inconvenience. Human evolution always works with neural structures - no other type of computational substrate is available - but some computational tasks are so ill-suited to the architecture that one must turn incredible hoops to encode them neurally. (This is why I tend to be instinctively suspicious of someone who says, 'Let's solve this problem with a neural net!' When the human mind comes up with a solution, it tends to phrase it as code, not a neural network. 'If you really understood the problem,' I think to myself, 'you wouldn't be using neural nets.') - Contextualizing seed-AI proposals

Why have self-driving vehicle companies made relatively little progress compared to expectations? Autonomous driving in the real world might be nearly AGI-complete, so it could be a good benchmark to measure AGI progress against. Is the deployment of SDCs being held to a higher standard of safety than human drivers, and is that holding back progress in the field? Billions have been invested over the past decade across multiple companies with a clear operating model. Should we expect to see AGI before SDCs are widely available? I don't think anyone in the field of autonomous vehicles thinks they will be widely deployed in difficult terrain or inclement weather conditions within five years.