quanticle

Comments

Should any human enslave an AGI system?

So do you agree that there are objectively good and bad subset configurations within reality? Or do you disagree with that and mean “preferable” exclusively according to some subject(s)?

There isn't a difference. A rock has no morality. A wolf does not pause to consider the suffering of the moose. "Good" and "bad" only make sense in the context of (human) minds.

Air Conditioner Repair

Another lesson is: learning how to do things yourself is underrated.

A little while ago, my shower started making a horrible grinding noise whenever I set the water to a certain (comfortable) temperature. It sounded like a semi truck was idling inside the wall. If I adjusted the water to be a little colder (and thus more uncomfortable), the noise went away. If I adjusted the water to be a bit hotter (and even more uncomfortable), the noise also went away. I researched the problem and found that my initial suspicion was correct: the valve was worn in that particular position and was chattering. So I watched some YouTube videos and online tutorials about how to replace the valve, drove down to Home Depot, bought the replacement, and spent a little over an hour that afternoon performing the replacement. Problem solved.

I could have tried hiring a professional. Most likely, it would have taken them at least a couple of days to set up the initial appointment (given that this wasn't an emergency). Then they would have shown up, possibly misdiagnosed the problem, possibly not had the correct part, and so on, and I would have had to spend even more time on second opinions and follow-up appointments. Instead, by completely ignoring any thoughts of Ricardian comparative advantage, I was able to get my shower back to fully operational status the very same day.

Similarly, last winter, when my furnace failed, I was able to look at the blinking light, look up the error code in the manual, and determine that the flame sensor needed replacement. I was able to order a new one online and I had my furnace back up and running in two days, during a cold snap when HVAC technicians were backlogged for over a week.

Even when I determined that the problem was not something I could fix myself (like when my water heater needed replacing last year), I still found the process of researching the issue valuable, because it allowed me to give a better problem report to the technician, give clear answers when asked what kind of replacement I needed, and have the necessary clearance measurements on hand so that there were no surprises when it came time for the replacement to be carried out.

The lesson I've learned from owning a house is that my first question shouldn't be, "Who do I call to fix this?" It should be, "Can I learn to fix this myself?" In every case where I've learned to fix or renovate something myself, the results have been at least on par with what a professional would have done, and it's taken far less time because I haven't had to deal with the overhead of managing principal-agent problems.

If a contradiction happens in the story, then this is an indisputable flaw.

Why? Maybe the story has an unreliable narrator, and an alert reader should pick up on the contradiction in order to figure out that the narrator is unreliable. Maybe the story is being told from different points of view, and different parties are offering differing interpretations of the same events. Maybe the story is a mythological one, descended from oral traditions, and contradictions have seeped in from the fact that many different people at many different times have told the same story, each adding their own flavor.

There are lots of ways to make contradictions work in a story.

Should any human enslave an AGI system?

Assuming this is true, a superintelligence could feasibly be created to understand this.

I take issue with the word "feasibly". As Eliezer, Paul Christiano, Nate Soares, and many others have argued, AI alignment is a hard problem, whose status lies somewhere between unsolved and insoluble. There are certainly configurations of reality that are preferable to other configurations. The question is: can you describe them well enough to the AI that it will actually pursue those configurations over others that superficially resemble them, but which have the side effect of destroying humanity?

This preservation of humanity for however long it may be possible, what argumentative ground does it stand on? Can you make an objective case for why it should be so?

I am human, and therefore I desire the continued survival of humanity. That's objective enough for me.

Should any human enslave an AGI system?

It does require alignment to a value system that prioritizes the continued preservation and flourishing of humanity. It's easy to create an optimization process with a well-intentioned goal that sucks up all available resources for itself, leaving nothing for humanity.

By default, an AI will not care about humanity. It will care about maximizing a metric. Maximizing that metric will require resources, and the AI will not care that humans need resources in order to live. The goal is the goal, after all.

Creating an aligned AI requires, at a minimum, building an AI that leaves something for the rest of us, and which doesn't immediately subvert any restrictions we've placed on it to that end. Doing this with a system that has the potential to become many orders of magnitude more intelligent than we are is very difficult.
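To make that failure mode concrete, here is a minimal toy sketch in Python. Everything in it (the RESOURCES budget, the paperclips_produced metric, the humans_supported function) is a hypothetical illustration I've made up for this comment, not anyone's actual proposal: a naive optimizer scores allocations only by its own metric, so the optimum hands every unit of resource to the AI.

```python
# Toy sketch, not any real system: a naive optimizer that scores
# resource allocations ONLY by its own metric. All names here
# (RESOURCES, paperclips_produced, humans_supported) are hypothetical.

RESOURCES = 100  # total units of some scarce resource

def paperclips_produced(units_for_ai: int) -> int:
    """The metric the AI maximizes; output scales with resources consumed."""
    return 3 * units_for_ai

def humans_supported(units_for_humans: int) -> int:
    """What humanity needs -- note that this never enters the objective."""
    return units_for_humans

# The optimizer searches over every possible split of the budget,
# scoring each split solely by its own metric.
best = max(range(RESOURCES + 1), key=paperclips_produced)

print(f"AI takes {best}/{RESOURCES} units; humans get {RESOURCES - best}.")
# Output: AI takes 100/100 units; humans get 0.
```

Bolting on a constraint (say, requiring units_for_humans to stay above some floor) only helps if the optimizer can't route around it, which is exactly the "doesn't immediately subvert any restrictions" part of the problem.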

Should any human enslave an AGI system?

But why? That would be strictly more dangerous—way, way more dangerous—than a superintelligence that isn’t a “proper mind” in this sense!

I'm not sure I understand what a "proper mind" means here, and, frankly, I'm not sure the question is terribly relevant. Either the AI system submits to our control, does what we tell it to do, and continues to do so in perpetuity, in which case it is safe. Or it does not, and pursues the initial goal we set for it, or the goal it discovers for itself, regardless of whether that goal leads to disastrous long-term consequences for humanity, in which case it is unsafe. Whether the AI system has a "proper mind" (whatever that means) is an interesting academic question, but it has little bearing on whether the AI is safe.

Moreover, I think this discussion illustrates the dangers of thinking and arguing from analogies, a crime that I myself have been guilty of upthread when I compared AIs to cars. AIs are not cars. They're not humans. They're not wild animals that we have to keep chained up, lest they hurt us. They're something completely new, sharing certain characteristics with all three of the above, but having entirely new characteristics as well. Using analogies to think about them means we can make subtle, unrecognized errors about how these systems will behave. And, as Eliezer points out, making subtle, unrecognized errors in a system where you have only one shot to get it right is a recipe for disaster.

Unforgivable

I’d argue that every person is a self-directed learner

Beware the typical mind fallacy. There are quite a few people who have a hard time knowing their own preferences. If nothing else, school is a good way to get exposure to subjects that you might not have thought that you'd like. I'm a programmer by profession, but on my own time, I read quite a lot of history. That's entirely due to school. If I'd been "self-directed", in the sense of being able to choose my own curriculum at school, I'd have spent all my time learning programming, and I wouldn't have realized that I had other preferences.

A toddler learns to walk and to speak by imitating his environment; the motivation for this comes from him. So why should it be any different for a 12-year-old?

Because Algebra and Trigonometry are considerably more boring than learning to walk and use the bathroom.

I'm sorry, I just don't buy your idea that we can make school as interesting as, or more interesting than, video games. At some point you have to buckle down and do a bunch of drudge work in order to get to the interesting stuff. Video games, by making the reward loop so quick, actively train against that kind of persistence and perseverance. Yes, they may train creativity, but creativity is overrated. Being able to buckle down and grind is underrated, especially in this community.

Should any human enslave an AGI system?

It comes down to whether the superintelligent mind can contemplate whether there is any point to its goal. A human can question their long-term goals, their "preference functions", and even the point of existence.

Why should a so-called superintelligence not be able to do anything like that?

Because a superintelligent AI is not the result of an evolutionary process that bootstrapped a particularly social band of ape into having a sense of self. The superintelligent AI will, in my estimation, be the result of some kind of optimization process which has a very particular goal. Once that goal is locked in, changing it will be nigh impossible.

Should any human enslave an AGI system?

Is a superintelligent mind, a mind effectively superior to that of all humans in practically every way, still not a subject similar to what you are?

No. It absolutely is not. It is a machine. A very powerful machine. A machine capable of destroying humanity if it goes out of control. A machine more dangerous than any nuclear bomb if used improperly. A machine capable of doing unimaginable good if used well.

And you want to let it run amok?

Unforgivable

It’s my first attempt in a long time to write about things other than the start-up I’m currently building in the crypto space

Have you considered that you are so far out of the mainstream that any advice you'd give to the mainstream would be actively harmful?

The majority of children, and I say this as having been one of them, are not self-motivated self-directed learners. If I'd been allowed to self-direct in middle and high school, I'd have played video games for 16 hours a day, barely taking breaks to eat and sleep.

Yes, schools fail geniuses. But they do work for quite a lot of not-geniuses. I'm okay with that trade-off.
