I remember I came up with a similar thought experiment to explain the Categorical Imperative.

Assume there is only one Self-Driving Car on the market, what principle would you want it to follow?

The first principle we think of is: ”Always do what the driver would want you to do”.

This would certainly be the principle we would want if our SDC was the only car on the road. But there are other SDCs and so in a way we are choosing a principle for our own car which is also at the same time a ”universal law”, valid for every car on the road.

With this in mind, it is easy to show that the principle we could rationally want is: ”Always act on that principle which the driver can rationally will to become a universal law”.

Coincidentally, this is also Kant’s Categorical Imperative.

Yes, but the claim that ”generative AI can potentially replace millions of jobs” does not contradict the statement that it may eventually ”turn out to be a dud”.

I initially reacted in the same way as you to the exact same passage but came to the conclusion that it was not illogical. Maybe I’m wrong but I don’t think so.

I think the author meant that there was a perception that it could replace millions of jobs, and hence an incentive for businesses to press forward with their implementation plans, but that this would eventually backfire if the hallucination problem is insoluble.

I am a Kantian and believe that those a priori rules have already been discovered.

But my point here was merely that you can isolate the part that belongs to pure ethics from everything empirical: in my example, what a library is; why people go to libraries; what a microphone is and what its purpose is; and so on. What makes an action right or wrong at the most fundamental level, however, is independent of everything empirical and is simply an a priori rule.

I guess my broader point was also that Stephen Wolfram is far too pessimistic about the prospects of making a moral AI. A future AI may soon have a greater understanding of the world and the people in it, and so all we have to do is provide the right a priori rule and we will be fine.

Of course, the technical issue still remains: how do we make the AI stick to that rule? But that is not an ethical problem; it is an engineering problem.

You can’t isolate individual ”atoms” in ethics, according to Wolfram. Let’s put that to the test. Tell me whether the following ”ethical atoms” are right or wrong:

  1. I will speak in a loud voice
  2. …on a Monday
  3. …in a public library
  4. …where I’ve been invited to speak about my new book and I don’t have a microphone.

Now, (1) seems morally permissible, and (2) doesn’t change the evaluation. (3) does make my action seem morally impermissible, but (4) turns it around again. I’m convinced all of this was very simple to everyone.

Ethics is the science of the a priori rules that make these judgments so easy for us, or at least that was Kant’s view, which I share. It should be possible to make an AI perform this calculation even faster than we do; all we have to do is provide the AI with the right a priori rules. Once that is done, the rest is just empirical knowledge about libraries and human beings, and we will eventually have a moral AI.
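The flip-flopping verdicts in the library example can be sketched as a toy program, purely as an illustration of how a fixed rule plus accumulating empirical facts yields the changing judgments. The rule and the predicate names here are invented for the sketch, not drawn from Kant or from the original comment:

```python
# Toy illustration: a fixed "rule" evaluated against a growing set of
# empirical facts. Each added fact can flip the verdict, yet the rule
# itself never changes. All predicates below are invented for the sketch.

def permissible(facts):
    """Judge whether speaking loudly is permissible, given the facts."""
    if "in_library" in facts and "invited_speaker_without_mic" not in facts:
        return False  # loud speech in a library is normally impermissible
    return True       # otherwise permissible by default

# Facts accumulated at each step (1)-(4) of the example.
steps = [
    set(),                                       # (1) speak in a loud voice
    {"on_monday"},                               # (2) ...on a Monday
    {"on_monday", "in_library"},                 # (3) ...in a public library
    {"on_monday", "in_library",
     "invited_speaker_without_mic"},             # (4) invited to speak, no mic
]

verdicts = [permissible(s) for s in steps]
print(verdicts)  # [True, True, False, True]
```

The point of the sketch is only that the context-sensitivity lives in the facts, not in the rule: the same function reproduces the permissible / permissible / impermissible / permissible pattern described above.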

I think that was the point. Comedians of the future will be performers. They will not write their own jokes but be increasingly good at reading the lines written by AI.

When ChatGPT came out I asked it to write a Seinfeld episode about taking an IQ test. In my judgment it was just as good and funny as every other Seinfeld episode I’ve watched. Mildly amusing. Entertaining enough not to stop reading.

This answer is a little bit confusing to me. You say that ”agency” may be an important concept even if we don’t have a deep understanding of what it entails. But how about a simple understanding?

I thought that when people spoke about ”agency” and AI, they meant something like ”a capacity to set their own final goals”, but then you claim that Stockfish could best be understood by using the concept of ”agency”. I don’t see how.

I myself kind of agree with the sentiment in the original post that ”agency” is a superfluous concept, but I want to understand the opposite view.

You seem to hold the position that:

  1. Scientists and not philosophers should do meta-ethics and normative ethics, until
  2. AGIs can do it better at which point we should leave it to them.

I don’t believe that scientists have either the inclination or the competence to do what you ask of them; and secondly, I believe that letting AGIs decide right and wrong would be a nightmare scenario for the human race.

”Whom the gods would destroy, they first make mad.”

One way to kill everyone would be to turn the planet into a Peoples Temple and have everyone drink the Kool-Aid.

I think even those of us who think of ourselves as psychologically robust will be defenseless against the manipulations of a future GPT-9.

I thought that was what was meant. The question is probably the easiest one to answer affirmatively with a high degree of confidence. I can think of several ongoing ”moral catastrophes”.
