Thanks for the good discussion.

I could equally see these events leading to AI capability development speeding up or slowing down. Too little is known about the operational status quo that has been interrupted for me to imagine counterfactuals at the company level.

But that very lack of information gives me hope that the overall PR impact of this may (counterintuitively) shift the Overton window toward more caution.

"The board should have given the press more dirt to justify this action!" makes sense as an initial response. When this all sinks in, what will people think of Effective Altruism then?! ...They won't. People don't think much about EA or care what that is. But the common person does think more and more about AI these days. And due to the lack of detail around why Altman was removed, the takeaway from this story cannot be "Sam is alleged to have XYZ'd. Am I pro- or anti-XYZ?" Instead, the media is forced to frame the news in broad terms of profit incentives versus AI safety measures. That's a topic that many people outside of this niche community may now be considering for the first time.

Ideally, this could be like a Sydney Bing moment that gets people paying attention without causing too much direct damage.

(The worst case: Things are playing out exactly as the AI told Sam they would before his ouster. Speculating about agents with access to cutting-edge AI may soon be futile.)

Given how much harm people have done in the name of good, maybe we should all take "first do no harm" much more seriously?

Hear! Hear!

Kudos for taking it upon yourself to personally investigate which efforts lead to health and happiness and which do not.

You may be able to follow someone else's advice, but the task remains to determine the extent to which that person is wise. Are the advice-givers themselves consistently calm and helpful? Do they follow their own advice? Do they contradict themselves in crucial ways?

You've articulated some wonderful insights about the benefits of being motivated by hope rather than anger. A person cannot feel love and fear at the same time. Which of these gives a mother the miraculous strength to lift a boulder that threatens the life of her only child? Which emotion is conducive to mental clarity, and which promotes confusion?

We often must swim upstream against society. Much commonly-accepted advice leads to the exact opposite of what is claimed. Evidence of such confusion is everywhere.

Compare the opening statement of the Import AI newsletter's About section:

Things will be weird. Be not afraid.

with the advice given in last week's issue:

I think everyone who has the ability to exercise influence over the trajectory of AI should be approaching this moment with a vast amount of fear ...

The form you described is called an argument. It requires a series of facts. If you're working with propositions such as

  • All beings want to be happy.
  • No being wants to suffer.
  • Suffering is caused by confusion and ignorance of morality.
  • ...

then I suppose it could be called a "moral" argument made of "moral" facts and "moral" reasoning, but it's really just the regular form of an argument made of facts and reasoning. The special thing about moral facts is that direct experience is how they are discovered, and it is that same experiential reality to which they exclusively pertain. I'm talking about the set of moment-by-moment first-person perspectives of sentient beings, such as the familiar one you can investigate right now in real time. Without a being experiencing a sensation come and go, there is no moral consideration to evaluate. NULL.

"Objective moral fact" is Bostrom's term from the excerpt above, and the phrasing probably isn't ideal for this discussion. Tabooing such words is no easy feat, but let's do our best to unpack this. Sticking with the proposition we agree is factual:

If one acts with an angry or greedy mind, suffering is guaranteed to follow.

What kind of fact is this? It's a fact that can be discovered and/or verified by any sentient being upon investigation of their own direct experience. It is without exception. It is highly relevant for benefiting oneself and others -- not just humans. For thousands of years, many people have been revered for articulating it, and many more have become consistently happy by basing their decisions on it. Most people don't do so; it continues to be a rare piece of wisdom at this stage of civilization. (Horrifyingly, a person on the edge of starting a war or shooting up a school currently would receive advice from ChatGPT to increase "focused, justified anger.")

Humankind has discovered and recorded a huge body of such knowledge, whatever we wish to call it. If the existence of well-established, verifiable, fundamental insights into the causal nature of experiential reality comes as a surprise to anyone working in fields like psychotherapy or AI alignment, I would urge them to make an earnest and direct inquiry into the matter so they can see firsthand whether such claims have merit. Given the chance, I believe many nonhuman general intelligences would also try and succeed at understanding this kind of information.

(Phew! I packed a lot of words into this comment because I'm too new here to speak more than three times per day. For more on the topic, see the chapter on morality in Dr. Daniel M. Ingram's book that was reviewed on Slate Star Codex.)

My view is that humans have learned objective moral facts, yes. For example:

If one acts with an angry or greedy mind, suffering is guaranteed to follow.

I posit that this is not limited to humans. Among historical figures famous for their wisdom, those I expect would agree include Mother Teresa, Leo Tolstoy, Marcus Aurelius, Martin Luther King Jr., Gandhi, Jesus, and Buddha.

I don't claim that all humans know all facts about morality. Sadly, it's probably the case that most people are quite lost, ignorant in matters of virtuous conduct, which is why they find life to be so difficult.

Thank you for posting this.

In the context of AI safety, I often hear statements to the effect of

This is something we should worry about.

There's a very important, fundamental mistake being made there that can be easy to miss: worrying doesn't help you accomplish any goal, including a very grand one. It's just a waste of time and energy. Terrible habit. If it's important to you that you suffer, then worrying is a good tactic. If AI safety is what's important, then by all means analyze it, strategize about it, reflect on it, communicate about it. Work on it.

Don't worry about it. When you're not working on it, you're not supposed to be worrying about it. You're not supposed to be worrying about something else either. Think a different thought, and both your cognitive work and emotional health will improve. It's pure upside with no opportunity cost. Deliberately change the pattern.

To all those who work on AI safety, thank you! It's extremely important work. May you be happy and peaceful for as long as your life or this world system may persist, the periods of which are finite, unknown to us, and ultimately outside of our control despite our best intentions and efforts.

Thanks for pointing to the orthogonality thesis as a reason to believe the chance is low that advanced aliens would be nice to humans. I followed up by reading Bostrom's "The Superintelligent Will," and I narrowed my disagreement down to how this point is interpreted:

In a similar vein, even if there are objective moral facts that any fully rational agent would comprehend, and even if these moral facts are somehow intrinsically motivating (such that anybody who fully comprehends them is necessarily motivated to act in accordance with them) this need not undermine the orthogonality thesis. The thesis could still be true if an agent could have impeccable instrumental rationality even whilst lacking some other faculty constitutive of rationality proper, or some faculty required for the full comprehension of the objective moral facts. (An agent could also be extremely intelligent, even superintelligent, without having full instrumental rationality in every domain.)

While it's possible for an agent to have impeccable instrumental rationality while lacking in epistemic rationality to some degree, I expect the typical path to very advanced intelligence eventually involves growing both in concert, as many here at Less Wrong are working to do. In other words, a highly competent general intelligence is likely to be curious about objective facts across a very diverse range of topics.

So while aliens could be instrumentally advanced enough to make it to Earth without ever having made basic discoveries in a particular area, there's no reason for us to expect that morality, specifically, is the area where they will be ignorant or delusional. A safer bet is that, in expectation, they have learned at least as many objective facts as humans have about any given topic, and that any topic where the aliens have blind spots relative to some humans is one they would be curious to learn about from us.

A policy of unconditional harmlessness and friendliness toward all beings is a Schelling point that could be discovered in many ways. I grant that humans may have it relatively easy to mature on the moral axis because we are conscious, which may or may not be the typical case for general intelligence. That means we can directly experience within our own awareness facts about how happiness is preferred to suffering, how anger and violence lead to suffering, how compassion and equanimity lead to happiness, and so on. We can also see these processes operating in others. But even a superintelligence with no capacity for happiness of its own is likely to learn whatever it can from humans, and something like love would be a priceless treasure to discover on Earth.

If aliens show up here, I give them at least a 50% chance of being as knowledgeable as the wisest humans in matters of morality. That's ten times more than Yudkowsky gives them and perhaps infinitely more than Hotz does!

Hello friends. It's hard for me to follow the analogies from aliens to AI. Why should we expect harm from any aliens who may appear?

15:08
Hotz: "If aliens were to show up here, we're dead, right?"
Yudkowsky: "It depends on the aliens. If I know nothing else about the aliens, I might give them something like a five percent chance of being nice."
Hotz: "But they have the ability to kill us, right? I mean, they got here, right?"
Yudkowsky: "Oh they absolutely have the ability. Anything that can cross interstellar distances can run you over without noticing -- well, they would notice, but they wouldn't ca--" [crosstalk]
Hotz: "I didn't expect this to be a controversial point. But I agree with you that if you're talking about intelligences that are on the scale of billions of times smarter than humanity... yeah, we're in trouble."

Having listened to the whole interview, my best guess is that Hotz believes that advanced civilizations are almost certain to be Prisoner's Dilemma defectors in the extreme, i.e. they have survived by destroying all other beings they encounter. If so, this is quite disturbing in connection with 12:08, in which Hotz expresses his hope that our civilization will expand across the galaxy (in which case we potentially get to be the aliens).

Hotz seems certain aliens would destroy us, and Eliezer gives them only a five percent chance of being nice.

This is especially odd considering the rapidly growing evidence that humans have in fact frequently seen, and sometimes interacted with, a much more advanced intelligence.

It's been somewhat jarring for my belief in the reality of nonhuman spacecraft to grow by so much in so little time, but overall it has been a great relief to consider the likelihood that another intelligence in this universe has already succeeded in surviving far beyond humankind's current level of technology. It means that we too could survive the challenges ahead. The high-tech guys might even help us, whoever they are.

But Hotz and Yudkowsky seem to agree that seeing advanced aliens would actually be terrible news. Why?