The methodology says "We used iSay/Ipsos, Dynata, Disqo, and other leading panels to recruit the nationally representative sample". (They also say elsewhere that "Responses were census-balanced based on the American Community Survey 2021 estimates for age, gender, region, race/ethnicity, education, and income using the “raking” algorithm of the R “survey” package".)
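For anyone unfamiliar with "raking": it is essentially iterative proportional fitting of respondent weights to known population margins. Below is a minimal Python sketch of the idea, not the actual pipeline; the survey itself used the rake() function from the R "survey" package, and the columns, categories, and target proportions here are invented purely for illustration.

```python
# Minimal sketch of "raking" (iterative proportional fitting).
# The real analysis used rake() from the R "survey" package; the
# variables and census targets below are made up for illustration.
import pandas as pd

def rake(df, targets, weight_col="weight", max_iter=50, tol=1e-6):
    """Adjust weights so weighted margins match the target proportions.

    targets: dict mapping column name -> {category: desired proportion}
    """
    df = df.copy()
    if weight_col not in df:
        df[weight_col] = 1.0
    for _ in range(max_iter):
        max_change = 0.0
        for col, props in targets.items():
            # Current weighted share of each category of this variable.
            current = df.groupby(col)[weight_col].sum() / df[weight_col].sum()
            for cat, target_prop in props.items():
                factor = target_prop / current[cat]
                df.loc[df[col] == cat, weight_col] *= factor
                max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:
            break
    return df

# Hypothetical example: balance a tiny sample on gender and region.
sample = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F"],
    "region": ["South", "West", "South", "Northeast", "West", "South"],
})
targets = {
    "gender": {"F": 0.51, "M": 0.49},
    "region": {"South": 0.38, "West": 0.24, "Northeast": 0.38},
}
print(rake(sample, targets))
```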

There are good ways to argue that AI X-risk is not an extraordinary claim, but this is not one of them. Besides the fact that "a derivation from these 5 axioms" does not make a claim "ordinary", the axioms themselves are pretty suspect, or at least not simple.

"AI gets better, never worse" does not automatically imply to everyone that it gets better forever, or that it will soon surpass humans. "Intelligence always helps" is true, but non-obvious to many people. "No one knows how to align AI" is something that some would strongly disagree with, not having seen their personal idea disproved. "Resources are finite" jumps straight to some conclusions that require justification, including assumptions about the AI's goals. "AI cannot be stopped" is strongly counter-intuitive to most people, especially since they've been watching movies about just that for their whole lives.

And none of these arguments are even necessary, because AI being risky is the normal position in society. The average person believes that there are dangers, even if polls are inconsistent about whether an absolute majority specifically worries about AI wiping out humanity. The AI optimist's position is the "weird", "extraordinary" one.

Contrast the post with the argument from stopai's homepage: "OpenAI, DeepMind, Anthropic, and others are spending billions of dollars to build godlike AI. Their executives say they might succeed in the next few years. They don’t know how they will control their creation, and they admit humanity might go extinct. This needs to stop." In that framing, it is hard to argue that it's an extraordinary claim.

See https://www.humanornot.ai/ , and its unofficial successor, https://www.turingtestchat.com/ . I've determined that I'm largely unable to tell whether I'm talking to a human or a bot within two minutes. :/

Suggestions:

  • Allow for more than two hypotheses.
  • Maybe make the sliders "snap" to integer values, so that it looks cleaner.
  • Working with evidence percentages "given all the evidence above" is sometimes hard. It may be useful to allow evidence-combination blocks, just so things can be filled in as groups: even if only one of the numbers actually goes into the result, the user can check that everything adds up to 100% and that none of the dependent odds seem unreasonable (see the sketch after this list).
  • Tooltips giving explanations of the terms "Prior" and "Posterior" could be good.
  • Some mouse-hover effect for the sliders' areas might help.
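To make the first and third suggestions concrete, here is a hedged sketch of the kind of calculation such a tool would perform with more than two hypotheses. All hypotheses, evidence, and probabilities below are made up; each row of likelihoods is P(evidence | hypothesis, all earlier evidence), and these need not sum to 100% across hypotheses.

```python
# Hedged sketch of a Bayesian update over more than two hypotheses.
# All numbers here are invented purely to illustrate the kind of
# calculation the tool's sliders would encode.
hypotheses = ["H1", "H2", "H3"]
prior = [0.5, 0.3, 0.2]  # must sum to 1

# For each piece of evidence: P(evidence | hypothesis, all earlier evidence),
# one entry per hypothesis.
evidence_likelihoods = [
    [0.9, 0.4, 0.1],   # evidence 1
    [0.6, 0.7, 0.3],   # evidence 2, conditional on evidence 1
]

posterior = list(prior)
for likelihoods in evidence_likelihoods:
    unnormalized = [p * l for p, l in zip(posterior, likelihoods)]
    total = sum(unnormalized)
    posterior = [u / total for u in unnormalized]

for h, p0, p1 in zip(hypotheses, prior, posterior):
    print(f"{h}: prior {p0:.2f} -> posterior {p1:.2f}")
```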

I'm surprised there's no tag for either "AI consciousness" or "AI rights", given that there have been several posts discussing both. However, there's a lot of overlap between the two, so perhaps having both would be redundant, in which case the question of which one is broader/more fitting becomes relevant. Thoughts?

(Sorry if this is not the right place to put this.)

Bing writes that one of the premises is that the AIs "can somehow detect or affect each other across vast distances and dimensions", which seems to indicate that it's misunderstanding the scenario.

People do not have the ability to fully simulate a person-level mind inside their own mind. Attempts to simulate minds are accomplished by a combination of two methods:

  1. "Blunt" pattern matching, as one would do for any non-human object; noticing what tends to happen, and extrapolating both inner and outer patterns.
  2. "Slotting in" elements of their own type of thinking, into the pattern they're working with, using their own mind as a base.

(There's also space in between these two, such as pattern-matching from one's own type of thinking, inserting pattern-results into one's own thinking-style, or balancing the outputs of the two approaches.)

Insofar as the first method is used, the result is not detailed enough to be a real person. Insofar as the second is used, it is not a distinct person from the person doing the thinking. You can simulate a character's pain either by feeling it yourself and using your own mind's output, or by using a less-than-person rough pattern, and neither of these comes with moral quandaries.

Relevant quote from the research paper: "gpt-4 has a context length of 8,192 tokens. We are also providing limited access to our 32,768–context (about 50 pages of text) version, gpt-4-32k, which will also be updated automatically over time".

A few errors: The sentence "We're all crypto investors here." was said by Ryan, not Eliezer, and the "How the heck would I know?" and the "Wow" (following "you get a different thing on the inside") were said by Eliezer, not Ryan. Also, typos:

  • "chatGBT" -> "chatGPT"
  • "chat GPT" -> "chatGPT"
  • "classic predictions" -> "class of predictions"
  • "was often complexity theory" -> "was off in complexity theory" (I think?)
  • "Robin Hansen" -> "Robin Hanson"