A common argument I've seen for focusing on current AI harms over future ones is that the future ones are sci-fi (i.e., the ideas originated in the science fiction genre). This argument is fallacious, though, because many (perhaps all?) current AI harms are also sci-fi. It is an isolated demand for rigor.

  • AI art: Thing of Beauty (1958)

    By trial and error, Fish discovers that the machine produces high-quality drawings of people and things. Fish enters one of the drawings in an artistic competition, claiming that it was drawn by a nephew. It wins, but to receive the full prize money, the artist is required to paint the image on a wall. ...Knight's short story anticipated, by roughly 64 years, an actual event. In the 2022 Colorado State Fair, an image created with the artificial intelligence program Midjourney won a blue ribbon. As in the Knight story, the judges did not realize that the image was created by machine.

  • Bias and discrimination: Weird Fantasy #18 (1953)

    In 1953's Weird Fantasy #18, Al Feldstein and Joe Orlando produced the story "Judgment Day," about an astronaut observer being sent from a Galactic Alliance to see if a robot planet is ready to be admitted into their alliance. However, the astronaut is disappointed to learn that the robots differentiate among each other based on the color of their outer sheathing... Ultimately, he has to turn the planet down, since it is exhibiting behavior that had become outdated and prohibited by the Galactic Alliance in the future.

  • Predictive policing: All the Troubles of the World (1958)

The story begins with government administrators being warned of an upcoming murder attempt. Joseph Manners, the man accused of the crime, is placed under house arrest, despite his protests that he is ignorant of any planned crime and despite the refusal of law enforcement officers to tell him what crime he may be guilty of. In spite of the arrest, Multivac reports that the odds of the crime happening have increased because of the government's actions, and they continue to rise with every intervention.

  • Climate change, concentration of power, mental health issues, worker exploitation: these are key features of the cyberpunk genre.


Comments

I don't think people who refer to extinction threats from AI as "science-fictional" are making an implicit argument along the lines of "there have been science fiction stories about this, therefore it will not happen".

Their argument is more like "there is no clear sign of this happening in the immediate future, nor an obvious path by which it can happen using only well understood and verified phenomena; so far things like it are found only in science fiction and not in reality, and for it to become real will require other things that so far are found only in science fiction -- e.g., nanotechnology, "backdoors" in the human mind -- to become real first; so if it seems like an imminent threat to you, the most likely explanation is that you are taking science fiction as a reliable guide to future reality, and you shouldn't".

That may not be a good argument! But it's an argument that isn't weakened much by giving examples of things that science fiction predicted and are now either real or imminent.

It is weakened a bit by that, because "X is found, so far, only in science fiction" is less evidence for "we should not worry too much about X" if things found only in science fiction very commonly become reality. But there are plenty of common science fiction tropes that fairly clearly aren't close to happening in the near future -- e.g., faster-than-light spacecraft, teleportation, antigravity -- so if someone finds something credible largely because they've encountered it in SF, they're probably making a mistake.

I do agree that it provides some evidence against the idea, but I have read some people trying to dismiss AI risk in its entirety with the argument that it's sci-fi. That is obviously far too strong a conclusion to reach, because the same argument would have prevented you from accepting the current harms.

I wouldn't consider AI art to be an "AI harm" - I think it's a tremendous net benefit for artists, just like the digital camera or Photoshop.
