Spoilers for Fullmetal Alchemist: Brotherhood:

Father is a good example of a character whose central flaw is his lack of green. Father was originally created as a fragment of Truth, but he never tries to understand the implications of that origin. Instead, he only ever sees God as something to be conquered, the holder of a power he can usurp. While the Elric brothers gain some understanding of "all is one, one is all" during their survival training, Father never does -- he never stops seeing himself as a fragile cloud of gas inside a flask, obsessively needing to erect a dichotomy between controller and controlled. Not once in the series does he express anything resembling awe. When Father finally does encounter God beyond the Doorway of Truth, he doesn't recognize what he's seeing. The Elric brothers have artistic expressions of wonderment toward God inscribed on their Doorways of Truth, but Father's Doorway of Truth is blank.

Father's lack of green also extends to how he sees humans. It never seems to occur to Father that the taboo against human transmutation is anything more than an arbitrary rule. To him, humans are only ever tools or inconveniences, not people to appreciate for their own sake or look to for guidance. Joy-in-the-Other is what Father most deeply desires, but he doesn't recognize this need.

Mostly the first reason. The "made of atoms that can be used for something else" piece of the standard AI x-risk argument also applies to suffering conscious beings, so an AI would be unlikely to keep them around if the standard AI x-risk argument ends up being true.

It's worth noting that no reference to preferences has yet been made. That's interesting because it suggests that there are both 0P-preferences and 1P-preferences. That intuitively makes sense, since I do care about both the actual state of the world, and what kind of experiences I'm having.

Believing in 0P-preferences seems to be a map-territory confusion, an instance of the Tyranny of the Intentional Object. The robot can't observe the grid in a way that isn't mediated by its sensors. There's no way for a 0P-statement to enter the robot's decision loop, and thus to act as something the robot can have preferences over, except by routing through a 1P-statement. Instead of directly having a 0P-preference for "a square of the grid is red," the robot would have to have a 1P-preference for "I believe that a square of the grid is red."
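
The point can be made concrete with a toy sketch (my own illustration, not from the original post; the grid, sensor, and utility functions are all hypothetical). The robot's decision-relevant quantities can only be functions of its sensor readings, so any preference it acts on is defined over 1P-statements:

```python
import random

def true_grid():
    """The territory: the actual colors of a 3-square grid."""
    return [random.choice(["red", "blue"]) for _ in range(3)]

def sensor(grid, noise=0.1):
    """The robot's only access to the grid: a noisy reading."""
    flip = {"red": "blue", "blue": "red"}
    return [flip[c] if random.random() < noise else c for c in grid]

def utility_1p(reading):
    """A 1P-preference: defined over the robot's observations."""
    return sum(c == "red" for c in reading)

grid = true_grid()
reading = sensor(grid)
# The robot can compute this...
score = utility_1p(reading)
# ...but it has no non-sensor channel to the grid itself, so a
# function utility_0p(grid) could never enter its decision loop
# except by routing through sensor(grid).
assert 0 <= score <= 3
```

The asymmetry is structural: `utility_1p` takes `reading` as input, and nothing in the robot's loop ever takes `grid` directly.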

What's your model of inflation in an AI takeoff scenario? I don't know enough about macroeconomics to have a good model of what AI takeoff would do to inflation, but it seems like it would do something.

You're underestimating how hard it is to fire people from government jobs, especially when those jobs are unionized. And even if there are strong economic incentives to replace teachers with AI, that still doesn't address the ease of circumvention. There's no surer way to make teenagers interested in a topic than to tell them that learning about it is forbidden.

All official teaching materials would be generated by a similar process. At about the same time, the teaching profession as we know it today ceases to exist. "Teachers" become merely administrators of the teaching system. No original documents from before AI are permitted for children to access in school.

This sequence of steps looks implausible to me. Teachers would have a vested interest in preventing it, since their jobs would be on the line. A requirement for all teaching materials to be AI-generated would also be trivially easy to circumvent, either by teachers or by the students themselves. Any administrator who tried to do these things would simply have their orders ignored, and the Streisand Effect would lead to a surge of interest in pre-AI documents among both teachers and students.

Why do you ordinarily not allow discussion of Buddhism on your posts?

Also, if anyone reading this does a naturalist study on a concept from Buddhist philosophy, I'd like to hear how it goes.

An edgy writing style is an epistemic red flag. A writing style designed to provoke a strong, usually negative, emotional response from the reader can be used to disguise the thinness of the substance behind the author's arguments. Instead of carefully considering and evaluating the author's arguments, the reader gets distracted by the disruption to their emotional state and reacts to the text in a way that more closely resembles a trauma response, with all the negative effects on their reasoning capabilities that such a response entails. Some examples of authors who do this: Friedrich Nietzsche, Grant Morrison, and The Last Psychiatrist.

OK, so maybe this is a cool new way to look at certain aspects of GPT ontology... but why this primordial ontological role for the penis?

"Penis" probably has more synonyms than any other term in GPT-J's training data.

I particularly wish people would taboo the word "optimize" more often. Referring to a process as "optimization" papers over questions like:

  • What feedback loop produces the increase or decrease in some quantity that is described as "optimization"? What steps does the loop have?
  • In what contexts does the feedback loop occur?
  • How might the effects of the feedback loop change between iterations? Does it always have the same effect on the quantity?
  • What secondary effects does the feedback loop have?

There's a lot hiding behind the term "optimization," and I think a large part of why early AI alignment research made so little progress was that people didn't fully appreciate how leaky an abstraction it is.
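
As a minimal sketch of un-tabooing the word (my own example, not from the original comment), here is one concrete "optimizer," gradient descent on f(x) = (x - 3)^2, with the questions above answered in the comments:

```python
# Quantity decreased: f(x). Steps of the loop: measure the slope,
# step against it, repeat.
# Context: only where the gradient exists and the step size is small
# enough for the update to be a contraction.
# Change between iterations: steps shrink as the slope flattens, so
# the loop does NOT have a fixed per-iteration effect on f.
# Secondary effect: x itself drifts toward 3, whether or not anyone
# cares about x.

def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    return 2.0 * (x - 3.0)

x = 0.0
lr = 0.1
deltas = []
for _ in range(50):
    before = f(x)
    x -= lr * grad_f(x)      # the actual mechanism behind "optimize"
    deltas.append(before - f(x))

# The per-iteration effect shrinks: early steps cut f far more than
# late ones, which the bare word "optimization" hides.
assert deltas[0] > deltas[-1] > 0
assert abs(x - 3.0) < 0.01
```

Swapping in a different feedback loop (evolutionary selection, market competition, SGD on a neural net) gives different answers to each question, which is exactly the information the single word erases.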
