Comments

Why do you ordinarily not allow discussion of Buddhism on your posts?

Also, if anyone reading this does a naturalist study on a concept from Buddhist philosophy, I'd like to hear how it goes.

An edgy writing style is an epistemic red flag. A writing style designed to provoke a strong, usually negative, emotional response from the reader can be used to disguise the thinness of the substance behind the author's arguments. Instead of carefully considering and evaluating the author's arguments, the reader gets distracted by the disruption to their emotional state and reacts to the text in a way that more closely resembles a trauma response, with all the negative effects on their reasoning capabilities that such a response entails. Some examples of authors who do this: Friedrich Nietzsche, Grant Morrison, and The Last Psychiatrist.

OK, so maybe this is a cool new way to look at certain aspects of GPT ontology... but why this primordial ontological role for the penis?

"Penis" probably has more synonyms than any other term in GPT-J's training data.

I particularly wish people would taboo the word "optimize" more often. Referring to a process as "optimization" papers over questions like:

  • What feedback loop produces the change in some quantity that gets described as "optimization"? What steps does the loop have?
  • In what contexts does the feedback loop occur?
  • How might the effects of the feedback loop change between iterations? Does it always have the same effect on the quantity?
  • What secondary effects does the feedback loop have?

There's a lot hiding behind the term "optimization," and I think a large part of why early AI alignment research made so little progress was that people didn't fully appreciate how leaky an abstraction it is.
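As a toy illustration (entirely my own sketch, not from the post I'm responding to; all names and numbers are made up), here's what it looks like to spell out an "optimization" process as an explicit feedback loop, with the bulleted questions surfaced as parameters and comments instead of being hidden behind one word:

```python
def feedback_loop(state, measure, update, steps, context_ok):
    """Run an explicit feedback loop rather than an opaque 'optimization'."""
    history = []
    for _ in range(steps):
        if not context_ok(state):        # In what contexts does the loop occur?
            break
        quantity = measure(state)        # Which quantity is increasing or decreasing?
        state = update(state, quantity)  # What steps does one iteration take?
        history.append(quantity)         # Does every iteration have the same effect?
    return state, history

# Example: driving x toward zero by a fixed step. The secondary effects (here,
# overshooting and then oscillating around zero) are visible in the recorded
# history instead of being papered over by the word "optimization."
final_state, trace = feedback_loop(
    state=10.5,
    measure=lambda x: abs(x),
    update=lambda x, q: x - 1.0 if x > 0 else x + 1.0,
    steps=25,
    context_ok=lambda x: True,
)
print(final_state, trace)
```

Nothing in this sketch is specific to any real optimizer; the point is only that each question in the list corresponds to a concrete, inspectable part of the loop.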

The "pure" case of complete causal separation, as with civilizations in separate regions of a multiverse, is an edge case of acausal trade that doesn't reflect what the vast majority of real-world examples look like. You don't need to speculate about galactic-scale civilizations to see what acausal trade looks like in practice: ordinary trade can already be modeled as acausal trade, as can coordination between ancestors and descendants. Economic and moral reasoning already have elements of superrationality to the extent that they rely on concepts such as incentives or universalizability, which introduce superrationality by conditioning one's own behavior on other people's predicted behavior. This ordinary acausal trade doesn't require formal proofs or exact simulations -- heuristic approximations of other people's behavior are enough to give rise to it.

Trust and distrust are social emotions. To feel either of them toward nature is to anthropomorphize it. In that sense, "deep atheism" is closer to theism than "shallow atheism," in some cases no more than a valence-swap away. 

An actually-deeply-atheistic form of atheism would involve stripping away anthropomorphization instead of trust. It would start with the observation that nature is alien and inhuman and would extend that observation to more places, acting as a kind of inverse of animism. This form of atheism would remove attributions of properties such as thought, desire, and free will from more types of entities: governments, corporations, ideas, and AI. At its maximum extent, it would even be applied to the processes that make up our own minds, with the recognition that such processes don't come with any inherent essence of humanness attached. To really deepen atheism, make it illusionist.

Is trade ever fully causal? Ordinary trade can be modeled as acausal trade with the "no communication" condition relaxed. Even in a scenario as seemingly causal as using a vending machine, trade only occurs if the buyer believes that the vending machine will actually dispense its goods and not just take the buyer's money. Similarly, the vending machine owner's decision to set up the machine was informed by predictions about whether or not people would buy from it. The only kind of trade that seems like it might be fully causal is a self-executing contract that's tied to an external trigger, and for which both parties have seen the source code and verified that the other party has enough resources to make the agreed-upon trade. Would a contract like that still have some acausal element anyway?
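To make the vending-machine point concrete, here's a deliberately simple toy model (my own construction; the thresholds and probabilities are invented) showing that each side's decision conditions on a prediction of the other's behavior rather than on anything either party has yet observed:

```python
def buyer_inserts_coin(predicted_prob_machine_dispenses: float) -> bool:
    # The buyer acts only on a belief about what the machine will do next.
    return predicted_prob_machine_dispenses > 0.9

def owner_stocks_machine(predicted_daily_buyers: float) -> bool:
    # The owner's earlier decision was conditioned on predicted future buyers.
    return predicted_daily_buyers > 5.0

def trade_occurs(predicted_prob_machine_dispenses: float,
                 predicted_daily_buyers: float) -> bool:
    # The trade happens only if both predictions clear their thresholds.
    return (buyer_inserts_coin(predicted_prob_machine_dispenses)
            and owner_stocks_machine(predicted_daily_buyers))

print(trade_occurs(0.99, 20.0))  # True: both sides trust their predictions
print(trade_occurs(0.50, 20.0))  # False: the buyer doesn't trust the machine
```

Under this toy framing, the verified self-executing contract just pushes both predictions toward certainty, which is maybe why it feels like the closest thing to fully causal trade.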

I agree: the capabilities of AI romantic partners probably aren't the bottleneck to their wider adoption, considering the success of relatively primitive chatbots like Replika at attracting users. People sometimes become romantically attached to non-AI anime/video game characters despite not being able to interact with them at all! There doesn't appear to be much correlation between the interactive capabilities of fictional-character romantic partners and their appeal to users/followers.

  1. Sculpture wouldn't be immune if robots get good enough, but live dance and theater still would be. I don't expect humanoid robots to ever become completely indistinguishable from biological humans.
  2. I agree, since dance and theater are already so frequently experienced in video form.