The monopsony approach to the labor market says monopsonies are the rule, not the exception. A company doesn't have to formally be the only buyer of labor power in its region to hold monopsony power.
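
A rough sketch of why (standard textbook monopsony algebra, offered here only as an illustrative aside): if a firm faces a labor supply curve $L(w)$ with firm-level elasticity $\varepsilon$, choosing the wage to maximize $\pi(w) = (MRP_L - w)\,L(w)$ gives

$$w^* = MRP_L \cdot \frac{\varepsilon}{1 + \varepsilon},$$

so the wage is marked down below the marginal revenue product of labor whenever $\varepsilon$ is finite. The markdown vanishes only in the perfectly competitive limit $\varepsilon \to \infty$; search frictions or a small set of local employers are enough to keep $\varepsilon$ finite, so a firm can hold monopsony power without being the sole buyer.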

I added this to the blog post to explain why I don't think your objection goes through:

"[Edit: To respond to an objection that was made on another forum to this blog- advocate for in the context of this section does not necessarily mean the claim is true. If the public thinks the likelihood of X is 1%, and your own assessment, not factoring in the weight of others’ judgments, is 30%, you shouldn’t lie and say you think it’s true. Advocacy just means making a case for it, which doesn’t require lying about your own probability assessment.]"

Here's an analogy. AlphaGo had a network which estimated the value of any given board position. It was separate from its Monte Carlo tree search, which explicitly planned into the future. However, it seems probable that, in some sense, in evaluating the board, AlphaGo was implicitly evaluating the future possibilities of the position. Is that the kind of evaluation you're suggesting is happening? "Explicitly" ChatGPT only looks one word ahead, but "implicitly" it is weighing those options in light of the directions the text could develop in?
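
To make the "explicit" sense concrete, here is a minimal sketch of autoregressive decoding, with a toy trigram counter standing in for the trained network (everything below is illustrative and hypothetical, not ChatGPT's actual code):

    import random

    # Toy "language model": trigram counts standing in for learned weights.
    # In a real transformer these probabilities come from training on whole
    # sequences, which is where any implicit lookahead would be learned.
    CORPUS = "the cat sat on the mat and the cat ran to the mat".split()
    counts = {}
    for a, b, c in zip(CORPUS, CORPUS[1:], CORPUS[2:]):
        counts.setdefault((a, b), {})
        counts[(a, b)][c] = counts[(a, b)].get(c, 0) + 1

    def next_token_distribution(context):
        """Explicit interface: given the context, return P(next token)."""
        options = counts.get(tuple(context[-2:]), {})
        total = sum(options.values()) or 1
        return {tok: n / total for tok, n in options.items()}

    def decode(prompt, steps=5):
        tokens = prompt.split()
        for _ in range(steps):
            dist = next_token_distribution(tokens)
            if not dist:
                break
            # Sample one token; nothing here explicitly plans beyond it.
            tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
        return " ".join(tokens)

    print(decode("the cat"))

Nothing in the loop plans past the next token; if any lookahead is happening, it has to live inside the learned distribution itself, which is the "implicit" reading above.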

Thank you, I'll start having a read. At first glance, this reminds me of the phenomenon of reference magnetism often discussed in the philosophy of language. I suspect a good account of natural abstractions will involve the concept of reference magnetism in some way, although teasing out the exact relationship between the two concepts might take a while.

I see your point now, but I think this just reflects the current state of our knowledge. We haven't yet grasped that we are implicitly creating, if not minds, then things a bit mind-like, every time we order an artificial intelligence to play a particular character.

When this knowledge becomes widespread, we'll have to confront the reality of what we do every time we hit run. And then we'll be back at the problem of theodicy, the God in this case being the one who presses play, and the question being: is pressing play consistent with being a good person?* If I ask GPT-3 to tell a story about Elon Musk, is that compatible with my being a good person?

* (In the case of GPT-3, probably yes, because the models it creates are so simple that they lack ethical status, so pressing play doesn't reflect poorly on the person requesting the simulation. For more sophisticated models, the problem gets thornier.)

Certainly, it is possible, but I see little to guarantee our descendants won't create simulations that are like the world we live in now.

  1. Our descendants may well not regard sims as having the same rights as persons.
  2. Even if they do, if even a small number of rogue beings (or nations, etc.) ran such simulations, unethical as they may be, it is possible that simulations would soon outnumber real people, especially at critical junctures in history (e.g., right before the discovery of AGI).
  3. The essay gives at least two ethical reasons which, in my view at least, may offer enough good to outweigh the suffering, such that even a person who cared deeply about sims might still sanction the existence of a world in which they suffer in order to achieve those aims.

So given those factors, we may be in a simulation, and given that, I think an interesting question is: "Is our being in a simulation compatible with our simulators being good people?"

I have to disagree here. I strongly suspect that when GPT, say, pretends to be a certain character, it is running a rough-and-ready approximate simulation of that character's mental state and its interacting components (various beliefs, desires, etc.). I have previously discussed this in an essay, which I will be posting soon.
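
To gesture at what "interacting components" could mean mechanically, here is a crude, hypothetical sketch (the class and fields are illustrative only, not a claim about GPT's internals): behaviour falls out of beliefs and desires interacting.

    from dataclasses import dataclass

    # Hypothetical stand-in for the kind of structure a character
    # simulation might approximate: beliefs and desires jointly
    # determine what the character does next.
    @dataclass
    class CharacterModel:
        beliefs: dict   # e.g. {"door_locked": True}
        desires: list   # e.g. ["leave the room"]

        def act(self) -> str:
            if "leave the room" in self.desires:
                if self.beliefs.get("door_locked"):
                    return "rattles the handle and complains about the lock"
                return "walks out"
            return "waits"

    character = CharacterModel(beliefs={"door_locked": True},
                               desires=["leave the room"])
    print(character.act())  # the components interact to produce behaviour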