philosophybear
Comments

Six economics misconceptions of mine which I've resolved over the last few years
philosophybear · 2y · 70

The monopsony approach to the labor market says monopsonies are the rule, not the exception. A company doesn't actually have to formally be the only buyer of labor power in its region to hold monopsony power, as the toy model below illustrates.
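
Here's a toy model (all numbers invented purely for illustration): a firm facing an upward-sloping labor supply curve hires fewer workers, at a lower wage, than a wage-taking competitive firm would, despite being nowhere near the sole buyer of labor.

```python
# Toy monopsony model; every number here is invented for illustration.

def supply_wage(L):
    """Wage required to attract L workers (upward-sloping labor supply)."""
    return 10 + 0.1 * L

MRP = 30  # marginal revenue product: value each extra worker generates

# Competitive benchmark: a wage-taking firm hires until supply_wage(L) == MRP.
L_comp = (MRP - 10) / 0.1            # 200 workers, paid 30

# Monopsonist: raising the wage to attract one more worker also raises
# pay for everyone already hired, so the marginal cost of labor is
# supply_wage(L) + L * 0.1 = 10 + 0.2 * L. Hire until that equals MRP.
L_mono = (MRP - 10) / 0.2            # 100 workers
w_mono = supply_wage(L_mono)         # paid 20, below their MRP of 30

print(L_comp, L_mono, w_mono)        # 200.0 100.0 20.0
```

No second buyer appears anywhere in the model; the wage markdown comes entirely from the upward-sloping supply curve the firm faces.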

Rationalism and social rationalism
philosophybear · 2y · 10

I added this to the blog post to explain why I don't think your objection goes through:

"[Edit: To respond to an objection that was made on another forum to this blog- advocate for in the context of this section does not necessarily mean the claim is true. If the public thinks the likelihood of X is 1%, and your own assessment, not factoring in the weight of others’ judgments, is 30%, you shouldn’t lie and say you think it’s true. Advocacy just means making a case for it, which doesn’t require lying about your own probability assessment.]"

The idea that ChatGPT is simply “predicting” the next word is, at best, misleading
philosophybear · 3y · 33

Here's an analogy. AlphaGo had a network which estimated the value of any given board position. It was separate from its Monte Carlo tree search, which explicitly planned into the future. However, it seems probable that, in some sense, in assessing the value of the board, AlphaGo was implicitly evaluating the future possibilities of the position. Is that the kind of evaluation you're suggesting is happening? "Explicitly" ChatGPT only looks one word ahead, but "implicitly" it is weighing those options in light of future directions the text might take?
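
To make the contrast concrete, here's a minimal sketch (function names hypothetical; this is not AlphaGo's actual code). Explicit lookahead enumerates future positions, while a trained value network collapses that search into a single forward pass whose lookahead lives implicitly in its weights.

```python
def explicit_value(position, depth, leaf_value, legal_moves):
    """Explicit planning: enumerate continuations to a fixed depth,
    then score the leaves. The lookahead is visible in the control flow."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return leaf_value(position)
    # Two-player zero-sum convention: choose the move whose resulting
    # position is worst for the opponent.
    return max(-explicit_value(m, depth - 1, leaf_value, legal_moves)
               for m in moves)

# A trained value network does no such enumeration at inference time:
#     value_net(position) ~ explicit_value(position, depth, ...)
# The search has been amortized into the weights during training. The
# analogy: a language model "explicitly" scores only the next word while
# implicitly encoding expectations about how the text could continue.
```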

The AI Control Problem in a wider intellectual context
philosophybear · 3y · 40

Thank you, I will start to have a read. At first glance, this reminds me of the phenomenon of reference magnetism often discussed in the philosophy of language. I suspect a good account of natural abstractions will involve the concept of reference magnetism in some way, although teasing out the exact relationship between the concepts might take a while.

Theodicy and the simulation hypothesis, or: The problem of simulator evil
philosophybear · 3y · 10

I see your point now, but I think this just reflects the current state of our knowledge. We haven't yet grasped that we are implicitly creating, if not minds, then things that are a bit mind-like, every time we order an artificial intelligence to play a particular character.

When this knowledge becomes widespread, we'll have to confront the reality of what we do every time we hit run. And then we'll be back to the problem of theodicy, with the "God" being whoever presses play, and the question being: is pressing play consistent with being a good person?* If I ask GPT-3 to tell a story about Elon Musk, is that compatible with me being a good person?

* (In the case of GPT-3, probably yes, because the models created are so simple as to lack ethical status, so pressing play doesn't reflect poorly on the simulation requester. For more sophisticated models, the problem gets thornier.)

Theodicy and the simulation hypothesis, or: The problem of simulator evil
philosophybear · 3y · 10

Certainly, it is possible, but I see little to guarantee our descendants won't create simulations that are like the world we live in now.

  1. Our descendants may well not regard sims as having the same rights as persons.
  2. Even if they do, if even a small number of rogue beings (or nations, etc.) conducted such simulations, unethical as they may be, it is possible that simulations would soon outnumber real people, especially at critical junctures in history (e.g., right before the discovery of AGI); the toy calculation below illustrates how quickly.
  3. The essay gives at least two ethical reasons which, in my view at least, may offer enough good to outweigh the suffering, such that even a person who cared deeply about sims might still sanction the existence of a world in which they suffer to achieve their aims.

So given those factors, we may be in a simulation, and given that, I think an interesting question is: "Is our being in a simulation compatible with our simulators being good people?"
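
To put rough numbers on point 2, a toy calculation (every figure invented purely for illustration):

```python
real_observers = 10**10      # people alive at the critical juncture
rogue_simulators = 1_000     # a tiny minority of our descendants
sims_each = 10**8            # ancestor simulations each rogue actor runs

simulated_observers = rogue_simulators * sims_each   # 10**11

# Chance that a random observer at the juncture is simulated:
p = simulated_observers / (simulated_observers + real_observers)
print(f"{p:.0%}")            # ~91%: sims outnumber real people 10 to 1
```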

Theodicy and the simulation hypothesis, or: The problem of simulator evil
philosophybear · 3y · 10

I have to disagree here. I strongly suspect that GPT, when it, say, pretends to be a certain character, is running a rough-and-ready approximate simulation of that character's mental state and its interacting components (various beliefs, desires, etc.). I have previously discussed this in an essay, which I will be posting soon.

Posts

17 · Rationalism and social rationalism · 2y · 5 comments
3 · Republishing an old essay in light of current news on Bing's AI: "Regarding Blake Lemoine's claim that LaMDA is 'sentient', he might be right (sorta), but perhaps not for the reasons he thinks" · 3y · 0 comments
27 · ChatGPT understands language · 3y · 4 comments
11 · The AI Control Problem in a wider intellectual context · 3y · 3 comments
2 · Verbal parity: What is it and how to measure it? + an edited version of "Against John Searle, Gary Marcus, the Chinese Room thought experiment and its world" · 3y · 0 comments
105 · Language models are nearly AGIs but we don't notice it because we keep shifting the bar · 3y · 13 comments
21 · Against John Searle, Gary Marcus, the Chinese Room thought experiment and its world · 3y · 43 comments
9 · Regarding Blake Lemoine's claim that LaMDA is 'sentient', he might be right (sorta), but perhaps not for the reasons he thinks · 3y · 1 comment
1 · Recent advances in Natural Language Processing—Some Woolly speculations (2019 essay on semantics and language models) · 3y · 0 comments
12 · Theodicy and the simulation hypothesis, or: The problem of simulator evil · 3y · 12 comments