Comments

celeste · 11mo · 20

This seems potentially useful, but less useful than I thought you were claiming when I read the post. If I understand correctly, an eternal company has no greater incentive than a non-profit (or Conjecture) to prevent near-term existential risk, only a slightly greater incentive to prevent long-term existential risk.

celeste · 11mo · 12

If someone believes existential risk is high, then they already have a strong incentive to prevent it, because if they don't, they will die. I'm confused as to how this would provide additional incentive.

Walmart has an incentive to be the institution that provides groceries at a reasonable price, not just for there to be reasonably priced groceries at all. Everyone already has an incentive for food to be affordable, usually so that they themselves can afford it, but also because of the adverse effects of starving a populace.

celeste · 11mo · 10

I assumed so, but I'm curious what the artifact was, specifically.

celeste · 11mo · 30

Why are most of this post's links blue, when they would ordinarily be green?

n=1, of course, but I relocated soon after and stopped being depressed.

And people with normal social instincts don't know why it makes the situation awkward for them?

I'm confused -- why did you care that Rachel was watching porn?

celeste · 2y · -2 · -6

"I think a more convincing version of the Lemoine thing would’ve been, if he was like, “What is the capital of Nigeria?” And then the large language model was like, “I don’t want to talk about that right now, I’d like to talk about the fact that I have subjective experiences and I don’t understand how I, a physical system, could possibly be having subjective experiences, could you please get David Chalmers on the phone?”"

I don't understand why this would be convincing. Why would a language model's output sounding like a claim to have qualia bear on whether the language model actually has qualia?

I agree that the output would deserve attention, since it would (probably) match the training data so poorly; to me, such a response would be strong evidence that the language model was using much more explicit/logical thought than I expect GPT-3 to be capable of, but not evidence of actual subjective experience.

What do you mean by "know how to interact with you"? What should the one-sentence introduction consist of in order to convey this?
