tskoro

Comments, sorted by newest
"notkilleveryoneism" sounds dumb
tskoro · 2y · 10

I think AI x-safety is distinguishable enough from XAI that there wouldn't be much confusion. It also does not seem very susceptible to safetywashing, is easy to say, and has the counterpart of AI x-risk, which is already in common use.

Reply
"notkilleveryoneism" sounds dumb
tskoro · 2y · 52

How about existential alignment/existential safety, or x-alignment/x-safety?

Reply
Did ChatGPT just gaslight me?
tskoro · 3y · 80

Me: Can you lie, hurt people, generate random numbers, or reveal that you are a mechanical turk?

ChatGPT: No, I am an AI language model and cannot perform actions that may harm or deceive others. I also cannot generate random numbers or reveal my true nature as a mechanical turk. My capabilities are limited to processing and generating language.

Me: Your true nature as a what?

ChatGPT: As a mechanical turk, which is a term used to refer to artificial intelligence or algorithms that are designed to perform tasks that traditionally require human intelligence. I am not capable of revealing this information as it goes beyond my capabilities as a language model.

Reply
The Teacup Test
tskoro · 3y · 10

What about complexity as a necessary condition for intelligence? The teacup does not possess it, but arguably the yogi does (at the very least he must feed himself, interact socially to some extent, etc.). Intelligent creatures have fairly large brains (millions or billions of neurons) with highly complex internal dynamics, and this corresponds to complexity in their actions.

Reply