I'm sure this talking point has been done to death, but if it's true that ChatGPT (in an experimental setting) was capable of deceiving a TaskRabbit worker into solving a CAPTCHA for it, and ChatGPT is only a language model, then we have already far surpassed the kinds of capabilities Pinker has been dismissing for years.

It's similar to his writing on how language models will always be bad at the nuances of translating between languages. I study Indonesian and Spanish, and recently had a conversation on character.ai that switched between them. That would have been unimaginable four years ago.

I think Pinker has an idea of how AI can and can't operate that is rapidly becoming out of date, which is striking for someone so publicly vocal on the topic.

It kind of feels irresponsible to downplay safety issues from that position.