Thank you for the post! You raise some great questions. I'm not an expert, but I'd like to check something: is it possible that Pinker and Marcus are trying to distinguish blabbering from well-reasoned text? From that perspective, BERT or GPT-2 would appear more reasoned when their training text is more focused, and less reasoned the broader their training text becomes. It's just a hypothesis.