I think it's important to distinguish between the following two claims:
1. If GPT-3 has a world model, that model is inaccurate.
2. GPT-3 has no world model.
Claim 1 is certainly true, if only because real-world agents - including humans - are fallible (and perfectly accurate world models are not essential for competent practice). There's no reason to suppose that GPT-3 would be any different.
I guess one might argue that safe applications of GPT-3 require it to have a world model at least as accurate, in every domain, as individual human models of the world. Marcus seems to believe that this can't be achieved by LLMs using statistical...