LLM cognition is probably not human-like