All of shenpen's Comments + Replies

Attention Lurkers: Please say hi


And I wonder why the word "Rationalist" has multiple meanings. You are clearly a Rationalist in one sense of the word, but in this other sense you are not (thankfully, since it is not good to be a Rationalist in that sense).

Would you perhaps write a short post about it? Thanks in advance.

0SilasBarta11y Hm, the article in the link raises some interesting issues, given the goals of this site. People here want to develop artificial general intelligences (AGIs), which involves specifying, unambiguously, what you want a machine to do, in a way that makes it as creative and capable as humans are (or more so). Oakeshott refers to an attempt to instruct (humans) by pure reference to theory-driven rules as "rationalism" and considers it a huge error.

Now, both LWers and Oakeshott would agree that to learn about the world you have to interact with it, and the more, the better. But you can see the conflict between his worldview and that of this site's frequenters. While Oakeshottians dismiss any kind of non-apprenticed teaching as futile, those here wish to use a deep theoretical understanding of the lawfulness of intelligence to create beings that can learn under different restrictions than humans have, and also to break down the "tacit knowledge" humans use in complex tasks into steps so simple a machine could follow them. Historically, the latter paradigm has been rife with failures next to its ambitious promises, but in recent decades it has made impressive strides in doing things that "of course" a machine could never do because of the "infinite" number of rules it would need to learn.

Also, Oakeshott's critique is reminiscent of the discussion we had recently about how much (useful) knowledge you can convey to someone merely through explanation, without passing on the experience set. I supported the view that people typically overestimate the extent of the knowledge that can't be explained, and give up too easily on putting it in communicable form.

Btw, the author, Gene Callahan, is an antireductionist I've argued with in the past (that's a link to a part of an exchange I moved to my blog when he kept deleting my comments).
3JGWeissman11y From Newcomb's Problem and Regret of Rationality: "If it turns out that the techniques we advocate predictably lose, even though we thought they were reasonable, even though they came from our best mathematical investigation into what a rational agent should do, then we will conclude that those techniques are not actually rational, and we should figure out something else."
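The "predictably lose" point can be made concrete with a quick expected-value comparison for Newcomb's problem. A minimal sketch, assuming the conventional payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and an illustrative predictor accuracy of 0.99 — both figures are assumptions, not from the quoted post:

```python
def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected dollars for a strategy, given the predictor's accuracy.

    Assumed payoffs: $1,000,000 in the opaque box (filled only if the
    predictor foresaw one-boxing), $1,000 always in the transparent box.
    """
    million, thousand = 1_000_000, 1_000
    if one_box:
        # With probability `accuracy` the predictor foresaw one-boxing
        # and filled the opaque box; otherwise it is empty.
        return accuracy * million + (1 - accuracy) * 0
    else:
        # With probability `accuracy` the predictor foresaw two-boxing
        # (opaque box empty); otherwise both boxes pay out.
        return accuracy * thousand + (1 - accuracy) * (million + thousand)

p = 0.99  # illustrative predictor accuracy
print(expected_payoff(True, p))   # one-boxing
print(expected_payoff(False, p))  # two-boxing
```

Under these assumed numbers, one-boxing has the higher expected payoff for any accuracy much above chance, which is the sense in which a decision theory that insists on two-boxing "predictably loses."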