Comments

I'm looking forward to checking out the responses you linked to.

One implication of the paper that I found interesting is that not every physical process implements every computation, or even every computation of comparable finite size. Thus, I find Chalmers' paper to be the most satisfactory response I've come across to Greg Egan's Dust Theory, previously discussed on LW here. (As others have anticipated, though, you do need to grant a coherent and not-too-liberal notion of reliable causation, but we seem to have ample evidence for that.)

For many scientific interests, I agree that it may not be necessary to describe or conceive of the mind in these computational terms. But if one is engaged in a grand reductionist project comparable to reducing neuropsychology to molecular biology to atomic theory, then, well, it helps to have the equivalent of a precise atomic theory to reduce to. For the purposes of my philosophical research, I'm reducing metaethics to facts about the cognitive architecture of our decision algorithms, which in turn are reduced to certain kinds of instantiated computations, which are reduced à la Chalmers to physical processes, which I take to be modelled by Pearl-style causal models, allowing us to be otherwise agnostic about the level of explanation.
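
To gesture at what I have in mind for that last step, here is a minimal Python sketch of a Pearl-style structural causal model. The variables and mechanisms are hypothetical placeholders of my own; the point is only that causal structure can be specified while staying agnostic about the level of description:

    # Minimal sketch of a Pearl-style structural causal model (SCM).
    # The variables and mechanisms below are hypothetical placeholders;
    # the point is only that causal structure can be specified without
    # committing to any particular level of physical description.

    class SCM:
        def __init__(self, equations):
            # equations: dict mapping each endogenous variable to a pair
            # (parents, mechanism), listed in causal (topological) order.
            self.equations = equations

        def solve(self, exogenous, interventions=None):
            """Compute every variable's value, honoring do()-style interventions."""
            interventions = interventions or {}
            values = dict(exogenous)
            for var, (parents, mechanism) in self.equations.items():
                if var in interventions:
                    values[var] = interventions[var]  # do(var := value)
                else:
                    values[var] = mechanism(*(values[p] for p in parents))
            return values

    # Toy model: a stimulus causes a neural state, which causes a behavior.
    model = SCM({
        "neural_state": (["stimulus"], lambda s: 2 * s),
        "behavior":     (["neural_state"], lambda n: n > 1),
    })
    print(model.solve({"stimulus": 1}))                                     # observe
    print(model.solve({"stimulus": 1}, interventions={"neural_state": 0}))  # intervene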

You're right that I was being intentionally vague. For what it's worth, I was trying to drop some hints targeted at some who might be particularly helpful. If you didn't notice them, I wouldn't worry about it. This is especially true if we haven't met in person and you don't know much about me or my situation.

Hi everyone!

I'm John Ku. I've been lurking on LessWrong since its beginning. I've also been following MIRI since around 2006, and I attended the first CFAR mini-camp.

I became very interested in traditional rationality when I used analytic philosophy to think my way out of a very religious upbringing in what many would consider to be a cult. After I became an atheist, I set about rebuilding my worldview and focusing especially on metaethics to figure out what remains of ethics without God.

This process landed me in the University of Michigan's Philosophy PhD program, during which time I read Kurzweil's The Singularity Is Near. This struck me as very important, and I quickly followed a chain of references and searches to discover what was to become MIRI and the LessWrong community. Partly due to LessWrong's influence, I dropped out of my PhD program to become a programmer and entrepreneur; I now live in Berkeley and work as CTO of an organic-growth startup.

I have, however, continued my philosophical research in my spare time, focusing largely on metaethics, psychosemantics and metaphilosophy. I believe I have worked out a decent initial overview of how to formalize a friendly utility function. The major pieces include:

  • adapting David Chalmers' theory of when a physical system instantiates a computation (see the toy sketch after this list),
  • formalizing a version of Daniel Dennett's intentional stance to determine when and which decision algorithm is implemented by a computation, and
  • modelling how we decide how to value by positing (possibly rather thin and homuncular) higher-order decision algorithms, which, according to my metaethics, are what ethical facts get reduced to.
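
For the first piece, here is the toy sketch promised above. It is my own drastic simplification of Chalmers' condition, restricted to a bare finite-state automaton with no inputs or outputs, but it conveys why implementation is a non-trivial constraint, and hence why not every physical process implements every computation:

    # Toy sketch of a Chalmers-style implementation condition: a physical
    # system implements a finite-state automaton (FSA) when some mapping
    # from physical states to formal states makes the physical transition
    # structure mirror the formal one. This simplifies away inputs/outputs
    # and the combinatorial-state machinery of Chalmers' actual account;
    # it is meant only to convey the shape of the condition.

    def implements(physical_transitions, formal_transitions, mapping):
        """physical_transitions / formal_transitions: dicts of state -> next state.
        mapping: dict from physical states to formal states."""
        for p_state, p_next in physical_transitions.items():
            f_state = mapping[p_state]
            # Reliable causation: whenever the system occupies a state mapped
            # to f_state, it must transition to a state mapped to f_state's
            # formal successor.
            if mapping[p_next] != formal_transitions[f_state]:
                return False
        return True

    # Two physical states realize each formal state, and the physical
    # dynamics respect the formal structure, so implementation holds.
    physical = {"p0": "p1", "p1": "p2", "p2": "p3", "p3": "p0"}
    formal = {"A": "B", "B": "A"}
    mapping = {"p0": "A", "p1": "B", "p2": "A", "p3": "B"}
    print(implements(physical, formal, mapping))  # True

    # Change the mapping and the condition fails -- which is why most
    # physical processes do not implement most computations.
    bad_mapping = {"p0": "A", "p1": "B", "p2": "B", "p3": "A"}
    print(implements(physical, formal, bad_mapping))  # False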

Since I think much of philosophy boils down to conceptual analysis, and I've also largely worked out how to assign an intensional semantics to a decision algorithm, I think my research also has the resources to metaphilosophically validate that the various philosophical propositions involved are correct. I hope to fill in many remaining details in my research and find a way to communicate them better in the not-too-distant future.

Compared to others, I think of myself as having been focused more on object-level concerns than on meta-level instrumental-rationality improvements. But I would like to thank everyone for their help, which I'm sure I've absorbed over time through LessWrong and the community. And if any attempts to help have backfired, I would assume it was due to my own mistakes.

I would also like to ask for any anonymous feedback, which you can submit here. Of course, I would greatly appreciate any non-anonymous feedback as well; an email to ku@johnsku.com would be the preferred method.

I apologize for the embarrassing amount of time it has taken to respond to this. This was posted before the negotiations were actually finalized, which took several more weeks. Then, within a matter of months, I ended up returning all of the equity in exchange for a computer and a waived referral fee. At this point, I assume any further details are moot.

You're right that Putnam's point is stronger than what I initially made it out to be, but I think my broader point still holds.

I was trying to avoid this complication, but with two-dimensional semantics we can disambiguate further and distinguish between the C-intension and the A-intension (again, see the Stanford Encyclopedia of Philosophy article for an explanation). What I should have said is that while it makes sense to be externalist about extensions and C-intensions, we can still be internalist about A-intensions.
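
As a toy illustration of the distinction (my own simplification, with stipulated Twin Earth worlds rather than the literature's formal apparatus), the two intensions can be modelled as functions over worlds:

    # Toy model of the A-intension / C-intension distinction, using
    # Putnam's Twin Earth 'water' case. A world is represented here by
    # whatever stuff plays the watery role in it; the worlds and the
    # representation are illustrative only.

    earth = {"watery_stuff": "H2O"}
    twin_earth = {"watery_stuff": "XYZ"}

    def a_intension(world_as_actual):
        # A-intension (epistemic): evaluate 'water' taking the given
        # world to be the actual one. Fixed by the speaker's narrow
        # psychology, so we can be internalist about it.
        return world_as_actual["watery_stuff"]

    def c_intension(actual_world):
        # C-intension (subjunctive): fix the referent in the actual
        # world, then hold it rigid across counterfactual worlds. This
        # depends on the external environment, so externalism holds here.
        referent = actual_world["watery_stuff"]
        return lambda counterfactual_world: referent

    print(a_intension(twin_earth))         # XYZ -- varies with world-as-actual
    print(c_intension(earth)(twin_earth))  # H2O -- rigid once actuality is fixed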

I think many of the other commenters have done an admirable job defending Putnam's usage of thought experiments, so I don't feel a need to address that.

However, there also seems to be some confusion about Putnam's conclusion that "meaning ain't in the head." It seems to me that this confusion can be resolved by disambiguating the meaning of 'meaning'. 'Meaning' can refer to either the extension (i.e. referent) of a concept or its intension (a function from the context and circumstance of a concept's usage to its extension). The extension clearly "ain't in the head," but the intension is.
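
To make the disambiguation concrete, here is a minimal toy model assuming Putnam's Twin Earth setup; representing a context by the stuff playing the watery role in it is my own illustrative choice:

    # Toy illustration: model the intension of 'water' as a function from
    # the context of usage to an extension. The function itself is shared
    # by Earth and Twin Earth speakers -- that much is "in the head" --
    # while the extension it returns depends on the environment.

    def water_intension(context):
        # Return whatever stuff actually plays the watery role (clear,
        # drinkable, fills the lakes) in the speaker's context.
        return context["watery_stuff"]

    earth = {"watery_stuff": "H2O"}
    twin_earth = {"watery_stuff": "XYZ"}

    print(water_intension(earth))       # H2O
    print(water_intension(twin_earth))  # XYZ: same intension, different extension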

The Stanford Encyclopedia of Philosophy article on Two-Dimensional Semantics has a good explanation of my usage of the terms 'intension' and 'extension'. Incidentally, as someone with a lot of background in academic philosophy, I think making two-dimensional semantics a part of LessWrong's common background knowledge would greatly improve the level of philosophical discussion here as well as reduce the inferential distance between LessWrong and academic philosophers.

If the difficulty of a physiological problem is mathematical in essence, ten physiologists ignorant of mathematics will get precisely as far as one physiologist ignorant of mathematics and no further.

Norbert Wiener