I down-voted this comment because it is a clever ploy for karma that rests on exploiting LessWrongers' sometimes unnecessary enthusiasm for increasingly abstract and self-referential forms of reasoning but otherwise adds nothing to the conversation.
Twist: By "this comment" I actually mean my comment…
I am an active GitHub and Stack Overflow contributor for R, and I would be willing to coordinate. Send me an email: rkrzyz at gmail
So you are saying that explaining something is equivalent to constructing a map that bridges an inferential distance, whereas explaining something away is refactoring thought-space to remove an unnecessary gerrymandering?
It feels good knowing you changed your mind in response to my rebuttal.
I disagree with your preconceptions about the "anti" prefix. For example, an anti-hero is certainly a hero. I think it is reasonable to consider "anti" a contextually overloaded semantic negator whose scope need not be the naive interpretation: anti-X can refer to "opposite of X" or "opposit…
I got a frequent LessWrong contributor a programming internship this summer.
It is as if you're buying / shorting an index fund on opinions.
Strong AI could fail if there are limits to computational integrity in sufficiently complex systems, analogous to the heat-dissipation and quantum effects that limit transistor sizes. For example, perhaps we rarely see these limits in humans because their frequency is one in a thousand human-thought-years, and when they…
The possibility of an "adaptation" being in fact an exaptation or even a spandrel is yet another reason to be incredibly careful about importing teleology into a discussion of evolutionarily derived mechanisms.
The question in the subject line is too dense and should be partitioned. Some ideas for auxiliary questions:
- Are there existing attempts to classify [parenting styles](http://en.wikipedia.org/wiki/Parenting_styles)? (*So that we may [not re-invent tread tracks](http://lesswrong.com/lw/3m3/the_n…