# All of colinmarshall's Comments + Replies

Sounds like the concept of "agility" could be generalized richly indeed.

This is an important consideration. I just can't figure out how to test it.

Furthermore, it is possible to let the head-land simulations run and remain emotionally abstracted from the results.

This is wise. Getting the necessary distance would indeed work, as would improving head-land accuracy, though I'm dubious about the extent to which it can be improved. In any case, I'm not quite to either goal myself yet. And if your own head-land is making accurate predictions, that's a good thing; I just can't get those kinds of results out of mine. Yet.

MrHen · 13y: Another random comment: Head-land is large and can be split into distinct patterns of behavior. Simulations about potential mates are probably going to be on different emotional circuits than strategizing about chess. (Unless, of course, you play chess differently than I do...) My hunches tell me that the chess simulations are going to be a little more accurate. Rationality certainly helps when testing the accuracy of head-land. My math teacher used to warn me about turning my brain off when working through math problems. If the answer didn't make intuitive sense, check my work for bizarre mistakes. It turns out my head-land simulation of basic math problems is relatively accurate. Knowing its level of accuracy is an excellent tool for determining if we're in the wrong jungle [http://lesswrong.com/lw/12c/its_all_in_your_headland/yzw].

I second this request.

I'd like to make it that, but we'll see what I can do.

Nah; it was supposed to read "in which I construct." I just fumbled the editing.

Thank y'kindly. I upvote any and all comments that correct mistakes that would've made me look like a sub-lingual doof otherwise.

Thanks; duly noted. I plan to write a few posts on the "road testing" of Less Wrong and Less Wrong-y theories about rationality and the defeat of akrasia, so these are helpful pointers.

Thanks. I expect most of my posts here will be more Useful Practice than True Theory, but only just; my hope is that the Less Wrong community won't spare the downvotes if I stray too far from rationality and too close to self-help territory.

## Missing the Trees for the Forest

You're absolutely right; it's the overuse of narrative we need to be concerned about. Humanity can't get by without it, but one inch too much and we're in self-delusion territory.

thomblake · 13y: Agreed. At least postmodernism got something right.

## Deciding on our rationality focus

We seem to have a population here that already cares, and deeply, about rationality. I trust them to upvote whatever has a lot to do with rationality and downvote whatever has too little to do with it. In fact, I'd go so far as to submit that we're doing something wrong if there aren't enough off-topic-ish, net-negative-karma posts; their absence would show that posters aren't taking quite enough risks as regards widening rationality's domain. I'm wary of the PUA and overly self-help-y talk, sure, but seeing nothing like it around here would be the dead canary in the coal mine.

## Missing the Trees for the Forest

The more time I spend thinking about it, the more I come to realize that Narrative Is the Enemy, at least where attempts to see and reason clearly are concerned. One heuristic has proven surprisingly useful time and time again, in efforts of rationality as well as creativity: don't try to deliberately tell a story.

thomblake · 13y: But narrative is our primary means for understanding; it's where we get the context for situating our ideas. Even the 'self' is a story we tell ourselves, to give narrative unity to the disparate actions we take. While many philosophers have written about this in recent years, I shall point to the one most likely to be respected here. Dan Dennett: The Self as a Center of Narrative Gravity [http://cogprints.org/266/]

## Debate: Is short term planning in humans due to a short life or due to bias?

I would submit that it's less an issue of the biologically-imposed limit to our life spans than the biologically-imposed limit to our predictive abilities, to the amount of "moving part" data our brains can work with simultaneously. Considering that we only seem to achieve anything like accuracy when predicting events on a very, very small scale of both time and complexity, one might argue that we actually plan in too long a term.

## Formalized math: dream vs reality

More expansion on the possibilities of such a solved computational mathematics might be in order here; even mathematicians will have to crank their imaginations a bit to think through the specific advantages afforded by the formalized-computer-mathematics future.