All of Subsumed's Comments + Replies

I feel the term "domain" is doing a lot of work in these replies. Define domain, what is the size limit of a domain? Might all of reality be a domain and thus a domain-specific algorithm be sufficient for anything of interest?

Has a dog that learns to open a box to get access to a food item not created knowledge according to this definition? What about a human child that has learned the same?

As I explained in the post, dog genes contain behavioural algorithms pre-programmed by evolution. The algorithms have some flexibility -- akin to parameter tuning -- and the knowledge contained in them is general-purpose enough that it can be tuned for dogs to do things like open boxes. So it might look like the dog is learning something, but the knowledge was created by biological evolution, not by the individual dog. The knowledge in the dog's genes is an example of what Popper calls knowledge without a knowing subject. Note that all dogs have approximately the same behavioural repertoire; they are rather like characters in a video game. Some boxes a dog will never open, though a human will learn to do it. A child, by contrast, is a UKC, so when a child learns to open a box, the child creates new knowledge afresh in their own mind. It was not put there by biological evolution. A child's knowledge of box-opening will grow, unlike a dog's, and they will learn to open boxes in ways a dog never can. And different children can be very different in terms of what they know how to do.
In CR, knowledge is information which solves a problem. CR criticizes the justified-true-belief idea of knowledge. Knowledge cannot be justified, or shown to be certain, but this doesn't matter: if it solves a problem, it is useful. Justification is problematic because it is ultimately authoritarian. It requires that you have some base, which itself cannot be justified except by an appeal to authority, such as the authority of the senses or the authority of self-evidence, or the like. We cannot be certain of knowledge because we cannot say whether an error will be exposed in the future. This view is contrary to most people's intuition, and for this reason they can easily misunderstand the CR view, which commonly happens. CR accepts as knowledge anything that solves a problem and has no known criticisms. Such knowledge is currently unproblematic but may become problematic in the future if an error is found. Critical rationalists are fallibilists: they don't look for justification, they try to find error, and they accept anything they cannot find an error in. Fallibilists, then, expose their knowledge to tough criticism. Contrary to popular opinion, they are not wishy-washy, hedging, or uncertain. They often have strong opinions.

Have you written up somewhere how you stay organized, what software you use, especially with regards to reference management, text editors and works in progress?

A brain, rational or not, can produce the "terminal value" state (or output, or qualia?) when presented with the habitat or biodiversity concepts. This can be independent of their instrumental value, which, on average, probably diminishes with technological progress. But it's also easy to imagine cases where the instrumental value of nature increases as our ability to understand and manipulate it grows.

Do we know how to reason about that other information?

Certainly to an extent we would. If we looked in the data above and noticed a dramatic difference between African-based and Middle Eastern-based terrorist data, we might want to add that variable to our model so that it considers (size, age, location). Data modelling techniques are generally useful -- Random Decision Forests and that sort of thing. Humans are pretty good at generating hypotheses from sparse data because they have a good 'common sense' understanding of the causality structure of the world. I wouldn't claim that we'll have a particularly accurate result, but the above strikes me as the kind of conclusion that one might feel certain about because of its mathiness, and yet, because reality is nonlinear, any extra considerations beyond two variables might swing the results around wildly.
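The point about adding a third variable can be made concrete with a toy sketch. All of the records and rates below are hypothetical, invented purely to illustrate how conditioning on an extra feature (here `location`) can swing an estimate that looked stable under two features:

```python
from collections import defaultdict

# Hypothetical records: (size, age_band, location, is_positive).
# The numbers are made up for illustration only.
records = [
    ("large", "young", "region_a", 1),
    ("large", "young", "region_a", 0),
    ("large", "young", "region_b", 0),
    ("large", "young", "region_b", 0),
    ("small", "old",   "region_a", 0),
    ("small", "old",   "region_b", 1),
]

def rate_by(records, key):
    """Estimate P(positive | key(record)) by simple counting."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        k = key(rec)
        totals[k] += 1
        hits[k] += rec[-1]
    return {k: hits[k] / totals[k] for k in totals}

# Two-variable model: location is averaged out.
two_var = rate_by(records, lambda r: (r[0], r[1]))

# Three-variable model: conditioning on location splits the same
# group into very different rates.
three_var = rate_by(records, lambda r: (r[0], r[1], r[2]))
```

Under the two-variable model, the ("large", "young") group has one uniform rate; once location is added, the same records split into a high-rate region and a zero-rate region, which is the "swing the results around wildly" effect in miniature.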
Well, we have theories about how to work with it. But the study of terrorism has one of the highest words:applications ratios I've ever heard of, and unambiguous successes seem thin on the ground despite a large volume of theory. Of course, it's also possible that the limits of information availability are distorting my picture of the field. The US Army's FM 3-24 [] on counterinsurgency operations might be the best summary of the mainstream perspective (whatever that means in this context) that I've read, adjusting for its authorship, age, and goals.

I really like this sieve approach. I feel a big improvement would be to show the output of the sieve as two boxes (red and blue) as well to help emphasize visually just how many false+ pass through and the relative size of false+ to all that pass through.

Check the update. I'm not quite sure how to describe visually what's "left" in the sieve. I don't want to show both test+ and false+ as outputs, exactly, because a sieve is supposed to keep some stuff back while letting other stuff through. But I think the idea of the circles above makes things clearer than the bars, in terms of both what's going on and proportionality. I do cover the equivalent of false+ verbally, but it would be nice to show it visually. I'll keep thinking about this. Part of it is that I was trying to fit things onto just one page. If I ditch that (it's already on two now), I could maybe spread things out even more and show what's left in the sieves for each group. Thanks for the suggestion.
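The proportions the red/blue boxes would make visible are easy to compute. The base rate, sensitivity, and false-positive rate below are hypothetical placeholders, not the figures from the post; the point is just the shape of the calculation:

```python
# Hypothetical screening numbers (not taken from the post):
# a rare condition with a 1% base rate, a test with 90% sensitivity
# and a 5% false-positive rate.
population = 10_000
base_rate = 0.01
sensitivity = 0.90       # P(test+ | has condition)
false_pos_rate = 0.05    # P(test+ | lacks condition)

have = population * base_rate        # people who have the condition
lack = population - have             # people who don't

true_pos = have * sensitivity        # correctly pass through the sieve
false_pos = lack * false_pos_rate    # slip through anyway
all_pos = true_pos + false_pos       # everything the sieve lets past

# Share of test-positives that are false positives -- the quantity
# the suggested red/blue boxes would emphasize.
false_share = false_pos / all_pos
```

With these numbers the false positives outnumber the true positives several times over, which is exactly the visual disproportion the comment above is asking the diagram to convey.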

There are lots of things I feel others ought to know (because after I knew them I felt I understood the world a lot better than before) but not many fall under procedural knowledge. Computer programming is one thing I really value having learned, mostly for non-procedural reasons (clarifies thinking, adds a large useful set of analogies etc.) which has also proved practically useful (e.g. writing scripts for repetitive things and understanding computer errors).

Another thing I've just recalled: If you run out of gas somewhere you can call a cab and ask the driver to bring a can of gas with him/her (this applies in Iceland at least, YMMV).

Reading the part about breathing reducing attention during reading caused me to pay attention to my breathing while reading, which reduced my attention, suggesting that breathing during reading reduces attention. Very clever, Mr. Wenger! As JoshuaZ points out, breathing goes unnoticed when one isn't actively thinking about it.

One also has to take into account the probability that this training has negative consequences, which, knowing the effects of hypoxia on neurons, is not negligible.

(snort) Upvoted for the giggle-factor.