RationalObserver

Comments

Hmm, perhaps I was reading too much into it, then. I already do that part, largely because I hate memorization and can fairly easily retain facts when they are within a conceptual framework.

It's intuitive that better understanding a concept or idea leads to better updating, as well as a better ability to see alternative routes involving the idea, but it seemed like something more was being implied; it seemed like he was making a special point about some plateau or milestone for "containment" of an idea, and I didn't understand what that meant. But, as I said, I was probably reading too much into it. Thanks, this was a pleasant discussion :)

I haven't read any of that yet, but it sounds interesting. I'm commenting on articles as I read them, going through the sequences as they are listed on the sequences page.

I think it makes a practical difference for actually knowing when you understand something. The practical advice given is to "contain" the "source" for each thought. The trouble is that I don't see how to tell when such a thing has occurred, so the practical advice doesn't mean much to me. I don't see how to apply the advice given, but if I could I most definitely would, because I wish to understand everything I know. In part, writing my post was an attempt to make clear to myself why I didn't understand what was being said. I'm still kind of hoping I'm missing something important, because it would be awesome to have a better process for understanding what I understand.

The idea of a concept having or being a "source" seems odd to me. There are many ways of looking at the same concept or idea; oftentimes, the key to finding a new path is viewing an idea in a different way and seeing how it "pours", as you put it. The problem as I see it is that there are often many ways of deriving any particular idea, and no discernible reason to call any particular derivation the source.

I find that my mind seems to work like a highly interconnected network, and deriving something is like solving a system of equations, in that many missing pieces can be regenerated from the remaining pieces. My mind seems less like an ordered hierarchy and more like a graph in which ideas and concepts are often not individual nodes but highly connected subgraphs within the larger graph, so that there is the potential for vast overlap between concepts, no obvious ordering, and no obvious way to know when you truly "contain" all of a concept.

I do understand that, at least for math, the ability to derive something is a good measure of some level of understanding. But even within math there are many deep theorems or concepts that I can hardly claim to truly understand until I have analyzed (even if only briefly in my head) examples in which the theorem applies and (often more importantly, imo) examples in which it does not. Even then, a new theorem or a novel way of looking at the concept may enhance my understanding further. The more math I learn, the more connections I make between different and even seemingly disparate topics. I don't see how to differentiate between 1) "containing" a thought and new connections "changing" it and 2) gaining new connections such that you contain more of the "source" for the thought.

Just my two cents.

I only recently got involved with LessWrong, and I'd like to explicitly point out that this is a tangent. I made this account to make an observation about the following passage:

Some clever fellow is bound to say, "Ah, but since I have hope, I'll work a little harder at my job, pump up the global economy, and thus help to prevent countries from sliding into the angry and hopeless state where nuclear war is a possibility. So the two events are related after all." At this point, we have to drag in Bayes's Theorem and measure the charge of entanglement quantitatively. Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only carries a very tiny charge of entanglement, will still mess up your mapping.
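
(As a gloss on the quantitative point, in my own notation rather than anything from the post: in odds form, Bayes's Theorem gives

P(no war | hope) / P(war | hope) = [P(hope | no war) / P(hope | war)] * [P(no war) / P(war)],

so observing your own hopefulness can only shift the odds by the likelihood ratio P(hope | no war) / P(hope | war). If hopeful people are nearly as common in worlds headed for nuclear war as in worlds that are not, that ratio is close to 1 and the belief is entitled to shift only a tiny amount, nowhere near 20%.)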

First, let me say that I agree with your dismissal of the instance, but I think the idea suggests another argument that is interesting and somewhat related. Any individual's beliefs or actions have very little effect on the outcome, but, much as in explanations of the Prisoner's Dilemma or of why an individual should vote, a person's beliefs can be taken as representative of a class of individuals, and that class's beliefs can actually affect the probabilities.

Arguing that hope makes the world better and so staves off war still seems silly, as the effect would likely be very small; instead, I argue from the perspective of the "reasonableness" of actions. I read "pure hope" as revealing a kind of desperation, representing an unwillingness to consider nuclear war a reasonable action in nearly any circumstance. A widespread belief that nuclear war is an unreasonable action would certainly affect the probability of a nuclear war occurring, both for political reasons (the fallout from such a war) and statistical ones (government officials are drawn from the population), and so such a belief could have a noticeable effect on the possibility of a nuclear war. Furthermore, it can be argued that, for a flesh-and-blood emotional being with a flawed lens, viewing a result as likely could make it seem less unreasonable (more reasonable). As such, one possible argument for why nuclear war may happen later rather than earlier would look like this: nuclear war is widely regarded as an unreasonable action to take, and the clear potential danger of nuclear war makes this view unlikely to change in the foreseeable future.

Following this, an argument that it is beneficial to believe that nuclear war will happen later: Believing that nuclear war is likely could erode the seeming "unreasonableness" of the action, which would increase the likelihood of such a result. As a representative of a class of individuals who are thus affected, I should therefore believe nuclear war is unlikely, so as to make it less likely.

I am not claiming that I believe the conclusions of this argument, only that I found it interesting and wanted to share it. Note also that the second argument is not an argument for why nuclear war is unlikely, but rather an argument for why to believe it is unlikely, independent of the actual likelihood, which is obviously something a perfect rationalist should never endorse (and why the argument relies on not being a perfect rationalist). If anyone is interested in making them, I'd like to hear any rebuttals. Personally, I find the "belief erodes unreasonableness" step the most suspect, but I can't quite figure out how to argue against it without essentially saying "you should be a better rationalist, then".