Typo in this sentence: "And probably I we would have had I not started working on The Machine."
I was in a similar position, but I am now at a point where I believe ADHD is negatively affecting my life in a way that has overturned my desire not to take medication. It's hard to predict the future, but if you have a cheap or free way to get a diagnosis, I would recommend doing so for your own knowledge and to maybe make getting prescriptions in the future a smidge easier. I think it's really believable that in your current context there are no or nearly no negative repercussions to your ADHD if you have it, but it's hard to be certain of your future contexts, and even to know which aspects of your context would have to change for your symptoms to become (sufficiently) negative.
To start, I propose a different frame to help you. Ask yourself not "How do I get intuition about information theory?" but instead "How is information theory informing my intuitions?"
It looks to me like it's more central than Bayes' Theorem, and that it provides essential context for why and how that theorem is relevant for rationality.
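One concrete way to see that connection (a minimal sketch, using standard definitions rather than anything from the original comment): a Bayesian update in odds form is just multiplication by a likelihood ratio, and information theory reframes that as *adding bits of evidence*, the log of that ratio.

```python
import math

# Bayes in the language of information: updating odds multiplies by the
# likelihood ratio, which is the same as adding log2(ratio) "bits of evidence."
def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

prior_odds = 1 / 3        # P(H) = 0.25, i.e. odds of 1:3
lr = 4                    # the evidence is 4x likelier if H is true
posterior_odds = update_odds(prior_odds, lr)   # 4:3, i.e. P(H|E) = 4/7
bits = math.log2(lr)      # the same update, measured in bits: 2.0
print(posterior_odds, bits)
```

Viewed this way, "strength of evidence" gets an additive currency, which is (one reason) why the information-theoretic frame sits underneath the Bayesian one.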
You've already noticed that this is "deep" and "widely applicable." Another way of saying these things is "abstract," and abstraction reflects generalizations over some domain of experience. These generalizations are the exact sort of things which form heuristics and intuitions to apply to more specific cases.
To the meat of the question:
Step 1) grok the core technology (which you seem to have already started)
Step 2) ask yourself the aforementioned question.
Step 3) try to apply it to as many domains as possible
Step 4) as you run into trouble with 3), restart from 1).
When you find yourself looking for more of 1) from where you are now, I recommend at least checking out Shannon's original paper on information. I find his writing style to be much more approachable than average for what is a highly technical paper. Be careful when reading though, because his writing is very dense with each sentence carrying a lot of information ;)
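For step 1), the core quantity of Shannon's paper is entropy, and it's small enough to hold in a few lines (a minimal sketch using the standard definition, not anything specific to the paper's notation):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p_i * log2(p_i))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # a fair coin: exactly 1 bit per flip
print(entropy([0.9, 0.1]))  # a biased coin: ~0.47 bits, less surprising
print(entropy([1.0]))       # a certain outcome: 0 bits, no information
```

Playing with distributions like this is a quick way to build the intuition that information is a measure of surprise, which is the lens the rest of the paper looks through.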
There's a tag for gears-level, and in the original post it looks like everyone in the comments was confused even then about what "gears-level" meant; in particular, a lot of non-overlapping definitions were given. Notably, the author, Valentine, also expresses confusion.
The definition given, however, is:
1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?
I'm not convinced that's how people have been using it in more recent posts, though. I think the one upside is that "gears-level" is probably easier to teach than "reductionist," but for someone who already knows the word "reductionism," it is clearly simpler to just use that word. In the history of the tag, there was also previously "See also: Reductionism" with a link.
In the original post, I think Valentine was trying to get at something complex/not fully encapsulated by an existing word or short phrase, but it's not clear to me that it was well communicated to others. I would be down for tabooing "gears-level" as a (general) term on lesswrong. I can't think of an instance after the original where someone used the term "gears-level" to mean anything other than something more specific, like "mechanistic" or "reductionist."
That said, given that I don't think I really understand what was meant by "gears-level" in the original, I would ideally like to hear from someone who thinks they do (in particular, someone like Valentine or brook) before settling on suitable replacements. If there were no objections, maybe clean up the tag by removing it and/or linking to other related terms.
Early twin studies of adult individuals have found a heritability of IQ between 57% and 73%, with the most recent studies showing heritability for IQ as high as 80%.
I enjoyed this post. Were you inspired by HCH at all? Both occupy the same mental space for me.
I really enjoyed the post, but something that maybe wasn't the focus of it really stuck out to me.
i think i felt a little bit of it with Collin when i was trying to help him find a way to exercise regularly. the memory is very hazy, but i think the feeling was focused on the very long list of physical activities that were ruled out; it seemed the solution could not involve Collin having to tolerate discomfort. much like with Gloria and the "bees", i experienced some kind of emotional impulse to be separate from him, to push him away, to judge him to be inadequate or unworthy. (it wasn't super strong in that case, and i did actually succeed in helping him find an exercise routine that he stuck with for years.)
I would find it really useful if you wrote an explanation of how you achieved this in particular, as exercising regularly is one of those 'canonically difficult' things to do.
I appreciate this post a lot. In particular, I think it's cool that you establish a meta-frame, or at least a class of frames. Also, I've had debates in the past that definitely had reachability mismatches, and I hope that I'll be able to just link to this post in the future.
The most frequent debate mismatch I have is on a subject you mention: climate change. I generally take the stance of Clara: the way I view it, it's a coordination problem, and I model individual action, no matter how reachable, as having a completely insubstantial effect. In some sense, one could claim that all arguments should only be about the nature of actions that either individual involved in the conversation could actually take. On the other hand (and this is the stance I take in this scenario), communication can be used as a signal to establish consensus on the actions that others should take. I expect that this sort of mismatch could be the cause of reachability mismatches in general. One participant can prioritize the personal relevance of the conversation while another could prioritize arguments for the actions that have the most effect, whether or not anyone present can actually make them happen. Another way to view this problem is "working backwards" from the problem versus "working forwards" from the actions we can take.
To offer a deeper explanation, I personally view the piece as doing the following things:
I don't see any mention of confidence in the article, so I'm having trouble seeing how the Dunning-Kruger effect is related.
More importantly for me, I would like to take for granted what you believe the piece to be about so that we can focus on a specific question. So, Isusr is focusing on their own intelligence in this post; why do you find that problematic?
When someone is smarter than you, you cannot tell if they're one level above you or fifty because you literally cannot comprehend their reasoning.
I take issue with this claim, as I believe it to be vastly oversimplified. You can often, if not always, still comprehend their reasoning with additional effort on your part. By analogy, a device capable of performing 10 FLOPS can check the calculation of a device that performs 10 GFLOPS by taking an additional factor of 10^9 in time. Even in cases of extreme differences in ability, I think there can be simple methodologies for evaluating at levels above your own, though admittedly it can quickly become infeasible for sufficiently large differences. In my experience, I think I've been able to evaluate up to probably 2-3 standard deviations of g above my own, though I admittedly haven't taken the effort/social cost of asking these individuals their IQ as a proxy to semi-reliably validate my predictions.
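A related point, stronger than the slow-hardware analogy above: for many problems, *checking* an answer is structurally cheaper than *producing* it, so a weaker evaluator can still verify a stronger producer. A toy sketch (factoring chosen purely as an illustration, not anything from the comment being replied to):

```python
def factor(n):
    """Slow 'producer': find a nontrivial factorization by trial division."""
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return d, n // d
    return None  # n is prime

def verify(n, p, q):
    """Fast 'checker': confirm a claimed factorization with one multiplication."""
    return p * q == n and p > 1 and q > 1

n = 101 * 103                 # 10403, a product of two primes
claimed = factor(n)           # the expensive step: (101, 103)
print(verify(n, *claimed))    # the cheap step: True
```

The trial division costs on the order of sqrt(n) operations, while the check is a single multiplication, so the gap between producing and verifying grows with the problem size.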