In the day I would be reminded of those men and women,
Brave, setting up signals across vast distances,
Considering a nameless way of living, of almost unimagined values.
Edit: made it a post.
On my current models of theoretical[1] insight-making, the beginning of an insight will necessarily—afaict—be "non-robust"/chaotic. I think it looks something like this:
This maps to a fragile/chaotic high-energy "question phase" during which the violation of expectation is maximized (in order to adequately propagate the implications of the original discrepancy), followed by a compressive low-energy "solution phase" where correctness of expectation is maximized again.
In order to make this work, I think the brain is specifically designed to avoid being "robust"—though here I'm using a narrower definition of the word than I suspect you intended. Specifically, there are several homeostatic mechanisms which make the brain-state hug the boundary of a phase transition as tightly as possible. In other words, the brain maximizes the dynamic correlation length between neurons[4], which is where they have the greatest ability to influence each other across long distances (aka "communicate"). This is called the critical brain hypothesis, and it suggests that good thinking is necessarily chaotic in some sense.
Another point is that insight-making is anti-inductive.[5] Theoretical reasoning is a frontier that's continuously being exploited based on the brain's native Value-of-Information-estimator, which means that the forests with the highest naively-calculated-VoI are also less likely to have any low-hanging fruit remaining. What this implies is that novel insights are likely to be very narrow targets—which means they could be really hard to hold on to for the brief moment between initial hunch and build-up of salience. (Concise handle: epistemic frontiers are anti-inductive.)
I scope my arguments only to "theoretical processing" (i.e. purely introspective stuff like math), and I don't think they apply to "empirical processing".
Harmonic (red) vs inharmonic (blue) waveforms. When a waveform is harmonic, efferent neural ensembles can quickly entrain to it and stay in sync with minimal metabolic cost. Alternatively, in the context of predictive processing, we can say that "top-down predictions" quickly "learn to predict" bottom-up stimuli.
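(As a rough illustration of the distinction, not the figure's actual data: a harmonic waveform is a sum of partials at integer multiples of one fundamental, so it repeats with a short period that ensembles can lock onto, whereas an inharmonic one uses incommensurate partials. The frequencies and amplitudes below are made up.)

```python
import numpy as np

t = np.linspace(0, 1.0, 44100, endpoint=False)  # 1 second sampled at 44.1 kHz
f0 = 220.0                                       # arbitrary fundamental (Hz)

# Harmonic: partials at exact integer multiples of f0, so the sum is periodic.
harmonic = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3, 4))

# Inharmonic: partials at incommensurate ratios, so no short repeating period exists.
inharmonic = sum(np.sin(2 * np.pi * f0 * r * t) / k
                 for k, r in enumerate((1.0, 2.13, 3.41, 4.87), start=1))
```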
I basically think musical pleasure (and aesthetic pleasure more generally) maps to 1) the build-up of expectations, 2) the violation of those expectations, and 3) the resolution of those violated expectations. Good art has to constantly balance between breaking and affirming automatic expectations. I think the aesthetic chills associated with insights are caused by the same structure as appoggiaturas—the one-period delay of an expected tone at the end of a highly predictable sequence.
I highly recommend this entire YT series!
I think the term originates from Eliezer, but Q Home has more relevant discussion on it—also I'm just a big fan of their chaotic-optimal reasoning style in general. Can recommend! 🍵
personally, I try to "prepare decisions ahead of time". so if I end up in a situation where I spend more than 10s actively prioritizing the next thing to do, something went wrong upstream. (the previous statement is an exaggeration, but it's in the direction of what I aspire to learn)
as an example, here's how I've summarized the above principle to myself in my notes:
(note: these titles are very likely to cause misunderstanding if you don't already know what I mean by them; I try to avoid optimizing my notes for others' viewing, so I'll never bother caveating to myself what I'll remember anyway)
I basically want to batch-process my high-level prioritization, because I notice that I'm very bad at taking a bird's-eye perspective when I'm deep in the weeds of some particular project/idea. when I'm doing something with many potential rabbit-holes (e.g. programming/design), I set a timer (~35m, but it varies) to force myself to step back and reflect on what I'm doing (at the moment I do this less than once a week, but I do an alternative which takes longer to explain).
I'm probably wasting 95% of my time on unnecessary rabbit-holes that could be obviated if only I'd spent more Manual Effort ahead of time. there's ~always a shorter path to my target, and it's easier to spot from a higher vantage point/perspective.
as for figuring out what and how to distill…
Good points, but I feel like you're a bit biased against foxes. First of all, they're cute (see diagram). You didn't even mention that they're cute, yet you claim to present a fair and balanced case? Hedgehog hogwash, I say.
Anyway, I think the skills required for forecasting vs model-building are quite different. I'm not a forecaster, but if I were, I would try to read much more, and more widely, so I'm not blindsided by stuff I didn't even know that I didn't know. Forecasting is caring more about the numbers; model-building is caring more about how the vertices link up, whatever their weights. Model-building is for generating new hypotheses that didn't exist before; forecasting is for discriminating between hypotheses that already exist.
I try to build conceptual models, and afaict I get much more than 80% of the benefit from 20% of the content that's already in my brain. There are some very general patterns I've thought so deeply on that they provide usefwl perspectives on new stuff I learn weekly. I'd rather learn 5 things deeply, and remember their sub-patterns so well that they fire whenever I see something slightly similar, than learn 50 things so shallowly that the only time I think about them is when I see the flashcards. Knowledge not pondered upon in the shower is no knowledge at all.
This is one of the most important reasons why hubris is so undervalued. People mistakenly think the goal is to generate precise probability estimates for frequently-discussed hypotheses (a goal in which deference can make sense). In a common-payoff-game research community, what matters is making new leaps in model space, not converging on probabilities. We (the research community) are bottlenecked by insight-production, not marginally better forecasts or decisions. Feign hubris if you need to, but strive to install it as a defense against model-dissolving deference.
Coming back to this a few showers later.
An "isthmus" and a "bottleneck" are opposites. An isthmus provides a narrow but essential connection between two things (landmass, associations, causal chains). A bottleneck is the same except the connection is held back by its limited bandwidth. In the case of a bottleneck, increasing its bandwidth is top priority. In the case of an isthmus, keeping it open or discovering it in the first place is top priority.
I have a habit of making up pretty words for myself to remember important concepts, so I'm calling it an "isthmus variable" when it's the thing you need to mentally keep track of in order to connect input with important task-relevant parts of your network.
When you're optimising the way you optimise something, consider that "isthmus variables" is an isthmus variable for this task.
I'm curious exactly what you meant by "first order".
Just that the trade-off is only present if you think of "individual rationality" as "let's forget that I'm part of a community for a moment". All things considered, there's just rationality, and you should do what's optimal.
First-order: Everyone thinks that maximizing insight production means doing IDA* over the idea tree. Second-order: Everyone notices that everyone will think that, so it's no longer optimal for maximizing insights produced overall. Everyone wants to coordinate with everyone else in order to parallelize their search (assuming they care about the total sum of insights produced). You can still do something like IDA* over your own sub-branches.
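For concreteness, here's a minimal sketch of the IDA* idea (iterative-deepening A*) run over a toy "idea tree". The tree, costs, and heuristic values are invented purely for illustration; nothing here is from the original comment.

```python
import math

# Toy "idea tree": each node maps to (child, cost-to-reach-child) pairs.
tree = {
    "start": [("A", 1), ("B", 2)],
    "A": [("A1", 2), ("A2", 3)],
    "B": [("B1", 1)],
    "A1": [], "A2": [], "B1": [],
}
goal = "A2"
heuristic = {"start": 3, "A": 2, "B": 3, "A1": 2, "A2": 0, "B1": 2}

def ida_star(node):
    """Iteratively deepen the f = g + h threshold until the goal is found."""
    threshold = heuristic[node]
    while True:
        result = search(node, g=0, threshold=threshold)
        if isinstance(result, list):   # found a path to the goal
            return result
        if result == math.inf:         # tree exhausted, no path exists
            return None
        threshold = result             # retry with the next-smallest f-value seen

def search(node, g, threshold):
    f = g + heuristic[node]
    if f > threshold:
        return f
    if node == goal:
        return [node]
    best = math.inf
    for child, cost in tree[node]:
        result = search(child, g + cost, threshold)
        if isinstance(result, list):
            return [node] + result
        best = min(best, result)
    return best

print(ida_star("start"))  # ['start', 'A', 'A2']
```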
This may have answered some of your other questions. Assuming you care about the alignment problem being solved, maximizing your expected counterfactual thinking-contribution means you should coordinate with your research community.
And, as you note, maximizing personal credit is unaligned as a separate matter. But if we're all motivated by credit, our coordination can break down when people defect to grab credit.
How much should you focus on reading what other people do, vs doing your own things?
This is not yet at a practical level, but: Let's say we want to approach something like a community-wide optimal trade-off between exploring and exploiting, and we can't trivially check what everyone else is up to. If we think the optimum is something obviously silly like "75% of researchers should Explore, and the rest should Exploit," and I predict that 50% of researchers will follow the rule I follow, and that the uncoordinated researchers will all Exploit, then it is rational for me to randomize my decision with a coinflip.
It gets Newcomblike when I can't check, but I can still follow a mix that's optimal given an expected number of cooperating researchers and what I predict they will predict in turn. If predictions are similar, the optimum given those predictions is a Schelling point. Of course, in the real world, if you actually had important practical strategies for optimizing community-level research, you would just write them up and get everyone to coordinate that way.
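To make the arithmetic behind that kind of randomization explicit, here's a toy version of the calculation. The function name and the numbers are mine, chosen so the answer comes out to a literal coinflip; they are not the figures from the comment above.

```python
def explore_probability(target, f_coordinated, uncoordinated_explore_rate):
    """Probability with which each rule-following researcher should Explore so the
    whole community hits the target Explore fraction. Clamped to [0, 1] because
    some targets are unreachable given what the uncoordinated researchers do."""
    needed = target - (1 - f_coordinated) * uncoordinated_explore_rate
    return min(1.0, max(0.0, needed / f_coordinated))

# Illustrative numbers only: if half the field follows the rule, the other half
# never Explores, and the target is a 25% Explore split, then each rule-follower
# should Explore with probability 0.5 -- a coinflip.
print(explore_probability(0.25, 0.5, 0.0))  # 0.5
```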
I worry for people who are only reading other people's work, like they have to "catch up" to everyone else before they have any original thoughts of their own.
You touch on many things I care about. Part (not the main part) of why I want people to prioritize searching neglected nodes more is because Einstellung is real. Once you've got a tool in your brain, you're not going to know how to not use it, and it'll be harder to think of alternatives. You want to increase your chance of attaining neglected tools and perspectives to attack long-standing open problems with. After all, if the usual tools were sufficient, why are they long-standing open problems? If you diverge from the most common learning paths early, you're more likely to end up with a productively different perspective.
It's too easy to misunderstand the original purpose of the question, and do work that technically satisfies it but really doesn't do what was wanted in a broader context.
I've taken to calling this "bandwidth", cf. Owen Cotton-Barratt.
I feel like the terms for public/private beliefs are gonna clash with the fairly established terminology for independent impressions and all-things-considered beliefs (I've seen these referred to as "public" and "private" beliefs before, but I can't remember the source). The idea is that sometimes you want to report your independent impressions rather than your Aumann-updated model of the world, because if everyone does the latter it can lead to double-counting of evidence and information cascades.
Information cascades develop consistently in a laboratory situation in which other incentives to go along with the crowd are minimized. Some decision sequences result in reverse cascades, where initial misrepresentative signals start a chain of incorrect [but individually rational] decisions that is not broken by more representative signals received later. - (Anderson & Holt, 1998)
I don't want people to conflate the above socioepistemological ideas with the importantly different concepts in this post, so I prefer flagging my beliefs as "legible" or "illegible" to give a sense of how productive/educational I expect talking to me about them will be.
Bonus point: The failure mode of not admitting your own illegible/private beliefs can lead to myopic empiricism, whereby you stunt your epistemic growth by refusing to update on a large class of evidence. Severe cases often exhibit an unnatural tendency to consume academic papers over blog posts.
See also my other comment on all this list-related tag business. Linking it here in case you (the reader) are about to try to refactor stuff, and seeing this comment could potentially save you some time.
I was going to agree, but now I think it should just be split...
- The Resource tag can include links to single resources, or be a single resource (like a glossary).
- The Collections tag can include posts in which the author provides a list (e.g. bullet-points of writing advice), or links to a list.
- The Collections tag should ideally be aliased with "List".[1]
- The Repository tag seems like it ought to be merged with Collections, but it carves out a specific tradition of posts on LessWrong: specifically, posts which elicit topical resources from user comments (e.g. best textbooks).
- The List of Links tag is usefwl for getting a higher-level overview of something, because it doesn't include posts which only point to a single resource.
- The List of Lists tag is usefwl for getting a higher-level overview of everything above. Also, I suggest every list-related tag should link to the List of Lists tag in the description. That way, you don't have to link all those tags to each other (which would be annoying to update if anything changes).
- I think the strongest case for merging is {List of Links, Collections} → {List}, since I'm not sure there needs to be separate categories for internal lists vs external lists, and lists of links vs lists of other things.
- I have not thought this through sufficiently to recommend this without checking first. If I were to decide whether to make this change, I would think on it more.
Selfish neuremes adapt to prevent you from reprioritizing
A technique for making the brain trust prioritization/perspectivization
So, in conclusion, maybe this technique could work:
From experience, I know something like this has worked for:
So it seems likely that something in this direction could work, even if this particular technique fails.
The "-eme" suffix inherits from "emic unit", e.g. genes, memes, sememes, morphemes, lexemes, etc. It refers to the minimum indivisible things that compose to serve complex functions. The important notion here is that even if the eme has complex substructure, all its components are selected as a unit, which means that all subfunctions hitchhike on the net fitness of all other subfunctions.