Flowers are selective about what kind of pollinator they attract. Diurnal flowers use diverse colours to stand out in a competition against their neighbours for visual salience. But flowers with nocturnal anthesis are generally white, as they aim only to outshine the night.
O mysterious temple of chance, for what did the scattered parts of your whole link arms to shape you? And the mysterious ceasefire they declared to heed you? O quiescent eye of a hurricane, what holds you together? O molecular weave of water and sunlight, why even bother?
i googled it just now bc i wanted to find a wikipedia article i read ~9 years ago mentioning "deconcentration of attention", and this LW post came up. odd.
anyway, i first found mention of it via a blue-link on the page for Ithkuil. they've since changed smth, but this snippet remains:
After a mention of Ithkuil in the Russian magazine Computerra, several speakers of Russian contacted Quijada and expressed enthusiasm to learn Ithkuil for its application to psychonetics—
i wanted to look it up bc it relates to smth i tweeted abt yesterday:
unique how the pattern is only visible when you don't look at it. i wonder what other kind of stuff is like that. like, maybe a life-problem that's only visible to intuition, and if you try to zoom in to rationally understand it, you find there's no problem after all?
oh.
i notice that relaxing my attention sometimes works when eg i'm trying to recall smth at the limit of my memory (or when a word's stuck on the tip of my tongue). sorta like broadening my attentional field to connect widely distributed patterns. another frame on it is that it enables anabranching trains of thought. (ht TsviBT for the word & concept)
An anabranch is a section of a river or stream that diverts from the main channel or stem of the watercourse and rejoins the main stem downstream.
here's my model for why it works:
(update: i no longer endorse this model; i think the whole framework of serial loops is bad, and think everything can be explained without it. still, there are parts of the below explanation that don't depend on it.)
in light of this, here are some tentative takeaways:
Natural languages are adequate, but that doesn't mean they're optimal.
— John Quijada
i'm a fan of Quijada (eg this lecture) and his intensely modular & cognitive-linguistics-inspired conlang, Ithkuil.
that said, i don't think it sufficiently captures the essence of what enables language to be an efficient tool for thought. LW has a wealth of knowledge about that in particular, so i'm sad conlanging (and linguistics in general) hasn't received more attention here. it may not be that hard; EMH doesn't apply when ~nobody's tried.
We can think of a bunch of ideas that we like, and then check whether [our language can adequately] express each idea. We will almost always find that [it is]. To conclude from this that we have an adequate [language] in general, would [be silly].
— The possible shared Craft of Deliberate Lexicogenesis (freely interpreted)
Furthermore, a relationship with task performance was evident, indicating that an increased occurrence of harmonic locking (i.e., transient 2:1 ratios) was associated with improved arithmetic performance. These results are in line with previous evidence pointing to the importance of alpha–theta interactions in tasks requiring working memory and executive control. (Julio & Kaat, 2019)
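to make "harmonic locking (i.e., transient 2:1 ratios)" concrete, here's a toy sketch of one way to detect such episodes: bandpass alpha & theta, get instantaneous frequencies via the Hilbert transform, and flag samples where the ratio sits near 2. (my own sketch on a synthetic signal, not the paper's method; the sampling rate & tolerance are arbitrary.)

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
# synthetic "EEG": a 6 Hz theta component + a 12 Hz alpha component + noise
eeg = np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)

def inst_freq(x, lo, hi):
    """instantaneous frequency (Hz) of the lo-hi band, via Hilbert phase."""
    b, a = butter(4, [lo, hi], btype="band", fs=fs)
    phase = np.unwrap(np.angle(hilbert(filtfilt(b, a, x))))
    return np.diff(phase) * fs / (2 * np.pi)

ratio = inst_freq(eeg, 8, 13) / inst_freq(eeg, 4, 8)   # alpha / theta
locked = np.abs(ratio - 2) < 0.2                       # "transient 2:1" moments
print(f"fraction of samples near 2:1 locking: {locked.mean():.2f}")
```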
when making new words, i try to follow this principle:
label concepts such that the label has high association w situations in which you want the concept to trigger.[1]
the usefwlness of a label can be measured on multiple fronts:
if you're optimising for b, you might label your concept "distributed boiling-frog attack" (DBFA). someone cud prob generate the whole idea fm those words alone, so it scores highly on that criterion.
it scores poorly on c, however. if i'm in a situation in which it is helpfwl for me to notice that someone or something is DBFAing me, there are few semiotic/associative paths fm what i notice now to the label itself.
if i reflect on what kinds of situations i want this thought to reappear in, i think of something like "something is consistently going wrong w a complex system and i'm not sure why but it smells like a targeted hostile force".
maybe i'd call that the "invisible hand of malice" or "inimicus ex machina".
i rly liked the post btw! thanks!
i happen to call this "symptomatic nymation" in my notes, bc it's about deriving a new word from the effects/symptoms of the referent concept/phenomenon. a good label shud be a solution looking for a problem.
deriving concept fm label is high-priority if you want the concept to gain popularity, however. i usually jst make words for myself and use them in my notes, so i don't hv to worry abt this.
here's the non-quantified meaning in terms of wh-movement from right to left:
for conlanging, i like this set of principles:
so to quantify the sentence, i prefer ur suggestion "I think_p it'll rain tomorrow". the percentage p is supposed to modify "I think" anyway, so it makes more sense to make them adjacent. it's just more work bc it's novel syntax, but that's temporary.
otoh, if we're specifying that subscripts are only used for credences anyway, there's no reason for us to invoke the redundant "I think" image. instead, write
(it'll rain tomorrow)_p
in fact, the whole circumfix operator is gratuitously verbose![1] just write:
rain_p tomorrow
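the progression in one place (p here is jst a stand-in for whatever credence the subscript carries):

$$\text{I think}_{p}\ \text{it'll rain tomorrow} \;\to\; (\text{it'll rain tomorrow})_{p} \;\to\; \text{rain}_{p}\ \text{tomorrow}$$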
my brain is insufficiently flexible to be able to surrender to social-status-incentives without letting that affect my ability to optimise purely for my goal. the costs of compromise btn diff optimisation criteria are steep, so i would encourage more ppl to rebel against prevailing social dynamics. it helps u think more clearly. it also mks u miserable, so u hv to balance it w concerns re motivation. altruism never promised to be easy. 🍵
Related recommendation: Inward and outward steelmanning — LessWrong
Imagine that you encountered a car with square wheels.
Inward steelmanning: "This is an abomination! It doesn't work! But maybe with round wheels it would be beautiful. Or maybe a different vehicle with square wheels could be beautiful."
Outward steelmanning: "This is ugly! It doesn't work! But maybe if I imagine a world where this car works, it will change my standards of beauty. Maybe I will gain some insight about this world that I'm missing."
If you want to be charitable, why not grant your opponent an entire universe with its own set of rules?
there has to be some point in time in which an agent acts like waiting just one more timestep before pressing wouldn’t be worth it even though it would.
if it's impossible to choose "jst one mor timestep" wo logically implying that u mk the same decision in other timesteps (eg due to indifferentiable contexts), then it's impossible to choose jst one mor timestep. optimal decision-mking also means recognising which opts u hv and which u don't—otherwise u'r jst falling fr illusory choices.
which brings to mind the principle, "u nvr mk decisions, u only evr decide btn strats". or fr the illiterate (:p):
You never make decisions, you only ever decide between strategies.
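a toy sketch of the point (mine, w made-up names): if the agent's decision rule is a function of its observable context, and every timestep presents an indistinguishable context, then "press at timestep t but wait at all the others" simply isn't in its option set; the real options are whole strategies.

```python
# toy model (illustrative, not from the quoted discussion): an agent's
# decision rule maps observable context -> action. when every timestep
# looks the same, the rule must pick the same action each time, so
# "wait jst one mor timestep" is an illusory choice; the real options
# are whole strategies.
def run(strategy, horizon=1000):
    """return the timestep at which the button gets pressed, or None."""
    for t in range(horizon):
        context = "indistinguishable"        # same observation every step
        if strategy(context) == "press":
            return t
    return None

wait_one_more = lambda ctx: "wait"   # locally tempting, globally never presses
press_now     = lambda ctx: "press"

print(run(wait_one_more))  # None: "one more step", repeated forever
print(run(press_now))      # 0
```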
I was very very terrible at introspection just 2 years ago. Yet I did manage to learn how to make very good games, without really introspecting for years later about why the things work that I did.
More specifically, I mean progress wrt some long-term goal like AI alignment, altruism, factory farming, etc. Here, I think most ways of thinking about the problem are wildly off-target bc motivations get distorted by social incentives. Whereas goals in narrow games like "win at chess" or "solve a math problem" are less prone to this, so introspection is much less important.
This is among the top questions you ought to accumulate insights on if you're trying to do something difficult.
I would advise primarily focusing on how to learn more from yourself as opposed to learning from others, but still, here's what I think:
I. Strict confusion
Seek to find people who seem to be doing something dumb or crazy, and for whom the feeling you get when you try to understand them is not "I'm familiar with how someone could end up believing this" but instead "I've got no idea how they ended up there, but that's just absurd". If someone believes something wild, and your response is strict confusion, that's high value of information. You can only safely say they're low-epistemic-value if you have evidence for some alternative story that explains why they believe what they believe.
II. Surprisingly popular
Alternatively, find something that is surprisingly popular—because if you don't understand why someone believes something, you cannot exclude that they believe it for good reasons.
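(aside: the heading borrows its name from Prelec et al.'s "surprisingly popular" aggregation method. in case it's unfamiliar, a minimal sketch of that computation, w made-up numbers:)

```python
# minimal sketch of the "surprisingly popular" aggregation the heading
# borrows its name from (Prelec et al. 2017); all numbers are made up.
# an answer is surprisingly popular when more people give it than the
# crowd predicted, which is evidence the minority knows something.
from collections import Counter

votes       = ["A", "A", "B", "B", "B"]    # what each person answers
predicted_B = [0.7, 0.8, 0.6, 0.7, 0.75]   # each person's predicted share answering "B"

actual    = Counter(votes)["B"] / len(votes)        # 0.60
predicted = sum(predicted_B) / len(predicted_B)     # 0.71
answer = "B" if actual > predicted else "A"         # whichever beats its predicted share
print(answer)  # "A": B is *less* popular than predicted, so A is surprisingly popular
```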
The meta-trick to extracting wisdom from society's noisy chatter is to learn to understand what drives people's beliefs in general; then, if your model fails to predict why someone believes something, you can either learn something about human behaviour, or about whatever evidence you don't have yet.
III. Sensitivity >> specificity
It's easy to relinquish old beliefs if you are ever-optimistic that you'll find better ideas than whatever you have now. If you look back at what you wrote a year ago and think "huh, that guy really had it all figured out," you should be suspicious that you've stagnated. Strive to be embarrassed by your past world-model; it implies progress.
So trust your mind that it'll adapt to new evidence, and tune your sensitivity up as high as the capacity of your discriminator allows. False-positives are usually harmless and quick to relinquish—and if they aren't, then believing something false for as long as it takes for you to find the counter-argument is a really good way to discover general weaknesses in your epistemic filters.[1] You can't upgrade your immune-system without exposing yourself to infection every now and then. Another frame on this:
IV. Vingean deference limits
V. Confusion implies VoI, not stupidity
Here assuming that investing credence in the mistaken belief increased your sensitivity to finding its counterargument. For people who are still at a level where credence begets credence, this could be bad advice.
VI. Epistemic surface area / epistemic net / wind-vane models / some better metaphor
Every model you have internalised as truly part of you—however true or false—increases your ability to notice when evidence supports or conflicts with it. As long as you place your flag somewhere to begin with, the winds of evidence will start pushing it in the right direction. If your wariness re believing something verifiably false prevents you from making an epistemic income, consider what you're really optimising for. Beliefs pay rent in anticipated experiences, regardless of whether they are correct in the end.