So8res' Shortform Feed

by So8res, 31st Jan 2021

Crossposted from Twitter, might not engage much with comments on LW and I may or may not moderate replies.

Thread about a particular way in which jargon is great:

In my experience, conceptual clarity is often attained by a large number of minor viewpoint shifts. (A compliment I once got from a research partner went something like "you just keep reframing the problem ever-so-slightly until the solution seems obvious". <3)

Sometimes a bunch of small shifts leave people talking a bit differently, b/c now they're thinking a bit differently. The old phrasings don't feel quite right -- maybe they conflate distinct concepts, or rely implicitly on some bad assumption, etc. (Coarse examples: folks who think in probabilities might become awkward around definite statements of fact; people who get into NVC sometimes shift their language about thoughts and feelings. I claim more subtle linguistic shifts regularly come hand-in-hand w/ good thinking.)

I suspect this phenomenon is one cause of jargon. Eg, when a rationalist says "my model of Alice wouldn't like that" instead of "I don't think Alice would like that", the non-standard phraseology tracks a non-standard way they're thinking about Alice. (Or, at least, I think this is true of me and of many of the folks I interact with daily. I suspect phraseology is contagious and that bystanders may pick up the alt manner of speaking w/out picking up the alt manner of thinking, etc.)

Of course, there are various other causes of jargon -- eg, it can arise from naturally-occurring shorthand in some specific context where that shorthand was useful, and then morph into a tribal signal, etc. etc. As such, I'm ambivalent about jargon. On the one hand, I prefer my communities to be newcomer-friendly and inclusive. On the other hand, I often hear accusations of jargon as a kind of thought-policing.

"Stop using phrases that meticulously track uncommon distinctions you've made; we already have perfectly good phrases that ignore those distinctions, and your audience won't be able to tell the difference!" No. My internal language has a bunch of cool features that English lacks. I like these features, and speaking in a way that reflects them is part of the process of transmitting them.

Example: according to me, "my model of Alice wants chocolate" leaves Alice more space to disagree than "I think Alice wants chocolate", in part b/c the denial is "your model is wrong", rather than the more confrontational "you are wrong". In fact, "you are wrong" is a type error in my internal tongue. My English-to-internal-tongue translator chokes when I try to run it on "you're wrong", and suggests (eg) "I disagree" or perhaps "you're wrong about whether I want chocolate".

"But everyone knows that 'you're wrong' has a silent '(about X)' parenthetical!", my straw conversational partner protests. I disagree. English makes it all too easy to represent confused thoughts like "maybe I'm bad". If I were designing a language, I would not render it easy to assign properties like "correct" to a whole person -- as opposed to, say, that person's map of some particular region of the territory.
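One way to make the "type error" intuition concrete is a toy sketch (the types and names here are entirely my own illustration, not anything from the thread) in which "wrong" is a predicate that applies to someone's map, never to the person:

```python
from dataclasses import dataclass, field

@dataclass
class Map:
    """Someone's belief about one region of the territory."""
    holder: str
    claim: str

@dataclass
class Person:
    name: str
    maps: list = field(default_factory=list)

def wrong(x):
    """'Wrong' type-checks only for maps, never for whole people."""
    if isinstance(x, Person):
        raise TypeError("'wrong' applies to a person's map, not the person")
    return f"{x.holder}'s map ({x.claim!r}) mismatches the territory"

alice = Person("Alice")
belief = Map(holder="Alice", claim="Bob wants chocolate")

print(wrong(belief))  # fine: a specific map can be wrong
# wrong(alice)        # TypeError: a whole person isn't truth-apt
```

In this sketch, "your model is wrong" is a well-typed sentence and "you are wrong" simply fails to compile, which is the distinction the phrasing is trying to preserve.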

The "my model of Alice"-style phrasing is part of a more general program of distinguishing people from their maps. I don't claim to do this perfectly, but I'm trying, and I appreciate others who are trying. And, this is a cool program! If you've tweaked your thoughts so that it's harder to confuse someone's correctness about a specific fact with their overall goodness, that's rad, and I'd love you to leak some of your techniques to me via a niche phraseology.

There are lots of analogous language improvements to be made, and every so often a community has built some into their weird phraseology, and it's *wonderful*. I would love to encounter a lot more jargon, in this sense. (I sometimes marvel at the growth in expressive power of languages over time, and I suspect that that growth is often spurred by jargon in this sense. Ex: the etymology of "category".)

Another part of why I flinch at jargon-policing is a suspicion that if someone regularly renders thoughts that track a distinction into words that don't, it erodes the distinction in their own head. Maintaining distinctions that your spoken language lacks is difficult! (This is a worry that arises in me when I imagine, eg, dropping my rationalist dialect.)

In sum, my internal dialect has drifted away from American English, and that suits me just fine, tyvm. I'll do my best to be newcomer-friendly and inclusive, but I'm unwilling to drop distinctions from my words just to avoid an odd turn of phrase.

Thank you for coming to my TED talk. Maybe one day I'll learn to cram an idea into a tweet, but not today.

This reminds me of refactoring. Even tiny improvements in naming, especially when they accumulate, can make the whole system more transparent. (Assuming that people can agree on which direction is an "improvement".)

But if I may continue with the programming analogy, the real problem is pushing the commit to the remaining million users of the distributed codebase. And not just users, but also all that literature that is already written.
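The refactoring analogy can be made concrete with a toy rename (the functions here are invented for illustration): the behavior is unchanged, but the second version makes the distinctions visible in the code itself, just as a sharper phrase tracks a sharper thought.

```python
# Before: one vague name conflates two distinct ideas.
def check(x):
    return x > 0 and x % 2 == 0

# After: the same logic, with names that track each distinction.
def is_positive(n):
    return n > 0

def is_even(n):
    return n % 2 == 0

def is_positive_even(n):
    return is_positive(n) and is_even(n)

# The rename preserves behavior -- only the transparency improves.
assert check(4) == is_positive_even(4)
assert check(3) == is_positive_even(3)
```

And, as the comment notes, the hard part isn't the rename itself but propagating it to everyone else still calling `check`.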

I like the "my model of Alice" example, because it reminds everyone in the debate of the map/territory distinction.

On the other hand, there are expressions that rub me the wrong way, for example "spoon theory". Like, hey, it's basically "willpower depletion", only explained using spoons, which are just an incidental object in the story; any other object could have been used in its place, therefore it seems silly to use this word as the identifier for the concept. (On the other hand, it helps to avoid the whole discussion about whether "willpower depletion" is a scientific concept. Hey, it may or may not exist in theory, but it definitely exists in real life.)

There are, of course, ways to abuse jargon. A typical one is to redefine the meanings of ordinary words (to borrow the old connotations for the new concept, or to prevent people from having an easy way to express the old concept), or to create the impression of a vast trove of exclusive knowledge where in fact there is just a heap of old concepts (many of them controversial).

Crossposted from Twitter, might not engage much with comments on LW and I may or may not moderate replies.

Hypothesis: English is harmed by conventions against making up new plausible-sounding words, as this contributes to conventions like pretending numbers are names (System 1 deliberation; Type II errors) and naming things after people (Bayesian reasoning, Cartesian products).

I used to think naming great concepts after people was a bad idea (eg, "frequency decomposition" is more informative & less scary than "Fourier transformation"). I now suspect that names are more-or-less the only socially accepted way to get new words for new technical concepts.

I'd love a way to talk about how a change in perspective feels Fourier-ish without the jargon 'Fourier', but given that current social conventions forbid adding 'freqshift' (or w/e) to our language, perhaps I'll instead celebrate that we don't call them "Type IV transformations".
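For what it's worth, "frequency decomposition" really does describe what the transform does: it rewrites a signal as a sum of frequencies. A minimal pure-Python sketch (a naive discrete Fourier transform; toy code of my own, not from the thread):

```python
import cmath
import math

def frequency_decomposition(signal):
    """Naive DFT: the signal's magnitude at each frequency bin."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

# A signal built from a 3-cycle and a (quieter) 7-cycle component:
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n)
          + 0.5 * math.sin(2 * math.pi * 7 * t / n)
          for t in range(n)]

mags = frequency_decomposition(signal)
# The two strongest bins below the Nyquist frequency are the components
# we put in -- the decomposition recovers the 3- and 7-cycle pieces.
peaks = sorted(range(1, n // 2), key=lambda k: mags[k], reverse=True)[:2]
print(sorted(peaks))
```

The descriptive name tells you what to expect from the output; "Fourier" tells you only who to thank.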

tbc, I still think that naming great concepts after people is silly, but now I suspect that if the math & science communities halted that practice we'd be worse off, at least until it stopped being lame to invent new words for new concepts.

Funny story: the "Unscented Kalman Filter". The guy (Uhlmann) needed a technical term for the new Kalman filter he had just invented, and it would have been pretentious to call it an Uhlmann filter himself, so he looked around the room, saw an unscented deodorant on someone's desk, and went with that. Source

Hoping, I guess, that the name was bad enough that others would call it an Uhlmann Filter.

Both "transistor" (transconductance and varistor) and "bit" (binary digit) come to mind as new technical words. 

Quoting from Jon Gertner's The Idea Factory.

The new thing needed a new name, too.  A notice was circulated to thirty-one people on the Bell Labs staff, executives as well as members of the solid-state team. “On the subject of a generic name to be applied to this class of devices,” the memo explained, “the committee is unable to make [a] unanimous recommendation.” So a ballot was attached with some possible names. [...] The recipients were asked to number, in order of preference, the possibilities:

  • Semiconductor Triode
  • Surface States Triode
  • Crystal Triode
  • Solid Triode
  • Iotatron
  • Transistor
  • (Other Suggestion)


I used to think that names like "System 1 deliberation" have to be bad. When writing the Living People policy for Wikidata, I had to name two types of classes of privacy protection and wanted to avoid calling them protection class I and protection class II. Looking back, I think that was a mistake, because people seem to misunderstand the terms in ways I didn't expect.

Crossposted from Twitter, might not engage much with comments on LW and I may or may not moderate replies. 

Hypothesis: we're rapidly losing the cultural technology to put people into contact with new ideas/worldviews of their own volition, ie, not at the recommendation of a friend or acquaintance.

Related hypothesis: it's easier for people to absorb & internalize a new idea/worldview when the relationship between them and the idea feels private. Ex: your friend is pushing a viewpoint onto you, and you feel some social pressure to find at least one objection.

See also "it's difficult to make a big update when people are staring at you". The old internet (& libraries) put people in contact with new ideas privately; the new internet puts you in contact with new ideas that your friends are peddling.

(Perhaps the shift in the internet's memetic focus -- eg from atheism in the 00's to social justice in the 10's -- is explained in part by the older memes thriving when found privately, and the newer thriving when pushed by a friend?)

Crossposted from Twitter, might not engage much with comments on LW and I may or may not moderate replies.

PSA: In my book, everyone has an unlimited number of "I don't understand", "plz say that again in different words", "plz expand upon that", and "plz pause while I absorb that" tokens.

Possession of an unlimited number of such tokens (& their ilk) is one of your sacred rights as a fellow curious mind seeking to understand the world around you. Specifically, no amount of well-intentioned requests for clarification or thinking time will cause me to think you're an idiot.

I might conclude that there's more ground to cover than I hoped; I may despair of communicating quickly; I might rebudget my attention. But asking for clarification or time-to-consider won't convince me you're a fool. (In fact, I consider it evidence to the contrary!)

If you ask loads of Qs, I sometimes get frustrated. Sometimes b/c my goals were frustrated, but more often it's b/c I tend to feel strong visceral frustration when asked for clarification re a concept I kinda understand, don't fully understand, and wish to understand. (I find this breed of frustration quite valuable -- it tends to spur me to gain deeper understandings of things I care about. My apologies to those who ask me Qs, watch me get viscerally frustrated, and believe that the frustration is directed at them. Known rent-paying bug.)

I'm not saying that spending these tokens will never cost you. Noticing a big inferential gap can sap my desire to continue, which may sting, etc. etc. But if I do cease my attempts to convey, it will be with sadness -- towards my lack of time, not your lack of smarts.

(Also, tbc, the usual outcome is not that I despair of communicating. In my experience, communicating w/ high caliber thinkers on technical topics involves regular and repeated expenditure of these tokens.)