by Qria · 8th Jun 2021



Writing prefers shorthand, whereas reading prefers full words.

Writing is thinking.

I sense that some actionable insight could be inferred from these two statements, but I can't quite find it.


Below is some explanation of the above statements.

On "Writing Prefers shorthand":

Scientists prefer very short names to denote their variables.

Many Perl and shell scripters use shorthand and boast of superior developer performance in launching stuff quickly.

On "Reading prefers full words":

In Python, shorthand variable names are discouraged because of the "readability counts" philosophy. There's a saying that you will spend sixfold more time reading code than writing it, so code should be optimized for reading, not writing.

On "Writing is Thinking":

Many authors and scientists, such as Feynman, wrote in order to think.

Amazon's 6-pager method is used as a tool to force the writer to think, since PowerPoint slides can hide flawed thinking in plain sight, and it is still used heavily as a method for thinking clearly through a proposal.


I can sense an epiphany right around the corner, one that would give me at least a 5% increase in productivity for my current interests: daily thought organization, long-term PKM, writing software, documenting software, and leveraging LLMs in product development.

Some thoughts, partly cached and partly inspired by the above:

  • Modern algebraic notation—single-letter variables and stuff—is very useful for symbol manipulation.  Imagine if you had to write statements like "The area of the square whose side is the hypotenuse is equal to the sum of the areas of the squares whose sides are the two legs" instead of "a^2+b^2=c^2" and had to derive e.g. the quadratic equation.  Like doing long division with Roman numerals, as someone put it.
  • Note that, in the above example, "a^2+b^2=c^2" requires context explaining what a, b, and c are.  Shorter notation is to some extent necessarily more ambiguous: fewer symbols means fewer different combinations of them; context is the only way to disambiguate them to refer uniquely to a larger set of referents.
  • Pedagogy presumably optimizes for readability and minimizing ambiguity for young ignorant minds, as well as for the gradeability of the written output they produce.  Therefore, it'd probably be biased against introducing ambiguous notation.  Not that they never teach it—obviously they do teach algebraic notation, the merits of it are so high they can't avoid it—but there might be cases where a notation would be useful to the serious student, but the pedagogues choose a more cumbersome notation because their interests differ.
  • Published papers, meanwhile, probably optimize for readability and rigor.  I do believe it's common for them to introduce complicated definitions so as to make certain statements short... But also, I think paper writers have a strong incentive to prove their result rigorously but a weaker incentive to explain the mental models (and notations for playing with them) that led to their insight, if they're different from the shortest-path-to-victory proof (which I think is not rare).  Bill Thurston (Fields medalist) wrote about this, e.g.:
    • "We mathematicians need to put far greater effort into communicating mathematical ideas. To accomplish this, we need to pay much more attention to communicating not just our definitions, theorems, and proofs, but also our ways of thinking. We need to appreciate the value of different ways of thinking about the same mathematical structure.

      We need to focus far more energy on understanding and explaining the basic mental infrastructure of mathematics—with consequently less energy on the most recent results. This entails developing mathematical language that is effective for the radical purpose of conveying ideas to people who don’t already know them."
  • Therefore, what you receive, in education and in papers you read, probably has less in the way of "finding the optimal notation for symbol manipulation" than would best serve you.
  • To compensate for this, you should perhaps err on the side of imagining "Is there some more convenient way to write this, even if it doesn't generalize?".  For example, on a small scale, when I've found myself writing out a series of "f(x,y) = z" statements (with constant f), I might turn them into "x,y → z"; also I sometimes use spacing rather than parentheses to indicate order of operations.  In programming, this might correspond to having a bunch of aliases for commonly used programs ("gs" for "git status"; "gg" for a customized recursive-grep pipeline), or, on the more extreme side, writing a domain-specific language and then using that language to solve your problem.  (See the sketch after this list.)
  • Of course, there are drawbacks.  It may be more difficult to show a random person your work if it looks like the above; if you get pulled away and forget the context, you might be unable to decipher your own scribblings; when you need to write up your results, you will have a bigger cleanup job to do.  However, each of these can be solved, and one might remark that "taking longer to explain your insights" is a better problem to have than "having fewer insights to explain".

Why is longevity not the number 1 goal for most humans?

Any goal you'd have would be achieved better with sufficient longevity.

Naturally, eternal life is the first goal of my life.

But to achieve this, a global cooperative effort would be required to push the science forward.

Therefore, nowadays I'm mostly thinking about why longevity doesn't seem to be a concern for most people.

In my worldview, longevity should be up there with ESG in decision-making processes.

But in reality, no one really talks about it.

In conclusion, I have two questions:

Is putting longevity over any other goal a rational decision?

And if so, why isn't the general population on board with it?

I think there are LOTS of goals one could have that don't require a guarantee of extended life: many moments of joy, positive impact on other people (current and future), sailing the Caribbean, etc.  In fact, I don't support any goals that require you specifically to live for a long, long time, as opposed to being part of a cooperative structure which lasts beyond any of its current members.

I personally have a preference to live longer - I have a personal stake in my own experiences, which does not apply to other people.  That is a form of goal, but I see small enough chance of success that I don't prioritize it very highly - there's not much I'd give up over the next 30 years for a very tiny increase in the chance I'll live to 1000.

If AGI kills us all, longevity in the sense of biological longevity doesn't give you much.

In EA spheres there's an idea that it's easier to save lives through medical interventions in the third world than through longevity research.

As far as general society is concerned, there's what Aubrey de Grey calls the "pro-death trance," and you can find plenty of discussion from him and others about why it exists.

Any goal you'd have would be achieved better with sufficient longevity.

That is false for a lot of goals, including goals that have a deadline.

Really good point. Though I would argue that most life goals with deadlines have them only because of mortality itself. I'm trying to think of an example of a life goal that would have a deadline even if immortality were achieved, but it seems hard to find one.

Two versions of a goal:

1. World Peace

2. Preventing a war you think is going to happen

The 2nd may have a (close) deadline; the 1st might have a distant deadline, like when the sun burns out, or something closer, like before you die, or when 'an AGI revolution (like the industrial revolution) starts' (assuming you think AGI will happen before the sun burns out).

Surely you've heard the adage that humans can adapt to anything? They have probably adapted to death, and that psychological adaptation has likely been with humans since they became smart enough to understand that death is a thing. I would expect it to be really hard to change or remove (in fact, Terror Management Theory goes even further and argues that much of our psychology is built on the denial of, or dealing with, death).