# Shortform Content [Beta]

TurnTrout's shortform feed

If you're tempted to write "clearly" in a mathematical proof, the word quite likely glosses over a key detail you're confused about. Use that temptation as a clue for where to dig in deeper.

At least, that's how it is for me.

Liam Donovan's Shortform

Since I've been talking about the flaws with the 538 model and betting markets, I figured I should post my own predictions. I'm pretty confident these will beat both 538 and the betting markets by a substantial margin (using a metric like log loss).
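For context, here is a minimal sketch of how log loss scores probabilistic forecasts (the probabilities and outcomes below are made up for illustration, not anyone's actual predictions): lower is better, and confidently wrong forecasts are penalized heavily.

```python
import math

def log_loss(forecasts, outcomes):
    """Mean negative log-likelihood of binary outcomes (1 = event happened)."""
    return -sum(
        math.log(p if y == 1 else 1 - p)
        for p, y in zip(forecasts, outcomes)
    ) / len(forecasts)

# Hypothetical win probabilities for three races from two forecasters.
my_model = [0.90, 0.70, 0.60]
market = [0.60, 0.50, 0.50]
results = [1, 1, 0]  # what actually happened

# Whichever forecaster has the lower average log loss did better on this metric.
print(log_loss(my_model, results), log_loss(market, results))
```

A forecaster who always says 50% scores a log loss of ln(2) ≈ 0.69, which is the baseline any informative model should beat.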

Jimrandomh's Shortform

This was initially written in response to "Communicating effective altruism better--Jargon" by Rob Wiblin (Facebook link), but stands alone well and says something important. Rob argues that we should make more of an effort to use common language and avoid jargon, especially when communicating to audiences outside of your subculture.

I disagree.

If you're writing for a particular audience and can do an editing pass, then yes, you should cut out any jargon that your audience won't understand. A failure to communicate is a failure to communicate, and there are... (read more)

Now I would like to see an article that would review the jargon, find the nearest commonly used term for each term, and explain the difference the way you did (or possibly admit that there is no important difference).

MikkW's Shortform

To modern eyes living in a democracy with a well-functioning free market, absolute monarchy and feudalism [1] (as were common for quite a while in history) seem quite stupid and suboptimal (there are some who may disagree, but I believe most will endorse this statement). From the perspective of an ideal society, our current society will appear quite similar to how feudalism seems to us - stupid and suboptimal - in large part because we have inadequate tools to handle externalities (both positive and negative). We have a robust free market which can effic... (read more)

If the free market and representative government are the signs that separate us from feudalism, what separates the ideal society from us?

The things that separate us from the ideal society will probably seem obvious from hindsight -- assuming we get there. But in order to know that, large-scale experiments will be necessary, and people will oppose them, often for quite good reasons (a large-scale experiment gone wrong could mean millions of lives destroyed), and sometimes for bad reasons, too.

Frequently proposed ideas include: different voting systems, universal basic income, land tax, open borders...

Sherrinford's Shortform

Among EA-minded people interested in preventing climate change, it seems Clean Air Task Force (CATF) is seen very favorably. Why? The "Climate Change Cause Area Report" by Founders Pledge (PDF) gives an overview.

CATF's work is introduced as follows:

"It was founded in 1996 with the aim of enacting federal policy reducing the air pollution caused by American coal-fired power plants. This campaign has been highly successful and has been a contributing factor to the retirement of a large portion of the US coal fleet." (p. 5)

On p. 88, you will read:

rohinmshah's Shortform

I often have the experience of being in the middle of a discussion and wanting to reference some simple but important idea / point, but there doesn't exist any such thing. Often my reaction is "if only there was time to write an LW post that I can then link to in the future". So far I've just been letting these ideas be forgotten, because it would be Yet Another Thing To Keep Track Of. I'm now going to experiment with making subcomments here simply collecting the ideas; perhaps other people will write posts about them at some point, if they're even understandable.

2rohinmshah3dAn argument form that I like: I think this should be convincing even if Y is false, unless you can explain why your argument for X does not work under assumption Y. An example: any AI safety story (X) should also work if you assume that the AI does not have the ability to take over the world during training (Y).
2mr-hire3dTrying to follow this. Doesn't the Y (AI not taking over the world during training) make it less likely that X (AI will take over the world at all)? Which seems to contradict the argument structure. Perhaps you can give a few more examples to make the structure clearer?

In that example, X is "AI will not take over the world", so Y makes X more likely. So if someone comes to me and says "If we use <technique>, then AI will be safe", I might respond, "well, if we were using your technique, and we assume that AI does not have the ability to take over the world during training, it seems like the AI might still take over the world at deployment because <reason>".

I don't think this is a great example, it just happens to be the one I was using at the time, and I wanted to write it down. I'm explicitly trying for this... (read more)

Douglas_Knight's Shortform

A common historical paradox is that centralizing forces can break apart large organizations. Usually what happens is that the large organization was fake, nominally claiming wide domain while actually being weak. When real power centralizes enough to defy the fake power, it secedes, producing the appearance of decentralization.

At least, my cached thought is that it's common. I can't remember what examples lead me to it. The only example I can think of right now is the Holy Roman Empire. A less paradoxical situation is that rapidly changing power produces u... (read more)

AllAmericanBreakfast's Shortform

What rationalists are trying to do is something like this:

1. Describe the paragon of virtue: a society of perfectly rational human beings.
2. Explain both why people fall short of that ideal, and how they can come closer to it.
3. Explore the tensions in that account, put that plan into practice on an individual and communal level, and hold a meta-conversation about the best ways to do that.

This looks exactly like virtue ethics.

Now, we have heard that the meek shall inherit the earth. So we eschew the dark arts; embrace the virtues of accuracy, precision, and charity... (read more)

Tetraspace Grouping's Shortform

I have two questions on Metaculus that compare how good elements of a pair of cryonics techniques are: preservation by Alcor vs preservation by CI, and preservation using fixatives vs preservation without fixatives. They are forecasts of the value (% of people preserved with technique A who are revived by 2200)/(% of people preserved with technique B who are revived by 2200), which, barring weird things happening with identity, is the likelihood ratio of someone waking up if you learn that they've been preserved with one technique vs the other.
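A tiny worked example of the quantity being forecast (the revival rates below are made up for illustration, not actual Metaculus values):

```python
# Hypothetical revival counts for two preservation techniques.
revived_a, preserved_a = 4, 100   # technique A: 4% revived by 2200
revived_b, preserved_b = 1, 100   # technique B: 1% revived by 2200

# The Metaculus quantity: ratio of the two revival percentages.
ratio = (revived_a / preserved_a) / (revived_b / preserved_b)

# Read as a likelihood ratio: learning that someone was preserved with A
# rather than B makes "they wake up" this many times as likely
# (under the post's caveat about identity).
print(ratio)
```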

AllAmericanBreakfast's Shortform

Thinking, Fast and Slow was the catalyst that turned my rumbling dissatisfaction into the pursuit of a more rational approach to life. I wound up here. After a few years, what do I think causes human irrationality? Here's a listicle.

1. Cognitive biases, whatever these are
2. Not understanding statistics
3. Akrasia
4. Little skill in accessing and processing theory and data
5. Not speaking science-ese
6. Lack of interest or passion for rationality
7. Not seeing rationality as a virtue, or even seeing it as a vice.
8. A sense of futility, the idea that epistemic rationality is not very useful

A few other (even less pleasant) options:

51) God is inscrutable and rationality is no better than any other religion.

52) Different biology and experience across humans leads to very different models of action.

53) Everyone lies, all the time.

verloren's Shortform

Epistemic status: low status, not feeling good, discouraged by life and the future, feeling sick of life in general; permanent state of inaction. Need help/advice. The post is badly written and will probably make you feel that I'm an idiot, but I'm just a human truly looking for help / some light.

Five years ago, I thought "these app developers, they are so dumb! If I only knew what they know, I would be creating life-changing applications, helping millions of people". Then I learned to program, and now I know how to solve problems step-by-step and how to create m... (read more)

You might be interested in https://www.facebook.com/groups/1781724435404945/ - a facebook group where rich rationalists set up $10-$100 tasks for others to do. However, only about 25% of the tasks are doable if you don't live in the US.

Also, I'll pay you $15 if you fix this issue https://github.com/orgzly/orgzly-android/issues/287 in the Android app called Orgzly, which is an implementation of emacs org-mode for android, and make the owner accept it into the main branch or whatever it is they use that gets merged into the app on google play.

2Nebulus3dI could tell you that your grandmother comes first so you shouldn't overthink it: earn the money doing what you do best until you find a more ethical alternative. It's okay to have strict ethics but it's not helping you much right now. I could tell you to write your ideas somewhere, to post them online (LessWrong is there for that, isn't it?) so that they're not lost even if you forget them, even if you don't have the time to bring them to life. So that your knowledge and ideas live on. I could go on, but I'm not one to tell how people should live their lives. Don't hate yourself, because that's not going to help either. The only remedy to inaction is action; you feel you are in a dire situation, then it's all the more urgent to do something - anything - to solve your issues. Make a list of your problems, establish ways to deal with them and get going. Do something, anything.
Linch's Shortform

What are the limitations of using Bayesian agents as an idealized formal model of superhuman predictors?

I'm aware of 2 major flaws:

1. Bayesian agents don't have logical uncertainty. However, anything implemented on bounded computation necessarily has this.

2. Bayesian agents don't have a concept of causality.

Curious what other flaws are out there.
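For reference, here is a minimal sketch of the idealized model being critiqued (my own toy construction, not from the post): a Bayesian agent is just a posterior over hypotheses updated by Bayes' rule. Note what is absent: every likelihood is computed exactly (no logical uncertainty), and hypotheses are bare likelihood functions (no causal structure).

```python
# A minimal Bayesian agent: a posterior over hypotheses, updated by Bayes' rule.
def update(prior, likelihoods, observation):
    """prior: {hypothesis: p}; likelihoods: {hypothesis: fn(obs) -> p(obs|h)}."""
    unnormalized = {h: p * likelihoods[h](observation) for h, p in prior.items()}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

# Two hypotheses about a coin: fair vs. heads-biased.
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair": lambda obs: 0.5,
    "biased": lambda obs: 0.8 if obs == "H" else 0.2,
}

posterior = prior
for obs in "HHHT":
    posterior = update(posterior, likelihoods, obs)
print(posterior)  # the biased hypothesis gains probability after mostly heads
```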

AllAmericanBreakfast's Shortform

I'm in school at the undergraduate level, taking 3 difficult classes while working part-time.

For this path to be useful at all, I have to be able to tick the boxes: get good grades, get admitted to grad school, etc. For now, my strategy is to optimize to complete these tasks as efficiently as possible (what Zvi calls "playing on easy mode"), in order to preserve as much time and energy for what I really want: living and learning.

Are there dangers in getting really good at paying your dues?

1) Maybe it distracts you/diminishes the incen... (read more)

7NaiveTortoise4dIf you haven't seen Half-assing it with everything you've got [http://mindingourway.com/half-assing-it-with-everything-youve-got/], I'd definitely recommend it as an alternative perspective on this issue.

I see my post as less about goal-setting ("succeed, with no wasted motion") and more about strategy-implementing ("Check the unavoidable boxes first and quickly, to save as much time as possible for meaningful achievement").

steve2152's Shortform

Dear diary...

[this is an experiment in just posting little progress reports as a self-motivation tool.]

1. I have a growing suspicion that I was wrong to lump the amygdala in with the midbrain. It may be learning by the same reward signal as the neocortex. Or maybe not. It's confusing. Things I'm digesting: https://twitter.com/steve47285/status/1314553896057081857?s=19 (and references therein) and https://www.researchgate.net/publication/11523425_Parallels_between_cerebellum-_and_amygdala-dependent_conditioning

2. Speaking of mistakes, I'm also regretting so... (read more)

2steve21524dI was just writing about my perspective here [https://www.lesswrong.com/posts/Mh2p4MMQHdEAqmKm8/little-glimpses-of-empathy-as-the-foundation-for-social] ; see also Simulation Theory [https://plato.stanford.edu/entries/folkpsych-simulation/] (the opposite of "Theory Theory", believe it or not!). I mean, you could say that "making friends and being nice to them" is a form of manipulation, in some technical sense, blah blah evolutionary game theory blah blah, I guess. That seems like something Robin Hanson would say :-P I think it's a bit too cynical if you mean "manipulation" in the everyday sense involving bad intent. Also, if you want to send out vibes of "Don't mess with me or I will crush you!" to other people—and the ability to make credible threats is advantageous for game-theory reasons—that's all about being predictable and consistent! Again as I posted just now [https://www.lesswrong.com/posts/Mh2p4MMQHdEAqmKm8/little-glimpses-of-empathy-as-the-foundation-for-social] , I think the lion's share of "modeling", as I'm using the term, is something that happens unconsciously in a fraction of second, not effortful empathy or modeling. Hmmm... If I'm trying to impress someone, I do indeed effortfully try to develop a model of what they're impressed by, and then use that model when talking to them. And I tend to succeed! And it's not all that hard! The most obvious strategy tends to work (i.e., go with what has impressed them in the past, or what they say would be impressive, or what impresses similar people). I don't really see any aspect of human nature that is working to make it hard for me to impress someone, like by a person randomly changing what they find impressive. Do you? Are there better examples?
2Viliam4dI have low confidence debating this, because it seems to me like many things could be explained in various ways. For example, I agree that certain predictability is needed to prevent people from messing with you. On the other hand, certain uncertainty is needed, too -- if people know exactly when you would snap and start crushing them, they will go 5% below the line; but if the exact line depends on what you had for breakfast today, they will be more careful about getting too close to it.

Fair enough :-)

Mark Xu's Shortform

My current taxonomy of rationalists is:

• LW rationalists (HI!)
• Blog rationalists
• Internet-invisible rationalists

Are there other types of rationalists? Maybe like group-chat rationalists? or podcast rationalists? google doc rationalists?

23Viliam4dAlternative taxonomy: * rationalists belonging to Eliezer * cryopreserved rationalists * rationalists trained by CFAR * aspiring rationalists * rationalists working for MIRI * legendary rationalists * metarationalists * those commenting on this taxonomy * those that tweet as if they were mad * Bayesians * et cetera * Zvi * those that from afar look like paperclips :) [https://en.wikipedia.org/wiki/Celestial_Emporium_of_Benevolent_Knowledge]

This made me chuckle. More humor

• Rationalists taxonomizing rationalists
• Mesa-rationalists (the mesa-optimizers inside rationalists)
• carrier pigeon rationalists
• proto-rationalists
• not-yet-born rationalists
• literal rats
• frequentists
• group-house rationalists
• EA forum rationalists
• meme rationalists

:)

4Ben Pace4dI like this one.
Rafael Harth's Shortform

Yesterday, I spent some time thinking about how, if you have a function $f\colon \mathbb{R}^2 \to \mathbb{R}$ and some point $p$, the value of the directional derivative from $p$ could change as a function of the angle. I.e., what does the function $g(\theta) = D_\theta f(p)$ look like? I thought that any relationship was probably possible as long as it has the property that $g(\theta) = -g(\theta + \pi)$. (The values of the derivative in two opposite directions need to be negatives of each other.)

Anyone reading this is hopefully better at Analysis than I am and realized that there is, in fact, no freedom at all because... (read more)

5Zack_M_Davis6dWhen reading this comment, I was surprised for a moment, too, but now that you mention it—it's because if the function is smooth at the point where you're taking the directional derivative, then it has to locally resemble a plane, just like how a differentiable function of a single variable is said to be "locally linear" [https://math.stackexchange.com/questions/7446/is-locally-linear-an-appropriate-description-of-a-differentiable-function] . If the directional derivative varied in any other way, then the surface would have to have a "crinkle" at that point and it wouldn't be differentiable. Right?

That's probably right.

I have since learned that there are functions which do have all partial derivatives at a point but are not smooth. Wikipedia's example is $f(x,y) = \frac{y^3}{x^2 + y^2}$ with $f(0,0) = 0$. And in this case, there is still a continuous function that maps each direction to the value of the directional derivative, but it's $g(\theta) = \sin^3(\theta)$, so different from the regular case.

So you can probably have all kinds of relationships between direction and {value of derivative in that direction}, but the class of smooth functions have a fixed relationship. It still feels... (read more)
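To spell out the smooth case discussed in this thread (my notation; a sketch, not from the original comments): differentiability at $p$ forces the directional derivative to be linear in the direction, which pins down the angular dependence to a pure cosine.

```latex
% Smooth case: writing v_\theta = (\cos\theta, \sin\theta), differentiability
% of f at p forces
D_\theta f(p) = \nabla f(p) \cdot v_\theta
            = \lVert \nabla f(p) \rVert \, \cos(\theta - \theta_0),
% where \theta_0 is the direction of \nabla f(p). This satisfies the negation
% property D_{\theta+\pi} f(p) = -D_\theta f(p), but so do other shapes, e.g.
% \sin^3\theta; those can only arise at points where f is not differentiable.
```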

jp's Shortform

What to do if you suddenly need to rest your hands

On Monday I went from "computer work seems kind of uncomfortable, I wonder if I should be worried" to "oh crap oh crap, that's actually painful". Everything I've ever heard says not to work through RSI pain, so what now? I decided to spend a week learning hands free input. I wanted to a) get some serious rest and b) still be productive. And guess what? Learning hands free input is like the one activity that does not suffer a productivity penalty from not being able to use your hands.

When I started it was re... (read more)

I hadn't before actually, thanks for the recommendation. I checked it out. Talon seems more like "teach yourself the alphabet, then program using vim," whereas Serenade is more like "teach yourself to program using our macros." This makes Talon easier to learn and more flexible, while Serenade has the following benefits as I see them:

• Programming in large, well-defined chunks, which makes it easier to incorporate "pick which one you meant, or continue to use our default"
• It uses more natural language, which makes me think that the accuracy has a higher ceiling
TurnTrout's shortform feed

The answer to this seems obvious in isolation: shaping helps with credit assignment, rescaling doesn't (and might complicate certain methods in the advantage vs Q-value way). But I feel like maybe there's an important interaction here that could inform a mathematical theory of how a reward signal guides learners through model space?
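Not from the original post, but one standard way to make the shaping-vs-rescaling contrast concrete is potential-based shaping on a toy MDP. A sketch (the chain MDP, potential, and all numbers are my own assumptions): both transformations preserve the optimal policy, but shaping adds informative per-step feedback while rescaling does not.

```python
import numpy as np

# Tiny 4-state chain MDP: move right to reach the goal at state 3.
# Actions: 0 = left, 1 = right. Reward only on entering the goal.
n_states, gamma = 4, 0.9

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

def q_values(reward_fn):
    """Tabular value iteration under a transformed reward."""
    Q = np.zeros((n_states, 2))
    for _ in range(500):
        for s in range(n_states):
            for a in range(2):
                s2, r = step(s, a)
                Q[s, a] = reward_fn(s, s2, r) + gamma * Q[s2].max()
    return Q

identity = lambda s, s2, r: r
rescaled = lambda s, s2, r: 10.0 * r                    # positive rescaling
phi = np.arange(n_states, dtype=float)                  # potential: progress toward goal
shaped = lambda s, s2, r: r + gamma * phi[s2] - phi[s]  # potential-based shaping

policies = [q_values(f).argmax(axis=1) for f in (identity, rescaled, shaped)]
# All three agree on the optimal (always-right) policy, but only shaping
# gives a nonzero signal on intermediate transitions for credit assignment.
print(policies)
```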

NunoSempere's Shortform

This is a test to see if LaTeX works

ryan_b's Shortform

Is spaced repetition software a good tool for skill development or good practice reinforcement?

I was recently considering using an Anki prompt to do a mental move rather than to test my recall: for example, tensing your muscles as though you were performing a deadlift. I don't actually have access to a gym right now, so I didn't get to put it into action immediately. Visualizing the movement as vividly as possible, and tensing muscles as if the movement were being performed (even when not doing it), are common tricks reported by famous weightlifters.

But I happened acro... (read more)

2ryan_b7dCould you talk a bit more about this? My initial reaction is that I am almost exactly proposing additional value from using Anki to engage the skill sans context (in addition to whatever actual practice is happening with context). I review Gwern's post pretty much every time I resume the habit; it doesn't look like it has been evaluated in connection with physical skills. I suspect the likeliest difference is that the recall curve is going to be different from the practice curve for physical skills, and the curve for mental review of physical skills will probably be different again. These should be trivial to adjust if we knew what they were, but alas, I do not. Maybe I could pillage the sports performance research? Surely they do something like this.
6mr-hire7dIt is hard to find, but it's covered here: https://www.gwern.net/Spaced-repetition#motor-skills [https://www.gwern.net/Spaced-repetition#motor-skills] My take is pretty similar to cognitive skills: It works well for simple motor skills but not as well for complex skills. My experience is basically that this doesn't work. This seems to track with the research on skill transfer (which is almost always non-existent or has such a small effect that it can't be measured.)

Ah, the humiliation of using the wrong ctrl-f inputs! But of course it would be lower level.

Well that's reason enough to cap my investment in the notion; we'll stick to cheap experiments if the muse descends.