Shortform Content [Beta]

Matt Goldenberg's Short Form Feed

CFAR's "Adjust Your Seat" principle and its associated story are probably among my most frequently referenced concepts when teaching rationality techniques.

I wish there was a LW post about it.

Buck's Shortform

I used to think that slower takeoff implied shorter timelines, because slow takeoff means that pre-AGI AI is more economically valuable, which means that economy advances faster, which means that we get AGI sooner. But there's a countervailing consideration, which is that in slow takeoff worlds, you can make arguments like ‘it’s unlikely that we’re close to AGI, because AI can’t do X yet’, where X might be ‘make a trillion dollars a year’ or ‘be as competent as a bee’. I now overall think ... (read more)

Because I'm dumb, I would have found it easier to interpret the graph if the takeoff curves crossed at "current capabilities" (and thus reached high levels of capability at different times)

SDM's Shortform

Normative Realism

Normative Realism by Degrees

Normative Anti-realism is self-defeating

Normativity and recursive justification

Prescriptive Anti-realism

'Realism about rationality' is Normative Realism

'Realism about rationality', as discussed in the context of AI safety, and some of its driving assumptions may already have a name in the existing philosophy literature. I think that what it's really referring to is 'normative realism' overall - the notion that there are any facts about what we have most reason to believe or do. Moral fa... (read more)

So, the mountain disanalogy: sometimes there are things we have opinions about, and yet there is no clean separation between us and the thing. We don't perceive it in a way that we can agree is trusted or privileged. We receive vague, sparse data about it, and the subject is plagued by disagreement, self-doubt, and claims that other people are doing it all wrong.
This isn't to say that we should give up entirely, but it means that we might have to shift our expectations of what sort of explanation or justification we are "entitled"
... (read more)
3SDM3dI appear to be accidentally writing [https://www.lesswrong.com/posts/hjPEw6HDnzmNvZAcH/sdm-s-shortform?commentId=geyRWDk6s4Th6ySfC] a sequence [https://www.lesswrong.com/posts/hjPEw6HDnzmNvZAcH/sdm-s-shortform?commentId=QPmz3yRXApLzYEPZs] on moral realism, or at least explaining what moral realists like about moral realism - for those who are perplexed about why it would be worth wanting or how anyone could find it plausible. Many philosophers outside this community have an instinct that normative anti-realism (about any irreducible facts about what you should do) is self-defeating, because it includes a denial that there are any final, buck-stopping answers to why we should believe something based on evidence, and therefore no truly, ultimately impartial way to even express the claim that you ought to believe something. I think that this is a good, but not perfect, argument. My experience has been that traditional analytic philosophers find this sort of reasoning appealing, in part because of the legacy of how Kant tried to deduce the logically necessary preconditions for having any kind of judgement or experience. I don't find it particularly appealing, but I think that there's a case for it here, if there ever was. IRREDUCIBLE NORMATIVITY AND RECURSIVE JUSTIFICATION On normative antirealism, what 'you shouldn't believe that 2+2=5' really means is just that someone else's mind has different basic operations to yours. It is obvious that we can't stop using normative concepts, and couldn't use the concept 'should' to mean 'in accordance with the basic operations of my mind', but this isn't an easy case of reduction like Water=H20. There is a deep sense in which normative terms really can't mean what we think they mean if normative antirealism is true. This must be accounted for by either a deep and comprehensive question-dissolving, or by irreducible normative facts. This 'normative indispensability' is not an argument, but it can be made into one: If you've rea
1SDM14hPRESCRIPTIVE ANTI-REALISM An extremely unscientific and incomplete list of people who fall into the various categories I gave in the previous post: 1. Accept Convergence and Reject Normativity: Eliezer Yudkowsky, Sam Harris (Interpretation 1), Peter Singer in The Expanding Circle, RM Hare [https://www.deontology.com/] and similar philosophers, HJPEV 2. Accept Convergence and Accept Normativity: Derek Parfit, Sam Harris (Interpretation 2), Peter Singer today, the majority of moral philosophers [https://philpapers.org/surveys/results.pl], Dumbledore 3. Reject Convergence and Reject Normativity: Robin Hanson, Richard Ngo (?) [http://thinkingcomplete.blogspot.com/2018/09/rational-and-real.html], Lucas Gloor (?) [https://longtermrisk.org/the-case-for-suffering-focused-ethics/#IV_Other_values_have_diminishing_returns] most Error Theorists, Quirrell 4. Reject Convergence and Accept Normativity: A few moral philosophers, maybe Ayn Rand and objectivists? The difference in practical, normative terms between 2), 4) and 3) is clear enough - 2 is a moral realist in the classic sense, 4 is a sceptic about morality but agrees that irreducible normativity exists, and 3 is a classic 'antirealist' who sees morality as of a piece with our other wants. What is less clear is the difference between 1) and 3). In my caricature above, I said Quirrell and Harry Potter from HPMOR were non-prescriptive and prescriptive anti-realists, respectively, while Dumbledore is a realist. Here is a dialogue between them that illustrates the difference [https://www.hpmor.com/chapter/20]. The relevant issue here is that Harry draws a distinction between moral and non-moral reasons even though he doesn't believe in irreducible normativity. In particular, he's committed to a normative ethical theory, preference utilitarianism, as a fundamental part of how he values things. Here is another illustration of the difference. Lucas Gloor (3) explains the case for suffering-focussed ethics, based on the cl
MichaelA's Shortform

Collection of discussions of epistemic modesty, "rationalist/EA exceptionalism", and similar

These are currently in reverse-chronological order.

Some thoughts on deference and inside-view models - Buck Shlegeris

Epistemic learned helplessness - Scott Alexander, 2019

AI, global coordination, and epistemic humility - Jaan Tallinn, 2018

In defence of epistemic modesty - Greg Lewis, 2017

Inadequate Equilibria - Eliezer Yudkowsky, 2017

Common sense as a prior - Nick Beckstead, 2013

From memory, I think a decent amount of Rationality: A-Z by Eliezer Yudkowsk... (read more)

Charlie Steiner's Shortform

It seems like there's room for the theory of logical-inductor-like agents with limited computational resources, and I'm not sure if this has already been figured out. The entire trick seems to be that when you try to build a logical inductor agent, it's got some estimation process for math problems like "what does my model predict will happen?" and it's got some search process to find good actions, and you don't want the search process to be more powerful than the estimator because then it will find edge cases. In fact, you want them to be linked somehow, ... (read more)
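A toy illustration of the estimator-vs-search concern (my own sketch, not from the original post): if the estimator is unbiased but noisy, then the harder the search optimizes against it, the more the chosen action's estimated value overshoots its true value - exactly the "edge cases" worry.

```python
import random

def estimate(true_value, noise=1.0):
    # Weak estimator: unbiased but noisy guess of an action's true value.
    return true_value + random.gauss(0, noise)

def search(num_candidates):
    # Search proposes candidate actions (all with true value 0 here) and
    # picks the one the estimator likes best.
    estimates = [estimate(0.0) for _ in range(num_candidates)]
    best = max(range(num_candidates), key=lambda i: estimates[i])
    return estimates[best]  # estimated value of the chosen action (true value is 0)

random.seed(0)
for n in [1, 10, 1000]:
    avg = sum(search(n) for _ in range(2000)) / 2000
    print(f"search over {n:4d} candidates: chosen action looks {avg:.2f} better than it is")
# The stronger the search (more candidates), the larger the overestimate --
# the "edge cases" the post worries about.
```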

Adele Lopez's Shortform

Half-baked idea for low-impact AI:

As an example, imagine a board that's lodged directly into the wall (no other support structures). If you make it twice as wide, then it will be twice as stiff, but if you make it twice as thick, then it will be eight times as stiff. On the other hand, if you make it twice as long, it will be eight times more compliant.
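(For reference: these factors match the standard formula for a rectangular cantilever fixed at one end, which I'm assuming is the setup being described. The stiffness at the free end is

$$k = \frac{3EI}{L^3}, \qquad I = \frac{w t^3}{12} \quad\Rightarrow\quad k \propto \frac{w\,t^3}{L^3},$$

so doubling the width w doubles the stiffness, doubling the thickness t multiplies it by 2³ = 8, and doubling the length L divides it by 8, i.e. makes it eight times more compliant.)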

In a similar way, different action parameters will have scaling exponents (or more generally, functions). So one way to decrease the risk of high-impact actions would be to make sure that the scaling expo... (read more)
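A rough sketch of how an action parameter's "scaling exponent" could be estimated numerically (my own illustration; the impact functions here are hypothetical stand-ins, not anything from the post):

```python
import math

def scaling_exponent(impact, x, eps=1e-3):
    # Local exponent n such that impact(x) ~ x^n near x,
    # estimated as the slope d log(impact) / d log(x).
    lo, hi = x * (1 - eps), x * (1 + eps)
    return (math.log(impact(hi)) - math.log(impact(lo))) / (math.log(hi) - math.log(lo))

# Hypothetical impact functions of two different action parameters:
print(scaling_exponent(lambda L: L**3, 2.0))  # ~3: impact grows steeply with this parameter
print(scaling_exponent(lambda w: w, 2.0))     # ~1: impact grows only linearly with this one
```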

Raemon's Scratchpad

Somewhat delighted to see that Google Scholar now includes direct links to PDFs when it can find them, instead of making you figure out how to use a given journal website.

There's a plug-in that will look for PDFs for you that match the page you're on or the text you have highlighted.

5Jason Gross1yThis has been true for years. At least six, I think? I think I started using Google scholar around when I started my PhD, and I do not recall a time when it did not link to pdfs.
Sherrinford's Shortform

The results of Bob Jacobs' LessWrong survey are quite interesting. It's a pity the sample is so small.

The visualized results (link in his post) are univariate, but I would like to highlight some things:

  • 49 out of 56 respondents identifying as "White"
  • 53 out of 59 respondents born male, and 46 out of 58 identifying as male cisgender
  • 47 out of 59 identifying as heterosexual (comparison: https://en.wikipedia.org/wiki/Demographics_of_sexual_orientation)
  • 1 out of 55 working in a "blue collar" profession
  • Most people identify as "left of c... (read more)

3Sherrinford2dInteresting. I had maybe read the Wikipedia article a long time ago, but it did not leave any impression in my memory. Now rereading it, I did not find it dramatic, but I see your point. Tbh, I still do not fully understand how Wikipedia works (that is, I do not have a model of who determines how an article develops). And the "originated" (ok, maybe that is only almost and not fully identical to "first grew") is just what I got from the article. The problem with the association is that it is hard to definitively determine what even makes things mentionable, but once somebody publicly has to distance himself from something, this indicates a public kind of association. Further reading the article, my impression is that it indeed cites things that in Wikipedia count as sources for its claims. If the impression of LessWrong is distorted, then this may be a problem of what kinds of things on LessWrong are covered by media publications? Or maybe it is all just selective citing, but then it should be possible to cite other things.
3Viliam2dIn theory, Wikipedia strives to be impartial. In practice, the rules are always only as good as the judges who uphold them. (All legal systems involve some degree of human judgment somewhere in the loop, because it is impossible to write a set of rules that covers everything and doesn't allow some clever abuse. That's why we talk about the letter and the spirit of the law.) How to become a Wikipedia admin? You need to spend a lot of time editing Wikipedia in a way other admins consider helpful, and you need to be interested in getting the role. (Probably a few more technical details I forgot.) The good thing is that by doing a lot of useful work you send a costly signal that you care about Wikipedia. The bad thing is that if a certain political opinion becomes dominant among the existing admins, there is no mechanism to fix this bias; it's actually the other way round, because edits disagreeing with the consensus would be judged as harmful, and would probably disqualify their author from becoming an admin in the future. I don't assume bad faith from most Wikipedia editors. Being wrong about something feels the same from inside as being right; and if other people agree with you, that is usually a good sign. But if you have a few bad actors who can play it smart, who can pretend that their personal grudges are how they actually see the world... considering that other admins already see them as part of the same team, and the same political bias means they already roughly agree on who are the good guys and who are the bad guys... it is not difficult to defend their decisions in front of a jury of their peers. An outsider has no chance in this fight, because the insider is fluent in the local lingo. Whatever they want to argue, they can find a wiki-rule pointing in that direction; of course it would be just as easy for them to find a wiki-rule pointing in the opposite direction (e.g. if you want to edit an article about something you are personally involved with, you have

Thanks for the history overview! Very interesting. Concerning the wikipedia dynamics, I agree that this is plausible, as it is a plausible development of nearly every volunteer organization, in particular if they try to be grassroots-democratic. The wikipedia-media problem is known (https://xkcd.com/978/) though in this particular case I was a bit surprised about the "original research" and "reliable source" distinction. Many articles there did not seem very "serious". On the other hand, during this whole "lost in hypersp... (read more)

Sunny's Shortform

It's happened again: I've realized that one of my old beliefs (pre-LW) is just plain dumb.

I used to look around at all the various diets (Paleo, Keto, low carb, low fat, etc.) and feel angry at people for having such low epistemic standards. Like, there's a new theory of nutrition every two years, and people still put faith in them every time? Everybody swears by a different diet and this is common knowledge, but people still swear by diets? And the reasoning is that "fat" (the nutrient) has the same name as "fat" (the body part people are trying to get rid... (read more)

1TAG3dReductionism is a combination of three claims: 1. That many things are made of smaller components. 2. That those things can be understood in terms of the operations of their components. 3. That there's an irreducibly basic level; it's not turtles all the way down. If it's always the case that something that isn't explicable in terms of its parts is mysterious, then the lowest level is mysterious. If nothing is mysterious, if you apply the argument against mysterious answers without prejudice, reductionism is false. There isn't a consistent set of principles here. Continued... Naturalism is the claim that there is a bunch of fundamental properties that just are, at the bottom of the stack, and everything is built up from that. Supernaturalism is the claim that the intrinsic stuff is at the top of the stack, and everything else is derived from it top-down. That may be 100% false, but it is the actual claim. There's a thing called the principle of charity, where one party interprets the other's statements so as to maximise their truth value. This only enhances communication if the truth is not basically in dispute... that's the easy case. The hard case is when there is a basic dispute about what's true. In that case, it's not helpful to fix the other person's claims by making them more reasonable from your point of view. Anyway, that's how we ended up with "God must have superneurons in his superbrain".
2Viliam3dFeels like in the top-down universe, science shouldn't work at all. I mean, when you take a magnifying glass and look at the details, they are supposedly generated on the fly to fit the larger picture. Then you apply munchkinry to the details and invent an atomic bomb or quantum computer... which means... what exactly, from the top-down perspective? Yeah, you can find an excuse, e.g. that some of those top-down principles are hidden like Easter eggs, waiting to be discovered later. That the Platonic idea of smartphones has been waiting for us since the creation of the universe, but was only revealed to the recent generation. Which would mean that the top-down universe has some reason to pretend to be bottom-up, at least in some aspects... Okay, the same argument could be made that quantum physics pretends to be classical physics at larger scale, or relativity pretends to be Newtonian mechanics at low speeds... as if the scientists are trying to make up silly excuses for why their latest magic works but totally "doesn't contradict" what the previous generations of scientists were telling us... Well, at least it seems like the bottom-up approach is fruitful, whether the true reason is that the universe is bottom-up, or that the universe is top-down in a way that tries really hard to pretend that it is actually bottom-up (either in the sense that when it generates the -- inherently meaningless -- details for us, it makes sure that all consequences of those details are compatible with the preexisting Platonic ideas that govern the universe... or like a Dungeon Master who allows the players to invent all kinds of crazy stuff and throw the entire game off balance, because he values consistency above everything). More importantly, in a universe where there is magic all the way up, what sense does it make to adopt the essentially half-assed approach, where you believe in the supernatural but also kinda use logic except not too seriously... might as well throw the logic aw

The basic claim of a top-down universe is a short string that doesn't contain much information. About the same amount as a basic claim of reductionism.

The top-down claim doesn't imply a universe of immutable physical law, but it doesn't contradict it either.

The same goes for the bottom-up claim. A universe of randomly moving high entropy gas is useless for science and technology, but compatible with reductionism.

But all this is rather beside the point. Even if supernaturalism is indefensible, you can't refute it by changing it into something else.

AllAmericanBreakfast's Shortform

Are rationalist ideas always going to be offensive to just about everybody who doesn’t self-select in?

One loved one was quite receptive to Chesterton's Fence the other day. Like, it stopped their rant in its tracks and got them on board with a different way of looking at things immediately.

On the other hand, I routinely feel this weird tension. Like, to explain why I think as I do, I'd need to go through some basic rational concepts. But I expect most people I know would hate it.

I wish we could figure out ways of getting this stuff across that... (read more)


You're both assuming that you have a set of correct ideas coupled with bad PR... but how well are Bayes, Aumann, and MWI (e.g.) actually doing?

5AllAmericanBreakfast2dYou're right. I need to try a lot harder to remember that this is just a community full of individuals airing their strongly held personal opinions on a variety of topics.
3Viliam2dThose opinions often have something in common -- respect for the scientific method, effort to improve one's rationality, concern about artificial intelligence -- and I like to believe it is not just a random idiosyncratic mix (a bunch of random things Eliezer likes), but different manifestations of the same underlying principle (use your intelligence to win, not to defeat yourself). However, not everyone is interested in all of this. And I would definitely like to see "somebody friendly, funny, empathic, a good performer, neat and practiced" promoting these values in a YouTube channel or in books. But that requires a talent I don't have, so I can only wait until someone else with the necessary skills does it. This reminded me of the YouTube channel of Julia Galef [https://www.youtube.com/user/measureofdoubt/videos], but the latest videos there are 3 years old.
ozziegooen's Shortform

The term for the "fear of truth" is alethophobia. I'm not familiar with many other great terms in this area (curious to hear suggestions).

Apparently "Epistemophobia" is a thing, but that seems quite different; Epistemophobia is more the fear of learning, rather than the fear of facing the truth.

One given definition of alethophobia is, 
"The inability to accept unflattering facts about your nation, religion, culture, ethnic group, or yourself"

This seems like an incredibly common issue, one that is especially talked about as of late, but without much spec

... (read more)
10NunoSempere4d
> The name comes straight from the Latin though
From the Greek as it happens. Also, alethephobia would be a double negative, with a-letheia meaning a state of not being hidden; a more natural neologism would avoid that double negative. Also, the Greek concept of truth has some differences to our own conceptualization. Bad neologism.

Ah, good to know. Do you have recommendations for other words?

4ChristianKl21dThe trend of calling things that aren't fears "-phobia" seems to me harmful for clear communication. Adjusting the definition only leads to more confusion.
Raemon's Scratchpad

An interesting thing about Supernatural Fitness (a VR app kinda like Beat Saber) is that they are leaning hard into being a fitness app rather than a game. You don't currently get to pick songs, you pick workouts, which come with pep talks and stretching and warmups.

This might make you go "ugh, I just wanna play a song" and go play Beat Saber instead. But, Supernatural Fitness is _way_ prettier and has some conceptual advances over Beat Saber.

And... I mostly endorse this and think it was the right call. I am sympathetic to "if you give people the ability t... (read more)

Sunny's Shortform

Epistemic status: really shaky, but I think there's something here.

I naturally feel a lot of resistance to the way culture/norm differences are characterized in posts like Ask and Guess and Wait vs Interrupt Culture. I naturally want to give them little pet names, like:

  • Guess culture = "read my fucking mind, you badwrong idiot" culture.
  • Ask culture = nothing, because this is just how normal, non-insane people act.

I think this feeling is generated by various negative experiences I've had with people around me, who, no matter where I am, always seem to share b... (read more)

1Sunny from QAD4dI couldn't parse this question. Which part are you referring to by "it", and what do you mean by "instead of asking you"?
2Pattern3dit (the negative experiences) - Are *they (the negative experiences) the result of (people with a "culture" whose rules you don't understand) expecting you to read *their mind, and go along with their "culture", instead of asking you to go along with their culture?

Aha, no, the mind reading part is just one of several cultures I'm mentioning. (Guess Culture, to be exact.) If I default to being an Asker but somebody else is a Guesser, I might have the following interaction with them:

Me: [looking at some cookies they just made] These look delicious! Would it be all right if I ate one?

Them: [obviously uncomfortable] Uhm... uh... I mean, I guess so...

Here, it's retroactively clear that, in their eyes, I've overstepped a boundary just by asking. But I usually can't tell in advance what things I'm allowed to ask and what t... (read more)

AllAmericanBreakfast's Shortform

I'm experimenting with a format for applying LW tools to personal social-life problems. The goal is to boil down situations so that similar ones will be easy to diagnose and deal with in the future.

To do that, I want to arrive at an acronym that's memorable, defines an action plan and implies when you'd want to use it. Examples:

OSSEE Activity - "One Short Simple Easy-to-Exit Activity." A way to plan dates and hangouts that aren't exhausting or recipes for confusion.

DAHLIA - "Discuss, Assess, Help/Ask, Leave, Intervene, Accept." An action plan for how to de... (read more)

TurnTrout's shortform feed

If you measure death-badness from behind the veil of ignorance, you’d naively prioritize well-liked, famous people with large families.

Would you prioritize the young from behind the veil of ignorance?

AllAmericanBreakfast's Shortform

I'm annoyed that I think so hard about small daily decisions.

Is there a simple and ideally general pattern to not spend 10 minutes doing arithmetic on the cost of making burritos at home vs. buying the equivalent at a restaurant? Or am I actually being smart somehow by spending the time to cost out that sort of thing?

Perhaps:

"Spend no more than 1 minute per $25 spent and 2% of the price to find a better product."

This heuristic cashes out to:

  • Over a year of weekly $35 restaurant meals, spend about $35 and roughly an hour and a quarter finding better restaurants or meal
... (read more)
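A quick sketch of the heuristic in code (my own illustration; the restaurant numbers just reproduce the example above):

```python
def optimization_budget(annual_spend_dollars):
    # "Spend no more than 1 minute per $25 spent and 2% of the price
    #  to find a better product."
    minutes = annual_spend_dollars / 25
    dollars = 0.02 * annual_spend_dollars
    return minutes, dollars

# Weekly $35 restaurant meals over a year: 52 * $35 = $1,820
minutes, dollars = optimization_budget(52 * 35)
print(f"budget: ~{minutes:.0f} minutes (~{minutes / 60:.1f} hours) and ~${dollars:.0f}")
# -> budget: ~73 minutes (~1.2 hours) and ~$36
```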
5Dagon4dFor some (including younger-me), the opposite advice was helpful - I'd agonize over "big" decisions, without realizing that the oft-repeated small decisions actually had a much larger impact on my life. To account for that, I might recommend you notice cache-ability and repetition, and budget on longer timeframes. For monthly spending, there's some portion that's really $120X decade spending (you can optimize once, then continue to buy monthly for the next 10 years), a bunch that's probably $12Y of annual spending, and some that's really $Z that you have to re-consider every month. Also, avoid the mistake of inflexible permissions. Notice when you're spending much more (or less!) time optimizing a decision than your average, but there are lots of them that actually benefit from the extra time. And lots that additional time/money doesn't change the marginal outcome by much, so you should spend less time on.

I wonder if your problem as a youth was in agonizing over big decisions, rather than learning a productive way to methodically think them through. I have lots of evidence that I underthink big decisions and overthink small ones. I also tend to be slow yet ultimately impulsive in making big changes, and fast yet hyper-analytical in making small changes.

Daily choices have low switching and sunk costs. Everybody's always comparing, so one brand at a given price point tends to be about as good as another.

But big decisions aren't just big spends. They're typica... (read more)

snog toddgrass's Shortform

The problem with mansplaining -

Why do men mansplain and why do people (particularly women) hate it? People sometimes struggle to articulate what mansplaining is and why they dislike it, but I'm surely not the discoverer of this argument.

Recently I was talking to a colleague during a strategy game session. He said "You are bad because you made these mistakes" and I said "yes I am bad at these aspects of the game. Also, you should have invested more into anti-aircraft guns". He immediately began repeating a list of mistakes I had mad... (read more)

5Viliam5dThis is a great observation! But, as often happens in political debates, "mansplaining" is a motte-and-bailey term, which could mean the thing you just described, or it could mean "a man tried to say something", or anything in between, depending on who used it and in which context. Also, it is not an exclusively male behavior, despite the name. I have no strong opinion on whether it is a mostly male behavior, because I assume that most female dominance fights happen out of my sight, for various reasons. But I am pretty sure I have seen women fighting for dominance using supposedly factual arguments a few times. Depends on whether the specific woman finds dominance attractive. And that probably also depends on the type/degree of dominance, her mood, and how well you know each other. Yes, this "partially agree, partially disagree" strategy seems like the golden middle way between being disagreeable and boring.

I agree with all of those points.

> Depends on whether the specific woman finds dominance attractive. And that probably also depends on the type/degree of dominance, her mood, and how well you know each other. Yes, this "partially agree, partially disagree" strategy seems like the golden middle way between being disagreeable and boring.

I think many women, perhaps a majority, find a more dominant man attractive. Basically ensure any fact-based dominance display doesn't make the other person feel stupid. Good rule for lots of interactions.

3snog toddgrass5dTrue. I should rephrase my thesis "What people often mean when they say "mansplaining" is explanations which are intended to express dominance rather than to mutually arrive at better understanding".
Matt Goldenberg's Short Form Feed

"Medium Engagement Activities" are the death of culture creation.

Expecting someone to show up for a ~1-hour-or-longer event every week that helps shape your culture, or requiring them to follow a dress code, is great for culture creation - large commitments are good in the early stages.

Removing trivial inconveniences to following your values and rules is also great for building culture - doing things that require no or low engagement but still help shape group cohesion. Design does a lot here - no-commitment tools to shape culture are great during the early stages.

But me... (read more)

Raemon's Scratchpad

With some frequency, LW gets a new user writing a post that's sort of... in the middle of having their mind blown by the prospect of quantum immortality and MWI. I'd like to have a single post to link them to that makes a fairly succinct case for "it adds up to normality", and I don't have a clear sense of what to do other than link to the entire Quantum Physics sequence.

Any suggestions? Or, anyone feel like writing said post if it doesn't exist yet?

Gurkenglas's Shortform

The WaveFunctionCollapse algorithm measures whichever tile currently has the lowest entropy. GPT-3 always just measures the next token. Of course in prose those are usually the same, but I expect some qualitative improvements once we get structured data with holes such that any might have low entropy, a transformer trained to fill holes, and the resulting ability to pick which hole to fill next.
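A minimal sketch of the selection rule being described (my own simplification; real WFC weights tiles and propagates constraints, which is omitted here): each unresolved cell has a set of tiles still allowed there, and the algorithm "measures" the cell with the lowest entropy, i.e. the fewest remaining possibilities.

```python
import math
import random

def pick_cell_to_collapse(possibilities):
    # possibilities: dict mapping cell -> set of tiles still allowed there.
    # Choose the undecided cell with the lowest entropy (fewest options).
    undecided = {c: opts for c, opts in possibilities.items() if len(opts) > 1}
    if not undecided:
        return None  # everything is already determined
    return min(undecided, key=lambda c: math.log(len(undecided[c])))

grid = {
    (0, 0): {"sea"},                   # already observed
    (0, 1): {"sea", "coast"},          # 2 options -> lowest entropy, picked next
    (1, 1): {"sea", "coast", "land"},  # 3 options
}
cell = pick_cell_to_collapse(grid)
grid[cell] = {random.choice(sorted(grid[cell]))}  # "measure" it
print(cell, grid[cell])
```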

Until then, I expect the prompts/GPT protocols that happen to present the holes in your data in the order WFC would have picked to perform well - i.e., ask it to show its work; don't ask it to write the bottom line of its reasoning process first.

Long shortform short: Include the sequences in your prompt as instructions :)
