This is a special post for quick takes by Raemon. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

This is an experiment in short-form content on LW2.0. I'll be using the comment section of this post as a repository of short, sometimes-half-baked posts that either:

  1. don't feel ready to be written up as a full post
  2. I think the process of writing them up might make them worse (i.e. longer than they need to be)

I ask people not to create top-level comments here, but feel free to reply to comments like you would a FB post.

Raemon's Shortform
535 comments
[-]Raemon8552

Reading through Backdoors as an analogy for deceptive alignment prompted me to think about a LW feature I might be interested in. I don't have much math background, and have always found it very effortful to parse math-heavy posts. I expect there are other people in a similar boat.

In modern programming IDEs it's common to have hoverovers for functions and variables, and I think it's sort of crazy that we don't have that for math. So, I'm considering a LessWrong feature that:

  • takes in a post (i.e. when you save or go to publish a draft)
  • identifies the LaTeX terms in the post
  • creates a glossary for what each term means. (This should probably require confirmation by the author)
  • makes a hoverover for each term, so that mousing over it reminds you what it means (a rough sketch of the pipeline is below).
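To make the shape of this concrete, here's a minimal sketch of what the pipeline could look like. The names here (like `extractLatexTerms` and `callLlm`) are hypothetical placeholders, not anything that exists in the actual LessWrong codebase:

```typescript
interface GlossaryEntry {
  term: string;        // the raw LaTeX, e.g. "\\mathcal{F}"
  explanation: string; // LLM-drafted meaning, surfaced to the author for review
  confirmed: boolean;  // only entries the author confirms get shown to readers
}

// 1. Pull the LaTeX terms out of the post body (assuming they're delimited by $...$ / $$...$$).
function extractLatexTerms(postBody: string): string[] {
  const matches = postBody.match(/\$\$?([^$]+)\$\$?/g) ?? [];
  return [...new Set(matches.map((m) => m.replace(/\$/g, "").trim()))];
}

// 2. Ask an LLM to draft a glossary entry for each term, in the context of the post.
//    `callLlm` stands in for whatever LLM client the site actually uses.
async function draftGlossary(
  postBody: string,
  callLlm: (prompt: string) => Promise<string>,
): Promise<GlossaryEntry[]> {
  const terms = extractLatexTerms(postBody);
  return Promise.all(
    terms.map(async (term) => ({
      term,
      explanation: await callLlm(
        `In the following post, what does the LaTeX term ${term} mean?\n\n${postBody}`,
      ),
      confirmed: false, // flipped to true only after the author approves the entry
    })),
  );
}

// 3. The frontend then wraps each occurrence of a confirmed term in a hoverover/tooltip
//    component that displays its explanation on mouseover.
```

The load-bearing design choice is the `confirmed` flag: the LLM only drafts entries, and nothing is shown to readers until the author signs off on each one.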

On "Backdoors", I asked the LessWrong-integrated LLM: "what do the Latex terms here mean"?

It replied :

The LaTeX symbols in this passage represent mathematical notations. Let me explain each of them:

  1. $\mathcal{F}$: This represents a class of functions. The curly F denotes that it's a set or collection of functions.
  2. $f \in \mathcal{F}$: This means that $f$ is a function that belongs to (is an element of) the class $\mathcal{F}$.
  3. $f^*$: The asterisk superscript typ
... (read more)
9Alexander Gietelink Oldenziel
Wow! This would be awesome! I didn't look at this case in particular. In my experience, LLMs are quite good at regurgitating definitions from long math texts - which otherwise can take some effort to find by hand. 
8Jacob_Hilton
The LLM output looks correct to me.
4habryka
This does seem pretty cool. I also think it wouldn't be too hard to personalize the LLM to whatever level of math background you do have. So the things that you wrote make sense with a basic undergraduate background, but people with more formal math experience might want something that is more tailored to their higher-level understanding.  This does make confirmation by the author harder, which I do think seems kind of important.
4Raemon
In the examples above, there are two pieces: "here's what this terminology typically means" (which seems less useful if you're already quite familiar), and "here's what it represents in this context". Would this be as simple as letting mathematicians toggle the first part off?
3Drake Thomas
I've fantasized about a good version of this feature for math textbooks since college - would be excited to beta test or provide feedback about any such things that get explored! (I have a couple math-heavy posts I'd be down to try annotating in this way.)
3Zolmeister
Along the same lines, I found this analogy by concrete example exceptionally elucidative.
2Thane Ruthenis
That seems like it'd be very helpful, yes! Other related features that'd be easy to incorporate into this are John's ideas from here: I think those would also be pretty useful, including for people writing the math-heavy posts.
[-]Raemon5029

The “prompt shut down” clause seemed like one of the more important clauses in the SB 1047 bill. I was surprised that other people I talked to didn't seem to think it mattered that much, and wanted to argue/hear-arguments about it.

The clause says AI developers and compute-cluster operators are required to have a plan for promptly shutting down large AI models.

People's objections were usually:

"It's not actually that hard to turn off an AI – it's maybe a few hours of running around pulling plugs out of server racks, and it's not like we're that likely to be in the sort of hard takeoff scenario where the differences in a couple hours of manually turning it off will make the difference."

I'm not sure if this is actually true, but, assuming it's true, it still seems to me like the shutdown clause is one of the more uncomplicatedly-good parts of the bill.

Some reasons:

1. I think the ultimate end game for AI governance will require being able to quickly notice and shut down rogue AIs. That's what it means for the acute risk period to end. 

2. In the more nearterm, I expect the situation where we need to stop running an AI to be fairly murky. Shutting down an AI is going to be ve... (read more)

[-]aysja106

Largely agree with everything here. 

But, I've heard some people be concerned: "aren't basically all SSP-like plans basically fake? is this going to cement some random bureaucratic bullshit rather than actual good plans?" And yeah, that does seem plausible. 

I do think that all SSP-like plans are basically fake, and I’m opposed to them becoming the bedrock of AI regulation. But I worry that people take the premise “the government will inevitably botch this” and conclude something like “so it’s best to let the labs figure out what to do before cementing anything.” This seems alarming to me. Afaict, the current world we’re in is basically the worst case scenario—labs are racing to build AGI, and their safety approach is ~“don’t worry, we’ll figure it out as we go.” But this process doesn’t seem very likely to result in good safety plans either; charging ahead as is doesn’t necessarily beget better policies. So while I certainly agree that SSP-shaped things are woefully inadequate, it seems important, when discussing this, to keep in mind what the counterfactual is. Because the status quo is not, imo, a remotely acceptable alternative either.

Afaict, the current world we’re in is basically the worst case scenario

the status quo is not, imo, a remotely acceptable alternative either

Both of these quotes display types of thinking which are typically dangerous and counterproductive, because they rule out the possibility that your actions can make things worse.

The current world is very far from the worst-case scenario (even if you have very high P(doom), it's far away in log-odds) and I don't think it would be that hard to accidentally make things considerably worse.

2Raemon
I think one alternative here that isn't just "trust AI companies" is "wait until we have a good Danger Eval, and then get another bit of legislation that specifically focuses on that, rather than hoping that the bureaucratic/political process shakes out with a good set of SSP industry standards." I don't know that that's the right call, but I don't think it's a crazy position from a safety perspective.
[-]Akash106

I largely agree that the "full shutdown" provisions are great. I also like that the bill requires developers to specify circumstances under which they would enact a shutdown:

(I) Describes in detail the conditions under which a developer would enact a full shutdown.

In general, I think it's great to help governments understand what kinds of scenarios would require a shutdown, make it easy for governments and companies to enact a shutdown, and give governments the knowledge/tools to verify that a shutdown has been achieved.

3Michael Roe
If your AI is doing something that's causing harm to third parties that you are legally liable for .. chances are, whatever it is doing, it is doing it at Internet speeds, and even small delays are going to be very, very expensive.   I am imagining that all the people who got harmed after the first minute or so after the AI went rogue are going to be pointing at SB1047 to argue that you are negligent, and therefore liable for whatever bad thing it did.
3Michael Roe
With a nod to the recent Crowdstrike incident... if your AI is sending out packets to other people's Windows systems, and bricking them about as fast as it can send packets through its ethernet interface, your liability may be expanding rapidly. An additional billion dollars for each hour you don't shut it down sounds possible.

There was a particular mistake I made over in this thread. Noticing the mistake didn't change my overall position (and also my overall position was even weirder than I think people thought it was). But, seemed worth noting somewhere.

I think most folk morality (or at least my own folk morality), generally has the following crimes in ascending order of badness:

  • Lying
  • Stealing
  • Killing
  • Torturing people to death (I'm not sure if torture-without-death is generally considered better/worse/about-the-same-as killing)

But this is the conflation of a few different things. One axis I was ignoring was "morality as coordination tool" vs "morality as 'doing the right thing because I think it's right'." And these are actually quite different. And, importantly, you don't get to spend many resources on morality-as-doing-the-right-thing unless you have a solid foundation of the morality-as-coordination-tool.

There's actually a 4x3 matrix you can plot lying/stealing/killing/torture-killing into, where the other axis is:

  • harming the ingroup
  • harming the outgroup (who you may benefit from trading with)
  • harming powerless people who don't have the ability to trade or col
... (read more)

On the object level, the three levels you described are extremely important:

  • harming the ingroup
  • harming the outgroup (who you may benefit from trading with)
  • harming powerless people who don't have the ability to trade or collaborate with you

I'm basically never talking about the third thing when I talk about morality or anything like that, because I don't think we've done a decent job at the first thing. I think there's a lot of misinformation out there about how well we've done the first thing, and I think that in practice utilitarian ethical discourse tends to raise the message length of making that distinction, by implicitly denying that there's an outgroup.

I don't think ingroups should be arbitrary affiliation groups. Or, more precisely, "ingroups are arbitrary affiliation groups" is one natural supergroup which I think is doing a lot of harm, and there are other natural supergroups following different strategies, of which "righteousness/justice" is one that I think is especially important. But pretending there's no outgroup is worse than honestly trying to treat foreigners decently as foreigners who can't be c... (read more)

4eukaryote
Wait, why do you think these have to be done in order?

Some beliefs of mine, which I assume are different from Ben's but I think are still relevant to this question:

At the very least, your ability to accomplish anything re: helping the outgroup or helping the powerless is dependent on having spare resources to do so.

There are many clusters of actions which might locally benefit the ingroup and leave the outgroup or powerless in the cold, but which then give future generations of the ingroup more ability to take useful actions to help them. i.e. if you're a tribe in the wilderness, I'd much rather you invent capitalism and build supermarkets than that you try to help the poor. The helping of the poor is nice but barely matters in the grand scheme of things.

I don't personally think you need to halt *all* helping of the powerless until you've solidified your treatment of the ingroup/outgroup. But I could imagine future me changing my mind about that.

A major suspicion/confusion I have here is that the two frames:

  • "Help the ingroup, so that the ingroup eventually has the bandwidth and slack to help the outgroup and the powerless", and
  • "Help the ingroup, because it's convenient and they're the ingroup"

Look... (read more)

8Benquo
Attention is scarce and there are lots of optimization processes going on, so if you think the future is big relative to the present, interventions that increase the optimization power serving your values are going to outperform direct interventions. This doesn't imply that we should just do infinite meta, but it does imply that the value of direct object-level improvements will nearly always be via how they affect different optimizing processes.
2Raemon
A lot of this makes sense. Some of it feels like I haven't quite understood the frame you're using (and unfortunately I can't specify further which parts those are because it's a bit confusing). One thing that seems relevant: My preference to "declare staghunts first and get explicit buy-in before trying to do anything cooperatively-challenging" feels quite related to the "ambiguity over who is in the ingroup causes problems" thing.

This feels like the most direct engagement I've seen from you with what I've been trying to say. Thanks! I'm not sure how to describe the metric on which this is obviously to-the-point and trying-to-be-pin-down-able, but I want to at least flag an example where it seems like you're doing the thing.

Periodically I describe a particular problem with the rationalsphere with the programmer metaphor of:

"For several years, CFAR took the main LW Sequences Git Repo and forked it into a private branch, then layered all sorts of new commits, ran with some assumptions, and tweaked around some of the legacy code a bit. This was all done in private organizations, or in-person conversation, or at best, on hard-to-follow-and-link-to-threads on Facebook.

"And now, there's a massive series of git-merge conflicts, as important concepts from CFAR attempt to get merged back into the original LessWrong branch. And people are going, like 'what the hell is focusing and circling?'"

And this points towards an important thing about _why_ I think it's important to keep people actually writing down and publishing their longform thoughts (esp the people who are working in private organizations).

And I'm not sure how to actually really convey it properly _without_ the programming metaphor. (Or, I suppose I just could. Maybe if I simply remove the first sentence the description still works. But I feel like the first sentence does a lot of important work in communicating it clearly)

We have enough programmers that I can basically get away with it anyway, but it'd be nice to not have to rely on that.

[-]Raemon295

There's a skill of "quickly operationalizing a prediction, about a question that is cruxy for your decisionmaking."

And, it's dramatically better to be very fluent at this skill, rather than "merely pretty okay at it."

Fluency means you can actually use it day-to-day to help with whatever work is important to you. Day-to-day usage means you can actually get calibrated re: predictions in whatever domains you care about. Calibration means that your intuitions will be good, and _you'll know they're good_.

Fluency means you can do it _while you're in the middle of your thought process_, and then return to your thought process, rather than awkwardly bolting it on at the end.

I find this useful at multiple levels-of-strategy. i.e. for big picture 6 month planning, as well as for "what do I do in the next hour."

I'm working on this as a full blogpost but figured I would start getting pieces of it out here for now.

A lot of this skill is building off on CFAR's "inner simulator" framing. Andrew Critch recently framed this to me as "using your System 2 (conscious, deliberate intelligence) to generate questions for your System 1 (fast intuition) to answer." (Whereas previously, he'd known System 1 ... (read more)

5Viliam
Looking forward to specific examples, pretty please.
4romeostevensit
Tracing out the chain of uncertainty. Let's say that I'm thinking about my business and come up with an idea. I'm uncertain how much to prioritize the idea vs the other swirling thoughts. If I thought it might cause my business to 2x revenue I'd obviously drop a lot and pursue it. Ok, how likely is that based on prior ideas? What reference class is the idea in? Under what world model is the business revenue particularly sensitive to the outputs of this idea? What's the most uncertain part of that model? How would I quickly test it? Who would already know the answer? etc.
2romeostevensit
My shorthand has been 'decision leverage.' But that might not hit the center of what you're aiming at here.

I disagree with this particular theunitofcaring post, "what would you do with 20 billion dollars?", and I think this is possibly the only area where I disagree with theunitofcaring's overall philosophy, so it seemed worth mentioning. (This crops up occasionally in her other posts but it is most clear cut here).

I think if you got 20 billion dollars and didn't want to think too hard about what to do with it, donating it to the Open Philanthropy Project is a pretty decent fallback option.

But my overall take on how to handle the EA funding landscape has changed a bit in the past few years. Some things that theunitofcaring doesn't mention here, which seem to at least warrant thinking about:

[Each of these has a bit of a citation-needed, that I recall hearing or reading in reliable sounding places, but correct me if I'm wrong or out of date]

1) OpenPhil has (at least? I can't find more recent data) 8 billion dollars, and makes something like 500 million a year in investment returns. They are currently able to give 100 million away a year.
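Doing the rough arithmetic on those numbers (which, again, carry a citation-needed flag):

$$(\$500\text{M returns} - \$100\text{M grants}) / \$8\text{B} \approx 5\%\ \text{net growth per year}$$

i.e. at those rates the endowment compounds faster than it's being disbursed.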

They're working on building more capacity so they can give more. But for the foreseeable future, they _can't_ actually spend more m... (read more)

[-]Raemon242

A major goal I had for the LessWrong Review was to be "the intermediate metric that let me know if LW was accomplishing important things", which helped me steer.

I think it hasn't super succeeded at this.

I think one problem is that it just... feels like it generates stuff people liked reading, which is different from "stuff that turned out to be genuinely important."

I'm now wondering "what if I built a power-tool that is designed for a single user to decide which posts seem to have mattered the most (according to them), and, then, figure out which intermediate posts played into them." What would the lightweight version of that look like?

Another thing is, like, I want to see what particular other individuals thought mattered, as opposed to a generic aggregate that doesn't have any theory underlying it. Making the voting public veers towards some kind of "what did the cool people think?" contest, so I feel anxious about that, but, I do think the info is just pretty useful. But like, what if the output of the review is a series of individual takes on what-mattered-and-why, collectively, rather than an aggregate vote?

91a3orn
So Alasdair MacIntyre, says that all enquiry into truth and practical rationality takes place within a tradition, sometimes capital-t Tradition, that provides standards for things like "What is a good argument" and "What things can I take for granted" and so on. You never zoom all the way back to simple self-evident truths or raw-sense data --- it's just too far to go. (I don't know if I'd actually recommend MacIntyre to you, he's probably not sufficiently dense / interesting for your projects, he's like a weird blend of Aquinas and Kuhn and Lakatos, but he is interesting at least, if you have a tolerance for.... the kind of thing he is.) What struck me with a fair number of reviews, at this point, was that they seemed... kinda resigned to a LW Tradition, if it ever existed, no longer really being a single thing? Like we don't have shared standards any more for what is a good argument or what things can be taken for granted (maybe we never did, and I'm golden-age fallacying). There were some reviews saying "idk if this is true, but it did influence people" and others being like "well I think this is kinda dumb, but seems important" and I know I wrote one being like "well these are at least pretty representative arguments of the kind of things people say to each other in these contexts." Anyhow what I'm saying is that -- if we operate in a MacIntyrean frame -- it makes sense to be like "this is the best work we have" within a Tradition, but humans start to spit out NaNs / operation not defined if you try to ask them "is this the best work we have" across Traditions. I don't know if this is true of ideal reasoners but it does seem to be true of... um, any reasoners we've ever seen, which is more relevant.
5Elizabeth
I wonder if dramatically shrinking the review's winners' circle would help? Right now it feels huge to me. 
2Raemon
What do you mean by winner's circle? Like top 10 instead of top 50, or something else?
2Elizabeth
yeah, top 10 or even just top 5. 
4ryan_greenblatt
Skimming the review posts for 2022, I think about 5/50 taught me something reasonably substantial and useful. I think another 10/50 provide a useful short idea and a label/pointer for that idea, but don't really provide a large valuable lesson. Perhaps 20/50 are posts I might end up referring to at some point or recommending someone read. Overall, I think I tend to learn way more in person talking to people than from LW posts, but I think LW posts are useful to reference reasonably often.
4Raemon
Those numbers sound reasonable to me (i.e. I might give similar numbers, although I'd probably list different posts than you).

Another angle I've had here: in my preferred world, the "Best of LessWrong" page makes explicit that, in some sense, very few (possibly zero?) posts actually meet the bar we'd ideally aspire to. The Best of LessWrong page highlights the best stuff so far, but I think it'd be cool if there was a deliberately empty, aspirational section. But, then I feel a bit stuck on "what counts for that tier?"

Here's another idea: Open Problems (and: when voting on Best of LessWrong, you can 'bet' that a post will contribute to solving an Open Problem).

Open Problems could be a LessWrong feature which is basically a post describing an important, unsolved problem. They'd each be owned by a particular author or small group, who get to declare when they consider the problem "solved." (If you want people to trust/care about the outcome of a particular Open Problem, you might choose two co-owners who are sort of adversarial collaborators, and they have to both agree it was solved.)

Two use-cases for Open Problems could be:

  • As a research target for an individual researcher (or team), i.e. setting the target they're ultimately aiming for.
  • As a sort of X-Prize, for others to attempt to contribute to.

So we'd end up with problem statements like:

  • "AI Alignment for superintelligences is solved" (maybe Eliezer and Paul cosign a problem statement on that)
  • You (Ryan) and Buck could formulate some kind of Open Problem on AI Control
  • I'd like there to be some kind of "we have a rationality training program that seems to demonstrably work"

And then there's a page that highlights "these are the open problems people on LessWrong have upvoted the most as 'important'", and "here are the posts that people are betting will turn out to be relevant to the final solution." (maybe this is operationalized as, like, a manifold market bet about whether the problem-autho
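A rough sketch of the data shape an Open Problems feature like this implies (hypothetical field names, not an actual LessWrong schema):

```typescript
interface OpenProblem {
  title: string;              // e.g. "AI Alignment for superintelligences is solved"
  owners: string[];           // one owner, or two adversarial co-owners who must both agree it's solved
  solved: boolean;            // only the owners can flip this
  importanceVotes: number;    // how many people upvoted the problem as "important"
  candidatePostIds: string[]; // posts that Best-of-LessWrong voters bet will contribute to a solution
}
```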
2ryan_greenblatt
I don't think that a solution to open problems being posted on LW would indicate that LW (the website and org, not the surrounding community) was accomplishing something useful. E.g., imagine using the same metric for arXiv. (This case is more extreme, but I think it corresponds somewhat.) Awkwardly, I think the existence of good posts is unlikely to track LW's contribution. This seems especially true for posts about solutions to technical problems. The marginal contribution of LW is more in making it more likely that better posts are read and in making various conversations happen (with a variety of other diffuse potential advantages). I don't know what a good metric for LW is.
2Raemon
I'm not 100% sure I got your point. I think (but am unsure) that what I care about is more like a metric for "is useful intellectual progress getting made" (whether or not LessWrong-the-website was causal in that progress). The point here is not to evaluate the Lightcone team's work, but for the community to have a better benchmark for its collective progress (which then hopefully, like, improves credit-assignment, which then hopefully improves our ability to collectively focus on useful stuff as the community scales). This point does seem interesting though, and maybe a different frame than I had previously been thinking in:
2ryan_greenblatt
Seems reasonable. From my perspective LW review is very bad for measuring overall (human) progress on achieving good things, though plausibly better than any other specific review or ranking process that has a considerable amount of buy in.
2Raemon
I wasn't quite sure from your phrasings:  Do you think replacing (or at least combining) LW Review with the Open Problems frame would be an improvement on that axis? Also: does it seem useful to you to measure overall progress on [the cluster of good things that the rationality and/or alignment community are pointed at?]?
2ryan_greenblatt
Uh, maybe for combining? I think my main complaint with LW review as a metric is more just that I disagree with the preferences of other people and think that a bunch of work is happening in places other than LW. I don't really think Open Problems helps much with this from my perspective. (In many cases I can't name a clear and operationalized open problem and more just think "more progress here would be good.")

Something struck me recently, as I watched Kubo, and Coco - two animated movies that both deal with death, and highlight music and storytelling as mechanisms by which we can preserve people after they die.

Kubo begins "Don't blink - if you blink for even an instant, if you miss a single thing, our hero will perish." This is not because there is something "important" that happens quickly that you might miss. Maybe there is, but it's not the point. The point is that Kubo is telling a story about people. Those people are now dead. And insofar as those people are able to be kept alive, it is by preserving as much of their personhood as possible - by remembering as much as possible from their life.

This is generally how I think about death.

Cryonics is an attempt at the ultimate form of preserving someone's pattern forever, but in a world pre-cryonics, the best you can reasonably hope for is for people to preserve you so thoroughly in story that a young person from the next generation can hear the story, and palpably feel the underlying character, rich with inner life. Can see the person so clearly that he or she comes to live inside them.

Realistical... (read more)

8weft
One of the things that makes Realistically Probably Not Having Kids sad is that I'm pretty much the last of the line on my Dad's side. And I DO know stories (not much, but some) of my great-great-grandparents. Sure, I can write them down, so they exist SOMEWHERE. But in reality, when I die, that line and those stories die with me.

I wanted to just reply something like "<3" and then became self-conscious of whether that was appropriate for LW.

7habryka
Seems good to me.

In particular, I think if we make the front-page comments section filtered by "curated/frontpage/community" (i.e. you only see community-blog comments on the frontpage if your frontpage is set to community), then I'd feel more comfortable posting comments like "<3", which feels correct to me.

[-]Raemon236

Yesterday I was at a "cultivating curiosity" workshop beta-test. One concept was "there are different mental postures you can adopt, that affect how easy it is to notice and cultivate curiosities."

It wasn't exactly the point of the workshop, but I ended up with several different "curiosity-postures", that were useful to try on while trying to lean into "curiosity" re: topics that I feel annoyed or frustrated or demoralized about.

The default stances I end up with when I Try To Do Curiosity On Purpose are something like:

1. Dutiful Curiosity (which is kinda fake, although capable of being dissociatedly autistic and noticing lots of details that exist and questions I could ask)

2. Performatively Friendly Curiosity (also kinda fake, but does shake me out of my default way of relating to things. In this, I imagine saying to whatever thing I'm bored/frustrated with "hullo!" and try to acknowledge it and give it at least some chance of telling me things)

But some other stances to try on, that came up, were:

3. Curiosity like "a predator." "I wonder what that mouse is gonna do?"

4. Earnestly playful curiosity. "oh that [frustrating thing] is so neat, I wonder how it works! what's it gonna ... (read more)

I started writing this a few weeks ago. By now I have other posts that make these points more cleanly in the works, and I'm in the process of thinking through some new thoughts that might revise bits of this.

But I think it's going to be a while before I can articulate all that. So meanwhile, here's a quick summary of the overall thesis I'm building towards (with the "Rationalization" and "Sitting Bolt Upright in Alarm" post, and other posts and conversations that have been in the works).

(By now I've had fairly extensive chats with Jessicata and Benquo and I don't expect this to add anything that I didn't discuss there, so this is more for other people who're interested in staying up to speed. I'm separately working on a summary of my current epistemic state after those chats)

  • The rationalsphere isn't great at applying rationality to its own internal politics
    • We don't seem to do much better than average. This seems like something that's at least pretty sad, even if it's a true brute fact about the world.
    • There have been some efforts to fix this fact, but most of it has seemed (to me) to be missing key
... (read more)

In that case Sarah later wrote up a followup post that was more reasonable and Benquo wrote up a post that articulated the problem more clearly. [Can't find the links offhand].

"Reply to Criticism on my EA Post", "Between Honesty and Perjury"

4Raemon
Thanks! I do still pretty* much endorse "Between Honesty and Perjury." *avoiding making a stronger claim here since I only briefly re-read it and haven't re-thought-through each particular section and claim. But the overall spirit it's pointing to is quite important. [Edit: Ah, well, in the comments there I apparently expressed some specific agreements and disagreements that seems... similar in shape to my current agreement and disagreement with Ben. But I think in the intervening years I've updated a bit towards "EA's epistemic standards should be closer to Ben's standards than I thought in 2017."]
9Dagon
Thank you for the effort and clarity of thought you're putting into this. One thing you may already be considering, but I haven't seen it addressed directly: Hobbyists vs fanatics vs professionals (or core/periphery, or founders/followers/exploiters, or any other acknowledgement of different individual capabilities and motives). What parts of "the community" are you talking about when you address various issues? You hint at this in the money/distortion topic, but you're in danger of abstracting "motivation" way too far, and missing the important details of individual variation. Also, it's possible that you're overestimating the need for legibility of reasoning over correctness of action (in the rational sense, of furthering one's true goals). I very much dispute "We don't seem to do much better than average", unless you're seriously cherry-picking your reference set. We do _WAY_ better than average both in terms of impact and in terms of transparency of reasoning. I'd love to explore some benchmarks (and copy some behaviors) if you can identify groups with similar composition and similar difficult-to-quantify goals, that are far more effective

Conversation with Andrew Critch today, in light of a lot of the nonprofit legal work he's been involved with lately. I thought it was worth writing up:

"I've gained a lot of respect for the law in the last few years. Like, a lot of laws make a lot more sense than you'd think. I actually think looking into the IRS codes would actually be instructive in designing systems to align potentially unfriendly agents."

I said "Huh. How surprised are you by this? And curious if your brain was doing one particular pattern a few years ago that you can now see as wrong?"

"I think mostly the laws that were promoted to my attention were especially stupid, because that's what was worth telling outrage stories about. Also, in middle school I developed this general hatred for stupid rules that didn't make any sense and generalized this to 'people in power make stupid rules', or something. But, actually, maybe middle school teachers are just particularly bad at making rules. Most of the IRS tax code has seemed pretty reasonable to me."

7Jiro
I think there's a difference between "Most of the IRS tax code is reasonable" and "Most of the instances where the IRS tax code does something are instances where it does reasonable things." Not all parts of the tax code are used equally often. Furthermore, most unreasonable instances of a lot of things will be rare as a percentage of the whole because there is a large set of uncontroversial background uses. For instance, consider a completely corrupt politician who takes bribes--he's not going to be taking a bribe for every decision he makes and most of the ones he does make will be uncontroversial things like "approve $X for this thing which everyone thinks should be approved anyway".

Over in this thread, Said asked the reasonable question "who exactly is the target audience with this Best of 2018 book?"

By compiling the list, we are saying: “here is the best work done on Less Wrong in [time period]”. But to whom are we saying this? To ourselves, so to speak? Is this for internal consumption—as a guideline for future work, collectively decided on, and meant to be considered as a standard or bar to meet, by us, and anyone who joins us in the future? 

Or, is this meant for external consumption—a way of saying to others, “see what we have accomplished, and be impressed”, and also “here are the fruits of our labors; take them and make use of them”? Or something else? Or some combination of the above?

I'm working on a post that goes into a bit more detail about the Review Phase, and, to be quite honest, the whole process is a bit in flux – I expect us (the LW team as well as site participants) to learn, over the course of the review process, what aspects of it are most valuable. 

But, a quick "best guess" answer for now.

I see the overall review process as having two "major phases":

  • Phase 1: Nomination/Review/Voting/Post-that-summarizes-the-voting
  • Phase 2: Compila
... (read more)

Thank you, this is a useful answer.

7[anonymous]
I'm looking forward to a bookshelf with LW review books in my living room. If nothing else, the very least this will give us is legitimacy, and legitimacy can lead to many good things.
5Hazard
+1 excitement about bookshelves :)

I've posted this on Facebook a couple times but seems perhaps worth mentioning once on LW: A couple weeks ago I registered the domain LessLong.com and redirected it to LessWrong.com/shortform. :P

A thing I might have maybe changed my mind about:

I used to think a primary job of a meetup/community organizer was to train their successor, and develop longterm sustainability of leadership.

I still hold out for that dream. But, it seems like a pattern is:

1) community organizer with passion and vision founds a community

2) they eventually move on, and pass it on to one successor who's pretty closely aligned and competent

3) then the First Successor has to move on too, and then... there isn't anyone obvious to take the reins, but if no one does the community dies, so some people reluctantly step up, and....

...then forever after it's a pale shadow of its original self.

For semi-branded communities (such as EA, or Rationality), this also means that if someone new with energy/vision shows up in the area, they'll see a meetup, they'll show up, they'll feel like the meetup isn't all that good, and then move on. Whereas they (maybe??) might have founded a new one that they got to shape the direction of more.

I think this also applies to non-community organizations (i.e. founder hands the reins to a new CEO who hands the reins to a new CEO who doesn't quite know what to do)

So... I'm kinda wonde... (read more)

2Pattern
What if the replacement isn't a replacement? If only a different person/people with a different vision/s can be found then...why not that? Or, what does the leader do, that can't be carried on?
2MikkW
Reading this makes me think of organizations which manage to successfully have several generations of  competent leadership. Something that has struck me for a while is the contrast in long-term competence between republics (not direct democracies) and hereditary monarchies. Reading through history, hereditary monarchies always seem to fall into the problem you describe, of incompetent and (physically and mentally) weak monarchs being placed at the head of a nation, leading to a lot of problems. Republics, in contrast, almost always have competent leaders - one might disagree with their goals, and they are too often appointed after their prime, when their health is declining [1], but the leaders of republics are almost always very competent people. This makes life much better for the people in the republic, and may be in part responsible for the recent proliferation of republics (though it does raise the question of why that hasn't happened sooner. Maybe the robust safeguards implemented by the Founding Fathers of the USA in their constitution were a sufficiently non-obvious, but important, social technology, to be able to make republics viable on the world stage? [2]). A key difference between monarchies and republics is that each successive generation of leadership in a republic must win an intense competition to secure their position, unlike the heirs of a monarchy. Not only this, but the competitions are usually held quite often (for example, every 4 years in Denmark, every 3 years in New Zealand), which ensures that the competitive nature of the office is kept in the public mind very frequently, making it hard to become a de facto hereditary position. By holding a competition to fill the office, one ensures that, even if the leader doesn't share the same vision as the original founder, they still have to be very competent to be appointed to the position. I contend that the usual way of appointing successors to small organizations (appointment by the previou
2Pattern
[1] Does this demonstrate:
  • a lack of younger leaders
  • older people have better shown themselves (more time in which to do so, accumulate trust, etc.)
  • ?
  • Elections (by means of voters) intentionally choose old leaders because that limits how long they can hold the position, or forces them to find a successor or delegate?

[2] George Washington's whole "only twice" thing almost seems more deliberate here. Wonder what would have happened if a similar check had been placed on political parties.
1MikkW
Regarding [1], people tend to vote for candidates they know, and politicians start out with 0 name recognition, which increases monotonically with age, always increasing but never decreasing, inherently biasing the process towards older candidates. The two-term limit was actually not intended by Washington to become a tradition, he retired after his second term because he was declining in health. It was only later that it became expected for presidents not to serve more than 2 terms. I do think the term limit on the presidency is an important guard in maintaining the competitive and representative nature of the office, and I think it's good to wonder if extending term limits to other things can be beneficial, though I am also aware of arguments pushing in the opposite direction
2Raemon
Citation? (I've only really read American Propaganda about this so not very surprised if this is the case, but hadn't heard it before)

From Wikipedia: George Washington, which cites Korzi, Michael J. (2011). Presidential Term Limits in American History: Power, Principles, and Politics page 43, -and- Peabody, Bruce G. (September 1, 2001). "George Washington, Presidential Term Limits, and the Problem of Reluctant Political Leadership". Presidential Studies Quarterly. 31 (3): 439–453:

At the end of his second term, Washington retired for personal and political reasons, dismayed with personal attacks, and to ensure that a truly contested presidential election could be held. He did not feel bound to a two-term limit, but his retirement set a significant precedent. Washington is often credited with setting the principle of a two-term presidency, but it was Thomas Jefferson who first refused to run for a third term on political grounds.

A note on the part that says "to ensure that a truly contested presidential election could be held": at this time, Washington's health was failing, and he indeed died during what would have been his 3rd term if he had run for a 3rd term. If he had died in office, he would have been immediately succeeded by the Vice President, which would set an unfortunate precedent of presidents serving until they die, then being followed by an appointed heir until that heir dies, blurring the distinction between the republic and a monarchy.

2Raemon
Thanks!
2Dagon
What's different for the organizer and first successor, in terms of their ability to do the primary job of finding their successor?  I also note the pattern you mention (one handoff mostly succeeds, community degrades rapidly around the time the first successor leaves with no great second successor).  But I also have seen a lot of cases where the founder fails to hand off in the first place, and some where it's handed off to a committee or formal governance structure, and then eventually dies for reasons that don't seem caused by succession. I wonder if you've got the causality wrong - communities have a growth/maintenance/decline curve, which varies greatly in the parameters, but not so much in the shape.  It seems likely to me that the leaders/organizers REACT to changes in the community by joining, changing their involvement, or leaving, rather than causing those changes.
3lincolnquirk
I'm not Ray, but I'll take a stab -- The founder has a complete vision for the community/meetup/company/etc. They were able to design a thing that (as long as they continue putting in energy) is engaging, and they instinctively know how to change it so that it continues being great for participants. The first successor has an incomplete, operational/keep-things-running-the-way-they-were type vision. They cargo-cult whatever the founder was doing. They don't have enough vision to understand the 'why' behind all the decisions. But putting your finger on their precise blind spot is quite hard. It's their "fault" (to the extent that we can blame anyone) that things go off the rails, but their bad decision-making doesn't actually have short term impacts that anyone can see. Instead, the impacts come all at once, once they disappear, and there becomes common knowledge that it was a house of cards the whole time. (or something. my models are fairly imprecise on this.) Anyway, why did the founder get fooled into anointing the first successor even though they don't have the skills to continue the thing? My guess is that there's a fairly strong selection effect for founders combined with "market fit" -- founders who fail to reach this resonant frequency don't pick successors, they just fail. Whatever made them great at building this particular community doesn't translate into skills at picking a successor, and that resonance may not happen to exist in any other person. Another founder-quality person would not necessarily have resonated with the existing community's frequency, so there could also be an anti-selection effect there.
3MikkW
My model differs from yours. In my view, the first successor isn't the source of most problems. The first successor usually has enough interaction and knowledge transfer from the founder, that they are able to keep things working more-or-less perfectly fine during their tenure, but they aren't able to innovate and create substantial new value, since they lack the creativity and vision of the founder. In your terms, they are cargo-culting, but they are able to cargo-cult sufficiently well to keep the organization running smoothly; but when the second (and nth) successor comes in, they haven't interacted much directly with the original founder, but instead are basing their decisions based, at most, on a vague notion of what the founder was like (though are often better served when they don't even try to follow in the footsteps of the founder), and so are unable to keep things working according to the original vision. They are cargo-culting a cargo-cult, which isn't enough to keep things working the way they need to work, at which point the organization stops being worth keeping around. During the reign of the founder, the slope of the value created over time is positive, during the reign of the first successor, the slope is approximately zero, but once the second successor and beyond take over, the slope will be negative.
1MikkW
My read on this is that it's still obviously worthwhile to train a successor, but to consider giving them clear instructions to shut down the group when it's time for them to move on, to avoid the problems that come with 3rd-generational leadership.

Crossposted from my Facebook timeline (and, in turn, crossposted there from vaguely secret, dank corners of the rationalsphere)

“So Ray, is LessLong ready to completely replace Facebook? Can I start posting my cat pictures and political rants there?”

Well, um, hmm....

So here’s the deal. I do hope someday someone builds an actual pure social platform that’s just actually good, that’s not out-to-get you, with reasonably good discourse. I even think the LessWrong architecture might be good for that (and if a team wanted to fork the codebase, they’d be welcome to try)

But LessWrong shortform *is* trying to do a bit of a more nuanced thing than that.

Shortform is for writing up early stage ideas, brainstorming, or just writing stuff where you aren’t quite sure how good it is or how much attention to claim for it.

For it to succeed there, it’s really important that it be a place where people don’t have to self-censor or stress about how their writing comes across. I think intellectual progress depends on earnest curiosity, exploring ideas, sometimes down dead ends.

I even think it involves clever jokes sometimes.

But... I dunno, if I looked ahead 5 years and saw that the Future People were using ... (read more)

Just spent a weekend at the Internet Intellectual Infrastructure Retreat. One thing I came away with was a slightly better sense of forecasting and prediction markets, and how they might be expected to unfold as an institution.

I initially had a sense that forecasting, and predictions in particular, was sort of "looking at the easy-to-measure/think-about stuff, which isn't necessarily connected to the stuff that matters most."

Tournaments over Prediction Markets

Prediction markets are often illegal or sketchily legal. But prediction tournaments are not, so this is how most forecasting is done.
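For concreteness, here's a minimal sketch of the kind of proper scoring rule (here, the Brier score) a tournament might use to rank forecasters. This is illustrative only, not a claim about how any particular tournament actually computes its rankings:

```typescript
interface Forecast {
  question: string;
  probability: number; // forecaster's stated probability that the event happens (0 to 1)
  outcome: 0 | 1;      // what actually happened
}

// Brier score: mean squared error between stated probabilities and outcomes. Lower is better.
function brierScore(forecasts: Forecast[]): number {
  const total = forecasts.reduce((sum, f) => sum + (f.probability - f.outcome) ** 2, 0);
  return total / forecasts.length;
}

// Made-up example questions, just to show the calculation.
const example: Forecast[] = [
  { question: "Country X enters armed conflict this year", probability: 0.1, outcome: 0 },
  { question: "Measure Y passes this year", probability: 0.7, outcome: 1 },
];

console.log(brierScore(example)); // 0.05
```

A forecaster who is calibrated and willing to commit to probabilities away from 50% scores better than one who hedges everything at 0.5 (which scores 0.25 per question), which is roughly the property a tournament needs in order to single out its best forecasters.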

The Good Judgment Project

Held an open tournament, the winners of which became "Superforecasters". Those people now... I think basically work as professional forecasters, who rent out their services to companies, NGOs and governments that have a concrete use for knowing how likely a given country is to go to war, or something. (I think they'd been hired sometimes by Open Phil?)

Vague impression that they mostly focus on geopolitics stuff?

High Volume and Metaforecasting

Ozzie described a vision where lots of forecasters are predicting things all the time... (read more)

More in neat/scary things Ray noticed about himself.

I set aside this week to learn about Machine Learning, because it seemed like an important thing to understand. One thing I knew, going in, is that I had a self-image as a "non technical person." (Or at least, non-technical relative to rationality-folk). I'm the community/ritual guy, who happens to have specialized in web development as my day job but that's something I did out of necessity rather than a deep love.

So part of the point of this week was to "get over myself, and start being the sort of person who can learn technical things in domains I'm not already familiar with."

And that went pretty fine.

As it turned out, after talking to some folk I ended up deciding that re-learning Calculus was the right thing to do this week. I'd learned it in college, but not in a way that connected to anything or gave me a sense of its usefulness.

And it turned out I had a separate image of myself as a "person who doesn't know Calculus", in addition to "not a technical person". This was fairly easy to overcome since I had already given myself a bunch of space to explore and change this week, and I'd spent the past few months transitioning into being ready for it. But if this had been at an earlier stage of my life and if I hadn't carved out a week for it, it would have been harder to overcome.

Man. Identities. Keep that shit small yo.

[-]Zvi120

Also important to note that "learn Calculus this week" is a thing a person can do fairly easily without being some sort of math savant.

(Presumably not the full 'know how to do all the particular integrals and be able to ace the final' perhaps, but definitely 'grok what the hell this is about and know how to do most problems that one encounters in the wild, and where to look if you find one that's harder than that.' To ace the final you'll need two weeks.)

3Raemon
Very confused about why this was downvoted.
4habryka
Maybe someone thinks that the meme of "everyone can learn calculus" is a really bad one? I remember you being similarly frustrated at the "everyone can be a programmer" meme.

I didn't downvote, but I agree that this is a suboptimal meme – though the prevailing mindset of "almost nobody can learn Calculus" is much worse.

As a datapoint, it took me about two weeks of obsessive, 15 hour/day study to learn Calculus to a point where I tested out of the first two courses when I was 16. And I think it's fair to say I was unusually talented and unusually motivated. I would not expect the vast majority of people to be able to grok Calculus within a week, though obviously people on this site are not a representative sample.

Quite fair. I had read Zvi as speaking to typical LessWrong readership. Also, the standard you seem to be describing here is much higher than the standard Zvi was describing.

-5Elo
4Pamela Fox
I went on a 4-month Buddhist retreat, and one week covered "Self-images". We received homework that week to journal our self-images - all of them. Every time I felt some sense of self, like "The self that prides itself on being clean" or "The self that's playful and giggly", I'd write it down in my journal. I ended up filling 20 pages over a month period, and learning so much about the many selves my mind/body were trying to convey to the world. I also discovered how often two self-images would compete with each other. Observing the self-images helped them to be less strongly attached. It sounds like you discovered that yourself this week. You might find such an exercise useful for discovering more of that.

High Stakes Value and the Epistemic Commons

I've had this in my drafts for a year. I don't feel like the current version of it is saying something either novel or crisp enough to quite make sense as a top-level post, but wanted to get it out at least as a shortform for now.

There's a really tough situation I think about a lot, from my perspective as a LessWrong moderator. These are my personal thoughts on it.

The problem, in short: 

Sometimes a problem is epistemically confusing, and there are probably political ramifications of it, such that the most qualified people to debate it are also in conflict with billions of dollars on the line and the situation is really high stakes (i.e. the extinction of humanity) such that it really matters we get the question right.

Political conflict + epistemic murkiness means that it's not clear what "thinking and communicating sanely" about the problem looks like, and people have (possibly legitimate) reasons to be suspicious of each other's reasoning.

High Stakes means that we can't ignore the problem.

I don't feel like our current rationalist discourse patterns are sufficient for this combo of high stakes, political conflict, and epistemi... (read more)

21a3orn
This intersects sharply with your prior post about feedback loops, I think. As it is really hard / maybe impossible (???) for individuals to reason well in situations where you do not have a feedback loop, it is really hard / maybe impossible to make a community reason well in a situation without feedback loops. Like at some point, in a community, you need to be able to point to (1) canonical works that form the foundation of further thought, (2) examples of good reasoning to be imitated by everyone. If you don't have those, you have a sort of glob of memes and ideas and shit that people can talk about to signal that they "get it," but it's all kinda arbitrary and conversation cannot move on because nothing is ever established for sure. And like -- if you never have clear feedback, I think it's hard to have canonical works / examples of good reasoning other than by convention and social proof. There are works in LW which you have to have read in order to continue various conversations, but whether these works are good or not is highly disputed. I of course have some proposed ideas for how to fix the situation -- this -- but my proposed ideas would clean out the methods of reasoning and argument with which I disagree, which is indeed the problem.
2Raemon
I don't have a super strong memory of this, did you have a link? (not sure how directly relevant but was interested)
81a3orn
Your memory is fine, I was writing badly -- I meant the ideas I would propose rather than the ideas I have proposed by "proposed ideas." The flavor would be something super-empiricist like this, not that I endorse that as perfect. I do think ideas without empirical restraint loom too large in the collective.
2Chris_Leong
Have you considered hosting a discussion on this topic? I'm sure you've already had some discussions on this topic, but a public conversation could help surface additional ideas and/or perspectives that could help you make sense of this.

Posts I vaguely want to have been written so I can link them to certain types of new users:

  • "Why you can chill out about the basilisk and acausal blackmail." (The current Roko's Basilisk kinda tries to be this, but there's a type of person who shows up on LessWrong regularly who's caught in an anxious loop that keeps generating more concerns, and I think the ideal article here is more trying to break them out of the anxious loop than comprehensively explain the game theory.)
  • "FAQ: Why you can chill out about quantum immortality and everything adds up to normality." (Similar, except the sort of person who gets worked up about this is usually having a depressive spiral and worried about being trapped in an infinite hellscape)

Seems like different AI alignment perspectives sometimes are about "which thing seems least impossible."

Straw MIRI researchers: "building AGI out of modern machine learning is automatically too messy and doomed. Much less impossible to try to build a robust theory of agency first."

Straw Paul Christiano: "trying to get a robust theory of agency that matters in time is doomed, timelines are too short. Much less impossible to try to build AGI that listens reasonably to me out of current-gen stuff."

(Not sure if either of these are fair, or if other camps fit this)

5Rob Bensinger
'Straw MIRI researchers' seems basically right to me. Though if I were trying to capture all MIRI research I'd probably replace "try to build a robust theory of agency" with "try to get deconfused about powerful general-purpose intelligence/optimization" or "try to ensure that the future developers of AGI aren't flying blind; less like the black boxes of current ML, more like how NASA has to deal with some chaotic wind and weather patterns but the principles and parts of the rocket are fundamentally well-understood". 'Straw Paul Christiano' doesn't sound right to me, but I'm not sure how to fix it. Some things that felt off to me (though maybe I'm wrong about this too): * Disagreements about whether MIRI's approach is doomed or too-hard seem smaller and less cruxy to me than disagreements about whether prosaic AGI alignment is doomed. * "Timelines are too short" doesn't sound like a crux I've heard before. * A better example of a thing I think Paul thinks is pretty doomed is "trying to align AGI in hard-takeoff scenarios". I could see takeoff speed/continuity being a crux: either disagreement about the likelihood of hard takeoff, or disagreement about the feasibility of alignment given hard takeoff.

(I got nerd-sniped by trying to develop a short description of what I do. The following is my stream of thought)

+1 to replacing "build a robust theory" with "get deconfused," and with replacing "agency" with "intelligence/optimization," although I think it is even better with all three. I don't think "powerful" or "general-purpose" do very much for the tagline.

When I say what I do to someone (e.g. at a reunion) I say something like "I work in AI safety, by doing math/philosophy to try to become less confused about agency/intelligence/optimization." (I don't think I actually have said this sentence, but I have said things close.)

I specifically say it with the slashes and not "and," because I feel like it better conveys that there is only one thing that is hard to translate, but could be translated as "agency," "intelligence," or "optimization."

I think it is probably better to also replace the word "about" with the word "around" for the same reason.

I wish I had a better word for "do." "Study" is wrong. "Invent" and "discover" both seem wrong, because it is more like "invent/discover", but that feels like it is overusing the slashes. Maybe "develop"? I think I like "invent" best. (Note... (read more)

2Raemon
The thing the "timelines are too short" bit was trying to get at was "it has to be competitive with mainstream AI in order to work" (pretty sure Paul has explicitly said this), with what I thought was basically a followup assumption of "and timelines are too short to have time to get a competitive thing based off the kind of deconfusion work that MIRI does."
4Rob Bensinger
I'd have thought the Paul-argument is less timeline-dependent than that -- more like 'even if timelines are long, there's no reason to expect any totally new unexplored research direction to pay off so spectacularly that it can compete with the state of the art n years from now; and prosaic alignment seems like it may work, so we should focus more on that until we're confident it's a dead end'. The base rate of new ideas paying off in a big way, even if they're very promising-seeming at the outset, is super low. It may be useful for some people to pursue ideas like this, but (on my possibly-flawed Paul-model) the bulk of the field's attention should be on AI techniques that already have a proven track record of competitiveness, until we know this is unworkable. Whereas if you're already confident that scaled-up deep learning in the vein of current ML is unalignable, then base rates are a bit of a moot point; we have to find new approaches one way or another, even if it's hard-in-expectation. So "are scaled-up deep nets a complete dead end in terms of alignability?" seems like an especially key crux to me.
6Rob Bensinger
Caveat: I didn't run the above comments by MIRI researchers, and MIRI researchers aren't a monolith in any case. E.g., I could imagine people's probabilities in "scaled-up deep nets are a complete dead end in terms of alignability" looking like "Eliezer ≈ Benya ≈ Nate >> Scott >> Abram > Evan >> Paul", or something?
2Raemon
Okay, that is compatible with the rest of my Paul model. Does still seem to fit into the ‘what’s least impossible’ frame.

My personal religion involves two* gods – the god of humanity (who I sometimes call "Humo") and the god of the robot utilitarians (who I sometimes call "Robutil"). 

When I'm facing a moral crisis, I query my shoulder-Humo and my shoulder-Robutil for their thoughts. Sometimes they say the same thing, and there's no real crisis. For example, some naive young EAs try to be utility monks, donate all their money, never take breaks, only do productive things... but Robutil and Humo both agree that quality intellectual work requires slack and psychological health. (Both to handle crises and to notice subtle things, which you might need, even in emergencies)

If you're an aspiring effective altruist, you should definitely at least be doing all the things that Humo and Robutil agree on. (i.e. get to the middle point of Tyler Alterman's story here).

But Humo and Robutil in fact disagree on some things, and disagree on emphasis. 

They disagree on how much effort you should spend to avoid accidentally recruiting people you don't have much use for.

They disagree on how many high schoolers it's acceptable to accidentally fuck up psychologically, while you experiment with a new program to... (read more)

3Dagon
Hmm.  Does this fully deny utilitarianism?  Are these values sacred (more important than calculable tradeoffs), in some way? I'm not utilitarian for other reasons (I don't believe in comparability of utility, and I don't value all moral patients equally, or fairly, or objectively), but I think you COULD fit those priorities into a utilitarian framework, not by prioritizing them for their own sake, but acknowledging the illegibility of the values and taking a guess at how to calculate with them, and then adjusting as circumstances change.

I’ve noticed myself using “I’m curious” as a softening phrase without actually feeling “curious”. In the past 2 weeks I’ve been trying to purge that from my vocabulary. It often feels like I'm cheating, trying to pretend like I'm being a friend when actually I'm trying to get someone to do something. (Usually this is a person I'm working with, and it's not quite adversarial, we're on the same team, but it feels like it degrades the signal of true open curiosity)

2Matt Goldenberg
Have you tried becoming curious each time you feel the urge to say it? Seems strictly better than not being curious.
2Raemon
Dunno about that. On one hand, being curious seems nice on the margin. But, the whole deal here is when I have some kind of agenda I'm trying to accomplish. I do care about accomplishing the agenda in a friendly way. I don't obviously care about doing it in a curious way – the reason I generated the "I'm curious" phrase is because it was an easy hack for sounding less threatening, not because curiosity was important. I think optimizing for curiosity here is more likely to fuck up my curiosity than to help with anything.
4Matt Goldenberg
I went through something similar with phrases like "I'm curious if you'd be willing to help me move", while I really meant "I hope that you'll help me move." My personal experience was that shifting this hope/expectation to a real sense of curiosity ("Hmm, does this person want to help me move?") made it more pleasant for both of us. I became genuinely curious about their answer, and there was less pressure both internally and externally.
2Zack_M_Davis
The direct approach: "I'm curious [if/why ...]" → "Tell me [if/why ...]"
3Raemon
I do still feel flinchy about that because it does come across less friendly / overly commanding to me. (For the past few weeks I've been often just deciding to take the hit of being less friendly, but am on the lookout for phrases that feel reasonable on all dimensions)
4DanielFilan
"Can you tell me [if/why]..."?
2sapphire
It basically is a command. So maybe it's a feature that the phrase feels commanding. Though it is a sort of 'soft command' in that you would accept a good excuse to not answer (like 'I am too busy, I will explain later').
2Raemon
I think it's not the case that I really want it to be a command, I want it to be "reveal culture", where, it is a fact that I want to know this thing, and that I think it'd be useful if you told me. But, it's also the case that we are friends and if you didn't want to tell me for whatever reason I'd find a way to work with that. (the line is blurry sometimes, there's a range of modes I'm in when I make this sort of phrase, some more commandlike than others. But, I definitely frequently want to issue a non-command. The main thing I want to fix is that "I'm curious" in particular is basically a lie, or at least has misleading connotations)

Hmm, sure seems like we should deploy "tagging" right about now, mostly so you at least have the option of the frontpage not being All Coronavirus All The Time.

So there was a drought of content during Christmas break, and now... abruptly... I actually feel like there's too much content on LW. I find myself skimming down past the "new posts" section because it's hard to tell what's good and what's not and it's a bit of an investment to click and find out.

Instead I just read the comments, to find out where interesting discussion is.

Now, part of that is because the front page makes it easier to read comments than posts. And that's fixable. But I think, ultimately, the deeper issue is with the main unit-of-contribution being The Essay.

A few months ago, mr-hire said (on writing that provokes comments)

Ideas should become comments, comments should become conversations, conversations should become blog posts, blog posts should become books. Test your ideas at every stage to make sure you're writing something that will have an impact.

This seems basically right to me.

In addition to comments working as an early proving ground for an idea's merit, comments make it easier to focus on the idea, instead of getting wrapped up in writing something Good™.

I notice essays on the front page starting with flo... (read more)

9Raemon
Relatedly, though, I kinda want aspiring writers on LW to read this Scott Alexander Post on Nonfiction Writing.
4Hazard
I ended up back here because I just wrote a short post that was an idea, and then went, "Hmmm, didn't Raemon do a Short Form feed thing? How did that go?" It might be nice if one could pin their short form feed to their profile.
6Raemon
Yeah, I'm hoping in the not-too-distant future we can just make shortform feeds an official part of less wrong. (Although, I suppose we may also want users to be able to sticky their own posts on their profile page, for various reasons, and this would also enable anyone who wants such a feed to create one, while also being able to create other things like "important things you know about me if you're going to read my posts" or whatever.)
3Raemon
(It's now the distant future, and... maybe we'll be finally gettin around to this!)

Is... there a compelling difference between stockholm syndrome and just, like, being born into a family?

4ChristianKl
There's little evidence for the stockholm syndrome effect in general. I wonder whether there's evidence that being born in a family does something.
4leggi
That made me laugh! Can't think of much difference in the early years.
1Pattern
Perhaps degree of investment. Consider the amount of time it takes for someone to grow up, and the effort involved in teaching them (how to talk, read, etc.). (And before that, pregnancy.) There is at least one book that plays with this - the protagonist finds out they were stolen from 'their family' as a baby (or really small child), and the people who stole them raised them, and up to that point they had no idea. I don't remember the title.
[-]Raemon1310

Using "cruxiness" instead of operationalization for predictions.

One problem with making predictions is "operationalization." A simple-seeming prediction can have endless edge cases.

For personal predictions, I often think it's basically not worth worrying about it. Write something rough down, and then say "I know what I meant." But, sometimes this is actually unclear, and you may be tempted to interpret a prediction in a favorable light. And at the very least it's a bit unsatisfying for people who just aren't actually sure what they meant.

One advantage of cruxy predictions (aside from "they're actually particularly useful in the first place") is that if you know what decision a prediction was a crux for, you can judge ambiguous resolution based on "would this actually have changed my mind about the decision?"

("Cruxiness instead of operationalization" is a bit overly click-baity. Realistically, you need at least some operationalization, to clarify for yourself what a prediction even means in the first place. But, I think maybe you can get away with more marginal fuzziness if you're clear on how the prediction was supposed to inform your decisionmaking)

⚖ A year from now, in the three months prior, will I have used "cruxiness-as-operationalization" on a prediction, and found it helpful. (Raymond Arnold: 50%)

I notice that academic papers have stupidly long, hard-to-read abstracts. My understanding is that this is because there is some kind of norm about papers having the abstract be one paragraph, while the word-count limit tends to be... much longer than a paragraph (250 - 500 words).

Can... can we just fix this? Can we either say "your abstract needs to be a goddamn paragraph, which is like 100 words", or "the abstract is a cover letter that should be about one page long, and it can have multiple linebreaks and it's fine."

(My guess is that the best equilibrium is "People keep doing the thing currently-called-abstracts, and start treating them as 'has to fit on one page', with paragraph breaks, and then also people start writing a 2-3 sentence thing that's more like "the single actual-paragraph that you'd read if you were skimming through a list of papers.")

4avturchin
Some journals, like Futures, require 5 short phrases as highlights summarising key ideas in addition to the abstract. See e.g. here: https://www.sciencedirect.com/science/article/pii/S0016328719303507?via%3Dihub

"Highlights

• The stable climate of the Holocene made agriculture and civilization possible. The unstable Pleistocene climate made it impossible before then.
• Human societies after agriculture were characterized by overshoot and collapse. Climate change frequently drove these collapses.
• Business-as-usual estimates indicate that the climate will warm by 3°C–4°C by 2100 and by as much as 8°C–10°C after that.
• Future climate change will return planet Earth to the unstable climatic conditions of the Pleistocene and agriculture will be impossible.
• Human society will once again be characterized by hunting and gathering."
3adamShimi
Another reason is that you're not supposed to put references in the abstract. So if you want people outside your narrow subfield to have a chance at understanding the abstract, you need to reexplain the basic ideas behind the whole research approach. That takes space, and is usually very weird. 
2DanielFilan
My sense is that they are not that hard to read for people in the relevant discipline, and there's absolutely no pressure for the papers to be legible to people outside the relevant discipline.
2Raemon
I feel like paragraph breaks in a 400 word document seem straightforwardly valuable for legibility, however well versed you are in a field. If someone posts a wall of text on LW I tell them to break it up even if it's my field.
3Raemon
Okay it looks like for the particular thing I most recently was annoyed by, it's 150 words. This thing: Really seems to me like it's supposed to be this thing:
3DanielFilan
RIP the concept of copy-pasting from a PDF.
2DanielFilan
I admit that that is a little more legible to me, although I'm not a researcher in the field of primatology.
2Raemon
I do think, like, man, I wanted to know about primatology, and it seems pretty silly to assume that science should only be relevant to specialists in a field. Especially when the solution is literally just inserting two paragraph breaks. (I might also make claims that academic papers should be doing more effortful things to be legible, but this just seemed like a fairly straightforward thing that was more of an obviously-bad-equilibrium than a "there's a big effortful thing I think other people should do for other-other-people's benefit.")

I had a very useful conversation with someone about how and why I am rambly. (I rambled a lot in the conversation!).

Disclaimer: I am not making much effort to not ramble in this post.

A couple takeaways:

1. Working Memory Limits

One key problem is that I introduce so many points, subpoints, and subthreads, that I overwhelm people's working memory (where the human working memory limit is roughly "4-7 chunks").

It's sort of embarrassing that I didn't concretely think about this before, because I've spent the past year SPECIFICALLY thinking about working memory limits, and how they are the key bottleneck on intellectual progress.

So, one new habit I have is "whenever I've introduced more than 6 points to keep track of, stop and figure out how to condense the working tree of points down to <4."

(Ideally, I also keep track of this in advance and word things more simply, or give better signposting for what overall point I'm going to make, or why I'm talking about the things I'm talking about)

...

2. I just don't finish sente

I frequently don't finish sentences, whether in person voice or in text (like emails). I've known this for awhile, although I kinda forgot recently. I switch abruptly to a

... (read more)
3Michaël Trazzi
re working memory: never thought of it during conversations, interesting. it seems that we sometimes hold the nodes of the conversation tree to go back to them afterward. and maybe if you're introducing new concepts while you're talking people need to hold those definitions in working memory as well.
1Alaric
Could you explain (or give a link) what is "Mindful Cognition Tuning"?
3Raemon
Here you go! http://bewelltuned.com/tune_your_cognitive_strategies

[not trying to be comprehensible to people that don't already have some conception of Kegan stuff. I acknowledge that I don't currently have a good link that justifies Kegan stuff within the LW paradigm very well]

Last year someone claimed to me that a problem with Kegan is that there really are at least 6 levels. The fact that people keep finding themselves self-declaring as "4.5" should be a clue that 4.5 is really a distinct level. (the fact that there are at least two common ways to be 4.5 also is a clue that the paradigm needs clarification)

My garbled summary of this person's conception is:

  • Level 4: (you have a system of principles you are subject to, that lets you take level 3 [social reality??] as object)
  • Level 5: Dialectic. You have the ability to earnestly dialogue between a small number of systems (usually 2 at a time), and either step between them, or work out new systems that reconcile elements from the two of them.
  • Level 6: The thing Kegan originally meant by "level 5" – able to fluidly take different systems as object.

Previously, I had felt something like "I basically understand level 5 fine AFAICT, but maybe don't have the skills to do so fluidly. I can imagine there bei

... (read more)
5romeostevensit
I think the 4.5 thing splits based on whether you mostly skipped 3 or 4.
4Raemon
Which is which?
2romeostevensit
I don't know how others are splitting 4.5 so I don't know mapping.
2Gordon Seidoh Worley
I'm not sure what you have in mind by "skipping" here, since the Kegan and other developmental models explicitly are based on the idea that there can be no skipping because each higher level is built out of new ways of combining abstractions from the lower levels. I have noticed ways in which people can have lumpy integration of the key skills of a level (and have noticed this in various ways in myself); is that the sort of thing you have in mind by "skipping", like made it to 4 without ever having fully integrated the level 3 insights?
4Matt Goldenberg
I generally think that mindspace is pretty vast, and am predisposed to be skeptical of the claim that there's only one path to a certain way of thinking. I buy that most people follow a certain path, but wouldn't be surprised if, for instance, there's a person in history who went directly from Kegan 3 to 4.5 by never finding a value system that could stand up to their chaotic environment.
2Kaj_Sotala
David Chapman says that achieving a particular level means that the skills associated with it become logically possible for you, which is distinct from actually mastering those skills; and that it's possible for you to e.g. get to stage 4 while only having poor mastery of the skills associated with stage 3. So I would interpret "skipped stage N" as shorthand for "got to stage N+X without developing any significant mastery of stage N skills".
4Gordon Seidoh Worley
I think this is right, although I stand by the existing numbering convention. My reasoning is that the 4.5 space is really best understood in the paradigm where the thing that marks a level transition is gaining a kind of naturalness with that level, and 4.5 is a place of seeing intellectually that something other than what feels natural is possible, but the higher level isn't yet the "native" way of thinking. This is not to diminish the in-between states because they are important to making the transition, but also to acknowledge that they are not the core thing as originally framed. For what it's worth I think Michael Commons's approach is probably a bit better in many ways, especially in that Kegan is right for reasons that are significantly askew of the gears in the brain that make his categories natural. Luckily there's a natural and straightforward mapping between different developmental models (see Integral Psychology and Ken Wilber's work for one explication of this mapping between these different models), so you can basically use whichever is most useful to you in a particular context without missing out on pointing at the general feature of reality these models are all convergent to. Also perhaps interestingly, there's a model in Zen called the five ranks that has an interpretation that could be understood as a developmental model of psychology, but it also suggests an in-between level, although between what we might call Kegan 5 and a hypothetical Kegan 6 if Kegan had described such a level. I don't think there's much to read into this, though, as the five ranks is a polymorphic model that explains multiple things in different ways using the same structure, so this is as likely an artifact as some deep truth that there is something special about the 5 to 6 transition, but it is there so it suggests others have similarly noticed it's worth pointing out cases where there are levels between the "real" levels. Similarly it's clear from Commons's model that Ke

After a recent 'doublecrux meetup' (I wasn't running it but observed a bit), I was reflecting on why it's hard to get people to sufficiently disagree on things in order to properly practice doublecrux.

As mentioned recently, it's hard to really learn doublecrux unless you're actually building a product that has stakes. If you just sorta disagree with someone... I dunno you can do the doublecrux loop but there's a sense where it just obviously doesn't matter.

But, it still sure is handy to have practiced doublecruxing before needing to do it in an important situation. What to do?

Two options that occur to me are

  • Singlecruxing
  • First try to develop a plan for building an actual product together, THEN find a thing to disagree about organically through that process.

[note: I haven't actually talked much with the people whose major focus is teaching doublecrux, not sure how much of this is old hat, or if there's a totally different approach that sort of invalidates it]

SingleCruxing

One challenge about doublecrux practice is that you have to find something you have strong opinions about and also someone else has strong opinions about. So..... (read more)

4Matt Goldenberg
Another useful skill you can practice is *actually understanding people's models*. Like, find something someone else believes, guess what their model is, then ask them "so your model is this?", then repeat until they agree that you understand their model. This sort of active listening around models is definitely a prerequisite doublecrux skill and can be practiced without needing someone else to agree to doublecrux with you.
2Raemon
Nod. I haven't actually been to CFAR recently, not sure how they go about it there. But I think for local meetups, practicing by breaking it down into subskills seems pretty useful, and I agree with active listening being another key one.
1Matthew Barnett
As someone who may or may not have been part of the motivation for this shortform, I just want to say that it was my first time doing double crux and so I'm not sure whether I actually understood it.
3Raemon
Heh, you were not the motivating person, and more generally this problem has persisted on most doublecrux meetups I've been to. (There were at least 3 people having this issue yesterday)
2Raemon
I'm also curious, as a first-time-doublecruxer, what ended up being particular either confusions or takeaways or anything like that.
[-]Raemon129

I notice some people go around tagging posts with every plausible tag that possibly seems like it could fit. I don't think this is a good practice – it results in an extremely overwhelming and cluttered tag-list, which you can't quickly skim to figure out "what is this post actually about?", and I roll to disbelieve on "stretch-tagging" actually helping people who are searching tag pages.

6Joseph Miller
There should probably be guidance on this when you go to add a tag. When I write a post I just randomly put some tags and have never previously considered that it might be prosocial to put more or less tags on my post.
4Viliam
I think people vote on tags, so if more people agree that the tag is relevant, the article gets higher in the list. So extra tags (that people won't vote for) do create some noise, but only at the bottom of the list. This is how I think this works; I may be wrong.

I just briefly thought you could put a bunch of AI researchers on a spaceship, and accelerate it real fast, and then they get time dilation effects that increase their effective rate of research.

Then I remembered that time dilation works the other way 'round – they'd get even less time.

This suggested a much less promising plan of "build narrowly aligned STEM AI, have it figure out how to efficiently accelerate the Earth real fast and... leave behind a teeny moon base of AI researchers who figure out the alignment problem."
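(For reference, the special-relativity arithmetic behind "the other way 'round", with an illustrative speed I picked just to make the numbers round; the worked example is mine, not part of the original plan.)

\[
\Delta t_{\text{ship}} = \frac{\Delta t_{\text{Earth}}}{\gamma},
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
\]

At \(v \approx 0.87c\), \(\gamma \approx 2\): for every decade that passes on Earth (and in Earth's AI labs), the researchers on the ship experience only about five years, which is exactly the wrong direction.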

7gwern
More or less the plot of https://en.wikipedia.org/wiki/Orthogonal_(series) incidentally.
2Dagon
+1 for thinking of unusual solutions.  If it's feasible to build long-term very-fast-relative-to-earth habitats without so much AI support that we lose before it launches, we should do that for random groups of humans.  Whether you call them colonies or backups doesn't matter.  We don't have to save all people on earth, just enough of humanity that we can expand across the universe fast enough to rescue the remaining victims of unaligned AI sometime.
2Donald Hobson
I think an unaligned AI would have a large enough strategic advantage that such attempt is hopeless without aligned AI. So these backup teams would need to contain alignment researchers. But we don't have enough researchers to crew a bunch of space missions, all of which need to have a reasonable chance of solving alignment. 

Man, I watched The Fox and The Hound a few weeks ago. I cried a bit.

While watching the movie, a friend commented "so... they know that foxes are *also* predators, right?" and, yes. They do. This is not a movie that was supposed to be about predation except it didn't notice all the ramifications about its lesson. This movie just isn't taking a stand about predation.

This is a movie about... kinda classic de-facto tribal morality. Where you have your family and your tribe and a few specific neighbors/travelers that you welcomed into your home. Those are your people, and the rest of the world... it's not exactly that they aren't *people*, but, they aren't in your circle of concern. Maybe you eat them sometimes. That's life.

Copper the hound dog's ingroup isn't even very nice to him. His owner, Amos, leaves him out in a crate on a rope. His older dog friend is sort of mean. Amos takes him out on a hunting trip and teaches him how to hunt, conveying his role in life. Copper enthusiastically learns. He's a dog. He's bred to love his owner and be part of the pack no matter what.

My dad once commented that this was a movie that... seemed remarkably realistic about what you can expect from ani... (read more)

Sometimes the subject of Kegan Levels comes up and it actually matters a) that a developmental framework called "kegan levels" exists and is meaningful, b) that it applies somehow to The Situation You're In.

But, almost always when it comes up in my circles, the thing under discussion is something like "does a person have the ability to take their systems as object, move between frames, etc." And AFAICT this doesn't really need to invoke developmental frameworks at all. You can just ask if a person has the "move between frames" skill.*

This still suffers a bit from the problem where, if you're having an argument with someone, and you think the problem is that they're lacking a cognitive skill, it's a dicey social move to say "hey, your problem is that you lack a cognitive skill." But, this seems a lot easier to navigate than "you are a Level 4 Person in this 5 Level Scale".

(I have some vague sense that Kegan 5 is supposed to mean something more than "take systems as object", but no one has made a great case for this yet, and in any case it hasn't been the thing I'm personally running into)

2Richard_Kennaway
Kegan levels lend themselves to being used like one of those irregular verbs, like "I am strong minded, you are stubborn, he is a pig-headed fool." "I am Kegan level 5, you are stuck on Kegan level 4, and all those dreadful normies and muggles around us are Kegan 3 or worse."
2Viliam
Seems to me that the main problem with linear systems where you put yourself at the top (because, who doesn't?), is that the only choice it gives everyone else is either to be the same as you, or to be inferior. Disagreeing with the system probably makes one inferior, too. Feels a bit ironic, if this is considered to be a pinnacle of emotional development... But of course now I am constructing a frame where I am at the top and those people who like the Kegan scale are silly, so... I guess this is simply what humans do: invent classifications that put them on the top. ;) And it doesn't even mean that those frames are wrong; if there is a way to put people on a linear scale, then technically, someone has to be on the top. And if the scale is related to understanding, then your understanding of the scale itself probably should correlate with your position on it. So, yes, it is better to not talk about the system itself, and just tell people where specifically they made a mistake.
2Gordon Seidoh Worley
The original formulation definitely mixes in a bunch of stuff along with it; the systems-as-object thing is meant to be characteristic, but it's not all of the expected stuff. Most people don't push the hard version that taking systems as object is not just characteristic but causally important (I say this even though I do push this version of the theory). It is actually kinda rude to psychologize other people, especially if you miss the mark, and especially especially if you hit the mark and they don't like it, so it's probably best to just keep your assessment of their Kegan level to yourself unless it's explicitly relevant, since bringing it up will probably work against you even if in a high-trust environment it wouldn't (and you are unlikely to be in a high-trust enough environment for it to work even if you think you are). As for asking people if they have the skill, I don't expect that to work since it's easy to delude yourself that you do because you can imagine doing it or can do it in an intellectual way, which is better than not being able to do it at all but is also not the real deal and will fall apart the moment anything overloads global memory or otherwise overtaxes the brain.
2Raemon
I actually was not expecting the process to be "ask if they have the skill", I was expecting the sequence to be:

1. get into an argument
2. notice it feels stuck
3. notice that your conversation partner seems stuck in a system
4. make some effort to convey that you're trying to talk about a different system
5. say (some version of) "hey man, it looks like you don't have the 'step outside your current frame' skill, and I don't think the argument is worth having until you do."

(well, that's probably an unproductive way to go about it, but, I'm assuming the 'notice they don't have the skill' part comes from observations while arguing rather than something you ask them and they tell you about.)
4Viliam
Maybe a more diplomatic way could be: "hey man, for the sake of thought experiment, could we for a moment consider this thing from a different frame?" They may agree or refuse, but probably won't feel offended.
2Gordon Seidoh Worley
Something about this feels like what I used to do but don't do now, and I realized what it is. If they're stuck I don't see it as their problem, I see it as my problem that I can't find a way to take my thing and make it sensible to them within their system, or at least find an entry point, since all systems are brittle and you just have to find the right thread to pull if you want to untangle it so they can move towards seeing things in ways beyond what their current worldview permits. But maybe my response looks the same if I can't figure it out and/or don't feel like putting in the energy to do that, which is some version of "hey, looks like we just disagree in some fundamental way here I'm not interested in trying to resolve, sorry", which I regret is kinda rude still and wish I could find a way to be less rude about.
6Raemon
I think I don't feel too bad about "hey, looks like we just disagree in some fundamental way here I'm not interested in trying to resolve, sorry". It might be rude in some circles but I think I'm willing to bite the bullet on "it's pretty necessary for that to be an okay-move to pull on LW and in rationalist spaces." I think "we disagree in a fundamental way" isn't quite accurate, and there's a better version that's something like "I think we're thinking in pretty different frames/paradigms and I don't think it makes sense to bridge that disconnect." A thing making it tricky (also relevant to Viliam's comment) is that up until recently there wasn't even a consensus that different-frames were a thing, that you might need to translate between.

There's a problem at parties where there'll be a good, high-context conversation happening, and then one-too-many-people join, and then the conversation suddenly dies.

Sometimes this is fine, but other times it's quite sad.

Things I think might help:

  • If you're an existing conversation participant:
    • Actively try to keep the conversation small. The upper limit is 5, 3-4 is better. If someone looks like they want to join, smile warmly and say "hey, sorry we're kinda in a high context conversation right now. Listening is fine but probably don't join."
    • If you do want to let a newcomer join in, don't try to get them up to speed (I don't know if I've ever seen that actually work). Instead, say "this is high context so we're not gonna repeat the earlier bits, maybe wait to join in until you've listened enough to understand the overall context", and then quickly get back to the conversation before you lose the Flow.
  • If you want to join a conversation:
    • If there are already 5 people, sorry, it's probably too late. Listen if you find it interesting, but if you actively join you'll probably just kill the conversation.
    • Give them the opportunity to gracefully keep the conversation small if they choose. (s
... (read more)
4Dagon
+lots. Some techniques:

* physically separate the group. Go into another room or at least corner. Signal that you're not seeking additional participants.
* When you notice this, make it explicit - "I'm really enjoying the depth of this conversation, should we move into the lounge for a brandy and a little more quiet?"
* Admit (to yourself) that others may feel excluded, because they are. At many gatherings, such discussions/situations are time-bound and really can't last more than 10-45 minutes. The only solution is to have more frequent, smaller gatherings.
* Get good at involved listening - it's different than 1:1 active listening, but has similar goals: don't inject any ideas, but do give signals that you're following and supporting. This is at least 80% as enjoyable as active participation, and doesn't break the flow when you join a clique in progress.

I wonder what analogs there are to online conversations. I suspect there's a lot of similarity for synchronous chats - too many people make it impossible to follow. For threaded, async discussions, the limits are probably much larger.
3Tobias H
[EDIT, was intended as a response to Raemon, not Dagon.] Maybe it's the way you phrase the responses. But as described, I get the impression that this norm would mainly work for relatively extroverted persons with low rejection sensitivity. I'd be much less likely to ever try to join a discussion (and would tend to not attend events with such a norm). But maybe there's a way to avoid this, both from "my side" and "yours".
2Raemon
Hmm, seems like important feedback. I had specifically been trying to phrase the responses in a way that addressed this specific problem. Sounds like it didn't work. There is some intrinsic rejection going on here, which probably no amount of kind wording can alleviate for a rejection-sensitive person. For my "sorry, we're keeping the convo small" bit, I suggested: The Smile Warmly part was meant to be a pretty active ingredient, helping to reassure them it isn't personal.  Another thing that seems pretty important, is that this applies to all newcomers, even your friends and High Status People. (i.e. hopefully if Anxious Alex gets turned away, but later sees High Status Bob also get turned away, they get reassured a bit that this wasn't about them)
2Raemon
FYI, the actual motivating example here was at a party in gather.town (formerly online.town, formerly town.siempre), which has much more typical "party" dynamics (i.e. people can wander around an online world and video chat with people nearby). In this case there were actually some additional complexities – I had joined a conversation relatively late, I did lurk for quite a while, and waited for the current set of topics to die down completely before introducing a new one. And then the conversation took a turn that I was really excited by, and at least 1-2 other people were interested in, but it wasn't obvious to me that it was interesting to everyone else (I think ~5 people involved total?)

And then a new person came in, and asked what we were talking about and someone filled them in...

...and then immediately the conversation ended. And in this case I don't know if the issue was more like "the newcomer killed the conversation" or "the convo actually had roughly reached its natural end, and/or other people weren't that interested in the first place."

But, from my own perspective, the conversation had just finished covering all the obvious background concepts that would be required for the "real" conversation to begin, and I was hoping to actually Make Real Progress on a complex concept. So, I dunno if this counted as "an interesting conversation" yet, and unfortunately the act of asking the question "hey, do we want to continue diving deep into this, or wrap up and transition into some other convo?" also kinda kills the conversation. Conversations are so god damn fragile.

What I really wished was that everyone already had common knowledge of the meta-concept, wherein:

* Party conversations are particularly fragile
* Bringing a newcomer up to speed is usually costly if the conversation is doing anything deep
* We might or might not want to continue delving into the current convo (but we don't currently have common knowledge of this in either direction)

And
2Matt Goldenberg
I hosted an online-party using zoom breakout rooms a few weeks ago and ran into similar problems. Half-way through the party I noticed people were clustering in suboptimal-size conversations and bringing high-context conversations to a stop, so I actually brought everybody back to the lobby then randomly assigned them to groups of 2 or 3 - and when I checked 10 minutes later everyone was in the same two rooms again with groups of 8 - 10 people. AFAICT this was status/feelings driven - there were a few people at the party who were either existing high-status to the participants, or who were very charismatic, and everyone wanted to be in the same conversation as them. I think norm-setting around this is very hard, because it's natural to want to be around high-status and charismatic people, and it's also natural to want to participate in a conversation you're listening to. I'm going to try to add your suggestions to the top of the shared google doc next time I host one of these and see how it goes.
2Raemon
Agreed with the status/feelings cause. And I'm not 100% sure the solution is "prevent people from doing the thing they instinctively want to do" (especially "all the time.") My current guess is "let people crowd around the charismatic and/or interesting people, but treat it more like a panel discussion or fireside chat, like you might have at a conference, where mostly 2-3 people are talking and everyone else is more formally 'audience.'" But doing that all the time would also be kinda bad in different ways. In this case... you might actually be able to fix this with technology? Can you literally put room-caps on the rooms, so if someone wants to be the 4th or 6th person in a room they... just... can't?

I'm not sure why it took me so long to realize that I should add a "consciously reflect on why I didn't succeed at all my habits yesterday, and make sure I don't fail tomorrow" to my list of daily habits, but geez it seems obvious in retrospect.

2Raemon
Following up to say that geez any habit practice that doesn't include this now feels super silly to me.
2[anonymous]
Just don't get trapped in infinite recursion and end up overloading your habit stack frame!
3Raemon
I mean, the whole thing only triggers once per day, so I can't go farther than a single loop of "why didn't I reflect on my habit-failure yesterday?" :P (But yeah I think I can handle up-to-one-working-memory-load of habits at a time)
1[anonymous]
Uh, what if you forget to do your habit troubleshooting habit and then you have to troubleshoot why you forgot it? And then you forget it twice and you have to troubleshoot why you forgot to troubleshoot forgetting to troubleshoot! (I'm joking about all this in case it's not obvious.)

Strategic use of Group Houses for Community Building

(Notes that might one day become a blogpost. Building off The Relationship Between the Village and the Mission. Inspired to go ahead and post this now because of John Maxwell's "how to make money reducing loneliness" post, which explores some related issues through a more capitalist lens)

  • A good village needs fences:
    • A good village requires doing things on purpose. 
    • Doing things on purpose requires that you have people who are coordinated in some way
    • Being coordinated requires you to be able to have a critical mass of people who are actually trying to do effortful things together (such as maintain norms, build a culture, etc)
    • If you don't have a fence that lets some people in and doesn't let in others, and which you can ask people to leave, then your culture will be some random mishmash that you can't control
  • There are a few existing sets of fences. 
    • The strongest fences are group houses, and organizations. Group houses are probably the easiest and most accessible resource for the "village" to turn into a stronger culture and coordination point. 
  • Some things you might coordinate using group houses f
... (read more)
6Vaniver
A thing that I have seen work well here is small houses nucleating out of large houses. If you're living in a place with >20 people for 6 months, probably you'll make a small group of friends that want similar things, and then you can found a smaller place with less risk. But of course this requires there being big houses that people can move into and out of, and that don't become the lower-common-denominator house that people can't form friendships in because they want to avoid the common spaces. But of course the larger the house, the harder it is to get off the ground, and a place with deliberately high churn represents even more of a risk.

Lately I've been noticing myself getting drawn into more demon-thready discussions on LessWrong. This is in part due to UI choice – demon threads (i.e. usually "arguments framed through 'who is good and bad and what is acceptable in the overton window'") are already selected for getting above-average engagement. Any "neutral" sorting mechanism for showing recent comments is going to reward demon-threads disproportionately.

An option might be to replace the Recent Discussion section with a version of itself that only shows comments and posts from the Questions page (in particular for questions that were marked as 'frontpage', i.e. questions that are not about politics).

I've had some good experiences with question-answering, where I actually get into a groove where the thing I'm doing is actual object-level intellectual work rather than "having opinions on the internet." I think it might be good for the health of the site for this mode to be more heavily emphasized.

In any case, I'm interested in making a LW Team internal option where the mods can opt into a "replace recent discussion with recent question act... (read more)

I still want to make a really satisfying "fuck yeah" button on LessWrong comments that feels really good to press when I'm like "yeah, go team!" but doesn't actually mean I want to reward the comment in our longterm truthtracking or norm-tracking algorithms.

I think this would seriously help with weird sociokarma cascades.  

5Lao Mein
You should just message them directly. "Your comment was very based." would feel quite nice in my inbox.
5Raemon
It needs to be less effort than upvoting to accomplish the thing I want.
3Viliam
Ah, I imagine a third set of voting buttons, with large colorful buttons "yay, ingroup!!!" and "fuck outgroup!!!", with the following functionality:

* in your personal settings, you can replace the words "ingroup" and "outgroup" by a custom text
* only the votes that agree with you are displayed; for example if there are 5 "yay" votes and 7 "boo" votes, if you voted "yay", you will only see "5 people voted yay on this comment" (not the total -2)
* the yay/boo votes have no impact on karma
* if you make a yay/boo vote, the other two sets of voting buttons are disabled for this comment

What I expect from this solution:

* to be emotionally deeply satisfying
* without having any impact on karma (actually it would take mindkilling votes away from the karma buttons)
2Dagon
What longterm truthtracking or norm-tracking algorithms are you talking about?  Can you give a few examples of sociokarma cascades that you think will be improved by this complexity?  Would adding agree/disagree to top-level posts be sufficient (oh, wait, you're talking about comments.  How does agree/disagree not solve this?)

More fundamentally, why do you care about karma, aside from a very noisy short-term input into whether a post or comment is worth thinking about?

Now if you say "do away with strong votes, and limit karma-based vote multiples to 2x", I'm fully onboard.

Can democracies (or other systems of government) do better by more regularly voting on meta-principles, but having those principles come into effect N years down the line, where N is long enough that the current power structures have less clarity over who would benefit from the change?

Some of the discussion on Power Buys You Distance From the Crime notes that campaigning to change meta principles can't actually be taken at face value (or at least, people don't take it at face value), because it can be pretty obvious who would benefit from a particular meta principle. (If the king is in power and you suggest democracy, obviously the current power structure will be weakened. If people rely on Gerrymandering to secure votes, changing the rules on Gerrymandering clearly will have an impact on who wins next election)

But what if people voted on changing rules for Gerrymandering, and the rules wouldn't kick in for 20 years. Is that more achievable? Is it better or worse?

The intended benefit is that everyone might roughly agree it's better for the system to be more fair, but not if that fairness will clearly directly cost them. If a rule change occurs far enough in the... (read more)

9habryka
I have a bunch of thoughts on this. A lot of the good effects of this actually happened in space-law, because nobody really cared about the effects of the laws when they were written. Another interesting contract that was surprisingly long-lasting is the ownership of Hong Kong by Britain, which was returned after 90 years.

However, I think there are various problems with doing this a lot. One of them is that when you make a policy decision that's supposed to be useful in 20 years, then you are making a bet on that policy being useful in the environment that will exist in 20 years, over which you have a lot of uncertainty. So by default I expect policy decisions made for a world 20 years from now to be worse than decisions made for the current world.

The enforceability of contracts over such long time periods is also quite unclear. What prevents the leadership 15 years from now from just calling off the policy implementation? This requires a lot of trust and support for the meta-system, which is hard to sustain over such long periods of time.

In general, I have a perspective that lots of problems could be solved if people could reliably make long-term contracts, but that there are no reliable enforcement mechanisms for long-term contracts at the national-actor level.
7Dagon
I think lack of long-term contract enforcement is one part of it - the US Congress routinely passes laws with immediate costs and delayed revenue, and then either continually postpones or changes its mind on the delayed part (while keeping the immediate part). I'd classify it as much as deception as lack of enforcement. It's compounded by the fact that the composition of the government changes a bit every 2 years, but the fundamental problem is that "enforcement" is necessary, because "alignment" doesn't exist.

Trying to go meta and enforce far-mode stated values rather than honoring near-mode actual behaviors is effectively forcing people into doing what they say they want, as opposed to inferring what they actually want. I'm actually sympathetic to that tactic, but I do recognize that it's coercion (enforcement of ill-considered contract) rather than actual agreement (where people do what they want, because that's what they want).
7Gordon Seidoh Worley
Good example: the US tried to go metric and then canceled its commitment.

Musings on ideal formatting of posts (prompted by argument with Ben Pace)

My thoughts:

1) Working memory is important.

If a post talks about too many things, then in order for me to respond to the argument or do anything useful with it, I need a way to hold the entire argument in my head.

2) Less Wrong is for thinking

This is a place where I particularly want to read complex arguments and hold them in my head and form new conclusions or actions based on them, or build upon them.

3) You can expand working memory with visual reference

Having larger monitors or notebooks to jot down thoughts makes it easier to think.

The larger font-size of LW main posts works against this currently, since there are fewer words on the screen at once and scrolling around makes it easier to lose your train of thought. (A counterpoint is that the larger font size makes it easier to read in the first place without causing eyestrain).

But regardless of font-size:

4) Optimizing a post for re-skimmability makes it easier to refer to.

This is why, when I write posts, I make an effort to bold the key points, and break things into bullets where applicable, and otherwise shape the post so it's easy to skim. (See Su... (read more)

8Zvi
I pushed Oliver for smaller font size when I first saw the LW 2.0 design (I'd prefer something like the comments font), partly for the words-in-mind reason. I agree that bigger words work against complex and deep thinking, and also think that any time you force someone to scroll, you risk disruption (when you have kids you're trying to deal with, being forced to interact with the screen can be a remarkably large negative). I avoid bold and use italics instead because of the skimming effect. I feel like other words are made to seem less important when things are bolded. Using it not at all is likely a mistake, but I would use it sparingly, and definitely not use it as much as in the comment above. I do think that using variable font size for section headings and other similar things is almost purely good, and give full permission for admins to edit such things in if I'm being too lazy to do it myself.
4habryka
The current plan is to allow the authors to choose between a smaller sans-serif that is optimized for skimmability, and a larger serif that is optimized for getting users into a flow of reading. Not confident about that yet though. I am hesitant about having too much variance in font-sizes on the page, and so don't really want to give authors the option to choose their own font-size from a variety of options, but having a conceptual distinction between "wiki-posts" that are optimized for skimmability and "essay-posts" that are optimized for reading things in a flow state seems good to me. Also not sure about the UI for this yet, input is welcome. I want to keep the post-editor UI as simple as possible.
2Raemon
FYI it's been a year and I still think this is pretty important
3Raemon
Hmm. Here's the above post with italics instead, for comparison: ... Musings on ideal formatting of posts (prompted by argument with Ben Pace) My thoughts: 1) Working memory is important. If a post talks about too many things, then in order for me to respond to the argument or do anything useful with it, I need a way to hold the entire argument in my head. 2) Less Wrong is for thinking This is a place where I particularly want to read complex arguments and hold them in my head and form new conclusions or actions based on them, or build upon them. 3) You can expand working memory with visual reference Having larger monitors or notebooks to jot down thoughts makes it easier to think. The larger font-size of LW main posts works against this currently, since there are fewer words on the screen at once and scrolling around makes it easier to lose your train of thought. (A counterpoint is that the larger font size makes it easier to read in the first place without causing eyestrain). But regardless of font-size: 4) Optimizing a post for re-skimmability makes it easier to refer to. This is why, when I write posts, I make an effort to bold the key points, and break things into bullets where applicable, and otherwise shape the post so it's easy to skim. (See Sunset at Noon for an example)
4Raemon
I think it works reasonably for the bulleted-number-titles. I don't personally find it working as well for interior-paragraph things. Using the bold makes the document function essentially as its own outline, whereas italics feels insufficient for that - when I'm actually in skimming/hold-in-working-memory mode, I really want something optimized for that. The solution might just be to provide actual outlines after-the-fact. Part of what I liked with my use of bold and headers was that it'd be fairly easy to build a tool that auto-constructs an outline.
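(To illustrate how simple that tool could be: a minimal sketch, assuming posts are available as Markdown where headers use "#" and key points are wrapped in "**". The function name and sample text are placeholders of mine, not an existing LW feature.)

```python
import re

def extract_outline(markdown_text: str) -> list[str]:
    """Build a rough outline from a Markdown post: headers, plus bolded spans as sub-points."""
    outline = []
    for line in markdown_text.splitlines():
        header = re.match(r"^(#{1,6})\s+(.*)", line)
        if header:
            # Indent by header depth so the outline mirrors the post's structure.
            depth = len(header.group(1))
            outline.append("  " * (depth - 1) + header.group(2).strip())
            continue
        # Treat bolded spans (**like this**) as sub-points under the most recent header.
        for bold in re.findall(r"\*\*(.+?)\*\*", line):
            outline.append("  - " + bold.strip())
    return outline

print("\n".join(extract_outline(
    "# Musings on formatting\n"
    "Some text where **working memory is important** to the argument.\n"
    "## Details\n"
    "More text that says to **optimize for re-skimmability**."
)))
```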
5gjm
For what it's worth, my feeling is pretty much the opposite. I'm happy with boldface (and hence feel no need to switch to italics) for structural signposts like headings, but boldface is too prominent, relative to ordinary text, to use for emphasis mid-paragraph unless we actively want readers to read only the boldface text and ignore everything else. I would probably not feel this way if the boldface text were less outrageously heavy relative to the body text. (At least for me, in the browser I'm using now, on the monitor I'm using now, where the contrast is really extreme.)
8Said Achmiz
Some comparisons and analysis:

(1) Using bold for emphasis

When the font size is small, and the ‘bold’ text has a much heavier weight than the regular text (left-hand version), the eye is drawn to the bold text. This is both because (a) reading the regular text is effortful (due to the small size) and the bold stands out and thus requires greatly reduced effort, and (b) because of the great contrast between the two weights. But when the font size is larger, and the ‘bold’ text is not so much heavier in weight than the regular text (right-hand version), then the eye does not slide off the regular text, though the emphasized lines retain emphasis. This means that emphasis via bolding does not seriously impact whether a reader will read the full text.

(2) Using italics for emphasis

Not much to say here, except that how different the italic variant of a font is from the roman variant is critical to how well italicizing works for the purpose of emphasis. It tends to be the case that sans-serif fonts (such as Freight Sans Pro, the font currently used for comments and UI elements on LW) have less distinctive italic variants than serif fonts (such as Charter, the font used in the right-hand part of the image above)—though there are some sans-serif fonts which are exceptions.

(3) Skimmability

Appropriate typography is one way to increase a post’s navigability/skimmability. A table of contents (perhaps an auto-generated one—see image) is another. (Note that the example post in this image has its own table of contents at the beginning, provided by Raemon, though few other posts do.)

(4) Bold vs. italic for emphasis

This is a perfect case study of points (1) and (2) above. Warnock Pro (the font you see in the left-hand part of the image above) has a very distinctive italic variant; it’s hard to miss, and works very well for emphasis. Charter (the font you see in the right-hand part of the image) has a somewhat less distinctive italic variant (though still
6Said Achmiz
Here, for reference, is a brief list of reasonably readable sans-serif fonts with not-too-heavy boldface and a fairly distinctive italic variant (so as to be suitable for use as a comments text font, in accordance with the desiderata suggested in my previous comment):

* Alegreya Sans
* FF Scala
* Frutiger Next *
* IBM Plex Sans
* Merriweather Sans
* Myriad Pro
* Optima nova *

(Fonts marked with an asterisk are those I personally am partial to.)

Edit: Added links to screenshots.
4Raemon
One thing that's worth noting here is there's an actual difference of preference between me and (apparently a few, perhaps most) others. When I use bold, I'm specifically optimizing for skimmability because I think it's important to reference a lot of concepts at once, and I'm not that worried about people reading every word. (I take on the responsibility of making sure that the parts that are most important not to miss are bolded, and the non-bold stuff provides clarity and details for people who want them.) So, for my purposes I actually prefer bold that stands out well enough that my eyes can easily see it at a glance.
[-]Raemon101

New concept for my "qualia-first calibration" app idea that I just crystallized. The following are all the same "type":

1. "this feels 10% likely"

2. "this feels 90% likely"

3. "this feels exciting!"

4. "this feels confusing :("

5. "this is coding related"

6. "this is gaming related"

All of them are a thing you can track: "when I observe this, my predictions turn out to come true N% of the time".

Numerical probabilities are merely a special case (tho they still get additional tooling, since it's easier to visualize graphs and calculate Brier scores for them)

And then a major goal of the app is to come up with good UI to help you visualize and compare results for the "non-numeric-qualia".

Depending on circumstances, "this feels confusing" might seem way more important to your prior than "this feels 90% likely". (I'm guessing there is some actual conceptual/mathy work that would need doing to build the mature version of this.)
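(A minimal sketch of the data model I'm imagining, just to make the "same type" claim concrete; the names here, like Observation and tag_calibration, are hypothetical rather than an actual design:)

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Observation:
    """One prediction, tagged with whatever qualia/labels felt relevant at the time."""
    tags: List[str]                    # e.g. ["feels 90% likely", "feels confusing", "coding related"]
    numeric_p: Optional[float] = None  # set only when a tag is an explicit probability
    came_true: Optional[bool] = None   # resolved later

def tag_calibration(observations):
    """For each tag, report how often predictions carrying it came true,
    plus a Brier score where an explicit probability was attached."""
    stats = {}
    for obs in observations:
        if obs.came_true is None:
            continue  # unresolved predictions don't count yet
        for tag in obs.tags:
            s = stats.setdefault(tag, {"n": 0, "hits": 0, "sq_err": 0.0, "n_numeric": 0})
            s["n"] += 1
            s["hits"] += int(obs.came_true)
            if obs.numeric_p is not None:
                s["sq_err"] += (obs.numeric_p - int(obs.came_true)) ** 2
                s["n_numeric"] += 1
    return {
        tag: {
            "hit_rate": s["hits"] / s["n"],
            "brier": (s["sq_err"] / s["n_numeric"]) if s["n_numeric"] else None,
        }
        for tag, s in stats.items()
    }
```

Every tag gets the same "how often did this come true" treatment; the numeric ones just earn the extra Brier/graphing tooling on top.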

[-]Raemon104

"Can we build a better Public Doublecrux?"

Something I'd like to try at LessOnline is to somehow iterate on the "Public Doublecrux" format.

Public Doublecrux is a more truthseeking-oriented version of Public Debate. (The goal of a debate is to change your opponent's mind or the public's mind. The goal of a doublecrux is more like "work with your partner to figure out if you should change your mind, and vice versa".)

Reasons to want to do _public_ doublecrux include:

  • it helps showcase subtle mental moves that are hard to write down explicitly (i.e. tacit knowledge transfer)
  • there's still something good and exciting about seeing high profile smart people talk about ideas. Having some variant of that format seems good for LessOnline. And having at least 1-2 "doublecruxes" rather than "debates" or "panels" or "interviews" seems good for culture setting.

Historically I think public doublecruxes have had some problems:

  • two people actually changing *their* minds tend to get into idiosyncratic frames that are hard for observers to understand. You're chasing *your* cruxes, rather than presenting "generally compelling arguments." This tends to get into weeds and go down rabbit holes
  • – having the audie
... (read more)
1keltan
Ramble dot points of thoughts I had around this.

1. I like this idea
2. When I listen to very high power or smart people debate, what I’m looking for is to absorb their knowledge.
   1. Tacit and semantic.
3. Instead, as the debate heats up, I feel myself being drawn into one of the sides.
   1. I spend more time thinking about my bias than the points being made.
   2. I’m not sure what I’m picking up from heated debate is as valuable as it could be.
4. If the interlocutors are not already close friends, perhaps having them complete a quick bonding exercise to gain trust?
   1. I imagine playing on the same team in a video game or solving a physical problem together.
   2. Really let them settle into a vibe of being friends. Let them understand what it feels like to work with this new person toward a common goal.
[-]Raemon100

Two interesting observations from this week, while interviewing people about their metacognitive practices.

  • @Garrett Baker said that he had practiced memorizing theorems for linear algebra a while back, and he thinks this had the (side?) effect of creating a skill of "memorizing stuff quickly", which then turned into some kind of "working memory management" tool. It sounded something like "He could quickly memorize things and chunk them, and then he could do that on-the-fly while reading math textbooks".
     
  • @RobinGoins had an experience of not initially being able to hold all their possible plans/goals/other in working memory, but then did a bunch of Gendlin Focusing on them, and then had an easier time holding them all. It sounds like the Gendlin Focusing was playing a similar role to the "fast memorization" thing, of "finding a [nonverbal] focusing handle for a complex thing", where the focusing handle was able to efficiently unpack into the full richness of the thing they were trying to think about.

Both of these are interesting because they hint at a skill of "rapid memorization => improved working memory". 

@gwern has previously written about Dual N Back not actually working... (read more)

I think a bunch of discussion of acausal trade might be better framed as "simulation trade." It's hard to point to "acausal" trade in the real world because, well, everything is at least kinda iterated and at least kinda causally connected. But, there's plenty of places where the thing you're doing is mainly trading with a simulated partner. And this still shares some important components with literal-galaxy-brains making literal acausal trade.

2Dagon
I’d love to see a worked example. The cases I come up with are all practice for or demonstrations of feasibility for casual normal trade/interactions.
2Gunnar_Zarncke
I think I know at least some of the examples you refer to. I think the causality in these cases is a shared past of the agents making the trade. But I'm not sure that breaks the argument in cases where the agents involved are not aware of that, for example but not limited to, having forgotten about it or intentionally removed the memory. 
4Dagon
There is convoluted-causality in a lot of trust relationships.  "I trust this transaction because most people are honest in this situation", which works BECAUSE most people are, in fact, honest in that situation.  And being honest does (slightly) reinforce that for future transactions, including transactions between strangers which get easier only to the degree they're similar to you. But, while complex and involving human social norms and "prediction", it's not comparable to Newcomb (one-shot, high-stakes, no side-effects) or acausal trade (zero-shot, no path to specific knowledge of outcome).
2Gunnar_Zarncke
In which way is sharing some common social knowledge relevantly different from sharing the same physical universe?
2Dagon
Common social knowledge has predictive power and causal pathways to update the knowledge (and others' knowledge of the social averages which contain you).  Acausal trade isn't even sharing the same physical universe  - it's pure theory, with no way to adjust over time.
2Raemon
"Casual norm trade/interactions" does seem like most of the obvious example-space. The generator for this thought comes from chatting with Andrew Critch. See this post for some reference: http://acritch.com/deserving-trust/ 
2Dagon
Typo: s/casual/causal/ - these seem to be diffuse reputation cases, where one recognizes that signaling is leaky, and it’s more effective to be trustworthy than to only appear trustworthy. Not for subtle Newcomb or acausal reasons, but for highly evolved betrayal detection mechanisms.

So, AFAICT, rational!Animorphs is the closest thing CFAR has to publicly available documentation. (The characters do a lot of focusing, hypothesis generation-and-pruning. Also, I just got to the Circling Chapter)

I don't think I'd have noticed most of it if I wasn't already familiar with the CFAR material though, so not sure how helpful it is. If someone has an annotated "this chapter includes decent examples of Technique/Skill X, and examples of characters notably failing at Failure Mode Y", that might be handy.

In response to lifelonglearner's comment I did some experimenting with making the page a bit bolder. Curious what people think of this screenshot, where "unread" posts are bold and "read" posts are "regular" (as opposed to the current world, where "unread" posts are "regular" and read posts are light-gray).

8Rob Bensinger
I'd be interested in trying it out. At a glance, it feels too much to me like it's trying to get me to read Everything, when I can tell from the titles and snippets that some posts aren't for me. If anything the posts I've already read are often ones I want emphasized more? (Because I'm curious to see if there are new comments on things I've already read, or I may otherwise want to revisit the post to link others to it, or finish reading it, etc.) The bold font does look aesthetically fine and breaks things up in an interesting way, so I like the idea of maybe using it for more stuff?
4Raemon
Alternate version where only the title and karma are bolded:
4Sunny from QAD
I think I prefer the status quo design, but not very strongly. Between the two designs pictured here, I at first preferred the one where the authors weren't bolded, but now I think I prefer the one where the whole line is bolded, since "[insert author whose posts I enjoy] has posted something" is as newsworthy as "there's a post called [title I find enticing]". Something I've noticed about myself is that I tend to underestimate how much I can get used to things, so I might end up just as happy with whichever design is chosen.
3Adam Scholl
Fwiw, for reasons I can't explain I vastly prefer just the title bolded to the entire line bolded, and significantly prefer the status quo to title bolded.
2Rob Bensinger
I think I prefer bolding full lines b/c it makes it easier to see who authored what?
4Raemon
I initially wanted "bold everywhere" because it helped my brain reliably parse things as "this is a bold line" instead of "this is a line with some bold parts but you have to hunt for them". But, after experimenting a bit, I started to feel that having bold elements semi-randomly distributed across the lines made it a lot busier.
2Raemon
The LW team has been trying out the "bolded unread posts" for a few days as an admin-only setting. I think pretty much nobody is liking it. But I personally am liking the fact that most posts aren't grey, and I'm finding myself wondering whether it's even that important to highlight unread posts. Obviously there's some value to it, but:

a) a post being read isn't actually that much evidence about whether I want to read it again – I find myself clicking on old posts about as often as new posts. (This might be something you could concretely look into with analytics)

b) if I don't want to read a post, marking it as read is sort of annoying

c) I still really dislike having most of my posts be grey

d) it's really hard to make an "unread" variant that doesn't scream out for disproportionate attention.

(I suppose there's also an option for this to be a user-configurable setting, since most users don't read so many posts that they all show up grey, and the few who do could maybe just manually turn it off.)