All of Ruby's Comments + Replies

LessWrong FAQ

That syntax is for the Markdown editor (enabled through your user settings). For the LW Docs editor, use >! to start your paragraph.

The Rationalists of the 1950s (and before) also called themselves “Rationalists”

Curated. I really like seeing how LessWrong's intellectual tradition is part of a larger rationalist tradition and culture. There's a certain fun to it.

Frame Control

I should have been clear in my message above: I am leaving the initial ban in place for the original accounts and for the subsequent accounts being made. The ongoing comment pattern continues to seem quite bad to me.

Frame Control

I've responded overall to your comments here.

Regarding my banning you without giving a reason: I have enough trust in LessWrong's members that my prior is that severely downvoted comments are probably quite bad*, especially if a quick glance makes it seem likely. That's not enough to reach a final verdict, but it's enough for me to want to hit the brakes even before I get a chance to fully judge for myself. In this case, I think the voters were correct in direction, although perhaps not in magnitude.

Fwiw, while I think at least part of the co... (read more)

Frame Control

I've thought about these comments now over several hours and I do indeed think they are quite bad. I have deleted one and may delete/lock others in the subthreads.

It's tricky to precisely specify the badness with explicit principles, and I won't succeed entirely here. A very strong element of the badness in the first comment above ("Your whole career...") is leveraging negative intuitions and associations about working in porn that feel misguided and are offered without justification or explanation of their relevance. Similarly, while I feel that accusations that ... (read more)

Frame Control

I banned your account. It would have been ideal to message/comment about it, but I didn't have the time/attention to properly review or comment yesterday. Seeing an account at -168 karma with multiple severely negative comments, it seemed better to intervene before things got more out of hand, instead of doing nothing, even though I didn't have the time to get all the context on the relevant thread.

I will hopefully get another chance in another couple of hours to review the thread and think about your comments. Please don't comment in the meantime. If you want to contact me, use ruby@lesswrong.com, or the Intercom in the bottom right corner.

I mean, what you're writing has the right intellectual aesthetic, but this is no different from just banning me for no reason. The stuff I'm saying really isn't that bad and I am really the only seriously negative poster here. Aella knows how to get people to do things; anyone on the edge of the Bay Area community will know this, and the rationalists are really bad at ejecting manipulative people. It's dangerous.

Just look at how she is controlling the narrative on her twitter account. She's quoting stuff that nobody ever said in a way that makes people sympath... (read more)

Base Rates and Reference Classes

I converted the post from the HTML import in the LW Docs editor and manually fixed up the LaTeX, which takes care of it for today.

Welcome to LessWrong!

Check out the starting guide in the FAQ!

The Maker of MIND

You may use it for fiction!

Ngo and Yudkowsky on alignment difficulty

Curated. The treatment of how cognition/agents/intelligence work alone makes this post curation-worthy, but I want to further commend how much it attempts to bridge [large] inferential distances, notwithstanding Eliezer's experience of it being difficult to bridge all the distance. Heck, just bridging some distance about the distance is great.

I think good things would happen if we had more dialogs like this between researchers. I'm interested in making it easier to conduct and publish them on LessWrong, so thanks to all involved for the inspiration.

Open & Welcome Thread November 2021

Welcome! That's an interesting path you've followed.

9 Jon Garcia 21d: Thanks. I think it's important not to forget the path I've taken. It's a major part of my identity even though I no longer endorse what were once my most cherished beliefs, and I feel that it helps connect me with the greater human experience. My parents and (ironically) my training in apologetics instilled in me a thirst for truth and an alertness toward logical fallacies that took me quite far from where I started in life. I guess that a greater emphasis on overcoming confirmation bias would have accelerated my truth-seeking journey a bit more. Unfortunately and surprisingly for a certain species of story-telling social primates, the truth is not necessarily what is believed and taught by the tribe. An idea is not true just because people devote lifetimes to defending it. And an idea is not false just because they spend lifetimes mocking it.

The one thing that held me back the most, I think, is my rather strong deontological instinct. I always saw it as my moral duty to apply the full force of my rational mind to defending the Revealed Truth. I was willing to apply good epistemology to modify my beliefs arbitrarily far, as long as it did not violate the moral constraint that my worldview remain consistent with the holistic biblical narrative. Sometimes that meant radically rethinking religious doctrines in light of science (or conflicting scriptures), but more often it pushed me to rationalize scientific evidence to fit with my core beliefs.

I always recognized that all things that are true are necessarily mutually consistent, that we all inhabit a single self-consistent Reality, and that the Truth must be the minimum-energy harmonization of all existing facts. However, it wasn't until I was willing to let go of the moral duty to retain the biblical narrative in my set of brute facts that the free energy of my worldview dropped dramatically. It was like a thousand high-tension cables binding all my beliefs to a single (misplaced) epistemological hub were all rel
Concentration of Force

Curated. I like this concept. It's not the theoretical total force that matters but how much can be brought to bear. I especially appreciate the later application to moderation.

Study Guide

Curated. I wish I could drop everything and devote myself to all the topics listed here; I love the sheer love of knowledge I perceive here.

I'm curating this because there are some people for whom this is invaluable guidance, and because I'd like to see more of this from other cutting-edge researchers. This post is more than a list of the topics the author happened to study; rather, it comes with a whole worldview that I think is just as important as the list. I'd love to see more like this.

 

not just a long list, but a paradigm,

I notice this one came out in May (and the comments are from then too). It would be nice to see a post along these lines that's post-Delta.

Speaking of Stag Hunts

I did indeed mean "dissatisfied" in a "counting down" sense.

Speaking of Stag Hunts

Although this isn't how I think about karma, on reflection, I think it's a good and healthy frame, and I'm glad you have it and brought it up with your detailed suggestion.

-2 JenniferRM 1mo: Yeah, my larger position is that karma (and upboats and so on) are brilliant gamifications [https://slejournal.springeropen.com/articles/10.1186/s40561-019-0098-x] of "a way to change the location of elements on a webpage". Reddit is a popular website, that many love, for a reason. I remember Digg. I remember K5. I remember Slashdot. There were actual innovations in this space, over time, and part of the brilliance in the improvements was in meeting the needs of a lot of people "where they are currently at" and making pro-social use of many tendencies that are understandably imperfect.

Social engineering is a thing, and it is a large part of why our murder rate is so low, and our material prosperity is so high. It is super important and, done well, is mostly good. (I basically just wish that more judges and lawyers and legislators in modern times could program computers, and brought that level of skill to the programming of society.)

However, I also think that gamification ultimately should be understood as a "mere" heuristic... as a hack that works on many humans who are full of passions and confusions in predictable ways... If everyone was a sage, I think gamification would be pointless or even counter-productive. A contextually counter-productive heuristic is a bias. In a deep sense we have biases because we sometimes have heuristics that are being applied outside of their training distribution by accident.

The context where gamification might not work: Eventually you know you are both the rider and the elephant [https://www.creativehuddle.co.uk/post/the-elephant-and-the-rider]. Your rider has trained (and is still training) your elephant pretty well, and sometimes even begins to ruefully be thankful that the elephant had some good training, because sometimes the rider falls asleep and it was only the luck of a well-trained elephant that kept them from tragedy. Anyone who can get to this point (and I'm nowhere close to perfect here, but sometimes in some d
Speaking of Stag Hunts

Please don't make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.

For the record, as "arch-moderator", I care about karma for more reasons than just that, in line with Oli's list here.

[This comment is no longer endorsed by its author]
Open & Welcome Thread November 2021

A Song for Two Voices does kind of have a rationalist protagonist, but other characters try to be rational too (though they're generally less advanced). I guess in that way it's a bit like HPMOR, though I do think the other characters are trying a bit harder.

Open & Welcome Thread November 2021

Thanks for the report! Yeah, a good deal of weirdness lives in editor edge cases.

A GitHub issue would be good, but here is also fine if it's easier.

4 MondSemmel 1mo: Have created a Github issue here [https://github.com/LessWrong2/Lesswrong2/issues/4248].
Speaking of Stag Hunts

Not sure if this is what you're getting at. My estimate is that only a few dozen people participated, and I would ascribe to most of them either a desire for good organizations, a desire to protect people, or a desire for truth and good process to be followed. I'd put entertainment-seeking as a non-trivial motivation for many, and as responsible for certain parts of the conversation, but not the overall driver.

9 Duncan_Sabien 1mo: For me personally, they're multiplied terms in the Fermi. Like, engagement = [desire for good]*["entertainment"]*[several other things]. I wouldn't have been there at all just for the drama. But also if there was zero something-like-pull, zero something-like-excitement, I probably wouldn't have been there either. I don't feel great about this.
EfficientZero: human ALE sample-efficiency w/MuZero+self-supervised

Curated. Although this isn't a LessWrong post, it seems like a notable result for AGI progress. Also see this highly-upvoted, accessible explanation of why EfficientZero is a big deal. Lastly, I recommend the discussion in the comments here.

Speaking of Stag Hunts

Drama

I object to describing recent community discussions as "drama". Figuring out what happened within community organizations and holding them accountable is essential for us to have a functioning community. [I leave it unargued that we should have community.]

I agree that figuring out what happened and holding people/orgs accountable is important. That doesn't make the process (at least the process as it worked this time) not drama. I certainly don't think that the massive amount of attention the recent posts achieved can be attributed to thousands of people having a deeply-held passion for building effective organizations.

3 MondSemmel 1mo: This is a weird orphaned comment, with some weird technical details: it has a "Show previous comment" button, and when I open that previous comment and click its link [https://www.lesswrong.com/posts/D5BP9CxKHkcjA7gLv/speaking-of-stag-hunts?commentId=qMs69yCKkyQFvWnTh#qMs69yCKkyQFvWnTh], its "See in context" button doesn't work. Something maybe went wrong with a mod action?
Speaking of Stag Hunts

Strong upvote. Thank you for writing this, it articulates the problems better than I had them in my head and enhances my focus. This deserves a longer reply, but I'm not sure if I'll get to write it today, so I'll respond with my initial thoughts.

What I really want from LessWrong is to make my own thinking better, moment to moment. To be embedded in a context that evokes clearer thinking, the way being in a library evokes whispers. To be embedded in a context that anti-evokes all those things my brain keeps

... (read more)
3 hg00 1mo: I think this is usually done subconsciously -- people are more motivated to find issues with arguments they disagree with.

Regarding the three threads you list: I, others involved in managing LessWrong, and leading community figures who've spoken to me are all dissatisfied with how those conversations went and believe it calls for changes in LessWrong.

I'm deeply surprised by this. If there is a consensus among the LW managers and community figures, could one of them write a post about it laying out what was dissatisfactory and what changes they feel need to be made, or at least the result they want from the changes? I know you're a highly conscientious person with too much on zir hands already, so please don't take this upon yourself.

While I am not technically a "New User" in the context of the age of my account, I comment very infrequently, and I've never made a forum-level post. 

I would rate my own rationality skills and knowledge as slightly above the average person's but below the average active LessWrong member's. While I am aware that I possess many habits and biases that reduce the quality of my written content, I have the sincere goal of becoming a better rationalist.

There are times when I am unsure whether an argument or claim that seems incorrect is flawed or if it is ... (read more)

6Chris_Leong1mo"If you can find me people capable of being these moderators, I will hire them. I think the number of people who have mastered the standards you propose and are also available is...smaller than I have been able to locate so far." I think the best way to do this would be to ask people to identify a few such comments and how they would have rewritten the comment.
6 MondSemmel 1mo: If I may add something, I wish users occasionally had to explain or defend their karma votes a bit. To give one example that really confuses me, currently the top three comments on this thread are:

1. a clarification [https://www.lesswrong.com/posts/D5BP9CxKHkcjA7gLv/speaking-of-stag-hunts?commentId=odxmyFMJJFnEdpB4Q] by OP (Duncan) - makes sense

2. a critical comment [https://www.lesswrong.com/posts/D5BP9CxKHkcjA7gLv/speaking-of-stag-hunts?commentId=4sqEffYTmrpMoHjEC] which was edited after I criticized it [https://www.lesswrong.com/posts/D5BP9CxKHkcjA7gLv/speaking-of-stag-hunts?commentId=oiXMgpHY4sJgzwE4n]; now my criticism is at ~0 karma, without any comments indicating why. This would all be fine, except the comment generated no other responses, so now I don't even understand why I was the only one who found the original objectionable, or why others didn't like my response to it; and I don't remotely understand the combination of <highly upvoted OP> and <highly upvoted criticism which generates no follow-up discussion>. (Also, after a comment is edited, is there even a way to see the original? Or was my response just doomed to stop making sense once the original was edited?)

3. another critical comment [https://www.lesswrong.com/posts/D5BP9CxKHkcjA7gLv/speaking-of-stag-hunts?commentId=XLPxu8ALLzGojtudn], which did generate the follow-up discussion I expected

(EDIT: Have fixed broken links.)
3 Yoav Ravid 1mo: How would you test if someone fits the criteria? Can those people be trained?
2 Duncan_Sabien 1mo: (Was this comment meant to be on Speaking of Stag Hunts? It's currently under Cup-Stacking Skills.)
[Book Review] "The Bell Curve" by Charles Murray

As I mentioned elsethread, if I'd written the book review I would have done what you describe. But I didn't and probably never would have written it out of timidness, and that makes me reluctant to tell someone less timid who did something valuable that they did it wrong.

9 Lukas_Gloor 1mo: I was just commenting on the general norm. I haven't read the OP and didn't mean to voice an opinion on it. I'm updating that I don't understand how discussions work. It happens a lot that I object only to a particular feature of an argument or particular argument, yet my comments are interpreted as endorsing an entire side of a complicated debate.

FWIW, I think the "caving in" discussed/contemplated in Rafael Harth's comments is something I find intuitively repugnant. It feels like giving up your soul for some very dubious potential benefits. Intellectually I can see some merits for it but I suspect (and very much like to believe) that it's a bad strategy. Maybe I would focus more on criticizing this caving in mentality if I didn't feel like I was preaching to the choir. "Open discussion" norms feel so ingrained on Lesswrong that I'm more worried that other good norms get lost / overlooked. Maybe I would feel different (more "under attack") if I was more emotionally invested in the community and felt like something I helped build was under attack with norm erosion. I feel presently more concerned about dangers from evaporative cooling where many who care a not-small degree about "soft virtues in discussions related to tone/tact/welcomingness, but NOT in a strawmanned sense" end up becoming less active or avoiding the comment sections.

Edit: The virtue I mean is maybe best described as "presenting your side in a way that isn't just persuasive to people who think like you, but even reaches the most receptive percentage of the outgroup that's predisposed to be suspicious of you."
[Book Review] "The Bell Curve" by Charles Murray

Fwiw, I bet adding the author's name was an intentional move because it'd be controversial.

6 Ben Pace 1mo: Okay. Maybe not the ideal goal, not sure, but I think it's pretty within range of fine things to do. There's a fairly good case that people will search the author's name and want to understand their ideas because he's well-known, so it helps as a search term.
[Book Review] "The Bell Curve" by Charles Murray

We've had a norm against discussing politics since before LessWrong 2.0, which doesn't seem to have had any noticeable negative effects on our ability to discuss other topics.

I'm not sure whether that's true, but separately, the norm against politics has definitely impacted our ability to discuss politics. Perhaps that's a necessary sacrifice, but it's a sacrifice. In this particular case, both the object level (why is our society the way it is) and the meta-level (what are the actual views in this piece that got severe backlash) are relevant to our modelin... (read more)

2 Rafael Harth 1mo: I agree that the politics ban is a big sacrifice (regardless of whether the benefits outweigh it or not), and also that this particular post has a lot of value. But if you look at the set of all books for which (1) a largely positive review could plausibly have been written by a super smart guy like lsusr, and (2) the backlash could plausibly be really bad, I think it literally contains a single element. It's only TBC. There are a bunch of non-bookreview posts that I also wouldn't want, but they're very rare. It seems like we're talking about a much smaller set of topics than what's covered by the norm around politics.

I feel like if we wanted to find the optimal point in the value-risk space, there's no way it's "ban on all politics but no restriction on social justice". There have got to be political areas with less risk and more payoff, like just all non-US politics or something.
[Book Review] "The Bell Curve" by Charles Murray

You might be interested in the comment I posted on the other thread.

[Book Review] "The Bell Curve" by Charles Murray

In my capacity as moderator, I saw this post this morning and decided to leave it posted (albeit as Personal blog with reduced visibility). 

I think limiting the scope of what can be discussed is costly for our ability to think about the world and figure out what's true (a project that is overall essential to AGI outcomes, I believe), and therefore I want to minimize such limitations. That said, there are conversations that wouldn't be worth having on LessWrong, topics that I expect would attract attention that just isn't worth it–those I would block. However,... (read more)

6 Rafael Harth 1mo: Thanks for being transparent. I'm very happy to see that I was wrong in saying no-one else is taking it seriously. (I didn't notice that the post wasn't on the frontpage, which I think proves that you did take it seriously.)

I don't understand this concern (which I classify as the same kind of thing voiced by Zack many times and AAB just a few comments up [https://www.lesswrong.com/posts/vvc2MiZvWgMFaSbhx/book-review-the-bell-curve-by-charles-murray?commentId=5zCKjRZWnaAq5f6TA].) We've had a norm against discussing politics since before LessWrong 2.0, which doesn't seem to have had any noticeable negative effects on our ability to discuss other topics. I think what I'm advocating for is to extend this norm by a pretty moderate amount? Like, the set of interesting topics in politics seems to me to be much larger than the set of interesting [topics with the property that they risk significant backlash from people who are concerned about social justice]. (I do see how this post is useful, but the bell curve is literally in a class that contains a single element. There seem to be < 5 posts per year which I don't want to have on LW for these kinds of reasons, and most of them are less useful than this one.)

My gears-level prediction for how much that would degrade discussion in other areas is basically zero, but at this point I must be missing something? A difference I can see is that disallowing this post would be done explicitly out of fear of backlash whereas the norm against politics is because politics is the mind killer, but I guess I don't see why that makes a difference (and doesn't the mind killer argument extend to these kinds of topics anyway?)

I do think that if we order all posts by where they appear on this spectrum, I would put this farther to the right than any other post I remember, so we genuinely seem to differ in our judgment here. I echo anon03 in that the title is extremely provocative, but minus the claim that this is only a descriptive state
[Book Review] "The Bell Curve" by Charles Murray

This is the weekly comment count for LessWrong over the last year. Last we counted, something like 300 on an SSC post? So if there are two SSC posts/week, LessWrong is coming out ahead.

4 philh 1mo: I think ACX is ahead of LW here. In October, it got 7126 comments in 14 posts, which is over 1600/week. (Two of them were private with 201 between them, still over 1500/week if you exclude them. One was an unusually high open thread, but still over 1200/week if you exclude that too.) In September it was 10350 comments, over 2400/week. I can't be bothered to count August properly but there are 10 threads with over 500 comments and 20 with fewer, so probably higher than October at least. Not too far separate though, like maybe 2x but not 10x. (E: to clarify this is "comments on posts published in the relevant month" but that shouldn't particularly matter here)
Feature Selection

Also conveying the simultaneous alienness and reliability of a classifier/optimizer.

Zoe Curzi's Experience with Leverage Research

The observation might be correct but I don't love the tone. It has some feeling of "haha, got you!" that doesn't feel appropriate to these discussions.

3 Richard_Kennaway 1mo: Point taken, but I stand by the observation.
Self-Integrity and the Drowning Child

Curated. To generalize, as the stakes continue to seem high ("most important century"-level high), it's easy to feel an immense obligation to act and to give it all up for the sake of the future. This meta-parable reminds us that humans aren't made solely of parts that give everything up, and that it's a matter of self-integrity to not do so.

9 Dagon 1mo: Success! Or maybe Fail! if you hoped it not to be visible.
Petrov Day Retrospective: 2021

I appreciate the Fermi. It's a fair point that only the frontpage is taken down.

I think that a well-designed ritual ought to create $10,000 of value if it's to justify the amount of time spent on it (1-2 person-weeks). Or at least if I didn't think it would create that much value, I wouldn't choose to do it.

I don't see why sending 100 emails and announcing that generally couldn't have a very large effect size. It all depends on the email.

Petrov Day Retrospective: 2021

The vast, vast majority of people won't start a nuclear war when it doesn't benefit them

But there are more people than Petrov who faced incentives to push us into [nuclear] war but didn't. Say, the Cuban missile crisis: there were pressures to escalate, and I think we should also be celebrating the virtues of leaders who didn't choose to escalate in those circumstances. E.g. people who deescalate even when there's a force pushing in the direction of "better strike first before they do".

Even if in all those cases deescalation was the only sane move... (read more)

Lies, Damn Lies, and Fabricated Options

Curated. I wouldn't normally curate two posts from the same author in rapid succession, but in fact both are well worthy of it. This post introduces a new Rationality technique/frame into shared knowledge that, despite its simplicity, just seems powerful and great, and I'm glad to have it in the toolbox.

Deleted comments archive?

My initial thought is that this is probably fine and in fact should be protected, that is, assuming we're talking about another user.

Another user has the right to moderate (including deleting comments) on their own posts, but they don't have the right to moderate what comments you put on your own posts. So I don't see on what grounds they could stop you from posting the deleted comment on your own post, any more than they could stop you from posting any other comment.

On the other hand, if the comment was deleted by a moderator for in some way having a strong negative effect [1] on th... (read more)

4 Vladimir_Nesov 1mo: I think downvoting things to minus infinity is almost always better than deleting them. (One exception is purging all content posted by new users banned for spam/nonsense.) A warning before temporary suspension of posting privileges if it's not heeded should be equally effective where a comment would normally be deleted as discouragement from further engagement in some current drama.

This makes sense overall, but I am somewhat confused by the criteria you specify. Are either “poor epistemics” or “low content” really sufficient grounds for judging a post or comment entirely unsuitable for the site, even if posted on a user’s personal page? Forgive me for saying so, but this seems to indict quite a bit of what I see posted on people’s personal pages!

Petrov Day Retrospective: 2021

Forgive me if I engage with only part of this; I believe the OP already acknowledges most of the problem you've described.

To engage with the point that is novel (epistemic status: haven't thought that hard about this):

The Ritual misrepresents the true opinion of the community, by selecting those who would take it seriously and erasing those who wouldn't[1].

This makes me realize that there are different frames you could approach the ritual creation with:

  1. It's a ritual for "the" community and therefore the entire community
... (read more)
1 Isnasene 1mo: No forgiveness needed! I agree that the OP addresses this portion -- I read the OP somewhat quickly the first time and didn't fully process that part of it. And, as I've said, I do appreciate the thought you've put into all this. I think I differ from the text of the OP in that social-shaming/lack-of-protest-method in rituals is often an okay and sensible thing. It is only when this property is combined with a serious problem with the ritual itself that I get worried -- but I have a hunch that you'd agree with this.

I agree that having/establishing a group of people you can work with/trust is a good thing, and I think that rituals about this can be beneficial. However I have two main objections to this perspective:

#1. It is not obvious to me that identifying a group unlikely to press the button in a Petrov Day ritual is one capable of coordinating generally when stakes are real and high. As commenters have noted, social pressure incentives stack pressure against defecting. Moreover, if you are selecting for people who you know well enough to speculate on behavior in these circumstances, you are probably also selecting for people more deeply connected to the community for whom these pressures matter a lot.

#2. I don't think an existence proof for a 100-strong set of LWers who don't press the button in a Petrov's Day ritual is particularly useful or surprising to me. If 50% of LWers would press the button and 50% wouldn't, it's mathematically obvious that such a group exists. The actually arguably impressive/surprising part of the Ritual is not "does this group exist?" -- it's "hey look! we have a selection process with strong enough discriminatory power to find one-hundred people who will act in a certain way." This could mean something important symbolically -- about how so many people in the community are trustworthy that we can assemble a group with even our imperfect judgement. But it could also mean the following things:

* We have 10,000 people to select
Petrov Day Retrospective: 2021

On a different note, how do you know how many people opened the email?

The mail merge software (YAMM) that we used gives you the option to track this. And we're evil and used it. :/

4 philh 1mo: Seems worth noting that you probably can't track everyone, so some people may have opened it but been uncounted. If (as I expect) they work through embedding images, then someone with images disabled by default wouldn't be counted unless they specifically enabled them. (I am such a person, but didn't get an email this year.)
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I appreciate the considerateness! 

The questions you've raised are important, though. I consider it a piece of "integrity debt" (as Ray would call it) that we don't have clear, transparent moderation policies posted anywhere. I hope to get to that soonish, and hopefully I can at least answer some of the questions you raised tomorrow.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

IlyaShpister's comments are worthy of moderator attention; I'm looking at them now.

The recent community discussion threads, this one alone at 741 comments, have exceeded the team's (or at least my) capacity to read and review every comment. Maybe we should set up a way for us to at least review every negative karma comment.

6 dxu 1mo: Thanks for your reply. I didn't intend my comment to impose any kind of implicit obligation on you or any other member of the mod team (especially if your capacity is as strained as it is), so to the extent that my initial comment came across as exerting social pressure for you to shift your priorities away from other more pressing concerns, I regret wording things the way I did, and hereby explicitly disavow that interpretation.
Feature idea: Notification when a parent comment is modified

I agree with that! Sorry, I tried to capture it in my off-hand "kind of feature we should have in some form".

Petrov Day Retrospective: 2021

the current structure of the Petrov day ritual misses what is admirable about Petrov by about a mile.

In my opinion, the most important thing about Petrov is that he didn't press the metaphorical button even though it was an option. The incentive structure and pressures make those decisions more admirable, but the core of the thing is not pressing the button, and the ritual celebrated to date lets us reenact that element.

Also, it's possible the name "Petrov Day" anchors us too much, but I don't actually think the entire focus should narrowly be around Petrov... (read more)

It seems like you're anchoring too much on the "pressing a button" element of the decision. To me the core features of Petrov's story are that he:
- Overcame local social pressure
- And tribalism/us-vs-them mentality
- To take a unilateral action
- Which he thought had highly beneficial consequences

Right now I think "not pressing the button" on LW doesn't have any of these features. After this thread, I'm personally highly uncertain about whether the effects of LW going down are good or bad; I'm guessing that if someone had pressed the button and then defended... (read more)

I disagree. The fact that Petrov didn't press the metaphorical button puts him in the company of Stalin, Mao, and every other leader of a nuclear power since 1945. The vast, vast majority of people won't start a nuclear war when it doesn't benefit them. The things that make Petrov special are a) that he was operating under conditions of genuine uncertainty and b) that he faced real, severe consequences for not reporting the alert up his chain of command. Even in those adverse circumstances, he made the right call. I'm not totally sure how to structure a ritual ... (read more)

Feature idea: Notification when a parent comment is modified

This definitely seems like the kind of feature we should have in some form. Consider it on the queue! Now just to find time for it...

PS: we're hiring

It's not a good feature without settling the notification-pollution objection. I sometimes edit comments like 10 times for typos and wording. This would be fine if there were an opt-in flag to intentionally push the update notifications when I judge my own edit as substantial.

The theory-practice gap

Curated. This post introduces a useful frame for thinking about different kinds of alignment work and related differences of opinion.
