MondSemmel

While the framing of treating lack of social grace as a virtue captures something true, it's too incomplete to support its strong conclusion. The way I would put it is that you have correctly observed that, whatever the benefits of social grace are, it comes at a cost, and sometimes this cost is not worth paying. So in a discussion, if you decline to pay the cost of social grace, you can afford to buy other virtues instead.[1]

For example, it is socially graceful not to tell the Emperor Who Wears No Clothes that he wears no clothes. Whereas someone who lacks social grace is more likely to tell the emperor the truth.

But first of all, I disagree with the frame that lack of social grace is itself a virtue. In the case of the emperor, for example, the virtues are rather legibility and non-deception, traded off against whichever virtues the socially graceful response would've gotten.

And secondly, often the virtues you can buy with social grace are worth far more than whatever you could gain by declining to be socially graceful. For example, when discussing politics with someone of an opposing ideology, you could decline to be socially graceful and tell your interlocutor to their face that you hate them and everything they stand for. This would be virtuously legible and non-deceptive, at the cost of immediately ending the conversation and thus forfeiting any chance of e.g. gains from trade, coming to a compromise, etc.

One way I've seen this cost manifest on LW is that some authors complain that there's a style of commenting here that makes it unenjoyable to post here as an author. As a result, those authors are incentivized to post less, or to post elsewhere.[2]

And as a final aside, I'm skeptical of treating Feynman as socially graceless. First, maybe he was less deferential towards authority figures, but if he had told nothing but the truth to all the authority figures (who likely included some naked emperors) throughout his life, his career would've presumably ended long before he could've gotten his Nobel Prize. And secondly, IIRC the man's physics lectures are just really fun to watch, and I'm pretty confident that a sufficiently socially graceless person would not make for a good teacher. For example, it is socially graceful not to belittle fledgling students as intellectual inferiors, even though in some ways they are just that.

  1. ^

    Related: I wrote this comment and this follow-up where I wished that Brevity were considered a rationalist virtue. Because if there's no counterbalancing virtue to trade off against other virtues like legibility and truth-seeking, then supposedly virtuous discussions are incentivized to become arbitrarily long.

  2. ^

    The moderation log of users banned by other users is a decent proxy for the question of which authors have considered which commenters to be too costly to interact with, whether due to lack of social grace or something else.

Related, here is something Yudkowsky wrote three years ago:

I'm about ready to propose a group norm against having any subgroups or leaders who tell other people they should take psychedelics.  Maybe they have individually motivated uses - though I get the impression that this is, at best, a high-variance bet with significantly negative expectation.  But the track record of "rationalist-adjacent" subgroups that push the practice internally and would-be leaders who suggest to other people that they do them seems just way too bad.

I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc.  I still think it's not our community business to try to socially prohibit things like that on an individual level by exiling individuals like that from parties, I don't think we have or should have that kind of power over individual behaviors that neither pick pockets nor break legs.  But I think that when there's anything like a subgroup or a leader with those properties we need to be ready to say, "Yeah, that's not a group in good standing with the rest of us, don't go there."  This proposal is not mainly based on the advance theories by which you might suspect or guess that subgroups like that would end badly; it is motivated mainly by my sense of what the actual outcomes have been.

Since implicit subtext can also sometimes be bad for us in social situations, I should be explicit that concern about outcomes of psychedelic advocacy includes Michael Vassar, and concern on woo includes the alleged/reported events at Leverage.

I mean, here are two comments I wrote three weeks ago, in a shortform about Musk being able to take action against Altman via his newfound influence in government:

That might very well help, yes. However, two thoughts, neither at all well thought out: ... Musk's own track record on AI x-risk is not great. I guess he did endorse California's SB 1047, so that's better than OpenAI's current position. But he helped found OpenAI, and recently founded another AI company. There's a scenario where we just trade extinction risk from Altman's OpenAI for extinction risk from Musk's xAI.

And:

I'm sympathetic to Musk being genuinely worried about AI safety. My problem is that one of his first actions after learning about AI safety was to found OpenAI, and that hasn't worked out very well. Not just due to Altman; even the "Open" part was a highly questionable goal. Hopefully Musk's future actions in this area would have positive EV, but still.

all the focus on the minutia of OpenAI & Anthropic may very well end up misplaced.

This doesn't follow. The fact that OpenAI and Anthropic are racing contributes to other people like Musk deciding to race, too. This development just means that there's one more company to criticize.

Re: the history of LW, there's a bunch more detail at the beginning of this podcast Habryka did in early 2023.

I could barely see that despite always using a zoom level of 150%. So I'm sometimes baffled at the default zoom levels of sites like LessWrong, wondering if everyone just has way better eyes than me. I can barely read anything at 100% zoom, and certainly not that tiny difference in the formulas!

I can't find any off the top of my head, but I'm pretty sure the LW/Lightcone salary question has been asked and answered before, so it might help to link to past discussions?

Apologies if I gave the impression that "a selfish person should love all humans equally"; while I'm sympathetic to arguments from e.g. Parfit's book Reasons and Persons[1], I don't go anywhere that far. I was making a weaker and (I think) uncontroversial claim, something closer to Adam Smith's invisible hand: that aggregating every individual's selfish focus on close family ties results in moral concerns becoming relatively more spread out overall, because the close circles of your close circle aren't exactly identical to your own.

  1. ^

    Like the argument that distances in time and space are morally similar. So if you imagine people in the distant past having the choice of a better life at their current time, in exchange for there being no people in the far future, then you'd wish they cared about more than just their own present time. A similar logic argues against applying a very high discount rate to your moral concern for beings that are very distant from you in e.g. space, close ties, etc.

Well, if there were no minds to care about things, what would it even mean that something should be terminally cared about?

Re: value falloff: sure, but if you start with your close circle, and then aggregate the preferences of that close circle (who has close circles of their own), and rinse and repeat, then this falloff for any individual becomes comparatively much less significant for society as a whole.

Maybe our disagreement is that I'm more skeptical about the legislature proactively suggesting any good legislation? My default assumption is that without leadership, hardly anything of value gets done. Like, it's an obviously good idea to repeal the Jones Act, and yet it's persisted for a hundred years.
