RationalElf

Comments

Raemon's Shortform
RationalElf2d10

How do you know the rates are similar? (And that it's not, e.g., like fentanyl, which in some ways resembles other opiates but is on average much more addictive and destructive.)

Reply
AI Moratorium Stripped From BBB
RationalElf12d1311

Did this case update you toward thinking "If you’re trying to pass a good bill, you need to state and emphasize the good reasons you want to pass that bill, and what actually matters"? If so, why? The lesson I think one would naively take from this story is an update in the direction of: "If you want to pass a good bill, you should throw in a bunch of stuff you don't actually care about but that others do and build a giant coalition, or make disingenuous but politically expedient arguments for your good stuff, or try to make out the people who oppose the bill to be woke people who hate Trump, etc."

Relevant quotes:

The opposition that ultimately killed the bill seems to have had essentially nothing to do with the things I worry most about. It did not appear to be driven by worries about existential or catastrophic risk, and those worries were not expressed aloud almost at all (with the fun exception of Joe Rogan). That does not mean that such concerns weren’t operating in the background, I presume they did have a large impact in that way, but it wasn’t voiced.

....

I am happy the moratorium did not pass, but this was a terrible bit of discourse. It does not bode well for the future. No one on any side of this, based on everything I have heard, raised any actual issues of AI long term governance, or offered any plan on what to do. One side tried to nuke all regulations of any kind from orbit, and the other thought that nuke might have some unfortunate side effects on copyright. The whole thing got twisted up in knots to fit it into a budget bill.

How does this relate to the question of which arguments to make and emphasize about AI going forward? My guess is that a lot of this has to do with the fact that this fight was about voting down a terrible bill rather than trying to pass a good bill.

If you’re trying to pass a good bill, you need to state and emphasize the good reasons you want to pass that bill, and what actually matters, as Nate Soares explained recently at LessWrong. You can and should also offer reasons for those with other concerns to support the bill, and help address those concerns. As we saw here, a lot of politicians care largely about different narrow specific concerns.

Reply
TurnTrout's shortform feed
RationalElf13d*3-1

[acknowledging that you might not reply] Sorry, I don't think I understand your point about the MtG questions: are you saying you suspect I'm missing the amount (or importance-adjusted amount) of positive responses to Nate? If so, maybe you misunderstood me. I certainly wouldn't claim it's rare to have a very positive response to talking to him (I've certainly had very positive conversations with him too!); my point was that very negative reactions to talking to him are not rare (in my experience, including among impactful and skilled people doing important work on AIS, according to me), which felt contrary to my read of the vibes of your comment. But again, I agree very positive reactions are also not rare!

Or, to put it another way: most of the people that like Nate's conversational style and benefit greatly from it and find it a breath of fresh air aren't here in the let's-complain-about-it conversation.

I mean, we're having this conversation on LessWrong. It's, to put it mildly, doing more than a bit of selection for people who like Nate's conversational style. Also, complaining about people is stressful and often socially costly, and it would be pretty weird for random policymakers to make it clear to random LW users how their conversation with Nate Soares had gone. How those effects compare to the more specific selection effect of this being a complaint thread spurred by people who might have axes to grind is quite unclear to me.

At the very least, I can confidently say that I know of no active critic-of-Nate's-style who's within an order of magnitude of having Nate's positive impact on getting this problem taken seriously.  Like, none of the people who are big mad about this are catching the ears of senators with their supposedly better styles.

I believe that's true of you. I know of several historically-active-critics-of-Eliezer's-style who I think have been much more effective at getting this problem taken seriously in DC than Eliezer post-Sequences, though not critics of Nate's or Eliezer's style with respect to this book in particular; I also just don't know much about how they're responding other than the blurbs (which I agree are impressive! But also subject to selection effects!). I'm worried there's a substantial backfire effect playing out, which is nontrivial to catch, which is one of the reasons I'm interested in this thread.

Reply
TurnTrout's shortform feed
RationalElf13d149

I appreciate you writing this, and think it was helpful. I don't have a strong take on Nate's object-level decisions here, why TurnTrout said what he said, etc. But I wanted to flag that the following seems like a huge understatement:

The concerns about Nate's conversational style, and the impacts of the way he comports himself, aren't nonsense.  Some people in fact manage to never bruise another person, conversationally, the way Nate has bruised more than one person.

But they're objectively overblown, and they're objectively overblown in exactly the way you'd predict if people were more interested in slurping up interpersonal drama than in a) caring about truth, or b) getting shit done.

For context, I've spoken to Nate for tens of hours. Overall, I'd describe our relationship as positive. And I'm part of the rationalist and AIS communities, and have been for more than 5 years; I spend tens of hours per week talking to people in those communities. There are many nice things I could say about Nate. But I would definitely consider him top-decile rude and, idk, bruising in conversation within those communities; to me, and I think to others, he stands out as notably likely to offend or be damagingly socially oblivious. My sense is that my opinion is fairly widely shared. Of all the conversations about AI safety I have ever seen, the one that became most hostile and closest to violence had Nate as one of the participants, though my impression was that the other party was significantly more in the wrong in that case.

I don't know what the base rates of people being grumpy after interacting with Nate are, and agree it's a critical question. I wouldn't be surprised if the rate is far north of 15% for people who aren't already in the rationalist community and who talk to him about AIS for more than an hour or something. I would weakly guess he has a much more polarizing effect on policymakers than other people who regularly talk to policymakers about AIS, and am close to 50-50 on whether his performance is worse overall than the average of that group.

I feel bad posting this. It's a bit personal, or something. But he's writing a book, and talking to important people about it, so it matters. 

Reply
Sexual Abuse attitudes might be infohazardous
RationalElf5mo*10

(Idk why I'm replying to this 2 years later.) I forgave him for what I think are pretty normal reasons to forgive someone. A combination of: (1) he's been a good friend in many respects over the years and so has a bunch of "credit", and I wanted to find a path to our relationship continuing; (2) nothing like that ever happened again, so I believe it was really aberrant and unlucky, or he took it really seriously and changed; (3) like I said above, it wasn't that harmful to me and seemed less harmful than a lot of stuff a lot of other people do, so it seemed like it should be in the "forgivable actions" reference class.

If I'd been the only woman in the world I probably would have forgiven him more quickly, but I felt some need to punish him extra on behalf of the women who would have suffered more than I did from what he did to me.

Reply
Human takeover might be worse than AI takeover
RationalElf6mo2020

I mean, humans with strong AGIs under their control might function as if they don't need sleep, might become immortal, will probably build up superhuman protections from assassination, etc.

Reply
Review: Breaking Free with Dr. Stone
RationalElf6mo22

I'm glad this helped you, and think it's cool you wrote up this recommendation, and I wish people did more of that sort of thing.

I felt very disappointed by this show. It fell into a lot of anime tropes I find cringey and misleading, but worse, I felt like the characters acted very irrationally and carelessly, and in my opinion aren't good role models of rationality.

E.g., to pick a few early not-very-spoilery points: they don't optimize their first deliberate de-stoning, and even though it's known that stone people die when they break, they choose to carry a stone person they value highly, including running with them through the forest (which seems like it could easily have resulted in tripping and breaking them), instead of un-stoning them in situ. Senku contends that Taiju shouldn't let himself die to save Senku because both their skillsets are needed, but Taiju's skillset is being physically strong (vs. Senku being exceptionally smart and good at science), which is clearly a more common skillset (and easier to identify in petrified people).

Also, Senku contends that counting is "simply the rational thing to do" but that doesn't seem obvious at all; for most people, that seems pretty unlikely to be the right approach to maintaining sanity. 

Reply
Joseph Miller's Shortform
RationalElf6mo40

Thank you, this is very interesting, and it seems like you did a valuable public service in compiling it.

  • The motivations of OpenAI or some other actor to murder a whistleblower are unlikely. The most plausible to me is that they want to send a warning to other potential whistleblowers, but this isn't very compelling

What do you think of the motive that he was counterfactually going to testify in a very damaging way, or that he had very damaging evidence/data that was deleted?

Reply
Capital Ownership Will Not Prevent Human Disempowerment
RationalElf6mo31

My sense is that we do see multi-century persistence of wealth (including spanning the Industrial Revolution), though I don't trust it much because I don't think it accounts for genetic effects. E.g. here.

Reply
Sexual Abuse attitudes might be infohazardous
RationalElf3y9940

Not sure if a single anecdote is worth anything at all, but I am a woman, and I experienced what is legally and culturally considered rape at least twice (arguably 3x), and it really didn't bother me very much (though I think different versions, e.g. more violent ones or ones perpetrated by people I looked up to, would have been much more damaging). One of the people who technically raped me (it was a very drunken screwup with, I believe, no malevolent intent) is still a friend of mine. I feel scared about people finding this out about our friendship, mostly on his behalf.

Notably, I think it was way less traumatizing than several experiences I have had for which I've never been able to garner 1/10th as much sympathy: a trusted close friend failing me in a time of need, a painful and embarrassing medical experience, a pet dying.

I share the view of the OP that there's something off here; I think the combination of a pretty wide range of disparate acts being considered rape/sexual abuse, plus rape/sexual abuse being considered among the worst experiences a person can have, is pretty unhealthy for the reasons described and some others. I also think it drains social energy from recognizing other kinds of trauma people can experience and helping them with it.

Reply
119 · Mental health benefits and downsides of psychedelic use in ACX readers: survey results · 4y · 18