Quadratic Reciprocity

Topics I would be excited to have a dialogue about [will add to this list as I think of more]:

  • I want to talk to someone who thinks p(human extinction | superhuman AGI developed in next 50 years) < 50% and understand why they think that 
  • I want to talk to someone who thinks the probability of existential risk from AI is much higher than the probability of human extinction due to AI (i.e., most x-risk from AI isn't scenarios where all humans end up dead soon after)
  • I want to talk to someone who has thoughts on university AI safety groups (are they harmful or helpful?)
  • I want to talk to someone who has pretty long AI timelines (median >= 50 years until AGI)
  • I want to have a conversation with someone who has strong intuitions about what counts as high/low integrity behaviour. Growing up I sort of got used to lying to adults and bureaucracies and then had to make a conscious effort to adopt some rules to be more honest. I think I would find it interesting to talk to someone who has relevant experiences or intuitions about how minor instances of lying can be pretty harmful. 
  • If you have a rationality skill that you think can be taught over text, I would be excited to try learning it. 

I mostly expect to ask questions and point out where and why I'm confused or disagree with your points rather than make novel arguments myself, though am open to different formats that make it easier/more convenient/more useful for the other person to have a dialogue with me. 

Cullen O'Keefe is also no longer at OpenAI (as of last month).

From the comment thread:

I'm not a fan of *generic* regulation-boosting. Like, if I just had a megaphone to shout to the world, "More regulation of AI!" I would not use it. I want to do more targeted advocacy of regulation that I think is more likely to be good and less likely to result in regulatory capture.

What are specific regulations / existing proposals that you think are likely to be good? When people are protesting to pause AI, what do you want them to be speaking into a megaphone (if you think those kinds of protests could be helpful at all right now)? 

This is so much fun! I wish I could download them!

I thought I didn’t get angry much in response to people making specific claims. I did some introspection about times in the recent past when I got angry, defensive, or withdrew from a conversation in response to claims that the other person made. 

I think these are the mechanisms that made me feel that way:

  • They were very confident about their claim. Partly I felt annoyed because it didn’t seem like anything would change their mind, and partly because it felt like they didn’t have enough status to make such confident claims. This is more about confidence in body language and tone than their confidence in their own claims, though both matter. 
  • Credentialism: they were unwilling to explain things and took it as a given that they were correct because I didn’t have their specific experiences or credentials, without mentioning what specifically from gaining that experience would help me understand their argument.
  • Not letting me speak and interrupting quickly to take down the fuzzy strawman version of what I meant rather than letting me take my time to explain my argument.
  • Morality: I felt like one of my cherished values was being threatened. 
  • The other person was relatively smart and powerful, at least within the specific situation. If they were dumb or not powerful, I would have just found the conversation amusing instead. 
  • The other person assumed I was dumb or naive, perhaps because they had met other people with the same position as me and those people came across as not knowledgeable. 
  • The other person getting worked up, for example, raising their voice or showing other signs of being irritated, offended, or angry while acting as if I was the emotional/offended one. This one particularly stings because of gender stereotypes. I think I’m more calm and reasonable and less easily offended than most people. I’ve had a few conversations with men where it felt like they were just really bad at noticing when they were getting angry or emotional themselves and kept pointing out that I was being emotional despite me remaining pretty calm (and perhaps even a little indifferent to the actual content of the conversation before the conversation moved to them being annoyed at me for being emotional). 
  • The other person’s thinking is very black-and-white, thinking in terms of a very clear good and evil and not being open to nuance. Sort of a similar mechanism to the first thing. 

Some examples of claims that recently triggered me. They’re not so important themselves so I’ll just point at the rough thing rather than list out actual claims. 

  • AI killing all humans would be good because thermodynamics god/laws of physics good
  • Animals feel pain but this doesn’t mean we should care about them
  • We are quite far from getting AGI
  • Women as a whole are less rational than men are
  • Palestine/Israel stuff

Doing the above exercise was helpful because it helped me generate ideas for things to try if I’m in situations like that in the future. But it feels like the most important thing is just to get better at noticing what I’m feeling in the conversation, and if I’m feeling bad or uncomfortable, to think about whether the conversation is useful to me at all and if so, for what reason. And if not, to make a conscious decision to leave the conversation.

Reasons the conversation could be useful to me:

  • I change their mind
  • I figure out what is true
  • I get a greater understanding of why they believe what they believe
  • Enjoyment of the social interaction itself
  • I want to impress the other person with my intelligence or knowledge

Things to try will differ depending on why I feel like having the conversation. 

Advice of this specific form has been helpful for me in the past. Sometimes I don't notice immediately when the actions I'm taking aren't ones I would endorse after a bit of thinking (particularly when they're fun and good for me in the short term but bad for others or for me in the longer term). This is also why having rules to follow for myself is helpful (e.g. never lying or breaking promises).

women more often these days choose not to make this easy, ramping up the fear and cost of rejection by choosing to deliberately inflict social or emotional costs as part of the rejection

I'm curious about how common this is, and what sort of social or emotional costs are being referred to. 

Sure feels like it would be a tiny minority of women doing it but maybe I'm underestimating how often men experience something like this. 

My goals for money, social status, and even how much I care about my family don't seem all that stable and have changed a bunch over time. They seem to arise from some deeper combination of desires (to be accepted, to have security, to feel good about myself, to avoid effortful work, etc.) interacting with my environment. Yet I wouldn't think of myself as primarily pursuing those deeper desires, and during various periods I would have self-modified, if given the option, to more aggressively pursue the goals that I (the "I" that was steering things) thought I cared about (like doing really well at a specific skill, which turned out to be a fleeting goal with time). 

Current AI safety university groups are overall a good idea and helpful, in expectation, for reducing AI existential risk 


Things will basically be fine regarding job loss and unemployment due to AI in the next several years and those worries are overstated 
