Having a group of people with experience convincing others of alignment helps us:

  1. Convince people of alignment
  2. Make explicit common reasons for skepticism and unwillingness
  3. Coach people who want to have these conversations w/ their friends/colleagues/advisors/etc.

If you're not convinced of alignment and would be willing to donate 1 hour of your time talking to me, I'd appreciate it if you messaged me to schedule a call in the next couple of weeks.

If you'd like to help convince people of alignment and build a curriculum w/ me, I'd also appreciate a message so we can schedule a call to chat.

What is Street Epistemology?

Street Epistemology is a set of tools that helps you have better conversations about difficult topics.

It's a way of getting to the core disagreements of a conversation, instead of talking in circles, past each other, and never getting anywhere.

Recently I had a conversation that went like this (note: the details are vague, but this was the general flow of the conversation. They're a friend who had read Superintelligence but wasn't working in alignment):

L: What would convince you to start working on alignment?

C: There's not really anything actionable to solve the problem.

L: Have you looked at people's research agendas who work on this?

C: No.

L: Okay, say we have an actionable plan that you could work on, would you then want to work on alignment?

C: ... there's definitely a pay aspect, so I couldn't take much of a pay cut from what I'm making now (i.e. as a software engineer)

L: Okay, say there's an actionable plan and you have funding at the same pay as your current job, then would you want to work on alignment?

[Note that we could've gone off on tangents like explaining different people's research agendas or saying "if you truly believed AI could cause an existential risk, then a pay cut's not a big deal"]

C: I'm not sure if I'd be happy working on it. When I'm studying ML, there's a monotony to it.

L: I think you're the type of person who'd be happy working on this type of problem; it's not all ML. There are definitely parts that are a slog, but programming and learning are like that sometimes too.

C: Oh, I can definitely find joy in things that are a slog, but things like this and global warming make me feel bad.

[Asking them to elaborate eventually yields:]

C: "...even if I made a perfect solution to alignment, it wouldn't change anything. You could have some people on board, but then one guy in another country say "screw this" and ruin everything"

[Bringing up AI governance and pivotal acts led to a long reflection.]

He later said, “I could see myself transitioning if I made it a hobby first.” A good question in general may then be "How could you see yourself transitioning into alignment, realistically?" Alternatively, give people very concrete pathways that other people have gone through.

The Current Plan

I'll meet up with or call 0-5 people each week to help build my personal understanding so that I can better convince others and coach people in the future.

Find at least 1 other person interested in this to start creating a curriculum together. I would expect them to also have conversations, coach people, and potentially read up on the Street Epistemology site for tips and tricks.

After ~20 calls and building the curriculum, start coaching people on how to do it themselves. One idea is to do 5 sessions:

  1. Intro to Street Epistemology
  2. Conversation with Logan (that's me!) pretending to be a skeptic
  3. Conversation with a volunteer skeptic
  4. Conversation with their friend/colleague
  5. How to follow up

I also expect to uncover common reasons for skepticism and unwillingness, which I can write about and then convince others to help fix.

Isn't This Kind of Weird or Cult-like?

At a high level, I can see that comparison, but when you actually have these conversations, they come across as very honest and intentional.

Call to Action

To repeat the beginning: if you're not convinced of alignment and would be willing to donate 1 hour of your time talking to me, I'd appreciate it if you messaged me to schedule a call in the next couple of weeks.

If you'd like to help convince people of alignment and build a curriculum w/ me, I'd also appreciate a message so we can schedule a call to chat.

Comments

Note I'd like to make: a lot of people around here worry about cult-like behavior. That's not irrational; cult-like abuses have in fact occurred in humanity, including in places that are low network distance to this group. Cult-like behavior must be pushed back against specifically, not with vague generality. Attempting to convince people of the value of aligning multi-agent networks is, in fact, a major valuable direction that humanity could go, IMO, and being able to do that without risking cult-like abuses is important. Key things to avoid include isolating people from their friends, breaking the linguistic association of words to reality, demanding that someone change their linguistic patterns on the spot, etc - mostly things which street epistemology specifically makes harder due to the recommended techniques. I'd suggest that, in future instances where you want to push back against cult-like abuses out of worry that you might be encouraging them, you inline my point here and state the details specifically: e.g., that encouraging people to believe things risks being too convincing, and that frequent reminders should be present to ensure people stay connected to their existing social networks unless they have a strong personal reason not to.

Just a thought, anyway.

Returning from the tangent: I agree, convincing people that multi-agent alignment is a critical step for life on earth does seem like the #1 problem facing humanity, and we're reaching an era where the difference between human and AI has already been blurred. If we are to ensure that no subnetwork of beings replaces another, it is critical to find and spread the knowledge of how to ensure all beings are in prosocial alignment with each other at least enough to share whatever our cosmic endowment is.

Thanks. Yeah, this all sounds extremely obvious to me, but I may not have included such obvious-to-Logan things if I were coaching someone else.

Key things to avoid include isolating people from their friends, breaking the linguistic association of words to reality, demanding that someone change their linguistic patterns on the spot, etc - mostly things which street epistemology specifically makes harder due to the recommended techniques

Are you saying street epistemology is good or bad here? I've only seen a few videos and haven't read through the intro documents or anything.

Good. People have [edit: some] defenses against abusive techniques, and from what I've seen of Street Epistemology, its response to most of those is to knock on the front door rather than trying to sneak in the window, metaphorically speaking.

Have you gotten farther with this? It seems like a potentially very impactful thing to me. I also recently had the idea of paying skeptical AI researchers to spend a few hours discussing/debating their reasons for skepticism.