Severin T. Seehrich

Sequences: Networking: A Game Manual

Comments, sorted by newest
Overview: AI Safety Outreach Grassroots Orgs
Severin T. Seehrich · 2mo

I love them and have been around since the start; I decided not to include them because they don't point in the outreach direction.
Overview: AI Safety Outreach Grassroots Orgs
Severin T. Seehrich · 2mo

We found them but had the impression they're not super joinable.
Overview: AI Safety Outreach Grassroots Orgs
Severin T. Seehrich · 2mo

More separate than e.g. PauseAI Germany? My assumption was that anyone would find their respective local chapter via the general PauseAI page.
Why does LW not put much more focus on AI governance and outreach?
Severin T. Seehrich · 3mo

Huh, interesting. We seem to understand different things under the term "policy" here. Going by the titles alone, I'd only have counted lines 13 and 18-20 as "yes" and 33 as "maybe". So 12-15%, or 9-13% if I exclude this post, which was made after observing the ratio.

I'm still curious what percentage they make up among total submissions, not only under the AI tag.
Why does LW not put much more focus on AI governance and outreach?
Severin T. Seehrich · 3mo

Huh, that's a potentially significant update for me. Two questions:

1. Can you give me a source for the claim that making the models incapable of deception seems likely to work? I've managed to miss that so far.

2. What do you make of Gradual Disempowerment? It seems to imply that even successful technical alignment might lead to doom.
Why does LW not put much more focus on AI governance and outreach?
Severin T. Seehrich · 3mo

Good catch! My implicit question was about what ends up on the frontpage, i.e. some mix of versions 1 and 3. A friend of mine answered the sociological side of that question to my satisfaction: many of the most competent people have already pivoted to governance/outreach. But they don't have much use for in-group signalling, so they have far fewer posts on the frontpage than others.
Why does LW not put much more focus on AI governance and outreach?
Severin T. Seehrich · 3mo

On a more general note, it's certainly possible that I vastly overestimate how good the median LessWronger would be at presenting the case for halting AI progress to non-rationalists.

After all, I've kept up considerable involvement with my normie family and non-rationalist communities over the past years and put a bunch of skill points into bridging the worlds. By now, I find it easier to navigate leftist spaces than rationalist ones despite my more gray tribe politics, because I know the local norms from the olden days and expect leftists to be more fluent at guess culture, so I don't need to verbalize as many things. In addition, I'm unusually agnostic on the more controversial LW pet topics, like transhumanism, compared to others here.

At the same time, having constructive conversations with normies is a learnable skill. I suspect that many LWers have about as much learned helplessness around it as I had two or three years ago. I admit that it might make sense for super technical people to stay in their lane and just keep building on their existing skill trees. Still, I suspect that for more rationalists than are currently doing it, investing more skill points into being normie-compatible and helping with Control AI-style outreach might be a high-leverage thing to do.
Why does LW not put much more focus on AI governance and outreach?
Severin T. Seehrich · 3mo

So you think the alignment problem is solvable within the time we appear to have left? I'm very sceptical of that, which makes me increasingly prone to believe that CEV, at this point in history, genuinely is not a relevant question. That appears to be a position a number of people in PauseAI hold.
My covid-related beliefs and questions
Severin T. Seehrich · 4mo

Makes sense! My current emerging policy is to keep going out, but to avoid closed rooms for interactions that aren't high-EV, plus prioritizing applications to remote over in-person jobs.

Given the large leaps with which AGI and various other existential risks are approaching, being extremely covid-cautious doesn't seem indicated, though. Gotta see the world while it still stands.
Considerations on orca intelligence
Severin T. Seehrich · 4mo

A key piece of information I'm missing here is how well-myelinated orca brains are compared to human brains.

A quick Google search (1) suggests that "unmyelinated axon conduction velocities range from about 0.5 to 10 m/s, myelinated axons can conduct at velocities up to 150 m/s." This seems even more significant in orcas than in humans, given their larger brain and body size.
Posts

46 · Overview: AI Safety Outreach Grassroots Orgs · 2mo · 8
19 · Offer: Team Conflict Counseling for AI Safety Orgs · 3mo · 1
78 · Why does LW not put much more focus on AI governance and outreach? · 3mo · 31
10 · My covid-related beliefs and questions · 1y · 3
23 · Tend to your clarity, not your confusion · 1y · 1
32 · The Dunbar Playbook: A CRM system for your friends · 2y · 16
59 · AISafety.info "How can I help?" FAQ · 2y · 0
4 · Sequence opener: Jordan Harbinger's 6 minute networking · 2y · 0
150 · Advice for newly busy people · 2y · 3
68 · Advice for interacting with busy people · 2y · 4