Raemon

I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

The Coordination Frontier
Privacy Practices
The LessWrong Review
Keep your beliefs cruxy and your frames explicit
Open Threads
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual

Comments

TurnTrout's shortform feed

I lean toward no, mostly because of arguments that nuclear war doesn't actually cause extinction (although it might still have some impact on number-of-observers-in-our-era? Not sure how to think about that).

Is AI Alignment a pseudoscience?

I think in this case brackets are pretty good. I agree with Martin that it's good to avoid using quote marks when they might be mistaken for a literal quote.

Nuclear war is unlikely to cause human extinction

This post feels quite important from a global priorities standpoint. Nuclear war mitigation might have been one of the top priorities for humanity (and to be clear it's still plausibly quite important). But given that the longtermist community has limited resources, it matters a lot whether something falls in the top 5-10 priorities. 

A lot of people ask "Why is there so much focus on AI in the longtermist community? What about other x-risks like nuclear?" And I think the important, counterintuitive answer is that nuclear war probably isn't an x-risk.

Like Jeff and Bucky, I still think it's worth someone following up and investigating the phenomenon here in more detail. It's disappointing that humanity hasn't studied this problem in as much depth as we could have.

Why rationalists should care (more) about free software

I initially assumed something similar to what you just described. However, it's plausible to me that in practice the line between "program" and "data" might be blurry here.

Implications of Civilizational Inadequacy (reviewing mazes/simulacra/etc)

I meant to be referring to "I think Moral Mazes is a misleading meme that itself contributes to the problem". Why is it misleading? Why does it contribute to the problem? What evidence or reasoning leads you to believe that?

Implications of Civilizational Inadequacy (reviewing mazes/simulacra/etc)

This statement is kinda opaque and I'd like it if you spelled out your arguments more. (I realize it's not always worth the effort to wade into the full argumentation, but a point of the review is to more fully hash out arguments for posts. Road To Mazedom ranked at #19 during the preliminary voting, so if there's disagreement about it I think it's good to spell it out.)

(I don't necessarily disagree with your claim, but as worded it doesn't really convey anything beyond "Romeo thinks it's misleading")

What's Up With Confusingly Pervasive Consequentialism?

I might be conflating Richard, Paul, and my own guesses here. But I think part of the argument here is about what can happen before AGI that gives us lines of hope to pursue.

Like, my-model-of-Paul wants various tools for amplifying his own thought to (among other things) help think about solving the longterm alignment problem. And the question is whether there are ways of doing that which actually help when trying to solve the sorts of problems Paul wants to solve. We've successfully augmented human arithmetic and chess. Are there tools we actually wish we had that narrow AI meaningfully helps with?

I'm not sure if Richard has a particular strategy in mind, but I assume he's exploring the broader question of "what useful things can we build that will help navigate x-risk?"

The original dialogs were exploring the concept of pivotal acts that could change humanity's strategic position. Are there AIs that can execute pivotal acts that are more like calculators and Deep Blue than like autonomous moon-base-builders? (I don't know if Richard actually shares the pivotal act / acute risk period frame, or was just accepting it for sake of argument)

What's Up With Confusingly Pervasive Consequentialism?

But I think Richard's point is 'but we totally built AIs that defeated chess grandmasters without destroying the world. So, clearly it's possible to use tool AI to do this sort of thing. So… why do you think various domains will reliably output horrible outcomes? If you need to cure cancer, maybe there is an analogous way to cure cancer that just… isn't trying that hard?'

Richard, is that what you were aiming at?

Implications of Civilizational Inadequacy (reviewing mazes/simulacra/etc)

Curious if you have particular exemplar posts from sociology.

Periodically I see people make the "I'd like to see you engaging with the mainstream literature" comment, but then, well, there's a whole lot of mainstream literature and I'm not sure how to sift through it for the parts that are relevant to me. Do you actually have experience with the literature, or just an assumption that there's stuff there? (If you are familiar, I think it'd be great to have an orientation post giving an overview of it.)

Implications of Civilizational Inadequacy (reviewing mazes/simulacra/etc)

This was on the long side, covering a lot of points. I'm curious to get feedback from people who ended up skimming or bouncing off about where that happened.
