Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

The Coordination Frontier
Privacy Practices
The LessWrong Review
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

I've just pushed an update to the Reacts Palette. I aimed to a) remove some reacts that either weren't getting used, or seemed to be used confusingly, b) add some reacts that seem missing, c) reorganize them so they were a bit easier to parse.

And the biggest change is d): marking how likely a claim is via reacting. I'm imagining this primarily being used via inline reacts. If a lot of people end up using it, it might make sense to build a more specialized system for this, but it seemed cheap enough to add via Reacts for the immediate future.

It looks like this now, when you first open the palette. It deliberately doesn't emphasize the ability to scroll-for-more-reacts, at first, because I think people are probably already fairly overwhelmed with the default palette.

Fwiw, I still don't think it makes sense to call that a nitpick. It seems like a good thing to point out. (I agree it's not, like, a knockdown argument against the whole thing, but I think of nitpicks as things that aren't relevant to the central point of the post.)

Yeah, the only reason we don't have that yet is it's a bit technically complicated.

A thing that feels somewhat relevant here is the Dark Forest Theory of AI Mass Movements. New people keep showing up, seeing a Mass Movement Shaped Hole, and being like "Are y'all blind? Why are you not shouting from the rooftops to shut down AI everywhere and get everyone scared?"

And the answer is "well, I do think maybe LWers are biased against mainstream politics in some counterproductive ways, but there are a lot of genuine reasons to be wary of mass movements. They are dumb, hard to aim at exactly the right things, and we probably need some very specific solutions here in order to be helpful rather than anti-helpful or neutral-at-best. And political polarization could make this a lot harder to talk sanely about."

One of the downsides of mass-movement shaped solutions is making it harder to engage in trades like you propose here.

There's a problem where AI is pretty obviously scary in a lot of ways, and a Mass Movement To Shut Down AI may happen to us whether we want it or not. And if x-risk professionals aren't involved in trying to help steer it, it may be a much stupider, worse version of itself.

So, I don't know if it's actually tractable to make the trade of "avoid mass movements that are likely to drive the dial down" (at least in a legible-enough way to make such a trade).

It does seem more tractable to proactively drive the dial up in other, targeted ways, and be proactive about shouting that (e.g. various x-risk-oriented grantmaking bodies also giving grants to other kinds of technical progress, lobbying to remove regulations that everyone agrees are bad, etc.).

Curated.

I've heard people vaguely wishing for this sort of product for a few years, and I feel pretty excited looking at the potential here. 

There's a lot of room for improvement, some of which is UI, and some of which depends on how individual predictions and prediction-market communities turn out to evolve. But I think the current product is above the bar of being worth a look and a signal boost. I hadn't consciously thought through the lens of "[prediction] platforms' UX is oriented towards forecasters, not information consumers", which seems like an obvious font of potential innovation.

The thing that has me pretty confused about your confidence here is not just that there's something weird going on, but that you expect it to be confirmed within 5 years.

I think those don’t say ‘and then the AI kills you.’
