## LessWrong

ozziegooen

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

# Sequences

Squiggle
Prediction-Driven Collaborative Reasoning Systems

# Wiki Contributions

Improving on the Karma System

Just want to say: I'm really excited to see this.

I might suggest starting with an "other" list that can be pretty long. With Slack, different subcommunities focus heavily on different emojis for different functional things. Users sometimes figure out neat innovations and those proliferate. So if it's all designed by the LW team, you might be missing out.

That said, I'd imagine 80% of the benefit is just having anything like this, so I'm happy to see that happen.

Disagreeables and Assessors: Two Intellectual Archetypes

I just (loosely) coined "disagreeables" and "assessors" literally two days ago.

I suggest coming up with any name you think is a good fit.

Disagreeables and Assessors: Two Intellectual Archetypes

I wouldn't read too much into my choice of word there.

It's also important to point out that I was trying to have a model that assumed interestingness. The "disagreeables" I mention are the good ones, not the bad ones. The ones worth paying attention to are, I think, pretty decent here; really, that's the one thing that justifies paying attention to them.

Zoe Curzi's Experience with Leverage Research

A few quick thoughts:

1) This seems great, and I'm impressed by the agency and speed.

2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine the same applies to accusations or whistleblowing concerning other organizations. I think this is both very, very bad and unnecessary; as a whole, the community is much more powerful than any individual group, so it reflects poor management when the community is scared of a specific group. Resources should be spent to cancel this out.

In light of this, if more money were available, it seems easy to justify a fair bit more. Or even better could be something like, "We'll help fund lawyers in case you're attacked legally, or anti-harassing teams if you're harassed or trolled". This is similar to how the EFF helps with cases from small people/groups being attacked by big companies.

I don't mean to complain; I think any steps here, especially ones taken so quickly, are fantastic.

3) I'm afraid this will get lost in this comment section. I'd be excited for a list of "things to keep in mind" like this to be repeatedly made prominent somehow. For example, I could imagine that at community events or similar, there could be handouts like "Know Your Rights as a Rationalist/EA", which flag how individuals can report bad actors and bad behavior.

4) Obviously a cash prize can encourage lying, but I think this can be decently managed. (It's a small community, so with good moderation, $15K would be very little compared to the social stigma you'd face if you were found out to have destructively lied for $15K.)

Intelligence, epistemics, and sanity, in three short parts

The latter option is more of what I was going for.

I’d agree that the armor/epistemics people often aren’t great at coming up with new truths in complicated areas. I’d also agree that they are extremely unbiased and resistant to both bad-faith arguments and good-faith but systematically misleading arguments (these are many of the demons the armor protects against, if that wasn’t clear).

When I said that they were soft-spoken and poor at arguing, I’m assuming that they have great calibration and are likely arguing against people who are very overconfident, so in comparison they seem meager. I think of a lot of superforecasters in this way; they’re quite thoughtful and reasonable, but not often bold enough to sell a lot of books. Other people with good epistemics sometimes recognize their skills (especially when they have empirical track records, as in forecasting systems), but that’s a small minority right now.

Prioritization Research for Advancing Wisdom and Intelligence

When I hear the words "intelligence" and "wisdom", I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups.

I tried to make it clear that I was referring to groups with the phrase, "of humanity", as in, "as a whole", but I could see how that could be confusing.

> the wisdom and intelligence[1] of humanity

> For those interested in increasing humanity’s long-term wisdom and intelligence[1]

> I also suspect that work on optimizing group decision making will look rather different from work on optimizing individual decision making, possibly to the point that we should think of them as separate cause areas.

I imagine there's a lot of overlap. I'd also be fine with multiple prioritization research projects, but think it's early to decide that.

> This makes me wonder how nascent this really is?

I'm not arguing that people haven't made successes in the entire field (I think there's been a ton of progress over the last few hundred years, and that's terrific). I would argue though that there's very little formal prioritization of such progress. Similar to how EA has helped formalize the prioritization of global health and longtermism, we have yet to have similar efforts for "humanity's wisdom and intelligence".

I think that there are likely still strong marginal gains in at least some of the intervention areas.

Prioritization Research for Advancing Wisdom and Intelligence

That's an interesting perspective. It does already assume some prioritization though. Such experimentation can only really be done in a very few of the intervention areas.

I like the idea, but am not convinced of the benefit of this path forward compared to other approaches. We have already had a lot of experiments in this area, many of which cost a lot more than $15,000; additional exciting ones aren't obvious to me.

But I'd be up for more research to decide if things like that are the best way forward :)

In the shadow of the Great War

The first few chapters of "The Existential Pleasures of Engineering" detail some optimism, then pessimism, of technocracy in the US at least.

I think the basic story there was that after WW2, in the US, people were still pretty excited about tech. But in the 70s (I think), with environmental issues, military innovations, and general malaise, people became disheartened.

https://www.amazon.com/Existential-Pleasures-Engineering-Thomas-Dunne-ebook/dp/B00CBFXLWQ

I'm sure I'm missing details, but I found the argument interesting. It is true that in the US at least, there seemed to be a lot of techno-optimism post-WW2.