My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

There's this general problem of Rationalists splitting into factions and subcults with minor doctrinal differences, each composed of relatively elite members of The Community, each with a narrative of how they’re the real rationalists and the rest are just posers and/or parasites. And, they're kinda right. Many of the rest are posers, we have a mop problem.

There’s just one problem. All of these groups are wrong. They are in fact only slightly more special than their rival groups think they are. In fact, the criticisms each group makes of the epistemics and practices of other groups are mostly on-point.

Once people have formed a political splinter group, almost anything they write will start to contain a subtle attempt to slip in the doctrine they're trying to push. With sufficient skill, you can make it hard to pin down where the frame is getting shoved in.

I have at one point or another been personally involved with a quite large fraction of the rationalist subcults. This has made the thread hard to read - I keep feeling a tug of motivation to jump into the fray, to take a position in the jostling for credibility or whatever it is being fought over here, which is then marred by the realization that this will win nothing. Local validity isn't a cure for wrong questions. The tug of political defensiveness that I feel, and that many commenters are probably also feeling, is sufficient to show that whatever question is being asked here is not the right one.

Seeing my friends behave this way hurts. The defensiveness has at this point gone far enough that it contains outright lies.

I'm stuck with a political alignment because of history and social ties. In terms of political camps, I've been part of the Vassarites since 2017. It's definitely a faction, and its members obviously know this at some level, despite their repeated insistence to me of the contrary over the years.

They’re right about a bunch of stuff, and wrong about a bunch of stuff. Plenty of people in the comments are looking to scapegoat them for trying to take ideas seriously instead of just chilling out and following somebody’s party line. That doesn’t really help anything. When I was in the camp, people doing that locked me in further, made outsiders seem more insane and unreachable, and made public disagreement with my camp feel dangerous in the context of a broader political game where the scapegoaters were more wrong than the Vassarites.

So I’m making a public declaration of not being part of that camp anymore, and leaving it there. I left earlier this year, and have spent much of the time since trying to reorient / understand why I had to leave. I still count them among my closest friends, but I don't want to be socially liable for the things they say. I don't want the implicit assumption to be that I'd agree with them or back them up.

I had to edit out several lines from this comment because they would just be used as ammunition against one side or another. The degree of truth-seeking in the discourse is low enough that any specific information has to be given very carefully so it can’t be immediately taken up as a weapon.

This game sucks and I want out.

Seemingly Popular Covid-19 Model is Obvious Nonsense

Even with that as the goal this model is useless - social distancing demonstrably does not lead to 0 new infections. Even Wuhan didn't manage that, and they were literally welding people's doors shut.

A War of Ants and Grasshoppers

...they're ants. That's just not how ants work, for a myriad of reasons. The whole point of the post is that there isn't necessarily local deliberative intent, just strategies filling ecological niches.

How rapidly are GPUs improving in price performance?

Of course, if you don’t like how an exponential curve fits the data, you can always change models—in this case, probably to a curve with 1 more free parameter (indicating a degree of slowdown of the exponential growth) or 2 more free parameters (to have 2 different exponentials stitched together at a specific point in time).

Oh that's actually a pretty good idea. Might redo some analysis we built on top of this model using that.
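The suggested extra free parameter could be sketched like this (the numbers below are illustrative placeholders, not the actual GPU dataset): fitting log y as a quadratic in time, where the quadratic coefficient captures a slowdown of the exponential growth.

```python
import numpy as np

# Illustrative numbers only -- not the real GPU price-performance data.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])    # years since start
y = np.array([1.0, 3.0, 8.0, 18.0, 30.0, 45.0])  # price performance, arbitrary units

# One extra free parameter relative to a pure exponential:
# log y = a + b*t + c*t**2, where c < 0 indicates decelerating growth.
c, b, a = np.polyfit(t, np.log(y), 2)

print(f"growth rate b = {b:.3f}, slowdown c = {c:.3f}")
```

With two more parameters, the analogous move is two exponentials stitched at a breakpoint, fit by minimizing total squared error in log space.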

Blackmailers are privateers in the war on hypocrisy

This argument would make much more sense in a just world. Information that should damage someone is very different from information that will damage someone. With blackmail, the information is selected to maximize damage to the target, and I expect the tails to mostly come apart here. I don't see many cases of blackmail replacing MeToo. When was the last time the National Enquirer was a valuable whistleblower?

EDIT: fixed some wording

How rapidly are GPUs improving in price performance?
When trying to fit an exponential curve, don't weight all the points equally

We didn't. We fit a line in log space, but weighted the points by sqrt(y). We did that because the data doesn't actually appear linear in log space.

This is what it looks like if we don't weight them. If you want to bite the bullet of this being a better fit, we can bet about it.
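A minimal sketch of the weighted log-space fit described above, assuming hypothetical data (these values are illustrative, not the real dataset):

```python
import numpy as np

# Illustrative data only (not the actual GPU measurements).
years = np.array([2008.0, 2010.0, 2012.0, 2014.0, 2016.0, 2018.0])
perf = np.array([1.0, 3.0, 8.0, 30.0, 60.0, 100.0])  # e.g. GFLOPS per dollar

# Fit a line in log space, weighting each point by sqrt(y).
# np.polyfit multiplies each residual by its weight, so points with
# larger y pull the fit harder than an unweighted fit would allow.
w = np.sqrt(perf)
slope, intercept = np.polyfit(years, np.log(perf), 1, w=w)

doubling_time = np.log(2) / slope  # years per doubling of price performance
```

The unweighted version is the same call without `w=`, which is the fit being disputed here.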

Act of Charity
I'd optimize more for not making enemies or alienating people than for making people realize how bad the situation is or joining your cause.

Why isn't this a fully general argument for never rocking the boat?

Act of Charity
Based on my models (such as this one), the chance of AGI "by default" in the next 50 years is less than 15%, since the current rate of progress is not higher than the average rate since 1945, and if anything is lower (the insights model linked has a bias towards listing recent insights).

Both this comment and my other comment are way understating our beliefs about AGI. After talking to Jessica about it offline to clarify our real beliefs rather than just playing games with plausible deniability, my actual probability is between 0.5 and 1% in the next 50 years. Jessica can confirm that hers is pretty similar, but probably weighted towards 1%.

Act of Charity
I think I'm more skeptical than you are that it's possible to do much better (i.e., build functional information-processing institutions) before the world changes a lot for other reasons (e.g., superintelligent AIs are invented)

Where do you think the superintelligent AIs will come from? AFAICT it doesn't make sense to put more than 20% on AGI before massive international institutional collapse, even being fairly charitable to both AGI projects and prospective longevity of current institutions.
