Ruby

Team Lead for LessWrong

Sequences

LW Team Updates & Announcements
Novum Organum

Comments

Ah, nope. Oops, we haven't published that one yet but will soon. Will edit for now.


I think being unable to reply to comments on your own posts is very likely a mistake and we should change that. (Possibly, in conditions where we'd think that restriction was warranted, we should issue a ban instead.)

"I'm downvoted because I'm controversial" is a go-to stance for people getting downvoted (and resultantly rate-limited), though in my experience the issue is quality rather than controversy (or rather both in combination).

Overall though, we've been thinking about the rate limit system and its effects. I think there are likely bad effects even if it's successfully reducing low-quality stuff in some cases.

I think if you are a cofounder of an organization and have a front-row seat, then even if you were not directly doing the worst things, I want to hold you culpable for not noticing or intervening.

I don't think the post fully conveyed it, but I think the employees were quite afraid of leaving and expected this to get them a lot of backlash or consequences. A particularly salient concern for people early in their EA careers is what kind of reference they'll get.

Think about the situation of leaving your first EA job after a few months. Option 1: say nothing about why you left, have no explanation for leaving early, don't really get a reference. Option 2: explain why the conditions were bad, and risk the ire of Nonlinear (who are willing to say things like "your career could be over in a couple of DMs"). It's that kind of bind that gets people to keep persisting and hope it'll get better.

There's a single codebase. It's React, and the site is composed out of "components". Most components are shared but can contain switching logic that changes behavior per site. For some things, e.g. the frontpage, each site has its own customized component. There are different "style sheets" / "themes" for each of them. When you run an instance of Forum Magnum, you tell it whether it's a LW instance, EA Forum instance, etc., and it will run as the selected kind of site.
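To make that concrete, here's a rough sketch of what per-site switching inside a shared component might look like. The names (`ForumType`, `forumType`, `FrontpageHeader`) are illustrative only, not the actual Forum Magnum API:

```typescript
// Hypothetical sketch: a shared React component that branches on which
// site the instance is configured as. Names are made up for illustration.
import React from "react";

type ForumType = "LessWrong" | "EAForum" | "AlignmentForum";

// In a real instance this would come from instance configuration set at startup.
const forumType: ForumType = "LessWrong";

const FrontpageHeader: React.FC = () => {
  // Shared component with a small branch that changes behavior per site.
  if (forumType === "EAForum") {
    return <header className="ea-header">Effective Altruism Forum</header>;
  }
  return <header className="lw-header">LessWrong</header>;
};

export default FrontpageHeader;
```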

Coordination happens via Slack, GitHub, and a number of meetings (usually over Zoom/Tuple). Many changes get "forum-gated" so they only apply to one site.
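A minimal sketch of what "forum-gating" a change can amount to (again, `featureGates` and `isFeatureEnabled` are illustrative names, not real Forum Magnum code):

```typescript
// Hypothetical sketch: a change ships in the shared codebase but is only
// enabled on one site. All names and features here are made up.
type ForumType = "LessWrong" | "EAForum" | "AlignmentForum";

const featureGates: Record<string, ForumType[]> = {
  dialogues: ["LessWrong"],     // hypothetical LW-only feature
  communityTopics: ["EAForum"], // hypothetical EAF-only feature
};

function isFeatureEnabled(feature: string, site: ForumType): boolean {
  return featureGates[feature]?.includes(site) ?? false;
}

// Only render/enable the feature on the site it's gated to.
console.log(isFeatureEnabled("dialogues", "LessWrong")); // true
console.log(isFeatureEnabled("dialogues", "EAForum"));   // false
```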

LW 1.0 was a fork of the Reddit codebase, I assume because it was available and had many of the desired features. I wasn't there for the decision to build LW 2.0 as a new forum, but I imagine doing so allowed a lot more freedom to build a forum that served the desired purpose in many ways.

how ForumMagnum is really developed

Something in your framing feels a bit off. Think of "ForumMagnum" as an engine and LessWrong and the EA Forum as cars. We're in the business of "building and selling cars", not engines. LW and the EA Forum are sufficiently similar to use the same engine, but there aren't Forum Magnum developers, just "LW developers" and "EAF developers". You can back out an abstracted Forum Magnum philosophy, but it's kind of secondary/derived from the object-level forums. I suppose my point is against treating it as too primary.

Very interesting! You've identified many of the reasons for many of the decisions.

Cmd-enter will submit on LW comment boxes.

Forum Magnum was originally just the LessWrong codebase (built by the LessWrong team that later renamed/expanded into Lightcone Infrastructure), and for a long while the EA Forum website was a synced fork of it. In 2021 we (LW) and the EA Forum decided to have a single codebase with separate branches rather than a fork (in many ways very similar, but it reduced some friction), and we chose the name Forum Magnum for the shared codebase.

You can see who's contributed to the codebase here: https://github.com/ForumMagnum/ForumMagnum/graphs/contributors

jimrandomh, Raemon, discordious, b0b3rt and darkruby501 are LessWrong devs.
 

Curated. I like a lot of things about this post, but I particularly like posts that dig out something vaguely like "social" vs "non-social drives", and how our non-social drives affect the social incentives that we set up for ourselves. I think this is a complicated, tricky topic and Elizabeth has done a commendable job tackling it for herself, a good example of tackling this head on. It's also just unfortunate that the message of "think for yourself/motivate yourself independent of others' approval" can become a hoop of others' approval. I like that this was called out. It's tricky, but perhaps that's just how it needs to be.

Curated. There's a lot about Raemon's feedbackloop-first rationality that doesn't sit quite right, that isn't quite how I'd theorize about it, but there's a core here I do like. My model is that "rationality" was something people were much more excited about ~10 years ago, until people updated that AGI was much closer than previously thought. Close enough that, rather than sharpen the axe (perfect the art of human thinking), we'd better just cut the tree now (AI) with what we've got.

I think that might be overall correct, but I'd like it if not everyone forgets about the Art of Human Rationality. And if enough people pile onto the AI Alignment train, I could see it being right to dedicate quite a few of them to the meta of generally thinking better.

Something about the ontology here isn't quite how I'd frame it, though I think I could translate it. The theory that connects this back to Sequences rationality is perhaps that feedbackloops are iterated empiricism with intervention. An alternative name might be "engineered empiricism", basically this is just one approach to entangling oneself with the territory. That's much less of what Raemon's sketched out, but I think situating feedbackloops within known rationality-theory would help.

I think it's possible this could help with Alignment research, though I'm pessimistic about that unless Alignment researchers are driving the development process; but maybe it could happen anyway, just more slowly.

I'd be pretty glad for a world where we had more Raemons and other people so this could be explored. In general, I like this post for keeping alive the genre of "thinking better is possible", a core of LessWrong and something I've pushed to keep alive even as the bulk of the focus is on concrete AI stuff.

but I do think it's the most important open problem in the field.

What are the other contenders?
