LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
As I said in the post, there are positive, negative and neutral valences for wuckles. Sometimes you say "weird" and you mean "and, bad", sometimes you mean "and, pleasantly surprisingly good".
I've heard the earmark thing before; is there a good writeup about it?
It feels like it should be worldview-quake-y to many people, if true. (In that "get rid of earmark pork" might have seemed like an obvious way to reduce corruption, but alas it turns out it was load-bearing.)
Mmm.
Somewhat related problem: a lot of the impact of writing a review is that it bumps the post into awareness on the frontpage, which makes it more likely that people who liked it will see it and vote positively on it. (Whether this is good or bad from the perspective of a critical reviewer depends on whether you think you're writing a takedown of something popular, or just clarifying why something there was already rough mutual agreement about isn't that good.) I don't know that this problem needs "solving," but I wanted to acknowledge it and see if anyone had thoughts.
Curated. Having been an organizer for ~7 years I endorse most of these specific ideas and the underlying generating vibe that outputs them.
(Some of the ideas are somewhat oddly specific to particular sub-types of rationalist subculture, i.e. sub-subcultures, and I'm not saying every idea here is great for every meetup, but the general vibe of "figure out what culture you want and make it, opinionatedly" is great.)
I maybe want to flag:
I would rather meetups in a city exist ~indefinitely in mildly crappy form, than if they exist in ideal form but only for a bit, and then the city has no more rationality meetups after that.
I think I agree with how this is presented in this post (i.e. in this context you're mostly talking about being "logistically crappy", which I think meetups can easily survive, especially if it's done such that someone else could choose to step up and make them less crappy). But, I think sometimes meetups are culturally kinda-crappy (because there wasn't someone putting in the work to make them culturally great), and in those cases I think it is sometimes better for the meetup to die, such that there's a clear vacuum someone could step up to fill.
I spent a bit of time (like, 10 min) thinking through warning shots today.
I definitely do not think anyone should take any actions that specifically cause warning shots to happen. (If you are trying to do something like that, you should be looking for "a scary demo," not "a warning shot." Scary demos can and should be done ethically.)
If you know of a concrete safety intervention that'd save lives, obviously do the safety intervention.
But, a lot of the questions here are less like "should I do this intervention?" and more like "should I invest years of my life in a research direction that helps found a new subfield, which maybe will result in concrete useful things that save some lives locally, but which I also expect to paper over problems and cost more lives later?" (when, meanwhile, there are tons of other research directions you could explore)
...yes there is something sus here that I am still confused about, but, with the amount of cluelessness this necessarily involves, I don't think people have an obligation to go founding new research subfields if their current overall guess is "useful locally but harmful globally."
I think if you go and try to suppress research into things that you think are moderately likely to save some lives a few years down the line but cost more lives later, then we're back in ethically fraught territory. (But, like, also, you shouldn't suppress people saying "guys, this research line is maybe, on net, going to increase the odds of everyone dying.")
I didn't actually get to having a new crystallized take, that was all basically my background thoughts from earlier.
(Also, hopefully obviously: when you are deciding your research path, or arguing people should abandon one, you do have to actually do the work to make an informed argument for whether/how bad any of the effects are. "It's plausible X might lead to a warning shot that helps," "it's plausible Y might help on net with alignment subproblems," and "Y might save a moderate number of lives" are all things you need to unpack and actually reason through.)
Remembering that PolitiFact exists actually reminds me to be more worried about this. My impression is that PolitiFact started off reasonably neutral and then veered into being a partisan mouthpiece.
(I am maybe less worried about a version that has the more specific goal of "decide between candidates you plausibly like," but it's the sort of thing that would have a natural tendency to turn into an org, and then get audience-captured.)
Random aside: I did recently find out about a thing called the "Center for Effective Lawmaking" that seems to rate legislators based on how well they accomplish the policies they (or their party?) set out to enact. I haven't looked into it at all, but it seemed like another "past example of someone trying to do the thing," coming at it from a different angle than PolitiFact.
I read Tim's comment and was like "oh wow good point" and then your comment and was like "oh shit, sign flip maybe." Man, I could use a better way to think sanely about warning shots.
Mm. I'd prolly call that one level more meta than prediction markets, and about as meta as the others but oriented around a different problem. (I agree the others are one level more meta if you are specifically oriented around that goal.)
Do not accommodate people who don't do the readings
Historically, I've gone in the reverse direction of "mostly, don't assign readings; just allocate the first hour to doing the reading and then talk, to avoid giving people more executive-function-heavy work" (and letting people who actually did do the readings show up an hour late).
But, I have been pretty pleasantly surprised with how assigning readings has gone for the Lighthaven reading group, and it sounds like it's also working for you. I think historically I haven't actually had a meetup where "do the reading" was a regular action as opposed to an occasional one-off, so it didn't make as much sense to develop explicit culture around it.
It does seem nice to have a relatively minor, achievable "high standard" to help enculturate the broader idea of "have standards."
I think I agree with one interpretation of this but disagree with like 2 others.
I think we want AI to generally, like, be moral. (With some caveats on what counts as morality, and on how to not be annoying about it for people trying to do fairly ordinary things. But, like, generally, AI should not solve its problems by cheating, and should not solve its problems by killing people, etc.)
Ideally this is true whether it's in a simulation or real life (or, like, ideally in simulations the AI does whatever it would do in real life).
Also, like, I think research orgs should be doing various flavors of tests, and if you don't want that to poison the training data, I think it should be the AI lab's job to filter that kind of data out of the next training run.