Comments

jenn · 2mo · 30

thanks for writing this! can you say a little bit more about the process of writing notes on a scribe? I've been interested in getting one, but my understanding is that e-ink displays are good mostly for static content, and writing notes requires the screen to update in real time, which will drain the battery fairly quickly? my own e-reader is from like, 2018, so idk if there've been significant updates. how often do you need to charge them when you're using them?

jenn · 3mo · 10

your points about taking the time to think through problems, and how you can do this across many contexts, are definitely what i was going for subtextually. so, thanks for ruining all of my delicate subtlety, adam :p

standing on others' shoulders is definitely a reasonable play as well, although this is not something that works great for me as a Canadian - international shipping is expensive and domestic supply of any recommended product isn't guaranteed.

jenn · 3mo · 92

counterpoint: I run a weekly meetup in a mid-size Canadian city and I think it's going swimmingly. It is not trivial to provide value, but it is also not insurmountably difficult: I got funding from the EA Infrastructure Fund to buy one day of my time per week for running meetups and content planning, and that's enough for me to create programming that people really like, in addition to occasional larger events like day trips and cottage weekends. 8-12 people show up to standard meetups; I'd say around 70% are regulars who show up ~weekly, and then you have a long tail of errants. Lots of people move away since it's a university town, but when they visit they make sure to come to a meetup and catch up.

re: constraining, filling a new niche, etc - i feel like your POV is a bit doomered; this is pretty easy for a rationalist meetup to do - just enforce rules for good discourse norms and strongly signal that any topic is allowed as long as the dialogue remains constructive. make it a safe space for the people who will run their mouths in favor of the truth even when it kills the vibe at other parties and everyone else is glaring daggers at them, and people will show up. They'll show up because they can't get a community like that anywhere else in the city, as long as the city in question isn't in the bay area :P

jenn · 5mo · 20

heh, thanks, I was going to make a joke about memorizing the top 10 astrology signs but then I didn't think it was funny enough to actually complete

jenn · 5mo · 90

leaving out obvious things like religious garb/religious symbols in jewelry, engagement rings/wedding bands, various pride flag colours and meanings, etc:

  • semicolon tattoos: indicates that someone is struggling with or has overcome severe mental health challenges such as suicidal depression. You see them fairly often if you look for them. i've heard that butterflies and a few other tattoos mean similar things, but you'll run into false positives with any more generic tattoos.
  • claddagh rings: learned about this while jewelry shopping recently; it's a ring that looks like a pair of hands holding a heart. it's an irish thing, the finger you wear it on and whether or not it's inverted indicates your relationship status.
  • iron rings: In Canada, engineers wear an iron ring on the little finger of their working hand, made from the remains of a bridge that collapsed catastrophically. a decent number of my engineer friends wear the ring.
  • lace code: basically entirely dead, but if someone is dressed like a punk and they're wearing black boots with red laces, there's enough of a chance that they're a nazi that i'd avoid them. there's like a whole extended universe of lace colours and their meanings but red is the most (in)famous one.
  • astrology jewelry: astrology obviously isn't real, but if someone is wearing jewelry with their astrological sign, that tells you that 1) they are into astrology (or homestuck if you're lucky) and 2) they likely have some affinity with their designated star sign, which you can ask them about.
  • teardrop tattoo right under the eye: this person killed someone or was in prison at some point, or wants to pretend that's true of them (e.g. if they're a soundcloud rapper from the suburbs). also see other prison tattoos.
  • puzzle piece tattoo or jewelry: this person likely has an autistic child or close family member, and is not super up to date on the most uh, progressive thoughts on the topic. autistic people themselves are more likely to dislike the puzzle piece symbolism for autism

jenn · 6mo · 92

Thanks for writing this piece; I think your argument is an interesting one.

One observation I've made is that MIRI, despite its first-mover advantage in AI safety, no longer leads the conversation in a substantial way. I do attribute this somewhat to their lack of significant publications in the AI field since the mid-2010s, and their diminished reputation within the field itself. I feel like this serves as one data point that supports your claim.

I feel like you've done a good job laying out potential failure modes of the current strategy, but it's not a slam dunk (not that I think it was your intention to write a slam dunk as much as it was to inject additional nuance to the debate). So I want to ask, have you put any thought into what a more effective strategy for maximizing work on AI safety might be?

jenn · 9mo · 30

Thanks for writing this up! We tried this out in our group today and it went pretty well :-)

Detailed feedback:

Because our venue didn't have internet, I ended up designing and printing out question sheets for us to use (google docs link). Being able to compare so many responses easily, we were able to partner up first and find disagreements second, which I think was overall a better experience for complete beginners. The takes that you were most polarized on with any random person weren't actually that likely to be the ones that you felt most strongly about, and there were generally a few options to choose from. So we got a lot of practice in with cruxing without getting particularly heated. I'd like to find a way to add that spice back for a level 2 double crux workshop, though!

We repurposed the showing-fingers agreement/disagreement mechanic for coming up with custom questions; we had quite a few suggestions but only wrote down the ones that got a decent spread in opinion. This took a while to do, but was worth it, because I was actually really bad at choosing takes that would be controversial in the group, and people were like "wtf Jenn how can we practice cruxing if we all agree that everything here is a bunch of 3s." (slightly exaggerated for effect)

I didn't realize this until I was running the event, but this write-up was really vague on what was supposed to happen after step 3! I ended up referencing this section of the double crux post a lot, and we ended up with this structure:

  1. partner up and identify a polarized opinion from the question sheet that you and your partner are both interested in exploring.
  2. spend 5 minutes operationalizing the disagreement.
  3. spend 5 minutes doing mostly independent work coming up with cruxes.
  4. spend 15 minutes discussing with your partner and finding double cruxes. (in our experience, it was actually quite rare for the cruxes to have overlapped!) you'll very likely have to do more operationalizing/refining of the disagreement here. (I'm not sure if that's normal or if we're doing it slightly wrong.)
  5. come back together in a large group, discuss your experience trying to find a double crux, and share one learning from your attempt with the rest of the group so everyone learns from others' experiences/mistakes. I did this in lieu of the checking in, because the discussions all seemed pretty tame.
  6. repeat from step 1, with a different partner and different opinion.

We did two rounds in total. People unfortunately did not report that the second round was easier than the first, but they seemed to find the workshop a valuable experience overall! One person commented that it led to much more interesting conversation than most readings-based meetups, and I'm inclined to agree.

jenn · 10mo · 10

> The question is rather, what qualities do EAs want themselves and the EA movement to have a reputation for?

Yes, I think this is a pretty central question. To cross the streams a little, I did talk about this a bit more in the EA Forums comments section: https://forum.effectivealtruism.org/posts/5oTr4ExwpvhjrSgFi/things-i-learned-by-spending-five-thousand-hours-in-non-ea?commentId=KNCg8LHn7sPpQPcR2

jenn · 10mo · 120

> I get a sense that the org is probably between 15 and 50 years old

Yep, close to the top end of that.

> It's probably been through a bunch of CEOs, or whatever equivalent it has, in that time. Those CEOs probably weren't selected on the basis of "who will pick the best successor to themselves". Why has no one decided "we can help people better like this, even if that means breaking some (implicit?) promises we've made" and then oops, no one really trusts them any more?

That's a really great observation. Samaritans has chosen to sidestep this problem simply by having no change in leadership throughout the entire run of the organization so far. They'll have to deal with a transition soon as the founders are nearing retirement age, but I think they'll be okay; there are lots of well-aligned people in the org who have worked there for decades.

> Have they had any major fuck ups? If so, did that cost them reputationally? How did they regain trust?

> If not, how did they avoid them? Luck? Tending to hire the sorts of people who don't gamble with reputation? (Which might be easier because that sort of person will instead play the power game in a for-profit company?) Just not being old enough yet for that to be a serious concern?

They haven't had any major fuck ups, and there's two main reasons for that imo:

  1. The culture is very, very hufflepuff, and it shows. When you talk to people from Samaritans it's very obvious that the thing they want to do the most is to do as much good as possible, in the most direct way possible, and they are not interested in any sort of moral compromise. They've turned down funding from organizations that they didn't find up to snuff. Collaborating orgs either collaborate on Samaritans' stringent terms, or not at all.
    Doing the work this way has become easier as working with Samaritans has grown into an increasingly strong and valuable signal of goodness, but they didn't make compromises even as a very young and cash-strapped organization.
  2. They have a very very slow acculturation process for staff. It's very much one of those organizations where you have to be in it for over a decade before they start trusting you to make significant decisions, and no one who is unaligned would find working there for a decade tolerable, lol. So basically there are no unaligned rogue actors inside it at all.

jenn · 10mo · 21

> [reputation and popularity] probably have overlapping causes and effects, but they're not the same.

I'm inclined to think that this is a distinction without a difference, but I'm open to having my mind changed on this. Can you expand on this point further? I'm struggling to model what an organization that has a good reputation but is unpopular, or vice versa, might look like.

> If EA as a whole is unpopular, that's also going to cause problems for well-reputed EA orgs.

Yes, I think that's the important part, even though you're right that we can't do much about individual orgs choosing to associate themselves with EA branding.
