It's not the same thing; the link was broken because Slack links expire after a month. Fixed for now.
Flagged the broken link to the team. I found this, which may or may not be the same project: https://www.safeailondon.org/
I'm not in London, but aisafety.community (as far as I know the most comprehensive, and far too little-known, resource on AI safety communities) suggests the London AI Safety Hub. Some remote alignment communities are listed on aisafety.community as well. You might want to consider them as fallback options, though you probably already know most if not all of them.
Let me know if that's at all helpful.
That's one of the suggestions from the CanAIries Winter Getaway where I felt least qualified to pass judgment. I'm working on finding out about their deeper models so that I (or they) can get back to you.
I imagine that anyone who is in a good position to work on this has existing familial or other ties to the countries in question, though, and already knows where to start.
Yep, the field is sort of underfunded, especially after the FTX crash. That's why I suggested grantwriting as a potential career path.
In general, for newcomers to the field, I very strongly recommend booking a career coaching call with AI Safety Support. They have a policy of not turning anyone down, and quite a bit of experience in funneling newcomers at any stage of their career into the field. 80,000 Hours (https://80000hours.org/) is also a worthwhile address, though they can't make time to talk with everyone.
Hah, this makes a lot of sense. Thanks!
An addition to that: if we look through the goggles of Sara Ness' Relating Languages, the rationalist conversational style sits at the far end of the internally-focused dialects Debater/Chronicler/Scientist. In my experience, more gooey communities have far more Banterer/Bard/Spaceholder-heavy interactions, which focus more on people's needs in the situation than on forming and communicating true beliefs. People don't necessarily know which dialects they speak themselves, because their own way of interacting just feels normal to them, and everyone else's weird. It's hard to learn to speak in dialects that are not your natural default. For example, I didn't even notice myself slipping into Bard/Banterer while writing this post, but in hindsight it's fairly obvious how it diverges from the LessWrong language game.
I think the LW way is ideal for its purpose, but I'm realizing that there's a whole lot of tacit knowledge and implicit norms involved in understanding and practicing it. This strong selection for a particular style of communication may be responsible for a significant chunk of the difficulty I perceive in interfacing between the rationalist and other memeplexes, in both directions: for the rationalist community learning from other memeplexes, and for useful memes getting from rationalist circles into the outside world.
Thanks for the input!
It wasn't my intention to reinforce this dichotomy. Instead, I hoped to encourage people to name things that break the rationalist community's Overton window, so that others read them and think, "Whoopsie, things like that can actually be said here?!" I suspect that way more people here picked up useful heuristics and models in their pre-rationalist days than realize it, because they overupdate on the way of the Sequences being the One True Way. I've learned in other communities that breaking taboos with questions like these is a useful means of breaking conformity pressure. My hope was that this would eventually help a little to reduce the imbalance towards prickliness I perceive in the rationalist community, and, with that, this dichotomy.
Apparently, I haven't yet figured out how to express and enact intentions like these in a way that fits the rationalist language game.
This is a rallying flag: respond to or message me if you can imagine working on the Superconnecting project, especially, but not exclusively, if you are based in Europe.
The larger part of Ithaka Berlin's expected impact comes from fulfilling this function. However, I'd also be super keen to help build non-co-living versions of the Superconnecting project, whether as co-founder, advisor, or the person who connects the people who end up building the thing.
This is one of the points I'm less sure about, because often enough the rest of the message will implicitly answer it. In addition, what to include depends heavily on context and on who you are writing to.
Two very general recommendations:
- Something that helps the other person gauge how large the inferential distance between you two is, so that communication can be as quick as possible and as thorough as necessary.
- Something that helps them gauge your level of seniority. It's unfortunate but true that the time of people a couple of levels of seniority above your own is extremely valuable. For example, it would hardly make sense for a Nick Bostrom to make time to help a bright-but-not-Einstein-level high school student he has never met decide which minor to choose at university. If people can't gauge your level of seniority, they might misjudge whether they are the right person for you to talk to, and you might end up in a conversation that is extremely awkward and a waste of time for both sides.
Some examples:
- "Hi! I'm xyz, Ops lead at Linear."
- "Hi! I'm a computer science undergrad at Atlantis University and a long-time lurker on LessTrue."
- ...
Thanks for adding clarity! What does "support" mean in this context? What are the key factors that prevent the probabilities from being >90%?
If the key bottleneck is someone to spearhead this as a full-time position and you'd willingly redirect existing capacity to advise/support them, I might be able to help find someone as well.