Vael Gates




This is cool! Why haven't I heard of this?
Arkose has been in soft launch for a while, and we've been focused on email outreach more than public comms. But we're increasingly public, and we're in communication with other AI safety fieldbuilding organizations!

How big is the team?

3 people: Zach Thomas and Audra Zook are doing an excellent job in operations, and I'm the founder.

How do you pronounce "Arkose"? Where did the name come from?

I think whatever pronunciation is fine, and it's the name of a rock. We have an SEO goal to surpass the rock's Wikipedia page.

Where does your funding come from?
The Survival and Flourishing Fund.

Are you kind of like the 80,000 Hours 1-1 team?
Yes, in that we also do 1-1 support calls, and that there are many people for whom it'd make sense to do a call with both 80,000 Hours and Arkose! One key difference is that Arkose is aiming to specifically support mid-career people interested in getting more involved in technical AI safety. 

I'm not a mid-career person, but I'd still be interested in a call with you. Should I request a call?
Regretfully no, since we're currently focusing on professors, PhD students, and industry researchers or engineers who have AI / ML experience. This may expand in the future, but we'll probably still be pretty focused on mid-career folks.

Is Arkose's Resource page special in any way?
Generally, our resources are selected to be most helpful to professors, PhD students, and industry professionals, which is a different focus than most other resource lists. One resource we think is pretty cool is our list of AI safety papers, which you can filter by topic area. It's still in development and we'll be updating it over time (and if you'd like to help, please contact Vael!)

How can I help?
• If you know someone who might be a good fit for a call with Arkose, please pass Arkose along to them! Or fill out our referral form.
• If you have machine learning expertise and would like to help us review our resources (for free or for pay), please contact Vael.

Thanks everyone!

Does Anyuan (安远) have a website? I haven't heard of them and am curious. (I've heard of Concordia Consulting and Tianxia.)

Anonymous comment sent to me, with a request to be posted here:

"The main lede in this post is that pushing the materials that feel most natural for community members can be counterproductive, and that getting people on your side requires considering their goals and tastes. (This is not a community norm in rationalist-land, but the norm really doesn’t comport well elsewhere.)"

was this as helpful for you/others as expected?

I think these results, and the rest of the results from the larger survey that this content is a part of, have been interesting and useful to people, including Collin and me. I'm not sure what I expected beforehand in terms of helpfulness, especially since there's a question of "helpful with respect to /what/", and I expect we may have different "what"s here.

are you planning related testing to do next?

Good chance of it! There's some question about funding, and what kind of new design would be worth funding, but we're thinking it through.

I wonder if it would be valuable to first test predictions among communicators

Yeah, I think this is currently mostly done informally -- when Collin and I were choosing materials, we had a big list, and were choosing based on shared intuitions that EAs / ML researchers / fieldbuilders have, in addition to applying constraints like "shortness". Our full original plan was also much longer and included testing more readings -- this was a pilot survey. Relatedly, I don't think these results are very surprising to people (which I think you're alluding to in this comment) -- they're somewhat surprising, but we already have a fair amount of information about researcher preferences.

I do think that if we were optimizing for "value of new information to the EA community" this survey would have looked different.

I wonder about the value of trying to build an informal panel/mailing list of ML researchers

Instead of contacting a random subset of people who had papers accepted at ML conferences? I think it sort of depends on one's goals here, but it could be good. A few thoughts: I think this may already exist informally; I think it becomes more important as there are more people doing surveys without coordinating with each other; and it doesn't feel like a major need from my perspective / goals, but it might be more of a bottleneck for yours!

My guess is that people were aware (my name was all over the survey this was a part of, and people were emailing with me). I think it was also easily inferred that the writers of the survey (Collin and I) supported AI safety work well before the participants reached the part of the survey with my talk. My guess is that my having written this talk didn't change the results much, though I'm not sure which way you expect the confound to go. If we're worried about participants being biased towards me because they didn't want to offend me (the person who had not yet paid them), they generally seemed pretty happy to be critical in the qualitative notes. More to the point, the qualitative notes for my talk seemed pretty content-focused and didn't seem unusual compared to the other talks when I skimmed through them, though I'd be interested to know if I'm wrong there.

Yeah, we were focusing on shorter essays for this pilot survey (and I think Richard's revised essay came out a little late in the development of this survey? Can't recall), but I'm especially interested in "The alignment problem from a deep learning perspective", since it was created for an ML audience.

Whoa, at least one of the respondents let me know that they'd chatted about it at NeurIPS -- did multiple people chat with you about it? (This pilot survey wasn't sent out to that many people, so I'm curious how people were talking about it.)

Edited: talking via DM
