Aris

Feel free to submit them at separate times! 

Aris

Applications may be evaluated on a rolling basis, but we plan to release all acceptance and rejection messages simultaneously. You're welcome to apply early; that would likely help us gauge how many additional applicants we should prepare for!

Aris

Owain's stream doesn't have any mentor questions!

Aris

From my understanding, this decision is up to the mentors, and only Neel and John are actively planning to reduce the number of scholars. Neel will likely take on more scholars for the training phase than the research phase, and John's scholars will need to apply for and receive LTFF funding before continuing. (Ryan may correct me here if I'm wrong.)

Aris

That's a valid concern! I appreciate you checking :) 

Aris

Hi Lao, thanks for the question! I imagine that something like this would qualify for the accountability cohorts, but I think a full project would likely need to include making translations as well. One reason is that it seems quite hard to ensure that the people we'd want making translations for ML/biosecurity work would learn the codex well enough to popularize it. Another reason is that the projects would take up at least ~50 hours of time. But making translations of useful texts, including recommended word-specific translations like you mention, would be more along the lines of what I'm looking for! I'd possibly change my mind if there turns out to be a big market of people looking for a translation codex, but the current bottleneck seems to me to be the full translations themselves.

Aris

The inferential gap didn't end up getting worked out through conversation; I ended up closing it mainly by reading (Superintelligence, The Precipice, and AGI Safety Fundamentals, in that order) and bridging the remaining gap on my own. I think this was pretty unfortunate time-wise, though. Some of the things that were helpful included:

- An increased understanding on my end of how ML worked, such that I could see what "learning" looked like. Once I understood this, it was easier to see how my initial questions might have sounded irrelevant to someone working on AI Safety.
- A better understanding of what an AI planning multiple steps in advance (such as behaving well until a treacherous turn) might look like.
- Encountering terms like APS or TAI, which communicated the ideas without leaning on "general intelligence".

I'd mostly thank AGI Safety Fundamentals for these! I don't regret reading any of those resources, but I do think I'd have come to find AI Safety important more quickly if, in the early stages, someone had addressed my questions with more understanding of my background.

Aris

A couple of reasons, though they don't have much to do with the contest itself; I'm mostly limited by my capacity at the moment.

One is that I've recently been talking to more high schoolers about becoming more involved in EA, and I think a lot of the general contests can be intimidating, so a student-focused contest helps keep things a bit more equitable. That problem could be fixed with a tier system within the contest, but I'm currently working on my own and don't have a ton of capacity. Which feeds into the second reason: I plan to work on a broader contest-coordination project soon. So I figured I'd try this as a student-centered contest first and see if it's a good way to do outreach; if it is, I can do it on a larger scale, with tiered systems, more support, and higher prizes.

Aris

I'd be up for increasing the prize pool! I checked with a few students who thought it seemed large enough, but I may be mistaken. How large of a prize pool do you think would be ideal?

Aris

I've adjusted the submission form so that it now accepts attachments and links! Thank you to everyone who added thoughts on this! Soon I'll add some notes to my website encouraging people to make posts and submit to Distill's journal :)
