jacobhaimes

Comments
"Open Source AI" is a lie, but it doesn't have to be
jacobhaimes · 1y · 10

I have heard that this has been a talking point for you for at least a little while. I am curious: do you think there are any aspects of this problem that I missed?

"Open Source AI" is a lie, but it doesn't have to be
jacobhaimes · 1y · 10

Thanks for responding! I really appreciate engagement on this, and your input.

I would disagree that Mistral's models are considered Open Source under the current OSAID. Although the training data itself is not required for a model to be considered Open Source, a certain level of documentation about that data is [source]. Mistral's models do not meet these standards; however, if they did, I would happily place them in the Open Source column of the diagram (at least, if I were to update this article and/or make a new post).

Documenting Journey Into AI Safety
jacobhaimes · 2y · 20

Glad to hear that my post is resonating with some people!

I definitely understand the difficulty of time allocation when also working a full-time job. As I gather resources and connections, I will make sure to spread awareness of them.

One thing to note, though: I found the more passive approach of waiting until an opportunity appeared to be much less effective than forging opportunities myself (even though I was spending a significant amount of time looking for those opportunities).

A specific, more detailed recommendation for how to do this will depend heavily on your level of experience with ML and your time availability. My more general recommendation would be to apply to a cohort of BlueDot Impact's AI Governance or AI Safety Fundamentals courses (I believe the application for the early 2024 session of the AI Safety Fundamentals course is currently open). Taking a course like this provides opportunities to build connections, which can be leveraged into independent projects/efforts. I found the AI Governance session very doable alongside a full-time position (when I started it, I was still full time at my current job). I cannot definitively say the same for the AI Safety Fundamentals course, as I did not complete it through a formal session (I just did the readings independently), but it seems to be a similar time commitment. I think taking the course with a cohort would be valuable even for those who have completed the readings independently.

Posts

5 · Double Podcast Drop on AI Safety · 15d · 0 comments
34 · Hunting for AI Hackers: LLM Agent Honeypot · 5mo · 0 comments
4 · Understanding AI World Models w/ Chris Canal · 5mo · 0 comments
4 · Let's Talk About Emergence · 1y · 0 comments
19 · "Open Source AI" is a lie, but it doesn't have to be · 1y · 5 comments
3 · Podcast interview series featuring Dr. Peter Park · 1y · 0 comments
5 · INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park · 1y · 0 comments
6 · INTERVIEW: StakeOut.AI w/ Dr. Peter Park · 1y · 0 comments
11 · Hackathon and Staying Up-to-Date in AI · 2y · 0 comments
12 · Interview: Applications w/ Alice Rigg · 2y · 0 comments