Director & Movement Builder - AI Safety ANZ
Advisory Board Member (Growth) - Giving What We Can
The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (*we already have AGI).
I thought it might be useful to spell that out.
Thanks :) Uh, good question. Making some good links? Have you done much nondual practice? I highly recommend Loch Kelly :)
Hi Jonas! Would you mind saying a bit more about TMI + Seeing That Frees? Thanks!
Yesterday Greg Sadler and I met with the President of the Australian Association of Voice Actors. Like us, they've been lobbying for more and better AI regulation from government. I was surprised how much overlap we had in concerns and potential solutions:
1. Transparency and explainability of AI model data use (concern)
2. Importance of interpretability (solution)
3. Mis/disinformation from deepfakes (concern)
4. Lack of liability for the creators of AI if any harms eventuate (concern + solution)
5. Unemployment without safety nets for Australians (concern)
6. Rate of capabilities development (concern)
They may even support the creation of an AI Safety Institute in Australia. Don't underestimate who could be allies moving forward!
Thanks for letting me know!
More people are going to quit labs / OpenAI. Will EA refill the leaky funnel?
Help clear something up for me: I am extremely confused (theoretically) about how we can simultaneously have:
1. An Artificial Superintelligence
2. It being controlled by humans (thereby creating misuse and concentration-of-power issues)
My intuition is that once it reaches a particular level of power it will be uncontrollable. Unless people are saying that we can have models 100x more powerful than GPT-4 without them having any agency??
Please help me find research on aspiring AI Safety folk!
I am two weeks into the strategy development phase of my movement building and almost ready to start ideating some programs for the year.
But I want these programs to solve the biggest pain points people experience when trying to have a positive impact in AI Safety.
Has anyone seen any research that looks at this in depth? For example, through an interview process and then a survey to quantify how painful the pain points are?
Some examples of pain points I've observed so far through my interviews with Technical folk: