yanni kyriacos

Director & Movement Builder - AI Safety ANZ

Advisory Board Member (Growth) - Giving What We Can

The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (*we already have AGI).

I thought it might be useful to spell that out.

Comments

Please help me find research on aspiring AI Safety folk!

I am two weeks into the strategy development phase of my movement building and almost ready to start ideating some programs for the year.

But I want these programs to solve the biggest pain points people experience when trying to have a positive impact in AI Safety.

Has anyone seen any research that looks at this in depth? For example, an interview process followed by a survey to quantify how painful the pain points are?

Some examples of pain points I've observed so far through my interviews with Technical folk:

  • I often felt overwhelmed by the vast amount of material to learn.
  • I felt there wasn't a clear way to navigate learning the required information.
  • I lacked an understanding of my strengths and weaknesses in relation to different AI Safety areas (i.e. personal fit / comparative advantage).
  • I lacked an understanding of my progress once I got started (e.g. am I doing well? Poorly? Moving fast enough?).
  • I regularly experienced fear of failure.
  • I regularly experienced fear of wasted effort / sunk cost.
  • Fear of admitting mistakes or starting over can prevent people from making necessary adjustments.
  • I found it difficult to identify my desired role / job (i.e. the end goal).
  • Even when I thought I knew my desired role, identifying the specific skills and knowledge it required was difficult.
  • There is no clear career pipeline: do X, then Y, then Z, and you have an A% chance of getting role B.
  • Finding time to upskill while working is difficult.
  • I found the funding ecosystem opaque.
  • Upskilling required a lot of discipline and motivation over a potentially long period.
  • I felt like nobody gave me realistic expectations of what the journey would be like.

Thanks :) Uh, good question. Making some good links? Have you done much nondual practice? I highly recommend Loch Kelly :)

Hi Jonas! Would you mind saying a bit more about TMI + Seeing That Frees? Thanks!

Yesterday Greg Sadler and I met with the President of the Australian Association of Voice Actors. Like us, they've been lobbying for more and better AI regulation from government. I was surprised by how much overlap we had in concerns and potential solutions:

1. Transparency and explainability of AI model data use (concern)

2. Importance of interpretability (solution)

3. Mis/disinformation from deepfakes (concern)

4. Lack of liability for the creators of AI if any harms eventuate (concern + solution)

5. Unemployment without safety nets for Australians (concern)

6. Rate of capabilities development (concern)

They may even support the creation of an AI Safety Institute in Australia. Don't underestimate who could be allies moving forward!

Ilya Sutskever has left OpenAI https://twitter.com/ilyasut/status/1790517455628198322

Thanks for letting me know!

More people are going to quit labs / OpenAI. Will EA refill the leaky funnel?

[PHOTO] I sent 19 emails to politicians, had 4 meetings, and now I get emails like this. There is SO MUCH low-hanging fruit in just doing this for 30 minutes a day (I would do it, but my LTFF funding does not cover this). Someone should do this!

I expect (~ 75%) that the decision to "funnel" EAs into jobs at AI labs will become a contentious community issue in the next year. I think that over time more people will think it is a bad idea. This may have PR and funding consequences too.

Help clear something up for me: I am extremely confused (theoretically) about how we can simultaneously have:

1. An Artificial Superintelligence

2. It be controlled by humans (thereby creating misuse and concentration-of-power issues)

My intuition is that once it reaches a particular level of power it will be uncontrollable. Unless people are saying that we can have models 100x more powerful than GPT-4 without them having any agency?
