yanni kyriacos

Co-Founder & Director - AI Safety ANZ (join us: https://www.facebook.com/groups/1099249420923957)

Advisory Board Member (Growth) - Giving What We Can

Creating Superintelligent Artificial Agents (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (we already have AGI).


Comments

A piece of career advice I've given a few times recently to people in AI Safety, which I thought worth repeating here, is that AI Safety is so nascent a field that the following strategy could be worth pursuing:

1. Write your own job description (whatever it is that you're good at / that brings you joy).

2. Find organisations that you think need that role but don't yet have it. The role should solve a problem they either don't know they have or haven't figured out how to solve.

3. Find the key decision maker and email them. Explain their problem as you see it, how this role would fix it, and why you're the person to do it.

I think this might work better for mid-career people, but if you're just looking to skill up and don't mind volunteering, you can adapt this approach at whatever stage of your career you're at.

TIL that the words "fact" and "fiction" both trace back to Latin roots for making: "fact" from facere ("to make, to do") and "fiction" from fingere ("to shape"). Etymologically, both are ~ "something that has been created".

When I was ~ 5 I saw a homeless person on the street. I asked my dad where his home was. My dad said "he doesn't have a home". I burst into tears. 

I'm 35 now and reading this post makes me want to burst into tears again. I appreciate you writing it though.

📣 Attention: AI Safety Enthusiasts in Wellington (and New Zealand) 📣

I'm pleased to announce the launch of a brand new Facebook group dedicated to AI Safety in Wellington: AI Safety Wellington (AISW). This is your local hub for connecting with others passionate about ensuring a safe and beneficial future with artificial intelligence / reducing x-risk. To kick things off, we're hosting a super casual meetup where you can:

  • Meet & Learn: Connect with Wellington's AI Safety community.
  • Chat & Collaborate: Discuss career paths, upcoming events, and training opportunities.
  • Share & Explore: Whether you're new to AI Safety or a seasoned expert, come learn from each other's perspectives.
  • Expand the Network: Feel free to bring a friend who's curious about AI Safety!

Join the group and RSVP for the meetup here 👇
https://www.facebook.com/groups/500872105703980

One of my more esoteric Buddhist practices is to always spell my name in lower case; it means I am regularly reminded of the illusory nature of Self, while still engaging in imagined reality (Parikalpitā-svabhāva) in a practical way.

When AI Safety people are also vegetarian, vegan, or reducetarian, I am pleasantly surprised, as this is one (of many possible) signals to me that they're "in it" to prevent harm, rather than because it is interesting.

Hey mate, thanks for the comment. I'm finding "pretty surprised" hard to interpret. Is that closer to 1% or 15%?

Hi Ann! Thank you for your comment. Some quick thoughts:

"I would consider, for the sake of humility, that they might disagree with your assessment for actual reasons, rather than assuming confusion is necessary."

  • Yep! I have considered this. The purpose of my post is to consider it (I am looking for feedback, not upvotes or downvotes).

"They also happen to have a have a p(doom from not AGI) of 40% from combined other causes, and expect an aligned AGI to be able to effectively reduce this to something closer to 1% through better coordinating reasonable efforts."

  • This falls into the confused category for me. I'm not sure how you get a 40% p(doom) from something other than unaligned AGI. Could you spell out for me what makes up such a large number?

Hi Richard! Thanks for the comment. It seems to me that might apply to < 5% of people in capabilities?

Thanks for your comment Thomas! I appreciate the effort. I have some questions:

  • by working on capabilities, you free up others for alignment work who were previously doing capabilities but would prefer alignment

I am a little confused by this; would you mind spelling it out for me? Imagine "Steve" took a job at "FakeLab" in capabilities. Are you saying that Steve making this decision creates a Safety job for "Jane" at "FakeLab" that otherwise wouldn't have existed?

  • more competition on product decreases aggregate profits of scaling labs

Again, I am a bit confused. You're suggesting that if, for example, General Motors announced tomorrow that they were investing $20 billion to start an AGI lab, that would be a good thing?
