yanni kyriacos

Co-Founder & Director - AI Safety ANZ (join us: www.aisafetyanz.com.au)

Advisory Board Member (Growth) - Giving What We Can

Creating Superintelligent Artificial Agents (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (we already have AGI).

AI Safety Monthly Meetup - Brief Impact Analysis 

For the past 8 months, we (AIS ANZ) have been running consistent community meetups across 5 cities (Sydney, Melbourne, Brisbane, Wellington and Canberra). Each meetup averages about 10 attendees, with a roughly 50% new-participant rate, driven primarily through LinkedIn and email outreach. I estimate we're creating unique AI Safety related connections for around $6 each.

Volunteer Meetup Coordinators organise the bookings, pay for the food and beverage (I reimburse them after the fact) and greet attendees. This initiative would literally be impossible without them.

Key Metrics:

  • Total Unique New Members: 200
    • 5 cities × 5 new people per month × 8 months
    • Consistent 50% new attendance rate maintained
  • Network Growth: 600 new connections
    • Each new member makes 3 new connections
    • Only counting initial meetup connections, actual number likely higher
  • Cost Analysis:
    • Events: $3,000 (40 meetups × $75 Food & Beverage per meetup)
    • Marketing: $600
    • Total Cost: $3,600
    • Cost Efficiency: $6 per new connection ($3,600/600)
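The figures above can be sanity-checked with a few lines of Python (all numbers taken directly from the list; nothing here is new data):

```python
# Back-of-envelope check of the meetup metrics listed above.
cities = 5
new_per_city_per_month = 5
months = 8

new_members = cities * new_per_city_per_month * months  # 200 unique new members
connections = new_members * 3                           # 600: 3 connections each

meetups = cities * months                               # 40 meetups total
food_cost = meetups * 75                                # $3,000 food & beverage
marketing = 600                                         # $600 marketing spend
total_cost = food_cost + marketing                      # $3,600 all-in

cost_per_connection = total_cost / connections          # $6 per new connection
```

The internal consistency holds: 40 meetups at $75 each plus $600 of marketing gives $3,600, and $3,600 over 600 connections is exactly $6.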

ROI: We're creating unique AI Safety related connections at $6 per connection, with additional network effects as members continue to attend and connect beyond their initial meetup.

One axis where Capabilities and Safety people pull apart the most, with high consequences, is "asking for forgiveness instead of permission."

1) Safety people need to get out there and start making stuff without their high prestige ally nodding first
2) Capabilities people need to consider more seriously that they're building something many people simply do not want

AI Safety has less money, talent, political capital, tech and time. We have only one distinct advantage: support from the general public. We need to start working that advantage immediately.

Solving the AGI alignment problem demands a herculean level of ambition, far beyond what we're currently bringing to bear. Dear Reader, grab a pen or open a Google Doc right now and answer these questions:

1. What would you do right now if you became 5x more ambitious? 
2. If you believe we all might die soon, why aren't you doing the ambitious thing?

One idea: Create a LinkedIn advertiser account and segment by Industry, Job Title, and/or Job Function.

I think > 40% of AI Safety resources should be going into making Federal Governments take seriously the possibility of an intelligence explosion in the next 3 years due to the proliferation of digital agents.

SASH isn't official (we're waiting on funding).

Here is TARA :)
https://www.lesswrong.com/posts/tyGxgvvBbrvcrHPJH/apply-to-be-a-ta-for-tara

I think it will take less than 3 years for the equivalent of 1,000,000 people to get laid off.

If transformative AI is defined by its societal impact rather than its technical capabilities (i.e. TAI as a process, not a technology), we already have what is needed. The real question isn't about waiting for GPT-X or capability Y - it's about imagining what happens when current AI is deployed 1000x more widely in just a few years. This presents EXTREMELY different problems to solve from a governance and advocacy perspective.

E.g. 1: compute governance might no longer be a good intervention
E.g. 2: "Pause" can't just be about pausing model development. It should also be about pausing implementation across use cases
