LESSWRONG

yanni kyriacos

Co-Founder & Director - AI Safety ANZ (join us: www.aisafetyanz.com.au)

Advisory Board Member (Growth) - Giving What We Can

Creating Superintelligent Artificial Agents (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (we already have AGI).

Comments
yanni's Shortform
yanni kyriacos · 5mo

AI Safety Monthly Meetup - Brief Impact Analysis 

For the past 8 months, we (AIS ANZ) have been running consistent community meetups across 5 cities (Sydney, Melbourne, Brisbane, Wellington and Canberra). Each meetup averages about 10 attendees, with roughly a 50% new-participant rate, driven primarily through LinkedIn and email outreach. I estimate we're creating unique AI Safety related connections for around $6 each.

Volunteer Meetup Coordinators organise the bookings, pay for the Food & Beverage (I reimburse them after the fact) and greet attendees. This initiative would literally be impossible without them.

Key Metrics:

  • Total Unique New Members: 200
    • 5 cities × 5 new people per month × 8 months
    • Consistent 50% new attendance rate maintained
  • Network Growth: 600 new connections
    • Each new member makes 3 new connections
    • Only counting initial meetup connections, actual number likely higher
  • Cost Analysis:
    • Events: $3,000 (40 meetups × $75 Food & Beverage per meetup)
    • Marketing: $600
    • Total Cost: $3,600
    • Cost Efficiency: $6 per new connection ($3,600/600)

ROI: We're creating unique AI Safety related connections at $6 per connection, with additional network effects as members continue to attend and connect beyond their initial meetup.
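As a sanity check, the figures above reproduce in a few lines of Python (variable names are mine; all numbers come from the post):

```python
# Sanity-check of the meetup cost figures quoted above.
cities = 5
new_members_per_city_per_month = 5
months = 8

new_members = cities * new_members_per_city_per_month * months
connections = new_members * 3       # each new member makes ~3 connections

meetups = cities * months           # one meetup per city per month
food_and_beverage = meetups * 75    # $75 F&B per meetup
marketing = 600
total_cost = food_and_beverage + marketing

cost_per_connection = total_cost / connections
print(new_members, connections, total_cost, cost_per_connection)
# → 200 600 3600 6.0
```

Note the $6 figure only counts initial-meetup connections, so it is an upper bound on the true cost per connection.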

yanni's Shortform
yanni kyriacos · 5mo

One axis where Capabilities and Safety people pull apart the most, with high consequences, is "asking for forgiveness instead of permission":

1) Safety people need to get out there and start making stuff without their high-prestige ally nodding first
2) Capabilities people need to consider more seriously that they're building something many people simply do not want

yanni's Shortform
yanni kyriacos · 5mo

AI Safety has less money, talent, political capital, tech and time. We have only one distinct advantage: support from the general public. We need to start working that advantage immediately.

yanni's Shortform
yanni kyriacos · 6mo

Solving the AGI alignment problem demands a herculean level of ambition, far beyond what we're currently bringing to bear. Dear Reader, grab a pen or open a Google Doc right now and answer these questions:

1. What would you do right now if you became 5x more ambitious? 
2. If you believe we all might die soon, why aren't you doing the ambitious thing?

Open Thread Fall 2024
yanni kyriacos · 6mo

One idea: create a LinkedIn advertiser account and segment by Industry, Job Title, and/or Job Function.

yanni's Shortform
yanni kyriacos · 6mo

I think > 40% of AI Safety resources should be going into making federal governments take seriously the possibility of an intelligence explosion in the next 3 years due to the proliferation of digital agents.

Ryan Kidd's Shortform
yanni kyriacos · 6mo

SASH isn't official (we're waiting on funding).

Here is TARA :)
https://www.lesswrong.com/posts/tyGxgvvBbrvcrHPJH/apply-to-be-a-ta-for-tara

Open Thread Fall 2024
yanni kyriacos · 6mo

I think it will take less than 3 years for the equivalent of 1,000,000 people to get laid off.

yanni's Shortform
yanni kyriacos · 6mo

If transformative AI is defined by its societal impact rather than its technical capabilities (i.e. TAI as a process, not a technology), we already have what is needed. The real question isn't about waiting for GPT-X or capability Y - it's about imagining what happens when current AI is deployed 1000x more widely in just a few years. This presents EXTREMELY different problems to solve from a governance and advocacy perspective.

E.g. 1: compute governance might no longer be a good intervention
E.g. 2: "Pause" can't just be about pausing model development. It should also be about pausing implementation across use cases

shortform
yanni kyriacos · 6mo

Yep
Posts

  • Apply to be a TA for TARA · 7mo · 0 comments
  • It is time to start war gaming for AGI · 9mo · 1 comment
  • 2/3 Aussie & NZ AI Safety folk often or sometimes feel lonely or disconnected (and 16 other barriers to impact) · 1y · 0 comments
  • How have analogous Industries solved Interested > Trained > Employed bottlenecks? [Question] · 1y · 1 comment
  • If you're an AI Safety movement builder consider asking your members these questions in an interview · 1y · 0 comments
  • What would stop you from paying for an LLM? [Question] · 1y · 15 comments
  • Apply to be a Safety Engineer at Lockheed Martin! · 1y · 3 comments
  • yanni's Shortform · 1y · 116 comments
  • Does increasing the power of a multimodal LLM get you an agentic AI? [Question] · 1y · 3 comments
  • Some questions for the people at 80,000 Hours · 1y · 0 comments