Ted Sanders

Comments

Safety researchers should take a public stance
Ted Sanders · 2d

Yes, mostly.

I expect existentially dangerous ASI to take longer than ASI, which will take longer than AGI, which will take longer than powerful AI. Killing everyone on Earth is very hard to do, few are motivated to do it, and many will be motivated to prevent it as ASI’s properties become apparent. So I think the odds are low. And I’ll emphasize that these are my odds including humanity’s responses, not odds of a counterfactual world where we sleepwalk into oblivion without any response.

Safety researchers should take a public stance
Ted Sanders · 2d

I wonder if there's a palatable middle ground where, instead of banning all AI research, we get people to agree in advance to ban only dangerous types of ASI.

My current personal beliefs:
- ASI existential risk is very much worth worrying about
- Dangerous ASI is likely the #1 threat to humanity
- In the next few decades, the odds of ASI killing/disempowering us are tiny
- I feel good accelerating capabilities at OpenAI to build technology that helps more people
- I would not support a ban or pause on AI/AGI (because it deprives people of AI benefits, breaks promises, and also accumulates a compute overhang for whenever the ban is later lifted)
- I would happily support a preemptive ban on dangerous ASI

You can't eval GPT5 anymore
Ted Sanders · 4d

We'll take a serious look at offering this option. We're generally happy to give people more control and transparency; the main worry that might hold us back is death by a thousand papercuts, where each extra API option is small on its own but dozens of them piling up make the overall experience more confusing for developers. But in this case, it might really be worth it. Stay tuned. :)

You can't eval GPT5 anymore
Ted Sanders · 4d

What would be the best solution for you? A new API parameter that lets you override the date?
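For concreteness, a minimal sketch of what that might look like from the developer side, assuming a hypothetical `prompt_date` field passed through the OpenAI Python SDK's `extra_body` escape hatch; the parameter name and behavior are illustrative only, not an existing API feature:

```python
# Hypothetical sketch: a `prompt_date` override does not exist today.
# It only illustrates the kind of API parameter being proposed for reproducible evals.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "What is today's date?"}],
    # extra_body forwards nonstandard JSON fields; the field itself is made up here
    extra_body={"prompt_date": "2025-08-07"},
)
print(response.choices[0].message.content)
```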

The Industrial Explosion
Ted Sanders · 1mo

Hi Rose and Tom. One small error I noticed:

>AI chips double in FLOP/$ every ~2 years.

If you look at the top black dots from 2008 onward, the slope implies doubling more like every 4-5 years.
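As a quick illustration of how that slope cashes out (with made-up FLOP/$ endpoints for illustration, not the chart's actual values):

```python
import math

# Made-up endpoints for illustration: a 16x improvement over 16 years
flop_per_dollar_2008 = 1.0e9
flop_per_dollar_2024 = 1.6e10

years = 2024 - 2008
doublings = math.log2(flop_per_dollar_2024 / flop_per_dollar_2008)  # = 4 doublings
print(f"doubling time ≈ {years / doublings:.1f} years")  # ≈ 4.0 years, not ~2
```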

Thomas Kwa's Shortform
Ted Sanders · 4mo

Can you explain what a point on this graph means? Like, if I see Gemini 2.5 Pro Experimental at 110 minutes on GPQA, what does that mean? It takes an average bio+chem+physics PhD 110 minutes to get a score as high as Gemini 2.5 Pro Experimental?

DAL's Shortform
Ted Sanders · 6mo

One potential angle: automating software won't be worth very much if multiple players can do it and profits are competed away to zero. Look at compilers: almost no one writes assembly or their own compiler anymore, and yet compiler writers haven't earned billions or trillions of dollars. With many technologies, the vast majority of the value ends up as consumer surplus never captured by producers.

In general I agree with your point. If evidence of transformative AI were close at hand, you'd strategically delay fundraising as late as possible. However, if you're uncertain about your ability to deliver, about investors' ability to recognize transformative potential, or about the competition, you might hedge and raise sooner than you need to. Raising too early never kills a business. But raising too late always does.

evhub's Shortform
Ted Sanders · 8mo

Terrific!

We probably won't just play status games with each other after AGI
Ted Sanders · 8mo

We can already see what people do with their free time when basic needs are met. A number of technologies have enabled new hacks to set up 'fake' status games that are more positive-sum than ever before in history:

  • Watch broadcast sports, where you can feel like a winner (or at least feel connected to a winner), despite not having had to win yourself
  • Play video games with AI opponents, where you can feel like a winner, despite it not being zero-sum against other humans
  • Watch streamers and influencers to feel connected to high status people, without having to earn respect or risk rejection
  • Join a niche hobby community to feel special, while ignoring the niche hobbies other people join that you don't care about

Feels likely to me that advancing digital technology will continue to make it easier for us to spend time in constructed digital worlds that make us feel like valued winners. On the one hand, it would be sad if people retreated into fake digital silos; on the other hand, it would be nice if people got to feel like winners more often.

Tips On Empirical Research Slides
Ted Sanders · 8mo

Management consulting firms have lots of great ideas on slide design: https://www.theanalystacademy.com/consulting-presentations/


Some things they do well:

  • They treat slides as documents that can be understood standalone (this is even useful when presenting, as not everyone is following every word)
  • They employ a lot of hierarchy to help make the content skimmable (helpful for efficiency)
  • They put conclusions / summaries / action items up front, with details behind (helpful for efficiency, especially in high-trust environments)
Posts

On power and its amplification (1y)
The Pyromaniacs (2y)
Transformative AGI by 2043 is <1% likely (2y)