LESSWRONG

Andrea_Miotti
Founder, executive director of ControlAI. 

Sequences

2022 Conjecture AI Discussions

Comments (sorted by newest)

A Narrow Path: a plan to deal with AI extinction risk
Andrea_Miotti · 1y · 30

Thanks! Do you still think the "No AIs improving other AIs" criterion is too onerous after reading the policy enforcing it in Phase 0?

In that policy, we developed the definition of "found systems" so that this measure applies only to AI systems found via mathematical optimization, rather than to AIs (or any other code) written by humans.

This reduces the cost of the policy significantly, as it applies only to a very small subset of all AI activities, and leaves most innocuous software untouched.

RSPs are pauses done right
Andrea_Miotti · 2y · 16-1

In terms of explicit claims:

"So one extreme side of the spectrum is build things as fast as possible, release things as much as possible, maximize technological progress [...].

The other extreme position, which I also have some sympathy for, despite it being the absolutely opposite position, is you know, Oh my god this stuff is really scary.

The most extreme version of it was, you know, we should just pause, we should just stop, we should just stop building the technology for, indefinitely, or for some specified period of time. [...] And you know, that extreme position doesn't make much sense to me either."

Dario Amodei, Anthropic CEO, explaining his company's "Responsible Scaling Policy" on the Logan Bartlett Podcast on Oct 6, 2023.

Starts at around 49:40.

Priorities for the UK Foundation Models Taskforce
Andrea_Miotti · 2y · 30

Thanks for the kind feedback! Any suggestions for a more interesting title?

Palantir's AI models
Andrea_Miotti · 2y · 10

Palantir's recent materials on this show that they're using three open-source LLMs (pretty small by today's frontier standards): Dolly-v2-12B, GPT-NeoX-20B, and Flan-T5 XL.

 

Critiques of prominent AI safety labs: Conjecture
Andrea_Miotti · 2y · 30

Apologies for the 404 on the page; it's an annoying cache bug. Try a hard refresh of your browser page (Cmd + Shift + R) and it should work.

Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes
Andrea_Miotti · 2y · 10

The "1000" instead of "10000" was a typo in the summary.

In the transcript Connor states "SLT over the last 10000 years, yes, and I think you could claim the same over the last 150". Fixed now, thanks for flagging!

Japan AI Alignment Conference
Andrea_Miotti · 3y · Ω110

Which one? All of them seem to be working for me.

Fighting without hope
Andrea_Miotti · 3y · 80

Pessimism of the intellect, optimism of the will.

Retrospective on the 2022 Conjecture AI Discussions
Andrea_Miotti · 3y · 60

People from OpenPhil, FTX FF, and MIRI were not interested in discussing at the time. We also talked with MIRI about moderating, but it didn't work out in the end.

People from Anthropic told us their organization is very strict on public communications and very wary of PR risks, so in the end they did not participate.

In the post, I generalized somewhat so as not to go into full detail.

Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes
Andrea_Miotti · 3y · 20

Yes, some people mentioned it was confusing to have two posts (I had originally posted the Summary and the Transcript separately because they are very lengthy), so I merged them into one and added headers pointing to the Summary and the Transcript for easier navigation.

Wikitag Contributions

Conjecture (org) · 3 years ago
Time (value of) · 3 years ago · (+604/-57)

Posts (sorted by new: karma · title · age · comments)

46 · Three main views on the future of AI · 12d · 1
32 · Anthropic CEO calls for RSI · 7mo · 10
196 · The Compendium, A full argument about extinction risk from AGI (Ω) · 10mo · 52
74 · A Narrow Path: a plan to deal with AI extinction risk (Ω) · 1y · 12
105 · Priorities for the UK Foundation Models Taskforce (Ω) · 2y · 4
29 · Conjecture: A standing offer for public debates on AI · 2y · 1
96 · Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes (Ω) · 2y · 10
61 · Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes (Ω) · 3y · 7
90 · Retrospective on the 2022 Conjecture AI Discussions (Ω) · 3y · 5
138 · Full Transcript: Eliezer Yudkowsky on the Bankless podcast (Ω) · 3y · 89