A general guide for pursuing independent research, from conceptual questions like "how to figure out how to prioritize, learn, and think" to practical questions like "what sort of snacks should you buy to maximize productivity?"
Helen Toner went on the TED AI podcast, giving us more color on what happened at OpenAI. These are important claims to get right.
I will start with my notes on the podcast, including the second part where she speaks about regulation in general. Then I will discuss some implications more broadly.
This seems like it deserves the standard detailed podcast treatment. By default, each note's main body is description; any second-level notes are my commentary.
we have found Mr Altman highly forthcoming
That's exactly the line that made my heart sink.
I find it a weird thing to choose to say/emphasize.
The issue under discussion isn't whether Altman hid things from the new board; it's whether he hid things from the old board a long while ago.
Of course he's going to seem forthcoming towards the new board at first. So, the new board having the impression that he was forthcoming towards them? This isn't information that helps us much in assessing whether to side with Altman vs the old board. That makes me think: why repo...
It seems like one of the biggest problems* in AI Safety is that it is ridiculously hard to get good training (e.g. MATS is ridiculously competitive now) and to get employed (samesies).
Has anyone looked across other fields (e.g. other sciences) to see how this problem has been solved? I assume at the most macro level the answer is going to be "Industry" vs "Government", but I'm looking for more concrete interventions.
Thoughts?
*we're turning away very smart, motivated, well-meaning and skilled people. This is bad.
This seems to be asking from the demand side ("we" being people with lots of money who want to hire trained people), but then switches to the supply side (people being turned away while looking for training and employment).
I think that's a hint to your answer: other industries solve it by actually hiring lots of people, and offering training on the job or with regular programs. Oh, and usually waiting for equilibrium to catch up, which is not comfortable for rapid-change requirements.
As we explained in our MIRI 2024 Mission and Strategy update, MIRI has pivoted to prioritize policy, communications, and technical governance research over technical alignment research. This follow-up post goes into detail about our communications strategy.
Our objective is to convince major powers to shut down the development of frontier AI systems worldwide before it is too late. We believe that nothing less than this will prevent future misaligned smarter-than-human AI systems from destroying humanity. Persuading governments worldwide to take sufficiently drastic action will not be easy, but we believe this is the most viable path.
Policymakers deal mostly in compromise: they form coalitions by giving a little here to gain a little somewhere else. We are concerned that most legislation intended to keep humanity alive will go...
Man I just want to say I appreciate you following up on each subthread and noting where you agree/disagree, it feels earnestly truthseeky to me.
You are invited to join Vision Weekend Europe, the annual festival of Foresight Institute at Bückeburg Castle in Germany from July 12-14.
What's this year's theme? This year's main conference track is dedicated to "Paths to Progress", meaning you will hear 10+ invited presentations from Foresight's core community highlighting paths to progress in the following areas:
Confirmed presenters include Jaan Tallinn (Future of Life Institute), Hendrik Dietz (Dietz Lab), Anders Sandberg (University of Oxford), Catalin Mitelut (NYU), Muriel Richard-Noca (ClearSpace), Nikolina Lauc (GlycanAge), Andrew Critch (Encultured), Joao Pedro De Magalhaes (University of Birmingham), Jeremy Barton, Toby Pilditch (Transformative Futures Institute), Matjaz Leonardis (Oxford University), Trent McConaghy (Ocean Protocol), Chiara...
Since at least 2017, OpenAI has asked departing employees to sign offboarding agreements which legally bind them to permanently—that is, for the rest of their lives—refrain from criticizing OpenAI, or from otherwise taking any actions which might damage its finances or reputation.[1]
If they refused to sign, OpenAI threatened to take back (or make unsellable) all of their already-vested equity—a huge portion of their overall compensation, which often amounted to millions of dollars. Given this immense pressure, it seems likely that most employees signed.
If they did sign, they became personally liable forevermore for any financial or reputational harm they later caused. This liability was unbounded, so had the potential to be financially ruinous—if, say, they later wrote a blog post critical of OpenAI, they might in principle be...
Geoffrey Irving (Research Director, AI Safety Institute)
Given the tweet thread Geoffrey wrote during the board drama, it seems pretty clear that he's willing to publicly disparage OpenAI. (I used to work with Geoffrey, but have no private info here)
Labs should give deeper model access to independent safety researchers (to boost their research)
Sharing deeper access helps safety researchers who work with frontier models, obviously.
Some kinds of deep model access:
See Shevlane 2022 and Bucknall and Trager 2023.
A lab is disincentivized from sharing deep model access because it doesn't want headlines about h...
List of 27 papers (supposedly) given to John Carmack by Ilya Sutskever: "If you really learn all of these, you’ll know 90% of what matters today."
The list has been floating around for a few weeks on Twitter/LinkedIn. I figure some might have missed it so here you go.
Regardless of the veracity of the tale, I still find it valuable.
https://punkx.org/jackdoe/30.html
I like this format and framing of "90% of what matters" and someone should try doing it with other subjects.