LTBT appoints Reed Hastings to Anthropic’s board of directors.
Today we announced that Reed Hastings, Chairman and co-founder of Netflix who served as its CEO for over 25 years, has been appointed to Anthropic's board of directors by our Long Term Benefit Trust. Hastings brings extensive experience from founding and scaling Netflix into a global entertainment powerhouse, along with his service on the boards of Facebook, Microsoft, and Bloomberg.
"The Long Term Benefit Trust appointed Reed because his impressive leadership experience, deep philanthropic work, and commitment to addressing AI's societal challenges make him uniquely qualified to guide Anthropic at this critical juncture in AI development," said Buddy Shah, Chair of Anthropic's Long Term Benefit Trust. [...]
Hastings said: "Anthropic is very optimistic about the AI benefits for humanity, but is also very aware of the economic, social, and safety challenges. I'm joining Anthropic's board because I believe in their approach to AI development, and to help humanity progress."
Personally, I'm excited to add Reed's depth of business and philanthropic experience to the board, and that more of the LTBT's work is now public.
Has he said anything publicly about his thoughts on AI risk? The announcement concerningly focuses on job displacement, which seems largely irrelevant to what I (and, I think, most other people who have thought hard about this) consider most important to supervise about Anthropic's actions. Has he ever said or written anything about catastrophic or existential risk, or the risk of substantial human disempowerment?
What do you know of his level of understanding of AGI, existential risk, superintelligence, etc.? Choosing board members seems to be one of the main levers the LTBT members have for influencing Anthropic, so it's extremely important that they choose neutral board members who will prioritise what matters in the long term and not just what seems safest in the short term. I generally assume anyone is bad at this unless I observe good evidence to the contrary, so by default this seems like a concerning choice. But I would love to hear counter-evidence.
The Wikipedia section about his donations resembles the donations of someone who doesn't believe in AI existential risk.[1]
We can only hope he is a rational person and learns about it quickly.
The Anthropic announcement mentions that he "recently made a $50 million gift to Bowdoin College to establish a research initiative on AI and Humanity," but that isn't focused on AI safety (let alone AI existential risk). Instead, the college vaguely says “We are thrilled and so grateful to receive this remarkable support from Reed, who shares our conviction that the AI revolution makes the liberal arts and a Bowdoin education more essential to society.”
Does anyone understand the real motivation here? Who at Anthropic makes the call to appoint a random CEO who (presumably) doesn't care about x-risk, and what do they get out of it?
I’d guess it looks more stable to investors, unlike having a bunch of EAs on the OpenAI board who confusingly try to fire the CEO for as unimportant a crime as lying; that’s quite hard for investors to predict.
I (and many others) recently received an email about the Lightcone fundraiser, which included:
Many people with (IMO) strong track records of thinking about existential risk have also made unusually large personal donations, including ..., Zac Hatfield-Dodds, ...
and while I'm honored to be part of this list, there's only a narrow sense in which I've made an unusually large personal donation: the $1,000 I donated to Lightcone is unusually large from my pay-what-I-want budget, and I'm fortunate that I can afford that, but it's also much less than my typical annual donation to GiveWell. I think it's plausible that Lightcone has great EV for impartial altruistic funding, but don't count it towards my effective-giving goal - see here and here.
(I've also been happy to support Lightcone by attending and recommending events at Lighthaven, including an upcoming metastrategy intensive, and arranging a donation swap, but don't think of these as donations)
Yeah, seems like a reasonable critique. In general I feel confused about how best to do this kind of social proof. I know that many (especially lower-context) donors care a lot about it, but it’s hard to communicate the exact level of endorsement in an email that was already straining the length of what is reasonable to send to lower-context people.
On reflection, I feel like I shouldn’t have put you on the list, given that the $1,000 feels too small to be called “unusually large”.
For what it's worth, I think this accurately conveys "Zac endorses the Lightcone fundraiser and has non-trivially donated", and dropping the word "unusually" would leave the sentence unobjectionable; alternatively maybe you could have dropped me from the list instead.
I just posted this because I didn't want people to assume that I'd donated >10% of my income when I hadn't :-)
I'd love a "post a hash" feature, where I could make a private post with hashes of a post's title and body. Then when the post is publishable, it could include a verified-by-LW "written at" timestamp as well as the time it was actually published and some time-of-publication post-matter.
Idea prompted by re-reading a private doc I wrote earlier this year, and thinking that it'd be nice to have trustable timestamps if or when it's feasible to publish such docs. Presumably others are in a similar position; it's a neat feature for e.g. predictions (although you'd want some way to avoid the "hash both sides, reveal correct" problem).
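For concreteness, here's a minimal sketch of how the commitment side could work, assuming SHA-256 plus a random salt so the digest can't be brute-forced from a guessable title; the function names are purely illustrative, not an actual LW API:

```python
import hashlib
import json
import secrets

def commit(title: str, body: str) -> tuple[str, str]:
    """Return (salt, digest); only the digest needs to be posted privately now."""
    # Random salt so the commitment can't be guessed from likely titles/bodies.
    salt = secrets.token_hex(16)
    payload = json.dumps({"salt": salt, "title": title, "body": body}, sort_keys=True)
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return salt, digest

def verify(title: str, body: str, salt: str, digest: str) -> bool:
    """At publication time, anyone can recompute the hash and check it matches."""
    payload = json.dumps({"salt": salt, "title": title, "body": body}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest() == digest
```

(The salt only stops people guessing the content from the hash; it doesn't address the "hash both sides, reveal correct" problem, which needs something like requiring all outstanding commitments to eventually be revealed.)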
Downgrading my estimate of their competence: Taiwan and Singapore's current coronavirus surge should serve as a warning to Australia (ABC Australia). Excerpts:
Taiwan
Taiwan's status for having successfully contained the virus was challenged in April ... Rules had been relaxed prior to the outbreak, allowing pilots to quarantine for three days instead of the full 14.
At first, infections were reported from pilots, hotel workers and their family members. ... Taiwanese were staying at the same hotel as the quarantining pilots. From there, the virus is believed to have made its way into Taipei's Wanhua district, known for its "tea houses" ... Many who tested positive were unwilling to declare they had visited such adult entertainment venues, making contact tracing more difficult.
Singapore
Even as Singapore was being celebrated, cases were quietly spreading through the island's one vulnerable location: Changi International Airport. It's believed that airport workers who came into contact with travellers from high-risk nations may have contracted the virus before visiting Changi's food court, which is open to the public.
Many of the cases linked to the airport cluster were later found to have a highly contagious Indian variant, known as B.1.617. ... "It's not like everything was relaxed in Singapore. It's not like behaviour has changed in the last six months. But I do think we've got a less-forgiving virus, which is more easily transmitted,"
Only 29 per cent of Singaporeans have received one dose. ... They're now considering lengthening the time between doses and vaccinating younger adults.
How a similar scenario would play out in Australia
What the recent outbreaks in Singapore and Taiwan show is that successful containment strategies can be thwarted by complacency and a failure to identify and act quickly to contain quarantine breaches.
Musing lately on a piece in Communications of the ACM (Changing the Nature of AI Research): I find this level of ~reframing or insistence on a mathematical perspective quite frustratingly political. ISTM that this just isn't how software or AI systems work! (at least, not those which can survive outside academic papers)
Taking a step back, Four Cultures of Programming (a fantastic 75-page read) discusses hacker, engineering, managerial, and mathematical cultures in programming. I'm so deep in hacker/engineer culture that it's hard to see out of it, even if I use and appreciate some of the conceptual and technical tools from the managerial and mathematical cultures.
(and if you want to learn more about the early history of software engineering, Arguments that Count is excellent; see also the much shorter Are we really engineers? essay series by Hillel Wayne)