Comments

Leopold Aschenbrenner is starting a cross between a hedge fund and a think tank for AGI. I have read only the sections of Situational Awareness most relevant to this project, and I don't feel like I understand nearly all of the implications, so I could end up being quite wrong.  Indeed, I’ve already updated towards a better and more nuanced understanding of Aschenbrenner's points, in ways that have made me less concerned than I was to begin with.  But I want to say publicly that the hedge fund idea makes me nervous.

Before I give my reasons, I want to say that it seems likely most of the relevant impact comes not from the hedge fund but from the influence the ideas from Situational Awareness have on policymakers and various governments, as well as the influence and power Aschenbrenner and any cohort he builds wield. This influence may come from this hedge fund or be entirely incidental to it. I mostly do not address this here, but it does make all of the below less important. 

I also believe that some (though not all) of my concerns about the hedge fund are based on specific disagreements with Aschenbrenner’s views. I discuss some of those below, but this is not a full rebuttal (and on many of the points of disagreement I don’t yet feel confident in my own view). There is still plenty to do to hash out the actual empirical questions at hand.

Why I am nervous 

A hedge fund making AI-related investments means Aschenbrenner and his investors will gain financially from more and accelerated AGI progress. This seems to me to be one of the most important dynamics (excluding the points about influence above). It creates an incentive to push for more AGI progress, even at the cost of safety, which seems quite concerning. I will say that Leopold has a good track record here of turning down money: he declined to sign an NDA at OpenAI despite losing equity.

Aschenbrenner expresses strong support for the liberal democratic world maintaining a lead on AI advancement, and ensuring that China does not reach an AI-based decisive military advantage over the United States[1]. The hedge fund, then, presumably aims both to support the goal of maintaining an AI lead over China and to profit off of it. In my current view, this approach intensifies race dynamics and increases the risks of the worst outcomes (though my view on this has softened somewhat since my first draft, for reasons similar to what Zvi clarifies here[2]). 

I especially think that it risks unnecessary competition when cooperation - the best outcome - could still be possible. It seems notable, for example, that no Chinese version of the Situational Awareness piece has come to my attention; going first in such a game ensures both that you are first and that the game is played at all. 

It’s also important that the investors (e.g. Patrick Collison) appear to be more focused on economic and technological development, and less concerned about risks from AI. The incentives of this hedge fund are therefore likely to point towards progress and away from slowing down for safety reasons. 

There are other potential lines of thought here that I have not yet fleshed out, including: 

  • The value of aiming to orient the US government and military attention to AGI (seems like a huge move with unclear sign)
  • The degree to which this move is unilateralist on Aschenbrenner’s part
  • How much money could be made and how much power the relevant people (e.g. Aschenbrenner and his investors) will have through investment and being connected to important decisions. 
    • If a lot of money and/or power could be acquired, especially over AGI development, then there’s a healthy default skepticism I think should be applied to their actions and decision-making. 
  • Specifics about Aschenbrenner himself. Different people in the same role would take very different actions, so specifics about his views, ways of thinking, and profile of strengths and weaknesses may be relevant.

Ways that the hedge fund could in fact be a good idea:

EA and AI causes could really use funder diversification. If Aschenbrenner intends to use the money he makes to support these issues, that could be very valuable (though I’ve certainly become somewhat more concerned with moonshot “become a billionaire to save the world” plans than I used to be).

The hedge fund could position Aschenbrenner to have a deep understanding of and connections within the AI landscape, making the think tank outputs very good, and causing important future decisions to be made better. 

Aschenbrenner of course could be right about the value of the US government’s involvement, maintaining a US lead, and the importance of avoiding Chinese military supremacy over the US. In that case, him achieving his goals would of course be good. Cruxes include the likelihood of international cooperation, the possibility of international bans, the probability of catastrophic outcomes from AI, and the likelihood of “muddling through” on alignment.

I’m interested in hearing takes, ways I could be wrong, fleshing out of my arguments, or any other thoughts people have relevant to this. Happy to have private chats in DMs to discuss as well.

  1. ^

     To be clear, Aschenbrenner wants that lead to exist to avoid a tight race in which safety and caution are thrown to the winds. If we can achieve that lead primarily through infosecurity (something he emphasizes), then added risks are low; but I think the views expressed in Situational Awareness also imply the importance of staying technologically ahead of China as their AI research improves. This comes with precisely the risks of creating and accelerating a race of this nature.

    Additionally, when I read his description of the importance of even a two month lead, it implied to me that if the longer, more comfortable lead is lost, there will be strong reasons for the US to advance quickly so as to avoid China reaching superintelligence and subsequent military dominance first (which doesn’t mean he thinks we should actually do this if the time came). This seems to fairly explicitly describe the tight race scenario. I don’t think Aschenbrenner believes this would be a good situation to be in, but nonetheless thinks that’s what the true picture is. 

  2. ^

    From Zvi’s post: “He confirms he very much is NOT saying this:

      • The race to ASI is all that matters.
      • The race is inevitable.
      • We might lose.
      • We have to win.
      • Trying to win won’t mean all of humanity loses.
      • Therefore, we should do everything in our power to win.

    I strongly disagree with this first argument. But so does Leopold. 
    Instead, he is saying something more like this:

      • ASI, how it is built and what we do with it, will be all that matters.
      • ASI is inevitable.
      • A close race to ASI between nations or labs almost certainly ends badly.
      • Our rivals getting to ASI first would also be very bad.
      • Along the way we by default face proliferation and WMDs, potential descent into chaos.
      • The only way to avoid a race is (at least soft) nationalization of the ASI effort.
      • With proper USG-level cybersecurity we can then maintain our lead.
      • We can then use that lead to ensure a margin of safety during the super risky and scary transition to superintelligence, and to negotiate from a position of strength.”
     

It sounds from this back-and-forth like we should assume that the Anthropic leadership who left OAI (so Dario and Daniela Amodei, Jack Clark, Sam McCandlish, others?) are still under NDA because it was probably mutual. Does that sound right to others?

I feel pretty sympathetic to the desire not to do things by text; I suspect you get much more practiced and checked-over answers that way.

which privacy skills you are able to execute.

 

This link goes to a private Google Doc, just FYI.

This is great!

I really like this about slack:

  • If you aren’t maintaining this, err on the side of cultivating this rather than doing high-risk / high-reward investments that might leave you emotionally or financially screwed.
    • (or, if you do those things, be aware I may not help you if it fails. I am much more excited about helping people that don’t go out of their way to create crises)


Seems like a good norm and piece of advice.

I'm confused how much I should care whether an impact assessment is commissioned by some organization. The main thing I generally look for is whether the assessment / investigation is independent. The argument is that because AISC is paying for it, that will influence the assessors? 

I have not read most of what there is to read here, just jumping in on "illegal drugs" ---> ADHD meds. Chloe's comment spoke to weed as the illegal drug on her mind.

"AI has immense potential, but also immense risks. AI might be misused by China, or get out of control. We should balance the needs for innovation and safety." I wouldn't call this lying (though I agree it can have misleading effects, see Issue 1).


Not sure where this slots in, but there's also a sense in which this contains a missing positive mood about how unbelievably good (aligned) AI could or will be, and how much we're losing by not having it earlier.
