Kakili

Research interests in AI safety, anthropogenic risk, complex adaptive systems, and computational modeling. Broad experience in international security issues, network science, and exploratory futures modeling.

Comments

Kakili · 2y · 10

I'd bet on #4.

Or an alternative combo of 3 and 4: e.g., the AI-empowered corporations continue to amass astronomical wealth until they're effectively more powerful than national governments. A process of mergers, acquisitions, and takeovers among AI companies leads to megacorporation oligopolies that control geographic and cyber empires. Incrementally, the economic power of the large AI giants comes to encompass geopolitical power--akin to the historical shift from city-states to empires to nation-states--and gradually company security departments become armies, and company leaderships become functionally authoritarian ASI governments.

Kakili · 2y · 10

Hi! I appreciate you taking a look. I'm new to the topic and am enjoying developing this out and learning some potentially useful new approaches.

The survey is rather ambiguous, and I've received a ton of feedback and lessons learned; as it's my first attempt at a survey, I'm certainly getting a Ph.D. in what NOT to do with surveys, whether I wanted one or not. A learning experience, to say the least.

Regarding the Modeling Transformative AI Risk (MTAIR) project:

I'm tracking the MTAIR guys and have been working with them as I'm able, in the hope that our projects can complement each other. MTAIR is the more substantive, long-term project, though, and I should be clear to focus on it in the next few months. The scenario mapping project--at least its first stage (depending on whether there's further development)--will be complete in roughly three months. A short project, unfortunately, which has gotten in the way of changing or rescoping it. But I'm hoping there will be some interesting results from the GMA methodology.

And "Turchin & Derkenberger" piece is the closest classification scheme I've come across that's similar to what I'm working on. Thanks for flagging that one. 

If it looks reasonable to expand and refine this, conducting another iteration, perhaps with a workshop, could be useful. It's hard to do a project like this in a six-month timeframe.

Kakili · 2y · 10

Great project. I'd love to hear more details. Somehow I missed this post when it was released, but it was pointed out to me yesterday.

I've been developing a project over the past couple of months that lines up quite closely (specifically with the goal of exploring additional scenarios, as you highlighted in the takeaways). I have a very short time horizon for completing this particular part of the project (which has kept me from refining the survey as much as I'd have liked), but I'd be happy to share any results.

The project I've been cobbling together is broadly similar. I compiled a list of AI scenario "dimensions"--key aspects of different scenarios--with three to five conditions for each dimension. Conditions are the directions a dimension could go in a scenario: e.g., "takeoff" would be a dimension, with fast, slow, moderate (controlled), and moderate (uncontrolled) as example conditions; "AI paradigm" would be a dimension, with deep learning, hybrid, new paradigm, embodiment, and deep learning plus something else as the conditions.

The plan so far is to get judgments on the individual components (or plausible components) of each scenario and then use a scenario mapping tool (based on GMA, with some variations) to cluster all possible combinations.
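
For concreteness, here's a minimal sketch of what that combination step could look like. The dimension and condition names are lifted from the examples above, and the cross-consistency pairs are invented for illustration--actual GMA practice elicits a cross-consistency matrix from experts to rule out incompatible condition pairs:

```python
from itertools import combinations, product

# Illustrative morphological field: each dimension maps to its candidate
# conditions (names taken from the examples above; not the full field).
DIMENSIONS = {
    "takeoff": ["fast", "slow", "moderate (controlled)", "moderate (uncontrolled)"],
    "AI paradigm": ["deep learning", "hybrid", "new paradigm",
                    "embodiment", "deep learning + something else"],
}

# Hypothetical cross-consistency judgments: condition pairs deemed
# incompatible (a stand-in for GMA's cross-consistency assessment).
INCONSISTENT = {("fast", "embodiment")}

def is_consistent(scenario):
    """Keep a scenario only if no pair of its conditions is marked inconsistent."""
    return not any(
        (a, b) in INCONSISTENT or (b, a) in INCONSISTENT
        for a, b in combinations(scenario, 2)
    )

# The raw space is the cross-product of all condition lists; consistency
# filtering (and later, likelihood rankings) is what keeps it tractable.
scenarios = [s for s in product(*DIMENSIONS.values()) if is_consistent(s)]
print(f"{len(scenarios)} consistent combinations")  # 19 of the 20 raw combos
```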

I have a longer version of the survey covering both impact and likelihood, and a short version covering just likelihood that's easier to complete. GMA doesn't usually use elicitation, so this could be interesting, but writing the questions has been a challenge so far.

This should provide a large set of possible combinations to explore. I'm requesting likelihood (and impact) rankings on each, which should narrow the number of options, and then we can parse different clusters to explore unique potential futures (without rankings, the output runs to tens of millions of combinations). A more detailed overview is here if you're curious, or shoot me a direct message. I hope to put together a more comprehensive version later in the year with other data sources as well.
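
As a toy illustration of the pruning step, continuing the sketch above: the per-condition scores and the additive scoring rule here are assumptions for illustration only, not the project's actual elicitation design.

```python
# Hypothetical per-condition likelihood scores (e.g., survey medians on a
# 1-5 scale); all values here are invented for illustration.
LIKELIHOOD = {
    "fast": 2, "slow": 4, "moderate (controlled)": 3, "moderate (uncontrolled)": 3,
    "deep learning": 4, "hybrid": 3, "new paradigm": 2,
    "embodiment": 2, "deep learning + something else": 4,
}

def score(scenario):
    # Naive additive aggregation, which assumes conditions contribute
    # independently; joint judgments would be needed to do better.
    return sum(LIKELIHOOD[c] for c in scenario)

# Keep only the higher-scoring combinations before clustering -- this is
# the step that shrinks tens of millions of combos to a reviewable set.
THRESHOLD = 6
plausible = sorted((s for s in scenarios if score(s) >= THRESHOLD),
                   key=score, reverse=True)
print(f"{len(plausible)} plausible combinations after pruning")
```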

Kakili · 2y · 10

Thanks for this. I'm a little late getting to it.

I'm finding the state-capture and loss-of-decision-autonomy elements especially convincing, and likely already underway. I'm actually focusing my thesis on this particular aspect of the scenarios. I'm using a scenario mapping technique to outline the full spectrum of risks (via a general morphological model), but will focus the details on the creeping-normalization, slow-moving-train-wreck class of potential outcomes.

If any of you get a free minute, please help with data collection at https://www.surveymonkey.com/r/QRST7M2 -- I'd be very grateful, and I'll publish a condensed version here ASAP.

Kakili · 2y · 30

Excellent post. Very useful. 

It's easy to lose sight of the common threads in these arguments, and of the fact that the commonalities largely outweigh the disagreements. Your comparison with other technologies (guns, nukes) was especially useful; I hadn't seen the low-effort vs. fundamental-technology distinction framed explicitly before. One thought I had is that this could play out somewhere in the middle: continuous progress with increasingly powerful and disruptive AI up to a point, but with the arrival of self-awareness/self-modification--since a self-adaptive technology could itself be considered a fundamental technology--something of a phase transition could force a minimal discontinuity in the rate of change. So perhaps continuous, with occasional peaks.

Although in practice this would likely still look like continuity and not make much of a difference.