“Capital, AGI, and Human Ambition” by L. Rudolf L. & “What’s the Short Timeline Plan?” by Marius Hobbhahn
Date: Saturday, January 4
Time: 2:00 PM
Location: 1970 Port Laurent Place, Newport Beach, CA 92660
Host: Michael Michalchik
Contact: michaelmichalchik@gmail.com | (949) 375-2045
We’re excited to continue our exploration of how advanced technology, AI governance, and human ambition intersect. This session features two compelling readings that consider the real-world implications of transformative AI, one on macro-level power structures and one on immediate, practical safety measures.
Conversation Starter 1
Topic: “Capital, AGI, and Human Ambition” by L. Rudolf L.
Summary
This article explores a future in which AI drastically reduces the value of human labor, thereby elevating the importance of “capital.” The author posits that as advanced AI becomes a near-complete substitute for workers, current power structures—governments and corporations—face less incentive to maintain public welfare beyond superficial measures. Themes include entrenchment of existing elites, universal basic income’s potential (and limitations), the risk of a stagnant society locked into inherited advantage, and the pressing need to preserve genuine human ambition. Ultimately, the author warns about an “existential stasis” if society fails to ensure that humans retain meaningful agency, and he calls for near-term action to safeguard our collective future.
Discussion Questions
Conversation Starter 2
Topic: “What’s the Short Timeline Plan?” by Marius Hobbhahn
Summary
This piece presents a scenario in which AGI capable of surpassing top-level researchers might arrive by 2027. The author offers a “bare minimum” safety plan focusing on two pillars: (1) secure model weights so rogue actors or the models themselves cannot exploit them, and (2) ensure the first powerful AI used for research is not scheming or deceptive. The proposal recommends a layered approach—monitoring chain-of-thought (CoT) if possible, or using robust “control” techniques otherwise—alongside advanced evaluations, a security-first organizational culture, and transparent planning. Although the plan is partial and conservative, it underscores the urgency of developing real, actionable strategies for safe AI deployment under rapid timelines.
Discussion Questions
After our main discussion, we’ll do our usual hour-long walk around the area. Feel free to grab takeout at Gelson’s or Pavilions nearby if you like.
We’ll also have an open floor for anyone who wants to share something unexpected or perspective-shifting—an article, a personal anecdote, or a fun fact.
As always, we welcome ideas for future topics, activities, or guest discussions. Don’t hesitate to reach out if you’d like to host or propose a new theme.
We look forward to seeing you all on January 4 for another engaging ACXLW meetup!