
OC ACXLW Sat May 4: The Hipster Effect and AI Self-Alignment

Hello Folks! We are excited to announce the 64th Orange County ACX/LW meetup, happening this Saturday and most Saturdays after that.

Host: Michael Michalchik
Email: michaelmichalchik@gmail.com (for questions or requests)
Location: 1970 Port Laurent Place
Phone: (949) 375-2045
Date: Saturday, May 4, 2024
Time: 2 pm

Conversation Starters:

  1. The Hipster Effect: Why Anti-Conformists Always End Up Looking the Same: A study of how the desire to be different can paradoxically produce conformity among anti-conformists. Using a mathematical model, the author shows that, under certain conditions (notably a delay in perceiving what the mainstream is doing), individual efforts to oppose the majority drive the whole population into synchronized, homogeneous behavior. A toy simulation of this mechanism appears after the discussion questions below.

Text and audio link: https://www.technologyreview.com/2019/02/28/136854/the-hipster-effect-why-anti-conformists-always-end-up-looking-the-same/
Full paper: https://arxiv.org/pdf/1410.8001

Questions for discussion:

a) How might the "hipster effect" described in the paper relate to other examples of emergent synchronization in complex systems, such as financial markets or neuronal networks?

b) The paper discusses several conditions that give rise to the hipster effect, such as a preference for non-conformity and a delay in recognizing trends. What other social or psychological factors might contribute to this phenomenon?

c) Can insights from the study of the hipster effect be applied to understanding political polarization and the dynamics of contrarian movements? What strategies might help maintain diversity of opinions in these contexts?
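For anyone who wants to play with the mechanism before Saturday, here is a minimal toy simulation in Python. It is a simplified sketch inspired by the paper's model (arXiv:1410.8001), not the paper's exact stochastic dynamics; all parameter names and values are illustrative. The key ingredient is the delay: anti-conformists who react to a stale view of the majority end up flipping in unison.

```python
import random

# Toy "hipster effect" sketch, loosely inspired by Touboul (arXiv:1410.8001).
# Not the paper's exact model; N, TAU, STEPS, and NOISE are illustrative.
random.seed(0)  # optional, for a reproducible run

N = 1000      # number of agents
TAU = 5       # perception delay: agents see the majority as it was TAU steps ago
STEPS = 60    # simulation length
NOISE = 0.05  # chance an agent ignores the trend and picks a style at random

# Each agent holds a binary style, +1 or -1 (say, bearded vs clean-shaven).
states = [random.choice([-1, 1]) for _ in range(N)]
history = [sum(states) / N]  # record of the population's mean style over time

for _ in range(STEPS):
    # Agents react to the delayed mean style, not the current one.
    delayed_mean = history[max(0, len(history) - 1 - TAU)]
    for i in range(N):
        if random.random() < NOISE:
            states[i] = random.choice([-1, 1])
        else:
            # Pure anti-conformists: adopt the opposite of the (stale) majority.
            states[i] = -1 if delayed_mean > 0 else 1
    history.append(sum(states) / N)

for t, m in enumerate(history):
    print(f"step {t:2d}: mean style = {m:+.2f}")
```

Running this, the printed mean style should swing between roughly +1 and -1 with a period set by the delay: everyone defies the same stale majority at the same moment, so the anti-conformists all end up looking alike.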

  2. Self-Regulating Artificial General Intelligence: This paper examines the "paperclip apocalypse" concern that a superintelligent AI, even one with a seemingly innocuous goal, could pose an existential threat by monopolizing resources. The author argues that, under certain assumptions about recursive self-improvement, an AI may refrain from developing more powerful "offspring" AIs, because it would face the same control problem with its offspring that humans face with it.

Text link: https://drive.google.com/file/d/1k0wulhBo0syW9n-qNqcFE_gYgh_L54TH/view?usp=sharing

Questions for discussion:

a) The paper assumes that an AI can only self-improve by employing specialist "offspring" AIs with targeted goals. How plausible is this assumption, and what implications would a more integrated model of AI self-improvement have for the argument?

b) The author suggests that the key to controlling the potential negative impacts of AI is to limit its ability to appropriate resources. What legal, economic, or technical mechanisms might be used to enforce such limitations?

c) If advanced AIs are indeed "self-regulating" in the manner described, what are the potential benefits and risks of relying on this property as a safety measure? How might we verify and validate an AI's self-regulation capabilities?

Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are readily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.

Share a Surprise: Tell the group about something unexpected that changed your perspective on the universe.

Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.


 
