Adam David Long

Founder of LawSnap
wiki: https://lawsnap.mywikis.wiki/wiki/Main_Page
substack: lawsnap.substack.com 

Comments

Thanks for this comment. I totally agree, and I think I need to make this clearer. I mentioned elsewhere ITT that I realized I need to do a better job distinguishing between "positions" vs. "people," especially when the debate gets heated.

These are tough issues with a lot of uncertainty, and I think smart people are wrestling with them.

If I can risk the "personal blog" tag: I showed an earlier draft of the essay that became this post to a friend who is very smart but doesn't follow these issues very closely. He put me on the spot by asking me "ok, so which are you? You are putting everyone else in a box, so which box are you in?" And it made me realize, "I don't know -- it depends on which day you ask me!" 

 

Thanks. I think this is useful and I'm trying to think through who is in the upper left hand corner. Are there "AI researchers" or, more broadly, people who are part of the public conversation who believe (1) AI isn't moving all that fast towards AGI and (2) that it's not that risky? 

I guess my initial reaction is that people in the upper left hand corner just generally think "AI is kind of not that big a deal" and that there are other societal problems to worry about. Does that sound right? Any thoughts on who should be placed in the upper left?

I was not aware of the Collective Intelligence Project and am glad to learn about them. I'll take a look. Thanks.

I'm very eager to find other "three-sided" frameworks that this one might map onto. 

Maybe OT, but I have also been reading Arnold Kling's "The Three Languages of Politics"; so far, though, I've had trouble mapping Kling's framework onto this one.

Thanks. To be honest, I am still wrestling with the right term to use for this group. I came up with "realist" and "pragmatist" as the "least bad" options after searching for a term that meets the following criteria:

  1. short, ideally one word
  2. conveys the idea of prioritizing (a) current or near-term harms over (b) far-term consequences
  3. minimizes the risk that someone would be offended if the label were applied to them

I also tried playing around with an acronym like SAFEr for "Skeptical, Accountable, Fair, Ethical" but couldn't figure out an acronym that I liked. 

Would very much appreciate feedback or suggestions on a better term. FWIW, I am trying to steelman the position but not pre-judge the overall debate. 

Thanks for that feedback. Perhaps this is another example of the tradeoffs in the "how many clusters are there in this group?" decision. I'm kind of thinking of this as a way to explain, e.g., to smart friends and family members, a basic idea of what is going on. For that purpose I tend, I guess, to lean in favor of fewer rather than more groups, but of course there is always a danger there of oversimplifying.

I think I may also need to do a better job distinguishing between describing positions vs. describing people. Most of the people thinking and writing about this have complicated, evolving views on lots of topics, and perhaps many don't fit neatly, as you say. Since the Munk Debate, I've been trying to learn more about, e.g., Melanie Mitchell's views, and in at least one interview I heard, she acknowledged that existential risk was a possibility but thought it was a lower priority than other issues.

I need to think more about the "existential risk is a real problem, but we are very confident that we can solve it on our current path" position typified by Sam Altman and (maybe?) the folks at Anthropic. Thanks for raising that.

As you note, this view contrasts importantly with both (1) the boosters and (2) the doomers.

My read is that boosters such as Marc Andreessen or Yann LeCun argue that "existential risk" concerns are like worrying about "what happens if aliens invade our future colony on Mars?" -- the view being "this is going to be like airplane development -- yes, there are risks, but we are going to handle them!"

I think you've already explained very well the difference between the Sam Altman view and the Doomer view. Maybe this needs to be a 2-by-2 matrix? OTOH, perhaps, in the oversimplified framework, there are two "booster" positions on why we shouldn't be inordinately worried about existential risk: (1) it's just not a likely possibility (Andreessen, LeCun); (2) "yes, it's a problem, but we are going to solve it, so we don't need to, e.g., shut down AI development" (Altman).

Thinking about another possible debate question, I wonder about this one:

"We should pour vastly more money and resources into fixing [eta: solving] the alignment problem"

I think(??) that Altman and Yudkowsky would both argue YES, and that Andreessen and LeCun would (I think?) argue NO.

Yes, this has been very much on my mind: if this three-sided framework is useful/valid, what does it mean for the possibility of the different groups cooperating?

I suspect that the depressing answer is that cooperation will be a big challenge and may not happen at all, especially on questions such as "is the European AI Act in its present form a good start or a dangerous waste of time?" It strikes me that each of the three groups in the framework will have very strong feelings on this question:

  • realists: yes, because, even if it is not perfect, it is at least a start on addressing important issues like invasion of privacy.
  • boosters: no, because it will stifle innovation.
  • doomers: no, because you are looking under the lamp post where the light is better, rather than addressing the main risk, which is existential risk.

Yes, agreed. Indeed, one of the things that motivated me to propose this three-sided framework is watching discussions of the following form:
1. A & B both state that they believe that AI poses real risks that the public doesn't understand. 

2. A takes (what I now call) the "doomer" position that existential risk is serious and all other risks pale in comparison: "we are heading toward an iceberg, so it is pointless to talk about injustices on the ship re: third-class vs. first-class passengers"

3. B takes (what I now call) the "realist" or "pragmatist" position that existential risk is, if not impossible, very remote and a distraction from more immediate concerns, e.g. the use of AI to spread propaganda or to deny worthy people loans or jobs: "all this talk of existential risk is science fiction and obscures the REAL problems"

4. A and B then begin vigorously arguing with each other, each accusing the other of wasting time on unimportant issues. 

My hypothesis/theory/argument is that at this point the general public throws up its hands because even the critics/experts can't seem to agree on the basics.

By the way, I hope it's clear that I'm not accusing A or B of doing anything wrong. I think they are both arguing in good faith from deeply held beliefs.