jamiefisher — Comments (sorted by newest)
AI #131 Part 2: Various Misaligned Things
jamiefisher · 4d · 30

Ultimately, Congress needs to act, right? (Because voluntary commitments from companies just won't cut it.) But how do we get to that point?

I've wondered what Daniel and the AI Futures Project's actual strategy is.
For example, are they focusing most on convincing:
a) politicians,
b) media outlets (NYT, CNN, Fox, MSNBC, tech websites, etc.),
c) AI and AI-adjacent companies, executives, and managers, or
d) scientists and scientific institutions?

If I may over-generalize:
- the higher up the list, the more intimate with the halls of power;
- the lower on the list, the more intimate with the development of AI.

But I feel it's very hard for "d) scientists and scientific institutions" to get their concerns all the way to "a) politicians" without passing through (or competing with) "b" and "c".

Daniel's comment reveals they're at least trying to convince "a) politicians" directly. I'm not saying it's bad to talk to politicians, but politicians are already hearing too many contradictory signals on AI risk (from "b" and "c", and maybe even some "d"). On my phone, I constantly get articles saying "AI is over-hyped," "AI is a bubble," "AI is just another lightbulb," etc.

That's a lot to compete with!  Even without the influence of lobbying money, the best-intentioned politician might be genuinely confused right now!

If I were able to speak to the various AI-risk organizations directly, I would ask: how much effort are you putting into convincing the people who convince the politicians? Ideally we'd get the AI executives themselves on our side (and then the lobbying against us would start to disappear), but failing that, the media needs at least to be talking about it, and scientific institutions need to be unequivocal.

But if they're just doing "one Congressional staffer meeting at a time," without strongly covering the other bases, then, in my non-expert opinion, we're in trouble.

AI #131 Part 2: Various Misaligned Things
jamiefisher · 11d · 52

"That’s a lot of money. For context, I remember talking to a congressional staffer a few months ago who basically said that a16z was spending on the order of $100M on lobbying and that this amount was enough to make basically every politician think 'hmm, I can raise a lot more if I just do what a16z wants' and that many did end up doing just that. I was, and am, disheartened to hear how easily US government policy can be purchased."

I am disheartened that Daniel, or anyone else, is surprised by this. Ever since "AI 2027" was written, I have wondered how the AGI-risk community is going to counter the inevitable flood of lobbying money in support of deregulation. There are virtually no guardrails left on political spending in American politics; that has been the bane of every idealist for years. And who has more money than the top AI companies?

Thus I'm writing to say:

I respect and admire the AGI-risk community for its expertise, rigor, and passion, but I often worry that it's a closed tent that isn't benefiting enough from people with non-STEM skillsets.

I know the people in this community are extremely qualified in AI and alignment itself. But that doesn't mean they are experienced in politics, law, communication, or messaging (though I acknowledge there are exceptions).

But for the wider pool of people who are experienced in those fields (but don't understand neural nets or von Neumann architecture), where are their discussion groups? Where do you bring them in? Is it just in person?

A case for courage, when speaking of AI danger
jamiefisher · 2mo · 80

"We cold-emailed a bunch of famous people..."

Matt Stone, co-creator of South Park? Have you tried him?

He's demonstrated interest in AI and software.  He's brought up the topic in the show.

South Park has a large reach, and its creators have demonstrated a willingness to change their views as they acquire new information. (Long ago, South Park satirized climate change and Al Gore, but years later they made a whole "apology episode" that presented climate change very seriously and apologized to Al Gore.)

Seriously, give him a try!
