This is a speculative map of a hot discussion topic. I'm posting it in question form in the hope we can rapidly map the space in answers.

Looking at various claims on X and at the AI summit, it seems possible to identify some key counter-regulation narratives and frames that various actors are pushing.

Because a lot of the public policy debate won't be about "what are some sensible things to do" within a particular frame, but rather about fights for frame control, or "what frame to think in", it seems beneficial to have at least some sketch of a map of the discourse.  

To start, here is one example of a "local map":

"It's about open source vs. regulatory capture"

It seems the coalition against AI safety, most visibly represented by Yann LeCun and Meta, has identified "it's about open source vs. big tech" as a favorable frame in which to argue and build a coalition of open-source advocates who believe in the open-source ideology, academics who want access to large models, and small AI labs and developers who believe they will remain competitive long-term by fine-tuning smaller models and capturing various niche markets. LeCun and others attempt to portray themselves as the force of science and open inquiry, while the scaling labs proposing regulation are cast as evil big tech attempting regulatory capture. Because this seems to be the preferred anti-regulation frame, I will spend the most time on it.

Apart from the groups mentioned, this narrative seems to be memetically fit among a "proudly cynical" crowd which assumes that everything everyone does or says is primarily self-interested and profit-driven.

Overall, the narrative has clear problems with explaining away inconvenient facts, including:

  • Thousands of academics calling for regulation are awkward counter-evidence for the claim that x-risk is just a ploy by the top labs.
    • The narrative's strategy seems to be to explain this away by claiming that some of the senior academics are simply deluded, while others are also pursuing a self-interested strategy in expectation of funding.
  • Many of the people explaining AI risk now were publicly concerned about it before founding labs, at times when this was academically extremely unprofitable, sometimes sacrificing standard academic careers.
    • The narrative move is to just ignore this. 

Also, many things are just assumed, for example, that the resulting regulation would be in the interest of the frontrunners.

What could be memetically viable counter-arguments within the frame?

Personally, I tend to point out that motivation to avoid AI risk is completely compatible with self-interest. Leaders of AI labs also have skin in the game.

Also, recently I have been asking people to apply the explanatory frame of 'cui bono' to the other side as well, namely Meta.

One possible hypothesis here is Meta just loves open source and wants everyone to flourish.

A more likely hypothesis is Meta wants to own the open-source ecosystem.

A more complex hypothesis is Meta doesn't actually love open source that much but has a sensible, self-interested strategy, aimed at a dystopian outcome.

To understand the second option, it's a prerequisite to comprehend the "commoditize the complement" strategy. This is a business approach where a company aims to drive down the cost or increase the availability of goods or services complementary to its own offerings. The outcome is an increase in the value of the company's services.

Some famous successful examples of this strategy include Microsoft and PC hardware: PC hardware became a commodity, while Microsoft came close to monopolizing the OS and extracted huge profits. Or Apple's App Store: the complement to the phone is the apps, which have become a cheap commodity under immense competitive pressure, while Apple has become the most valuable company in the world. Gwern has a great post on the topic.

The future Meta aims for is:

  • Meta becomes the platform of virtual reality (Metaverse).
  • People basically move there.
  • Most of the addictive VR content is generated by AIs, which is the complement.

For this strategy to succeed, it's quite important to have a thriving ecosystem of VR content producers, competing on whose content will be the most addictive or hack human brains the fastest. Why an entire ecosystem? Because it fosters more creativity in brain hacking. Moreover, if the content were produced by Meta itself, it would be easier to regulate.

A different class of arguments targets ideological open-source absolutism: unless you believe that absolutely every piece of information should be freely distributable, there are some conditions under which certain information should not be made public.

Other clearly important narratives to map seem to include at least the following:


"It's about West vs. China"

Hopefully losing traction, with China participating in the recent summit and top scientists from China signing letters calling for regulation.

"It's about near term risks vs. hypothetical sci-fi"

Hopefully losing traction now that anyone can interact with GPT-4.
 


3 Answers

trevor

Strong upvoted.

A more complex hypothesis is Meta doesn't actually love open source that much but has a sensible, self-interested strategy, aimed at a dystopian outcome.

I think that this particular dystopian outcome is Moloch, not "aimed" malevolence; being "aimed at a dystopian outcome" is an oversimplification of the two-level game: complex internal conflict within the company, parallel to external conflict with other companies. For example, stronger AI and stronger brain-hacking/auto-analysis let them reduce the risk of users dropping below 2 hours of use per day (giving their platform a moat and securing the value of the company), while simultaneously reducing the risk of users spending 4+ hours per day, which draws watchdog scrutiny. More AI means more degrees of freedom to reap the benefits of addiction with fewer of the unsightly bits.

I've previously described a hypothetical scenario where:

Facebook and the other 4 large tech companies (of which Twitter/X is not yet a member due to vastly weaker data security and dominance by botnets) might be testing out their own pro-democracy anti-influence technologies and paradigms, akin to Twitter/X’s open-sourcing its algorithm, but behind closed doors due to the harsher infosec requirements that the big 5 tech companies face. Perhaps there are ideological splits among executives, e.g. with some executives trying to find a solution to the influence problem because they’re worried about their children and grandchildren ending up as floor rags in a world ruined by mind control technology, and other executives nihilistically marching towards increasingly effective influence technologies so that they and their children personally have better odds of ending up on top instead of someone else.

Likewise, even if companies in both the US and China seem to currently eschew brain-hacking paradigms, they might reverse course at any time, especially if brain-hacking truly is the superior move for a company or government to make in the context of the current multimodal ML paradigm and the current Cold-War-style state of US-China affairs.

Your and Gwern's "commoditize the complement" point is now a very helpful gear in my model, both for targeted influence tech and for modelling the US and Chinese tech industries more generally; thank you. Also, I had either forgotten or failed to realize that a thriving community of human creators allows more intense influence strategies to be discovered by multi-armed bandit algorithms, rather than the process being bottlenecked only by algorithms or by user/sensor data.

Seth Herd

One possible framing is hubristic overconfidence vs. humble caution.

We aren't saying AGI will overthrow humanity soon if we're not careful; we're saying it could. Everyone saying that's ridiculous is essentially saying "hold my beer!" while attempting a stunt nobody has ever pulled off before. They could be right that it will be easy enough, they're smart enough, and they will know when the danger approaches. But they're gambling the future of all humanity on that confidence.

Experts have widely varying opinions on the dangers of AGI, so the humble position is that we don't know what's possible and should therefore act in accordance with a very broad distribution over timelines, alignment difficulty, and achievable levels of coordination.

That framing won't make the most concerned among us happy. It will result in people who aren't long-termists wanting to approach AGI fast enough to save themselves or their children from a painful death from natural causes. But it might be an acceptable compromise, while we gather and analyze more information.

M. Y. Zuo

How about “It’s not proven yet that vastly super-intelligent machines (i.e. >10x peak human intelligence) are even possible.” as a possible frame?

I can’t see a counterargument to it yet.

Even if we only ever have smartest-human-level models, you can spawn 100,000 copies running at 10x speed, organize them so that one model checks whether the output of another displays cognitive biases, and get maybe not "design nanotech in 10 days" level capability, but still something smarter than any organized group of humans.

Dagon
Hmm.  I've not seen any research about that possibility, which is obvious enough that I'd expect to see it if it were actually promising.  And naively, it's not clear that you'd get more powerful results from using 1M times the compute this way, compared to more direct scaling. I'd put that in the exact same bucket as "not known if it's even possible".
quetzal_rainbow
Such a possibility is explored at least here: https://arxiv.org/abs/2305.17066, but that's not the point. The point is: even in a hypothetical world where scaling laws and algorithmic progress hit a wall at smartest-human level, you can do this and get an arbitrary level of intelligence. In the real world, of course, there are better ways.
M. Y. Zuo
How do you know that's possible?
quetzal_rainbow
There is definitely enough matter on Earth to sustain an additional 100k human brains with a signal speed of 1000 m/s instead of 100 m/s. I actually can't imagine how our understanding of physics would have to be wrong for that not to be possible.
Dagon
I think you're using a different sense of the word "possible".  In a simplified physics model, where mass and energy are easily transformed as needed, you can just wave your hands and say "there's plenty of mass to use for computronium".  That's not the same as saying "there is an achievable causal path from what we experience now to the world described".
M. Y. Zuo
Did you misunderstand my question?  How does the total mass of the Earth or 'signal speed 1000m/s instead of 100m/s' demonstrate how you know?
quetzal_rainbow
The only reason it could be impossible is if the amount of compute needed to run one smart-as-the-smartest-human model is so huge that we would need to literally disassemble the Earth to run 100,000 copies. That's quite an unrealistic scenario, because a similar amount of compute for actual humans fits inside an actual small cranium.
M. Y. Zuo
Why is the amount of matter in a human brain relevant? 
7 comments

An important sub-topic within "open source vs regulatory capture" is "there does not exist an authority that can legibly and correctly regulate AI".  

Note that the outlook from MIRI folks appears to somewhat agree with this, that there does not exist an authority that can legibly and correctly regulate AI, except by stopping it entirely.

One possible hypothesis here is Meta just loves open source and wants everyone to flourish. ... A more complex hypothesis is Meta doesn't actually love open source that much but has a sensible, self-interested strategy

It's worth noting here that Meta is very careful never to describe Llama as open source, because they know perfectly well that it isn't. For example, here's a video of Yann LeCun testifying under oath: "so first of all Llama system was not made open source ... we released it in a way that did not authorize commercial use, we kind of vetted the people who could download the model it was reserved to researchers and academics"

Worth noting about your note:

The distinction is irrelevant for misuse by bad actors, such as terrorist groups. The model weights were on the dark net very quickly after the supposedly controlled release.

Agreed: this kind of pseudo-openness has all of the downsides of releasing a dual-use capability, and we miss so many of the benefits from commercial use and innovation.

I think the argument for the regulatory capture framing is basically that you have these things going on:
(1) big tech companies are spending extraordinary amounts of money, time, effort, etc. on trying to capture the generative AI market.

(2) big tech companies are arguing for regulations which, on their face, would deter smaller players from entering, because they would involve large fixed costs that are trivial as a proportion of these companies' available resources but prohibitive for smaller players.

Point (1) does not seem like something that an entity which is concerned about the fate of humanity would be doing. 

I think the reality is obviously more complex, in that these big tech companies contain huge numbers of people who will have differing motivations at the conscious and subconscious level, so it's pretty difficult to answer what the motivation of the company as a whole "really is". But if there is anything it makes sense to say a corporation is "motivated to do", earning profit is the most classic thing that corporations are "motivated to do".

Yes, I think the distinction between a company's goals/intents/behaviors in aggregate versus the intents and behaviors of individual employees is important. I know and trust individual people working at most of the major labs. That doesn't mean that I trust the lab as a whole will behave in complete harmony with the intent of those individual employees that I approve of.