Some choice picks:

The real danger in Western AI policy isn't that AI is doing bad stuff, it's that governments are so unfathomably behind the frontier that they have no notion of _how_ to regulate, and it's unclear if they _can_

Many AI policy teams in industry are constructed as basically the second line of brand defense after the public relations team. A huge % of policy work is based around reacting to perceived optics problems, rather than real problems. [...]

Lots of AI policy teams are disempowered because they have no direct technical execution ability - they need to internally horse-trade to get anything done, so they aren't able to do much original research, and mostly rebrand existing projects. [...]

Many of the immediate problems of AI (e.g., bias) are so widely talked about because they're at least somewhat tractable (you can build measures, you can assess, you can audit). Many of the long-term problems aren't discussed because no one has a clue what to do about them.

The notion of building 'general' and 'intelligent' things is broadly frowned on in most AI policy meetings. Many people have a prior that it's impossible for any machine learning-based system to be actually smart. These people also don't update in response to progress. [...]

The default outcome of current AI policy trends in the West is that we all get to live in a libertarian _Snow Crash_ wonderland where a small number of companies rewire the world. Everyone can see this train coming and can't work out how to stop it.

Like 95% of the immediate problems of AI policy are just "who has power under capitalism", and you literally can't do anything about it. AI costs money. Companies have money. Therefore companies build AI. Most talk about democratization is PR-friendly bullshit that ignores this. [...]

Sometimes, bigtech companies seem to go completely batshit about some AI policy issue, and 90% of the time it's because some internal group has figured out how to run a successful internal political campaign, and the resulting policy moves are about hiring and retention. [...]

People wildly underestimate how much influence individuals can have in policy. I've had a decent amount of impact by just turning up and working on the same core issues (measurement and monitoring) for multiple years. This is fun, but also scares the shit out of me. [...]

Discussions about AGI tend to be pointless as no one has a precise definition of AGI, and most people have radically different definitions. In many ways, AGI feels more like a shibboleth used to understand if someone is in- or out-group wrt some issues. [...]

It's very hard to bring the various members of the AI world together around one table, because some people who work on longterm/AGI-style policy tend to ignore, minimize, or just not consider the immediate problems of AI deployment/harms. Very alienating. [...]

IP and antitrust laws actively disincentivize companies from coordinating on socially-useful joint projects. The system we're in has counter-incentives for cooperation. [...]

AI policy can make you feel completely insane because you will find yourself repeating the same basic points (academia is losing to industry, government capacity is atrophying) and everyone will agree with you and nothing will happen for years. [...]

One of the most effective ways to advocate for stuff in policy is to quantify it. The reason 30% of my life is spent turning data points from arXiv into graphs is that this is the best way to alter policy - create facts, then push them into the discourse. [...]
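The "turn arXiv data points into graphs" workflow can be sketched in a few lines. This is a minimal illustration, not the author's actual pipeline: the paper records here are hypothetical placeholders standing in for metadata you would pull from arXiv's public API or bulk metadata dump, and the category filter (`cs.LG`) is just one plausible choice.

```python
from collections import Counter

# Hypothetical arXiv-style metadata records: (paper id, primary category, year).
# In a real pipeline these would come from arXiv's API or metadata dump.
papers = [
    ("2101.00001", "cs.LG", 2021),
    ("2101.00002", "cs.CL", 2021),
    ("2201.00003", "cs.LG", 2022),
    ("2201.00004", "cs.LG", 2022),
    ("2301.00005", "cs.CL", 2023),
]

# Count machine-learning papers per year -- the kind of simple trend line
# that becomes a "fact" you can then push into the policy discourse.
ml_per_year = Counter(year for _, cat, year in papers if cat == "cs.LG")

for year in sorted(ml_per_year):
    print(year, ml_per_year[year])
```

From a per-year `Counter` like this, a chart is one plotting call away; the hard part of the advocacy is choosing a measure that survives scrutiny, not the code.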

Most policy forums involve people giving canned statements of their positions, and everyone thanks each other for giving their positions, then you agree it was good to have a diverse set of perspectives, then the event ends. Huge waste of everyone's time.

To get stuff done in policy you have to be wildly specific. CERN for AI? Cute idea. Now tell me about precise funding mechanisms, agency ownership, and a plan for funding over the long term. If you don't do the details, you don't get stuff done.

Policy is a gigantic random number generator - some random event might trigger some politician to have a deep opinion about an aspect of AI, after which they don't update further. This can brick long-term projects randomly (very relaxing). [...]

AI is so strategic to so many companies that it has altered the dynamics of semiconductor development. Because chips take years to develop, we should expect drastic improvements in AI efficiency in the future, which has big implications for the diffusion of capabilities.

Attempts to control AI (e.g., content filters) directly invite a counter-response. E.g., Dall-E vs #stablediffusion. It's not clear that the control methods individual companies use help, relative to the bad ecosystem reactions these methods provoke. (worth trying tho)

Most policymakers presume things exist which don't actually exist - like the ability to measure or evaluate a system accurately for fairness. Regulations are being written where no technology today exists that can be used to enforce that regulation. [...]

For years, people built models and then built safety tooling around them. People are now directly injecting safety into models via reinforcement learning from human feedback. Everyone is DIY'ing these values, so the values are set by the subjective tastes of the people within each org. [...]

Malware is bad now but will be extremely bad in the future due to the intersection of RL + code models + ransomware economic incentives. That train is probably 1-2 years away, based on the lag of open-source replication of existing private models, but it's on the tracks.


> It's very hard to bring the various members of the AI world together around one table, because some people who work on longterm/AGI-style policy tend to ignore, minimize, or just not consider the immediate problems of AI deployment/harms.

This is pointing at an ongoing bravery debate: I'm sure the feeling is real; but also, "AGI-style" people see their concerns being ignored & minimized by the "immediate problems" people, and so feel like they need to get even more strident. 

This dynamic is really bad. I'm not sure what the systemic solution is, but as a starting point I would encourage people reading this to vocally support both immediate-problems work and long-term-risks work, rather than engaging in bravery-debate-style reasoning like "I'll only ever talk about long-term risks because they're underrated in The Discourse". Obviously, do this only to the extent that you actually believe it! But most longtermists believe that at least some kinds of immediate-problems work are valuable (at least relative to the realistic alternative, which, remember, is capabilities work!), and should be more willing to say so.

Ajeya's post on aligning narrow models and the Pragmatic AI Safety Sequence come to my mind as particularly promising starting points for building bridges between the two worlds.

Support.

I would add to this that The Alignment Problem by Brian Christian is a fantastic general-audience book that shows how immediate and long-term AI policy really are facing the same problem and will work better if we all work together.

Reading this has been an absolute fever dream. That's not something that happens when it's mostly or totally inaccurate, like various clickbait articles from major news outlets covering AI safety.

One thing it seems to get wrong is the typical libertarian impulse to overestimate the sovereignty of major tech companies. In the business world, they are clearly the big fish, but in the international world, it's pretty clear that their cybersecurity departments are heavily dependent on logistical/counterintelligence support from various military and intelligence agencies. Corporations might be fine at honeypots, but they aren't well known for being good at procuring agents willing to spend years risking their lives by operating behind enemy lines.

There are similar and even stronger counterparts in Chinese tech companies. Both sides of the Pacific have a pretty centralized and consistent obsession with minimizing the risk of being weaker on AI, starting in 2018 at the latest (see page 10).
