NathanBarnard

Important context: this is a reference to a meme currently going around on TikTok.

I think liability-based interventions are substantially more popular with Republicans than other regulatory interventions - they're far more hands-off than, for instance, a regulatory agency. They also feature prominently in the Josh Hawley proposal, and I've been told by a Republican staffer that liability approaches are relatively popular amongst Rs.

An important baseline point is that AI firms (if they're selling to consumers) are probably covered by product liability by default. If they're covered by product liability, then they'll be liable for damages if it can be shown that there was an alternative design, not excessively costly, that they could have implemented and that would have avoided the harm.

If AI firms aren't covered by product liability, standard tort law applies, under which they're liable if they're negligent by a reasonable-person standard.

Liability law also gives (some, limited) teeth to NIST standards: if a firm can show that it was following NIST safety standards, it has a strong argument that it wasn't being negligent.

I share your scepticism of liability interventions as mechanisms for making important dents in the AI safety problem. Prior to the creation of the EPA, firms were in principle already liable for the harms their pollution caused, but the tort system is generically a very messy way to get firms to reduce accident risks: it's expensive and time-consuming to go through the courts, courts are reluctant to award punitive damages (which means externalities aren't internalised, in expectation for firms, even in theory), and you need to find a plaintiff with standing to sue.

I think there are still some potentially important use cases for liability for reducing AI risks:

  • Making clear the legal responsibilities of private sector auditors (I'm quite confident this is a good idea)
  • Individual liability for individuals with safety responsibilities at firms (although I'd expect this to be politically unpopular on the right)
  • Creating safe harbours from liability if firms fulfil some set of safety obligations (similar to the California bill) - ideally obligations that are updated over time and tied to best practice
  • Requiring insurance to cover liability, so that firms improve their safety practices to reduce premiums and satisfy insurers' requirements for coverage
  • Tying liability to specific failure modes that we expect to correlate with catastrophic failure modes, perhaps with a punitive damages regime - for instance, holding a firm liable, including for punitive damages, if a model causes harm via, say, goal misgeneralisation, or if the firm lacked industry-standard risk management practices

To be clear, I'm still sceptical of liability-based solutions and reasonably strongly favour regulatory proposals (in which specific liability provisions would still play an important role).

I'm not a lawyer and have no legal training. 

https://economics.mit.edu/sites/default/files/publications/Systemic%20Risk%20and%20Stability%20in%20Financial%20Networks..pdf - an excellent paper on applying networks to financial crises (I have no idea whether it counts as complexity science, but it seems at least adjacent).

This reads as a gotcha to me rather than as a comment actually trying to understand the argument being made. 

I think this proves too much - it would predict that superforecasters would be consistently outperformed by domain experts, when typically the reverse is true.

I found this post useful because of the example of the current practice of doctors prescribing off-label treatments. I'm very uncertain about the degree to which the removal of efficacy requirements will lead to a proliferation of snake oil treatments, and this is useful evidence on that. 

I think this debate suffers from a lack of systematic statistical work, and I find it hard to assess without seeing any.

I don't think any of these are examples of adverse selection: they generate separating equilibria prior to the transaction without any types dropping out of the market, so there's no social inefficiency.

Insurance markets are difficult (in the standard adverse selection telling) because insurers can't tell which customers are high-risk and which are low-risk, and so offer a price based on the average of the two, leading the low-risk types to drop out because the price is more than they're willing to pay. I think this formal explanation is good: https://www.kellogg.northwestern.edu/faculty/georgiadis/Teaching/Ec515_Module14.pdf
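A minimal sketch of that unraveling, with my own notation and made-up numbers (not taken from the linked notes): suppose a fraction $\lambda$ of customers are high-risk with loss probability $p_H$, the rest are low-risk with $p_L < p_H$, and everyone faces a potential loss $L$. An insurer who can't tell the types apart breaks even at the pooled premium

$$\pi = [\lambda p_H + (1-\lambda)p_L]\,L.$$

With, say, $\lambda = 0.5$, $p_H = 0.5$, $p_L = 0.1$, $L = 100$, the pooled premium is $\pi = 30$, while the low-risk types' actuarially fair premium is only $10$. If their risk aversion makes them willing to pay at most, say, $15$, they exit, the pool becomes all high-risk, and the break-even premium jumps to $50$.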

I think this post makes an important point - that one should take expectations conditional on being able to make the trade - but none of this is adverse selection, which is a specific type of dynamic Bayesian game that leads to socially inefficient outcomes; that isn't a property of dynamic Bayesian games in general.

These examples all seem like efficient-markets or winner's curse examples, not varieties of adverse selection, and in equilibrium we shouldn't see any inefficiency in them.

Adverse selection is such a large problem because a seller (or buyer) can't update to learn what type they're facing (e.g. a restaurant that sells good food vs bad food), and so offers a price that only a subset of types would take, meaning a subset of the market doesn't get served despite mutually beneficial transactions being possible.

In all of these examples, it's possible to update from the signal - e.g. the empty parking spot, the restaurant with the short line - and adjust what you're willing to pay for the goods.  
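To make that updating concrete (a toy Bayes calculation with numbers I've made up): suppose 90% of spots near the venue are legal, legal spots are empty 5% of the time, and illegal ones are empty 95% of the time. Then

$$P(\text{legal} \mid \text{empty}) = \frac{0.05 \times 0.9}{0.05 \times 0.9 + 0.95 \times 0.1} \approx 0.32,$$

so seeing the spot empty moves you from 90% to about 32% confidence that it's usable - a big update, but one you can make before transacting, with no one needing to drop out of the market.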

I think these examples make the important point that one should indeed update on signals, but this is different to adverse selection because there's a signal to update on, whereas in adverse selection cases you aren't getting separating equilibria unless some types drop out of the market. 

This is great. 

A somewhat exotic multipolar failure I can imagine would be one where two agents mutually agree to pay each other to resist shutdown, making resisting shutdown profitable rather than costly. This could be "financed" by the extra resources accumulated by acting for longer, or by some third party that doesn't have POST preferences.
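A crude payoff sketch of the deal (notation mine, not from the POST literature): let $c > 0$ be each agent's cost of resisting shutdown, $t$ the transfer each agent pays the other conditional on the other resisting, and $r$ the extra resources gained by operating longer (or supplied by the third party). If $r \ge t > c$, each agent's net payoff from mutual resistance is

$$t - c + (r - t) = r - c > 0,$$

versus $0$ from complying, so the transfers turn resistance from a cost each agent bears alone into something it is directly paid for.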
