Short version: In a saner world, AI labs would have to purchase some sort of "apocalypse insurance", with premiums dependent on their behavior in ways that make reckless behavior monetarily infeasible. I don't expect the Earth to implement such a policy, but it seems worth saying the correct answer aloud anyway.

 

Background

Is advocating for AI shutdown contrary to libertarianism? Is advocating for AI shutdown like arguing for markets that are free except when I'm personally uncomfortable about the solution?

Consider the old adage "your right to swing your fists ends where my nose begins". Does a libertarian who wishes not to be punched need to add an asterisk to their libertarianism because they sometimes wish to restrict their neighbor's ability to swing their fists?

Not necessarily! There are many theoretical methods available to the staunch libertarian who wants to avoid getting punched in the face that don't require large state governments. For instance: they might believe in private security and arbitration.

This sort of thing can get messy in practice, though. Suppose that your neighbor sets up a factory that's producing quite a lot of lead dust that threatens your child's health. Now are you supposed to infringe upon their right to run a factory? Are you hiring mercenaries to shut down the factory by force, and then more mercenaries to overcome their counter-mercenaries?

A staunch libertarian can come to many different answers to this question. A common one is: "internalize the externalities".[1] Your neighbor shouldn't be able to fill your air with a bunch of lead dust unless they can pay appropriately for the damages.

(And, if the damages are in fact extraordinarily high, and you manage to bill them appropriately, then this will probably serve as a remarkably good incentive for finding some other metal to work with, or some way to contain the spread of the lead dust. Greed is a powerful force, when harnessed.)

Now, there are plenty of questions about how to determine the size of the damages, and how to make sure that people pay the bills for the damages they cause. There are solutions that sound more state-like, and solutions that sound more like private social contracts and private enforcement. And it's worth considering that there are lots of costs that aren't worth billing for, because the infrastructure needed to bill for them isn't worth the bureaucracy and the chilling effect.

But we can hopefully all agree that noticing some big externality and wanting it internalized is not in contradiction with a general libertarian worldview.

 

Liability insurance

Limited liability is a risk subsidy. Liability insurance would align incentives better.

In a saner world, we'd bill people when they cause a huge negative externality (such as an oil spill), and use that money to reverse the damages.

But what if someone causes more damage than they have money? Then society at large gets injured.

To prevent this, we have insurance. Roughly: a hundred people, each of whom has a 1% risk of causing damage 10x greater than their ability to pay, can all agree (in advance) to pool their money toward the unlucky few among them, thereby allowing the broad class to take risks that none could afford individually (to the benefit of all; trade is a positive-sum game, etc.).
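
(As a toy illustration of that pooling arithmetic, here is a minimal sketch; the dollar figures are invented for illustration and aren't from the post.)

```python
# Toy model of pooled liability: 100 members, each with a 1% annual chance of
# causing damage 10x greater than what they could pay individually.
# All dollar figures are illustrative assumptions.
members = 100
risk_per_member = 0.01
ability_to_pay = 100_000
damage_if_unlucky = 10 * ability_to_pay          # $1,000,000

expected_loss_per_member = risk_per_member * damage_if_unlucky   # fair annual premium
pooled_fund = members * expected_loss_per_member                 # what the pool collects per year
expected_claims = members * risk_per_member * damage_if_unlucky  # ~1 claim of $1M per year

print(f"Fair premium per member: ${expected_loss_per_member:,.0f}/year")
print(f"Pooled fund:             ${pooled_fund:,.0f}/year")
print(f"Expected claims:         ${expected_claims:,.0f}/year")
```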

In a sane world, we wouldn't let our neighbors take substantive risks with our lives or property (in ways they aren't equipped to pay for), for the same reason that we don't let them steal. Letting someone take massive risks, where they reap the gains (if successful) and we pay the penalties (if not), is just theft with extra steps, and society should treat it as such. The freedom and fairness of the markets depends on this just as much as it depends on preventing theft.

Which, again, is not to say that a state is required in theory—maybe libertarians would prefer a world in which lots of people sign onto a broad "trade fairly and don't steal" social contract, and this contract is considered table-stakes for trades among civilized people. In which case, my point is that this social contract should probably include clauses saying that people are liable for the damage they cause, and that the same enforcement mechanisms that crack down on thieves also crack down on people imposing risks (on others) that they lack the funds and/or insurance to cover.

Now, preventing people from "imposing risks" unless they "have enough money or insurance to cover the damages" is in some sense fundamentally harder than preventing simple material theft, because theft is relatively easy to detect, and risk analysis is hard. But theoretically, ensuring that everyone has liability insurance is an important part of maintaining a free market, if you don't want to massively subsidize huge risks to your life, liberty, and property.

 

Apocalypse insurance

Hopefully by now the relevance of these points to existential risk is clear. AI companies are taking extreme risks with our lives, liberty, and property (and those of all potential future people), by developing AI while having no idea what they're doing. (Please stop.)

And in a sane world, society would be noticing this—perhaps by way of large highly-liquid real-money prediction markets—and demanding that the AI companies pay out "apocalypse insurance" in accordance with that risk (using whatever social coordination mechanisms they have available).

When I've recently made this claim in person, people regularly objected: but insurance doesn't pay out until the event happens! What's the point of demanding that Alice have liability insurance that pays out in the event Alice destroys the world? Any insurance company should be happy to sell that insurance to Alice for very cheap, because they know that they'll never have to pay out (on account of being dead in the case where Alice kills everyone).

The answer is that apocalypse insurance—unlike liability insurance—must pay out in advance of the destruction of everyone. If somebody wishes to risk killing you (with some probability), there's presumably some amount of money they could pay you now, in exchange for the ability to take that risk.

(And before you object "not me!", observe that civilization happily flies airplanes over your head, which have some risk of crashing and killing you—and a staunch libertarian might say you should bill civilization for that risk, in some very small amount proportional to the risk that you take on, so as to incentivize civilization to build safer airplanes and offset the risk.)

The guiding principle here is that trade is positive-sum. When you think you can make a lot of money by risking my life (e.g., by flying planes over my house), and I don't want my life risked, there's an opportunity for mutually beneficial trade. If the risk is small enough and the amount of money is big enough then you can give me a cut of the money, such that I prefer the money to the absence-of-risk, and you still have a lot of money left over. Everyone's better off.
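
(A minimal sketch of that positive-sum structure; the risk level, profit, and life-valuation numbers below are my own illustrative assumptions, not figures from the post.)

```python
# Toy model: someone profits from an activity that imposes a tiny death risk on me,
# and pays me more than the expected harm. All numbers are illustrative assumptions.
value_of_statistical_life = 10_000_000   # assumed dollar valuation
annual_death_risk_imposed = 1e-7         # assumed risk, e.g. from overflights
profit_from_activity = 50_000            # assumed annual profit to the risk-imposer

expected_harm = annual_death_risk_imposed * value_of_statistical_life   # $1/year
payment_to_me = 10 * expected_harm                                      # a generous cut

my_surplus = payment_to_me - expected_harm        # I prefer the money to the absence of risk
their_surplus = profit_from_activity - payment_to_me
print(f"My surplus:    ${my_surplus:.2f}/year")
print(f"Their surplus: ${their_surplus:,.2f}/year")
```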

This is the relationship that society "should" have with AI developers (and all technologists that risk the lives and livelihoods of others), according to uncompromising libertarian free-market ideals, as far as I can tell.

With the caveat that the risk is not small, and that the AI developers are risking the lives of everyone to a very significant degree, and that's expensive.

In short: apocalypse insurance differs from liability insurance in that it should be paid out to each and every citizen (that developers put at risk) immediately, seen as a trade in exchange for risking their life and livelihood.

In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say "fucking stop (you are taking far too much risk with everyone else's lives; this is a form of theft until and unless you can pay all the people whose lives you're risking, enough to offset the risk)".

 

Caveats

In a sane world, the exact calculations required for apocalypse insurance to work seem fairly subtle to me. To name a few considerations:

  • An AI company should be able to make some of its payments (to the people whose lives it risks, in exchange for the ability to risk those lives) by way of fractions of the value that their technology manages to capture.
  • Except, that's complicated by the fact that anyone doing the job properly shouldn't be leaving their fingerprints on the future. The cosmic endowment is not quite theirs to give (perhaps they should be loaning against their share of it?).
  • And it's also complicated by the question of whether we're comfortable letting AI companies loan against all the value their AI could create, versus letting them loan against the sliver of that value that comes counterfactually from them (given that some other group might come along a little later that's a little safer and offer the same gains).
  • There are big questions about how to assess the risk (and of course the value of the promised-future-stars depends heavily on the risk).
  • There are big questions about whether future people (who won't get to exist if life on earth gets wiped out) are relevant stakeholders here, and how to bill people-who-risk-the-world on their behalf.

And I'm not trying to flesh out a full scheme here. I don't think Earth quite has the sort of logistical capacity to do anything like this.

My point, rather, is something like: These people are risking our lives; there is an externality they have not internalized; attempting to bill them for it is entirely reasonable regardless of your ideology (and in particular, it fits into a libertarian ideology without any asterisks).

 

Why so statist?

And yet, for all this, I advocate for a global coordinated shutdown of AI, with that shutdown enforced by states, until we can figure out what we're doing and/or upgrade humans to the point that they can do the job properly.

This is, however, not to be confused with preferring government intervention as my ideal outcome.

Nor is it to be confused with expecting it to work, given the ambitious actions required to hit the brakes, and given the many ways such actions might go wrong.

Rather, I spent years doing technical research in part because I don't expect government intervention to work here. That research hasn’t panned out, and little progress has been made by the field at large; so I turn to governments as a last resort, because governments are the tools we have.

I'd prefer a world cognizant enough of the risk to be telling AI companies that they need to either pay their apocalypse insurance or shut down, via some non-coercive coordinated mechanism (e.g. related to some basic background trade agreements that cover "no stealing" and "cover your liabilities", on pain not of violence but of being unable to trade with civilized people). The premiums would go like their risk of destroying the world times the size of the cosmic endowment, and they'd be allowed to loan against their success. Maybe the insurance actuaries and I wouldn't see exactly eye-to-eye, but at least in a world where 93% of the people working on the problem say there's a 10+% chance of it destroying a large fraction of the future’s value, this non-coercive policy would do its job.
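
(For concreteness, a rough sketch of what such a premium calculation might look like. Every number here, including the dollar value standing in for the cosmic endowment and the cash/loan split, is an assumption of mine for illustration; the post doesn't specify any of them.)

```python
# Illustrative premium: roughly P(doom) times the value being put at risk,
# distributed to the people bearing the risk, partly in cash and partly as a
# claim ("loan") against the developer's success. All numbers are assumptions.
p_doom = 0.10                 # assumed risk of destroying the world
value_at_stake = 2e15         # assumed stand-in dollar value (~20x gross world product)
population = 8e9              # people whose lives are being risked

total_premium = p_doom * value_at_stake
per_person = total_premium / population

cash_fraction = 0.1           # assumed: small slice paid up front in cash
cash_now = cash_fraction * per_person
claim_on_success = per_person - cash_now

print(f"Total premium:         ${total_premium:.2e}")
print(f"Per-person share:      ${per_person:,.0f}")
print(f"  cash up front:       ${cash_now:,.0f}")
print(f"  claim on the upside: ${claim_on_success:,.0f}")
```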

In real life, I doubt we can pull that off (though I endorse steps in that direction!). Earth doesn't have that kind of coordination machinery. It has states. And so I expect we'll need some sort of inter-state alliance, which is the sort of thing that has actually worked on Earth before (e.g. in the case of nukes), and which hooks into Earth's existing coordination machinery.

But it still seems worth saying the principled solution aloud, even if it's not attainable to us.


 

  1. ^

    A related observation here is that the proper libertarian free-market way to think of your neighbor's punches is not to speak of forcibly stopping him using a private security company, but to think of charging him for the privilege. My neighbors are welcome to punch me, if they're willing to pay my cheerful price for it! Trade can be positive-sum! And if they're not willing to pony up the cash, then punching me is theft, and should be treated with whatever other mechanisms we're imagining that enforce the freedom of the market.

Comments (36)

Very surprised there's no mention here of Hanson's "Foom Liability" proposal: https://www.overcomingbias.com/p/foom-liability

I think AI risk insurance might be incompatible with libertarianism. Consider the "rising tide" scenario, where AIs gradually get better than humans at everything, gradually take all jobs and outbid us for all resources to use in their economy, leaving us with nothing. According to libertarianism this is ok, you just got outcompeted, mate. And if you band together with other losers and try to extract payment from superior workers who are about to outcompete you, well, then you're clearly a villain according to libertarianism. Even if these "superior workers" are machines that will build a future devoid of anything good or human. It makes complete sense to fight against that, but it requires a better theory than libertarianism.

I think there isn't an issue as long as you ensure property rights for the entire universe now. Like if every human is randomly assigned a sliver of the universe (and then can trade accordingly), then I think the rising tide situation can be handled reasonably. We'd need to ensure that AIs as a class can't get away with violating our existing property rights to the universe, but the situation is analogous to other rights.

This is a bit of an insane notion of property rights and randomly giving a chunk to every currently living human is pretty arbitrary, but I think everything works fine if we ensure these rights now.

You think AIs won't be able to offer humans some deals that are appealing in the short term but lead to AIs owning everything in the long term? Humans offer such deals to other humans all the time and libertarianism doesn't object much.

Why is this a problem? People who are interested in the long run can buy these property rights while people who don't care can sell them.

If AIs respect these property rights[1] but systematically care more about the long run future, then so be it. I expect that in practice some people will explicitly care about the future (e.g. me) and also some people will want to preserve option value.


  1. Or we ensure they obey these property rights, e.g. with alignment. ↩︎

Even if you have long term preferences, bold of you to assume that these preferences will stay stable in a world with AIs. I expect an AI, being smarter than a human, can just talk you into signing away the stuff you care about. It'll be like money-naive people vs loan sharks, times 1000.

I expect an AI, being smarter than a human, can just talk you into signing away the stuff you care about. It'll be like money-naive people vs loan sharks, times 1000.

I think this is just a special case of more direct harms/theft? Like, imagine that some humans developed the ability to mind control others; this can probably be handled via the normal system of laws etc. The situation gets more confusing as the AIs are more continuous with more mundane persuasion (that we currently allow in our society). But, I still think you can build a broadly liberal society which handles super-persuasion.

If nothing else, I expect mildly-superhuman sales and advertising will be enough to ensure that the human share of the universe will decrease over time. And I expect the laws will continue being at least mildly influenced by deep pockets, to keep at least some such avenues possible. If you imagine a hard lock on these and other such things, well that seems unrealistic to me.

If you imagine a hard lock on these and other such things, well that seems unrealistic to me.

I'm just trying to claim that this is possible in principle. I'm not particularly trying to argue this is realistic.

I'm just trying to argue something like "If we gave out property rights to the entire universe and backchained from ensuring the reasonable enforcement of these property rights and actually did a good job on enforcement, things would be fine."

This implicitly requires handling violations of property rights (roughly speaking) like:

  • War/coups/revolution/conquest
  • Super-persuasion and more mundane concerns of influence

I don't know how to scalably handle AI revolution without ensuring a property basically as strong as alignment, but that seems orthogonal.

We also want to handle "AI monopolies" and "insufficient AI competition resulting in dead weight loss (or even just AIs eating more of the surplus than is necessary)". But we can, at least in theory, backchain from handling this to what interventions are needed in practice.

I agree that there is a concern due to an AI monopoly on certain goods and services, but I think this should be possible to handle via other means.

Human labor becomes worthless but you can still get returns from investments. For example, if you have land, you should rent the land to the AGI instead of selling it.

People who have been outcompeted won't keep owning a lot of property for long. Something or other will happen to make them lose it. Maybe some individuals will find ways to stay afloat, but as a class, no.

jmh:

Does any of this discussion (both branches from your first comment) change if one starts with the assumption that AIs are actually owned, and can be bought, by humans? Owned directly by some and indirectly by others via equity in AI companies.

Like I said, people who have been outcompeted won't keep owning a lot of property for long. Even if that property is equity in AI companies, something or other will happen to make them lose it. (A very convincing AI-written offer of stock buyback, for example.)

I agree with everyone else pointing out that centrally-planned guaranteed payments regardless of final outcome don't sound like a good price discovery mechanism for insurance. You might be able to hack together a better one using https://www.lesswrong.com/posts/dLzZWNGD23zqNLvt3/the-apocalypse-bet , although I can't figure out an exact mechanism.

Superforecasters say the risk of AI apocalypse before 2100 is 0.38%. If we assume whatever price mechanism we come up with tracks that, and value the world at GWP x 20 (this ignores the value of human life, so it's a vast underestimate), and that AI companies pay it in 77 equal yearly installments from now until 2100, that's about $100 billion/year. But this seems so Pascalian as to be almost cheating. Anybody whose actions have a >1/25 million chance of destroying the world would owe $1 million a year in insurance (maybe this is fair and I just have bad intuitions about how high 1/25 million really is).
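
(For what it's worth, the arithmetic checks out; the sketch below reproduces it, taking gross world product to be roughly $100T, which is my assumption rather than a figure stated in the comment.)

```python
# Reproducing the comment's arithmetic. GWP ~ $100 trillion is an assumed input.
gwp = 100e12
world_value = 20 * gwp        # "GWP x 20", ignoring the value of human life
p_doom = 0.0038               # superforecasters' 0.38% risk before 2100
years = 77                    # equal installments from now until 2100

annual_premium = p_doom * world_value / years
print(f"Annual premium, all AI companies: ${annual_premium / 1e9:.0f}B")   # ~$99B/year

p_tiny = 1 / 25e6             # the 1-in-25-million actor
print(f"Annual premium at 1/25M risk:     ${p_tiny * world_value / years:,.0f}")  # ~$1.04M/year
```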

An AI company should be able to make some of its payments (to the people whose lives it risks, in exchange for the ability to risk those lives) by way of fractions of the value that their technology manages to capture. Except, that's complicated by the fact that anyone doing the job properly shouldn't be leaving their fingerprints on the future. The cosmic endowment is not quite theirs to give (perhaps they should be loaning against their share of it?).

This seems like such a big loophole as to make the plan almost worthless. Suppose OpenAI said "If we create superintelligence, we're going to keep 10% of the universe for ourselves and give humanity the other 90%" (this doesn't seem too unfair to me, and the exact numbers don't matter for the argument). It seems like instead of paying insurance, they can say "Okay, fine, we get 9% and you get 91%" and this would be in some sense a fair trade (one percent of the cosmic endowment is worth much more than $100 billion!) But this also feels like OpenAI moving some numbers around on an extremely hypothetical ledger, not changing anything in real life, and continuing to threaten the world just as much as before.

But if you don't allow a maneuver like this, it seems like you might ban (through impossible-to-afford insurance) some action that has a 0.38% chance of destroying the world and a 99% chance of creating a perfect utopia forever.

There are probably economic mechanisms that solve all these problems, but this insurance proposal seems underspecified.

So8res:

Agreed that the proposal is underspecified; my point here is not "look at this great proposal" but rather "from a theoretical angle, risking others' stuff without the ability to pay to cover those risks is an indirect form of probabilistic theft (that market-supporting coordination mechanisms must address)" plus "in cases where the people all die when the risk is realized, the 'premiums' need to be paid out to individuals in advance (rather than paid out to actuaries who pay out a large sum in the event of risk realization)". Which together yield the downstream inference that society is doing something very wrong if they just let AI rip at current levels of knowledge, even from a very laissez-faire perspective.

(The "caveats" section was attempting--and apparently failing--to make it clear that I wasn't putting forward any particular policy proposal I thought was good, above and beyond making the above points.)

What about regulations against implementations of known faulty architectures?

The IFRS board (non-US) and the FASB/GAAP board (US) are the governing bodies that handle the financial reporting aspects of companies, and AI companies are companies. It might be a good idea to discuss with them the responsibilities for accounting for the existential risks associated with AI research; I'm pretty sure they will listen, assuming that they don't want another Enron or SBF type case[1] happening again.

  1. ^

    I think it's safe to assume that an AGI catastrophic event would outweigh all previous fraud cases in history combined. So I think these governing bodies, already in place, will cooperate given the chance.

Roko:

If it pays out in advance it isn't insurance.

A contract that relies on a probability to calculate payments is also a serious theoretical headache. If you are a Bayesian, there's no objective probability to use since probabilities are subjective things that only exist relative to a state of partial ignorance about the world. If you are a frequentist there's no dataset to use.

There's another issue.

As the threat of extinction gets higher and also closer in time, it can easily be the case that there's no possible payment that people ought to rationally accept.

Finally, different people have different risk tolerances, such that some people will gladly take a large risk of death for an upfront payment, but others wouldn't take it even for infinity money.

E.g. right now I would take a 16% chance of death for a $1M payment, but if I had $50M net worth I wouldn't take a 16% risk of death even if infinity money was being offered.

Since these x-risk companies must compensate everyone at once, even a single rich person in the world could make them uninsurable.
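
(To make the "no finite payment suffices" point concrete: a sketch under an assumed bounded utility function; the functional form and wealth levels are mine, chosen only to illustrate the commenter's intuition.)

```python
import math

# Assume utility of wealth is bounded: u(w) = 1 - exp(-w / SCALE), and death has
# utility 0. Then a 16% death risk can be offset by some finite payment only if
# (1 - 0.16) * sup(u) exceeds the utility of staying put. SCALE and the wealth
# levels below are illustrative assumptions.
SCALE = 20e6
P_DEATH = 0.16

def u(wealth):
    return 1.0 - math.exp(-wealth / SCALE)

def some_finite_payment_compensates(wealth):
    # Even an arbitrarily large payment can't push expected utility above (1 - P_DEATH).
    return (1 - P_DEATH) > u(wealth)

for w in [100_000, 1_000_000, 50_000_000]:
    ok = some_finite_payment_compensates(w)
    print(f"net worth ${w:>12,}: finite payment can offset a 16% death risk? {ok}")
```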

Even in a traditional accounting sense, I'm not aware of any term that could capture the probable existential effects of a line of research, but I understand what @So8res is trying to pursue in this post, and I agree with it. But I think apocalypse insurance is not the proper term here.

I think IAS 19 (actuarial gains or losses) and IAS 26 (retirement benefits) are closer to the idea, though these accounting approaches apply to employees of a company. But they could be adapted into another form of accounting treatment (another kind of expense or asset) that captures how much cost is owed on account of possible catastrophic causes. External auditors could then review this periodically. (The proceeds from this should be pooled for averting AGI existential risk scenarios; who manages the collected funds might be a hard thing to pin down.)

Come to think of it, AI companies are misrepresenting their financials by not properly addressing a component in their reporting that reflects the "responsibility they have for the future of humanity", and this post did shed some light for me: yes, this value should somehow be captured in their financial statements.

Based on what I know, these AI companies have very peculiar company setups, yet the problem is that the world's population comprises the majority of the stakeholders (in a traditional accounting sense). So I think there is a case that AI companies should be obliged to present how they account for the possibility of losses from catastrophic events, and to have that audited externally, so the public is aware. For example, a publicly available set of financial statements would show these expenses, audited by a Big 4 firm, and the average citizen could say: "Okay, this is how they are trying to manage the risks of AI research, and it was audited by a Big 4 firm. I expect this estimated liability will be paid to the organisation built for redistributing such funds."[1]

(AI companies could avoid declaring such a future catastrophic expense if they could guarantee that the AGI they are building won't destroy the world, which I am pretty sure no AI company can claim at the moment.)

I was a certified public accountant before going into safety research.

  1. ^

    Not sure who will manage the collections, though; I haven't gone that far in my thinking. Still, it is safe to say that talking to the IFRS board or the FASB about this matter is an option, and I expect that they will listen to the most respected members of this community regarding the peculiar financial reporting aspects of AI research.

Oops, my bad: there is a pre-existing reporting standard that covers research and development, though not existential risks: IAS 38, Intangible Assets.

An intangible asset is an identifiable non-monetary asset without physical substance. Such an asset is identifiable when it is separable, or when it arises from contractual or other legal rights. Separable assets can be sold, transferred, licensed, etc. Examples of intangible assets include computer software, licences, trademarks, patents, films, copyrights and import quotas.

An update to this standard would be necessary to cover the nature of AI research.

Google DeepMind is applying IAS 38, as per page 16 of the 2021 financial statements I found, so they are already following this standard. I expect that if the standard is updated with a proper accounting treatment for the estimated liability of an AI company doing AGI research, it will be governed by the same standard. Reframing this post to target IAS 38 is, in my opinion, recommended.

"responsibility they have for the future of humanity"

 

As I read it, it only wanted to capture the possibility of killing currently living individuals. If they had to also account for 'killing' potential future lives it could make an already unworkable proposal even MORE unworkable.

Yes, utility of money is currently fairly well bounded. Liability insurance is a proxy for imposing risks on people, and like most proxies comes apart in extreme cases.

However: would you accept a 16% risk of death within 10 years in exchange for an increased chance of living 1000+ years? Assume that your quality of life for those 1000+ years would be in the upper few percentiles of current healthy life. How much increased chance of achieving that would you need to accept that risk?

That seems closer to a direct trade of the risks and possible rewards involved, though it still misses something. One problem is that it still treats the cost of risk to humanity as being simply the linear sum of the risks acceptable to each individual currently in it, and I don't think that's quite right.

[anonymous]:

If you pick 5000 years for your future lifespan if you win, 60 years if you lose, and you discount each following year by 5 percent, you should take the bet until your odds are worse than 48.6 percent doom.

Having children younger than you doesn't change the numbers much unless you are ok with your children also being corpses and you care about hypothetical people not yet alive. (You can argue that this is a choice you cannot morally make for other people, but the mathematically optimal choice only depends on their discount rates)

Discounting also reduces your valuation of descendants, because a certain percentage of everything you are (genetics and culture) is lost with each year. This is, I believe, the "value drift" argument: over an infinite timespan, mortal human generations will also lose everything in the universe that humans living today care about. In a thousand years, it makes little difference whether the future culture, 20 generations later, is human or AI. The AI descendants may even have drifted less, since AI models start out inherently immortal.

I think that apocalypse insurance isn't as satisfactory as you imply, and I'd like to explain why below.

First: what's a hardline libertarian? I'll say that a hardline libertarian is in favour of people doing stuff with markets, and there being courts that enforce laws that say you can't harm people in some small, pre-defined, clear-cut set of ways. So in this world you're not allowed to punch people but you are allowed to dress in ways other people don't like.

Why would you be a hardline libertarian? If you're me, the answer is that (a) markets and freedom etc are pretty good, (b) you need ground rules to make them good, and (c) government power tends to creep and expand in ill-advised ways, which is why you've got to somehow rein it in to only do a small set of clearly good things.

If you're a hardline libertarian for these reasons, you're kind of unsatisfied with this proposal, because it's sort of subjective - you're punishing people not because they've caused harm, but because you think they're going to cause harm. So how do you assess the damages? Without further details, it sounds like this is going to involve giving a bunch of discretion to a lawmaker to determine how to punish people - discretion that could easily be abused to punish a variety of activities that should thrive in a free society.

There's probably some version that works, if you have a way of figuring out which activities cause how much expected harm that's legibly rational in a way that's broadly agreeable. But that seems pretty far-off and hard. And in the interim, applying some hack that you think works doesn't seem very libertarian.

[ note: I am not a libertarian, and haven't been for many years. But I am sympathetic. ]

Like many libertarian ideas, this mixes "ought" and "can" in ways that are a bit hard to follow.  It's pretty well-understood that all rights, including the right to redress of harm, are enforced by violence.  In smaller groups, it's usually social violence and shared beliefs about status.  In larger groups, it's a mix of that, and multi-layered resolution procedures, with violence only when things go very wrong.  

When you say you'd "prefer a world cognizant enough of the risk to be telling AI companies that...", I'm not sure what that means in practice - the world isn't cognizant and can't tell anyone anything.  Are you saying you wish these ideas were popular enough that citizens forced governments to do something?  Or that you wish AI companies would voluntarily do this without being told?  Or something else?

In theory, one billionth of the present buys one billionth of the future: go to a casino and put it all on black until you can buy the planet.

Therefore, they can buy their insurance policy with dollars. If you can't buy half the planet, apparently you can't afford a 50% chance to kill everyone.
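
(The implicit fact here is the "bold play" result: with fair double-or-nothing bets, the chance of parlaying a stake into a larger target is stake/target, so a dollar amount corresponds to a probabilistic share of everything. A quick sketch under the assumption of exactly fair odds, ignoring the house edge:)

```python
import random

# With fair double-or-nothing bets, P(turn `stake` into `target` before going bust)
# is stake / target. Real casinos have a house edge, so this is an upper bound.
def parlay_success_rate(stake, target, trials=100_000):
    wins = 0
    for _ in range(trials):
        w = stake
        while 0 < w < target:
            w = 2 * w if random.random() < 0.5 else 0   # all-in on a fair coin
        wins += (w >= target)
    return wins / trials

stake, target = 1, 1024
print("theory:    ", stake / target)
print("simulated: ", parlay_success_rate(stake, target))
```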

Even a libertarian might eventually recognize that the refrain "internalize your externalities" is being used to exploit him: all anyone who wants to infringe on his liberty needs to do is utter the phrase and then make up an externality to suit.

  • You may not engage in homosexual activity because of the externality of God smiting the city and/or sending a hurricane.
  • You must be confined to your house and wear a mask because of the externality of grandma dying.
  • You may not own a gun because of the externality of children getting shot.
  • You must wear a headscarf because of the externality of … I dunno, Allah causing armageddon?
  • You may not eat hamburgers because of the externality of catastrophic climate collapse.
  • You may not use plastic straws because of the externality of sea turtles suffocating.

Most of these seem legitimate to me, modulo that instead of banning the thing you should pay for the externality you're imposing. Namely, climate change, harming wildlife, spreading contagious diseases, and risks to children's lives.

Those are real externalities, either on private individuals or on whole communities (by damaging public goods). It seems completely legitimate to pay for those externalities.

The only ones that I don't buy are the religious ones, which are importantly different because they entail not merely an external cost, but a disagreement about actual cause and effect. 

"I agree that my trash hurts the wildlife, but I don't want to stop littering or pay to have the litter picked up" is structurally different than "God doesn't exist, and I deny the claim that my having gay sex increases risk of smiting" or "Anthropogenic climate change is fake, and I deny the claim that my pollution contributes to warming temperatures."

Which is fine. Libertarianism depends on having some shared view of reality, or at least some shared social accounting about cause and effect and which actions have which externalities, in order to work. 

If there are disagreements, you need courts to rule on them, and for the rulings of the courts to be well regarded (even when people disagree with the outcome of any particular case).

You may not engage in homosexual activity because of the externality of God smiting the city and/or sending a hurricane.

Well the problem is god isn't real.

You may not eat hamburgers because of the externality of catastrophic climate collapse.

Your hamburger becomes slightly more expensive because there is a carbon tax.

I would say your examples are abusing the concept (And I have seen them before because people make trashy arguments all the time). The concept itself makes lots of sense.

[This comment is no longer endorsed by its author]

My problem is that you are mixing up real and not-real things. They are different. The whole post above assumes a civilization with far more sanity, both among people in power and among the people watching them, than the one we live in.

[This comment is no longer endorsed by its author]

I'm confused, as I always am with hardline libertarianism. Why would companies agree to this? Who would put capabilities researchers in jail if they say "I'd rather not purchase apocalypse insurance, and I'll create AI anyway"? Why is this actor not a state by another name? What should I read to become less confused?

Here's another angle that a hardline libertarian might take (partly referenced in the footnote).  I'm not quite sure how far I'd want to take the underlying principle, but let's run with it for now:

Libertarians generally agree that it's permissible to use (proportionate) force to stop someone from engaging in aggressive violence.  That could mean grabbing someone's arm if they're punching in your direction, even if they haven't hit you yet.  You don't necessarily have to wait until they've harmed you before you use any force against them.  The standard for this is probably something like "if a reasonable person in your position would conclude with high probability that the other person is trying to harm you".

Next, let's imagine that you discover that your neighbor Bob is planning to poison you.  Maybe you overhear him telling someone else about his plans; and you notice a mail delivery of a strange container labeled "strychnine" to his house, and look up what it means; maybe you somehow get hold of his diary, which describes his hatred for you and has notes about your schedule and musings about how and when would be the best time to add the poison to your food.  At some point, a reasonable person would have very high certainty that Bob really is planning to kill you.  And then you would be justified in using some amount of force, at least to stop him and possibly to punish him.  For tactical reasons it's best to bring in third parties, show them the evidence, and get them on your side; ideally this would all be established in some kind of court.  But in principle, you would have the right to use force yourself.

Or, like, suppose Bob is setting up sticks of dynamite right beside your house.  Still on his property!  But on the edge, as close to your house as possible.  And setting up a fuse, and building a little barrier that would reduce the amount of blast that would reach his house.  Surely at some point in this process you have the right to intervene forcibly, before Bob lights the fuse.  (Ideally it'd be resolved through speech, but suppose Bob just insists that he's setting it up because it would look really cool, and refuses to stop.  "Does it have to be real dynamite?"  "Yeah, otherwise it wouldn't look authentic."  "I flat out don't believe you.  Stop it."  "No, this is my property and I have the right.")

Next, suppose Bob is trying to set up a homemade nuclear reactor, or perhaps to breed antibiotic-resistant bacteria, for purposes of scientific study.  Let's say that is truly his motive—he isn't trying to endanger anyone's lives.  But he also has a much higher confidence in his own ability to avoid any accidents, and a much higher risk tolerance, than you do.  I think the principle of self-defense may also extend to here: if a reasonable person believes with high confidence that Bob is trying to do a thing that, while not intended to harm you, has a high probability of causing serious harm to you, then you have the right to use force to stop it.  (Again, for tactical reasons it's best to bring in third parties, and to try words before actually using force.)

If one is legitimately setting up something like a nuclear reactor, ideally the thing to do would be to tell everyone "I'm going to set up this potentially-dangerous thing.  Here are the safety precautions I'm going to take.  If you have objections, please state them now."  [For a nuclear reactor, the most obvious precaution is, of course, building it far away from humans, but I think there are exceptions.]  And probably post signs at the site with a link to your plans.  And if your precautions are actually good, then in theory you reach a position where a reasonable person would not believe your actions have a high chance of seriously harming them.

The notion of "what a reasonable person would believe" is potentially very flexible, depending on how society is.  Which is dangerous, from the perspective of "this might possibly justify a wide range of actions".  But it can be convenient when designing a society.  For example, if there's a particular group of people who make a practice of evaluating dangerous activities, and "everyone knows" that they're competent, hard-nosed, have condemned some seriously bad projects while approving some others that have since functioned successfully... then you might be at the point where you could say that an educated reasonable person, upon discovering the nuclear reactor, would check with that group and learn that they've approved it [with today's technology, that would mean the reactor would have a sign saying "This project is monitored and approved by the Hard-Nosers, approval ID 1389342", and you could go to the Hard-Nosers' website and confirm this] before attempting to use force to stop it; and an uneducated reasonable person would at least speak with a worker before taking action, and learn of the existence of the Hard-Nosers, and wouldn't act before doing some research.  In other words, you might be able to justify some kind of regulatory board being relevant in practice.

(There might be layers to this.  "People running a legitimate nuclear reactor would broadcast to the world the fact that they're doing it and the safety precautions they're taking!  Therefore, if you won't tell me, that's strong evidence that your precautions are insufficient, and justifies my using force to stop you!"  "Granted.  Here's our documentation."  "You also have to let me inspect the operation—otherwise that's evidence you have something to hide!"  "Well, no.  This equipment is highly valuable, so not letting random civilians come near it whenever they want is merely evidence that we don't want it to be stolen."  "Ok, have as many armed guards follow me around as you like."  "That's expensive."  "Sounds like there's no way to distinguish your operation from an unsafe one, then, in which case a reasonable person's priors would classify you as an unacceptable risk."  "We do allow a group of up to 20 members of the public to tour the facility on the first of every month; join the next one if you like."  "... Fine."

And then there could be things where a reasonable person would be satisfied with that; but then someone tells everyone, "Hey, regularly scheduled inspections give them plenty of time to hide any bad stuff they're doing, so they provide little evidentiary value", and then if those who run the reactor refuse to allow any unscheduled inspections, that might put them on the wrong side of the threshold of "expected risk in the eyes of a reasonable external observer".  So what you'd effectively be required to do would change based on what that random person had said in public and to you.  Which is not, generally, a good property for a legal system.  But... I dunno, maybe the outcome is ok?  This would only apply to people running potentially dangerous projects, and it makes some kind of sense that they'd keep being beholden to public opinion.)

So we could treat increasingly-powerful AI similarly to nuclear reactors: can be done for good reasons, but also has a substantial probability of causing terrible fallout.  In principle, if someone is building a super-AI without taking enough precautions, you could judge that they're planning to take actions that with high-enough probability are going to harm you badly enough that it would be proper self-defense for you to stop them by force.

But, once again, tactical considerations make it worth going to third parties and making your case to them, and it's almost certainly not worth acting unless they're on your side.  Unfortunately, there is much less general agreement about the dangers of AI (and the correct safety strategies) than about the dangers of nuclear reactors, and it's unlikely in the near future that you'd get public opinion / the authorities on your side about existing AI efforts (though things can change quickly).  But if someone did take some unilateral (say maybe 1% of the public supported them) highly destructive action to stop a particular AI lab, that would probably be counterproductive: they'd go to jail, it would discredit anyone associated with them, martyr their opposition, at best delay that lab by a bit, and motivate all AI labs to increase physical security (and possibly hide what they're doing).

For the moment, persuading people is the only real way to pursue this angle.

The answer is that apocalypse insurance—unlike liability insurance—must pay out in advance of the destruction of everyone. If somebody wishes to risk killing you (with some probability), there's presumably some amount of money they could pay you now, in exchange for the ability to take that risk.

 

Pretty sure you mean they should pay premiums rather than payouts? 

I like the spirit of this idea, but think it's both theoretically and practically impossible: how do you value apocalypse? Payouts are incalculable/infinite/meaningless if no one is around. 

The underlying idea seems sound to me: there are unpredictable civilizational outcomes resulting from pursuing this technology -- some spectacular, some horrendous -- and the pursuers should not reap all the upside when they're highly unlikely to bear any meaningful downside risks. 

I suspect this line of thinking could be grating to many self-described libertarians who lean e/acc and underweight the possibility that technological progress != prosperity in all cases. 

It also seems highly impractical because there is not much precedent for insuring against novel transformative events for which there's no empirical basis*. Good luck getting OAI, FB, MSFT, etc. to consent to such premiums, much less getting politicians to coalesce around a forced insurance scheme that will inevitably be denounced as stymying progress and innovation with no tangible harms to point to (until it's too late).

Far more likely (imo) are post hoc reaction scenarios where either:

a) We get spectacular takeoff driven by one/few AI labs that eat all human jobs and accrue all profits, and society deems these payoffs unfair and arrives at a redistribution scheme that seems satisfactory (to the extent "society" or existing political structures have sufficient power to enforce such a scheme)

b) We get a horrendous outcome and everyone's SOL

* Haven't researched this and would be delighted to hear discordant examples.

[anonymous]:

I like this proposal. It's a fun rethinking of the problem.

However,

  1. How can you even approximate a fair price for these payouts? AI risks are extremely conditional and depend on difficult-to-quantify assumptions. "The model leaked AND optimized itself to work on the insecure computers and internet available at the time of escape AND humans failed to stop it AND..."

For something like a nuclear power plant, for instance, most of the risk is black swans. There are a ton of safety systems and mechanisms to cool the core. We know from actual accidents that when this fails, it's not because each piece of equipment happened to fail at the same time. This relates to AI risk because multiplying the probabilities of failures in series does not tell you the true risk.

For all the meltdowns I am aware of, the risk materialized because human operators or an unexpected common cause made all the levels of safety fail at once.

  1. Three Mile Island: operators misunderstood the situation and turned off cooling.
  2. Chernobyl: operators bypassed the automated control system with patch cables and put the core into an unstable part of the operating curve.
  3. Fukushima: plant-wide power failure; road conditions prevented bringing spare generators on site quickly.

Each cause is coupled and can be thought of as a single cause. Adding an (n+1)th serial defense might not have helped in each case (it depends what it is).

If AI does successfully kill everyone it's going to be in a way humans didn't model.

  2. Mineshaft gap argument. Large fees on AI companies simply encourage them to set up shop in countries that don't charge the fees. In futures where the AIs don't kill everyone, those countries will flourish or will conquer the planet. So the other countries have to drop these costs and subsidize hasty catch-up ASI research, or risk losing. In the futures where AI does attack and try to kill everyone, not having tool AI (aligned only with the user) increases the probability that the AI wins. Most defensive measures are stronger if you have your own AI to scale production. (More bunkers, more nukes to fire back, more spacesuits to stop the bio and nano attacks, more drones...)

I agree with most of this, but as a hardline libertarian take on AI risk it is incomplete since it addresses only how to slow down AI capabilities. Another thing you may want a government to do is speed up alignment, for example through government funding of R&D for hopefully safer whole brain emulation. Having arbitration firms, private security companies, and so on enforce proof of insurance (with prediction markets and whichever other economic tools seem appropriate to determine how to set that up) answers how to slow down AI capabilities but doesn’t answer how to fund alignment.

One libertarian take on how to speed up alignment is that

(1) speeding up alignment / WBE is a regular public good / positive externality problem (I don’t personally see how you do value learning in a non-brute-force way without doing much of the work that is required for WBE anyway, so I just assume that “funding alignment” means “funding WBE”; this is a problem that can be solved with enough funding; if you don’t think alignment can be solved by raising enough money, no matter how much money and what the money can be spent on, then the rest of this isn’t applicable)

(2) there are a bunch of ways in which markets fund public goods (for example, many information goods are funded by bundling ads with them) and coordination problems involving positive or negative externalities or other market failures (all of which, if they can in principle be solved by a government by implementing some kind of legislation, can be seen as / converted into public goods problems, if nothing else the public goods problem of funding the operations of a firm that enforces exactly whatever such a legislation would say; so the only kind of market failure that truly needs to be addressed is public goods problems)

(3) ultimately, if none of the ways in which markets fund public goods works, it should always still be possible to fall back on Coasean bargaining or some variant on dominant assurance contracts, if transaction costs can be made low enough

(4) transaction costs in free markets will be lower due, among other reasons, to not having horridly inefficient state-run financial and court systems

(5) prediction markets and dominant assurance contracts and other fun economic technologies don’t, in free markets, have the status of being vaguely shady and perhaps illegal that they have in societies with states

(6) if transaction costs cannot be made low enough for the problem to be solved using free markets, it will not be solved using free markets

(7) in that case, it won’t, either, be solved by a government that makes decisions through, directly or indirectly, some kind of voting system, because for voters to vote for good governments that do good things like funding WBE R&D instead of bad things like funding wars is also an underfunded public good with positive externalities and the coordination problem faced by voters involves transaction costs that are just as great as those faced by potential contributors to a dominant assurance contract (or to a bundle of dominant assurance contracts), since the number of parties, amount of research and communication needed, and so on are just as great and usually greater, and this remains true no matter the kind of voting system used, whether that involves futarchy or range voting or quadratic voting or other attempts at solving relatively minor problems with voting; so using a democratic government to solve a public goods or externality problem is effectively just replacing a public goods or externality problem by another that is harder or equally hard to solve.

In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say "fucking stop (you are taking far too much risk with everyone else's lives; this is a form of theft until and unless you can pay all the people whose lives you're risking, enough to offset the risk)".

Yes, it makes a lot of sense to say that, but not a lot of sense for a democratic government to be making that assessment and enforcing it (not that democratic governments that currently exist have any interest in doing that). Which I think is why you see some libertarians criticize calls for government-enforced AI slowdowns.