artifex

I’m always interested in knowing why people disagree with me, but I recognize that people have limited motivation to expend the effort to explain to me why I am wrong in a way I can understand.

In case it helps reduce that effort, I am permanently committed to Crocker’s rules.

Also, the LW user artifex0 is a different person.

Comments

artifex · 5mo

I agree with most of this, but as a hardline libertarian take on AI risk it is incomplete since it addresses only how to slow down AI capabilities. Another thing you may want a government to do is speed up alignment, for example through government funding of R&D for hopefully safer whole brain emulation. Having arbitration firms, private security companies, and so on enforce proof of insurance (with prediction markets and whichever other economic tools seem appropriate to determine how to set that up) answers how to slow down AI capabilities but doesn’t answer how to fund alignment.

One libertarian take on how to speed up alignment is that

(1) speeding up alignment / WBE is a regular public-good / positive-externality problem. (I don’t personally see how you do value learning in a non-brute-force way without doing much of the work required for WBE anyway, so I just assume that “funding alignment” means “funding WBE”. This is a problem that can be solved with enough funding; if you don’t think alignment can be solved by raising enough money, no matter how much money and what it can be spent on, then the rest of this isn’t applicable.)

(2) there are a bunch of ways in which markets fund public goods (for example, many information goods are funded by bundling ads with them) and solve coordination problems involving positive or negative externalities or other market failures. (Any market failure that can in principle be solved by a government implementing some kind of legislation can be seen as, or converted into, a public goods problem: if nothing else, the public goods problem of funding the operations of a firm that enforces exactly what such legislation would say. So the only kind of market failure that truly needs to be addressed is public goods problems.)

(3) ultimately, if none of the ways in which markets fund public goods works, it should always still be possible to fall back on Coasean bargaining or some variant of dominant assurance contracts, provided transaction costs can be made low enough (see the sketch after this list)

(4) transaction costs in free markets will be lower, among other reasons because there are no horridly inefficient state-run financial and court systems

(5) prediction markets, dominant assurance contracts, and other fun economic technologies don’t, in free markets, have the vaguely shady and perhaps illegal status they have in societies with states

(6) if transaction costs cannot be made low enough for the problem to be solved using free markets, it will not be solved using free markets

(7) in that case, it won’t be solved by a government that makes decisions, directly or indirectly, through some kind of voting system either. For voters to vote for good governments that do good things like funding WBE R&D, instead of bad things like funding wars, is itself an underfunded public good with positive externalities. The coordination problem faced by voters involves transaction costs just as great as those faced by potential contributors to a dominant assurance contract (or to a bundle of dominant assurance contracts), since the number of parties, the amount of research and communication needed, and so on are just as great and usually greater. This remains true no matter the kind of voting system used, whether that involves futarchy or range voting or quadratic voting or other attempts at solving relatively minor problems with voting. So using a democratic government to solve a public goods or externality problem is effectively just replacing one public goods or externality problem with another that is as hard or harder to solve.
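To make the dominant assurance contracts mentioned in (3) concrete, here is a minimal Python sketch of the textbook case where the contract needs every beneficiary to sign, so each signer is pivotal. All names and numbers are illustrative assumptions of mine, not anything from Tabarrok’s work.

```python
# A dominant assurance contract in the simplest case: the public good
# is produced only if all N beneficiaries sign, so everyone is pivotal.
# C, V, and B are made-up numbers chosen for illustration.

C = 100  # contribution asked of each beneficiary
V = 150  # value of the public good to each beneficiary
B = 10   # refund bonus paid to signers if the contract fails

def payoff(i_sign: bool, everyone_else_signs: bool) -> float:
    """Payoff to one beneficiary, given their choice and everyone else's."""
    succeeds = i_sign and everyone_else_signs  # all signatures are needed
    if i_sign:
        return V - C if succeeds else B  # get the good, or refund plus bonus
    return 0.0  # without my signature the contract fails and I get nothing

# Signing strictly dominates abstaining:
assert payoff(True, True) > payoff(False, True)    # 50 > 0
assert payoff(True, False) > payoff(False, False)  # 10 > 0
```

The refund bonus B is what turns a plain assurance contract into a dominant assurance contract: even a signer who expects the contract to fail is paid for signing, so free-riding is never the better move in this all-pivotal case.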

> In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say "fucking stop (you are taking far too much risk with everyone else's lives; this is a form of theft until and unless you can pay all the people whose lives you're risking, enough to offset the risk)".

Yes, it makes a lot of sense to say that, but not a lot of sense for a democratic government to be making that assessment and enforcing it (not that democratic governments that currently exist have any interest in doing that). Which I think is why you see some libertarians criticize calls for government-enforced AI slowdowns.

artifex · 8mo

> Either I am missing a point somewhere, or this probably doesn't work as well outside of textbook examples.
>
> In the example, Frank was "blackmailed" into paying, because the builder knew that there were exactly 10 villagers, and knew that Frank needs the street paved. In real life, you often do not have this kind of knowledge.

Yes, according to Tabarrok you need to solve two problems to solve public goods provision, one of which is the free-rider problem. Dominant assurance contracts only solve the free-rider problem; you also need to solve what he calls the information problem to know how to set the parameters of the contract, as the toy example below illustrates.
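Here is a toy Python sketch of the information problem, with made-up numbers and deliberately naive signing behavior (not an equilibrium analysis): the entrepreneur must choose the price and the threshold without knowing the villagers’ true valuations, and guessing wrong means a failed contract and forfeited bonuses.

```python
import random

random.seed(0)

def run_contract(C, B, K, valuations):
    """Entrepreneur's take (ignoring production costs) if each villager
    naively signs iff the good is worth the asking price to them."""
    signers = [v for v in valuations if v >= C]
    if len(signers) >= K:
        return len(signers) * C  # contract triggers; revenue is collected
    return -len(signers) * B     # contract fails; bonus owed to each signer

# If the entrepreneur correctly knows every valuation is 150,
# then C=100 with threshold K=10 succeeds:
print(run_contract(100, 10, 10, [150] * 10))  # 1000

# But if true valuations are heterogeneous and partly below the price,
# the same parameters fail and the bonuses are forfeited:
vals = [random.uniform(50, 150) for _ in range(10)]
print(run_contract(100, 10, 10, vals))  # negative with these draws
```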

artifex · 8mo

Oh, that’s fun: Wikipedia caused me to believe for so many years that “bad faith” means something different from what it actually means, and I’m only learning that now.

Trillions of dollars in lost economic growth just seems like hyperbole. There’s some lost growth from stickiness and unemployment but of course the costs aren’t trillions of dollars.

They did not, in fact, go far enough. Japanese GNI per capita growth from 2013 to 2021 was 1.02%; the prescription would be something like 4%.

I disagree that total working hours have decreased. Average weekly hours per person from 1950 to 2000 were “roughly constant”: work weeks are shorter, but more people are working.

As an example to explain why, I predict (with 80% probability) that there will be a five-year shortening in the median on the general AI question at some point in the next three years. And I also predict (with 85% probability) that there will be a five-year lengthening at some point in the next three years.

Both of these things have happened. The community prediction was June 28, 2036 at one point in July 2022; July 30, 2043 in September 2022; and is March 13, 2038 now. So there has been a five-year lengthening followed by a five-year shortening.
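Checking the arithmetic with the dates quoted above (a quick Python sketch; the variable names are mine):

```python
from datetime import date

pred_jul_2022  = date(2036, 6, 28)  # community prediction in July 2022
pred_sept_2022 = date(2043, 7, 30)  # prediction in September 2022
pred_now       = date(2038, 3, 13)  # prediction at the time of this comment

year = 365.25
print((pred_sept_2022 - pred_jul_2022).days / year)  # ~7.1: a lengthening
print((pred_now - pred_sept_2022).days / year)       # ~-5.4: a shortening
```

Both moves exceed five years, so both predicted events occurred.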

Even voting online takes more than five minutes in total.

Anyway, I’d rather sell my votes for money. I believe you can find thousands of people, current non-voters, who would vote for whatever you want them to, if you paid them only a little more than the value of their time.

If the value of voting really lies in the expected benefits (according to your own values) of good political outcomes brought about through voting, and these expected benefits really are greater than the time costs and other costs of voting, shouldn’t paying people whose time is worth less to vote the way you want be much more widespread?

You might not be able to verify that they did vote the way you wanted, or that they wouldn’t have voted that way anyway, but still: unless the ratio of expected benefits to costs is only a little greater than one, it seems it should be much more widespread?
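For a sense of the numbers, here is a back-of-envelope sketch; every figure in it is a hypothetical assumption chosen only to show the shape of the comparison, not a measurement.

```python
# Cost of buying one marginal non-voter's vote versus the benefit a
# committed voter implicitly claims a vote is worth. All numbers are
# hypothetical.

hours_per_vote = 2.0    # registering, traveling, queueing, voting
wage           = 15.0   # opportunity cost per hour for a marginal non-voter
premium        = 1.25   # "a little more than the value of their time"

price_per_vote  = hours_per_vote * wage * premium  # 37.50 per vote
claimed_benefit = 100.0  # implied value of one vote, if voting is rational
                         # on purely instrumental grounds

ratio = claimed_benefit / price_per_vote
print(price_per_vote, round(ratio, 2))  # 37.5 2.67
```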

If, however, the value of voting is expressive, or it comes as part of a package in which you adapt your identity to get the benefits of membership in some social club, that explains why so many people don’t vote and why the ones who do don’t seem interested in buying the votes of those who don’t. It also explains why the things people vote for are so awful.

Voting takes much more than five minutes, and if you think otherwise you haven’t added up all the lost time. And determining how you should vote, if you want to vote for things that lead to good outcomes, requires vastly more than five minutes.
