Jameson Quinn's Comments

Can we use Variolation to deal with the Coronavirus?

I have no specific expertise here; I'm just a statistician.

I believe that if we're in this for the long haul — that is, over a year until a vaccine comes out, with responsible people spending the majority of that year in "suppression" mode according to the terminology of the Imperial College simulation — it would be beneficial for those under 30 without special vulnerabilities to be deliberately infected in a way that does not substantially spread the virus to the wider population. This would require a huge mobilization: facilities and social organization such that a bunch of kids with a few twenty-something caretakers could live (hopefully, happily) for 6-8 weeks with absolutely minimal physical contact with people outside. That's possible to do well in principle — think summer camps — but doing it at the largest scale possible would involve big challenges, including having health care available for a bunch of sick-but-mostly-not-dying kids.

I suspect there is no way that Western societies will be able to do this at substantial scale. I would NOT suggest attempting it at individual scale. I would not be surprised to see China begin to attempt something like this. I would be surprised if they pulled it off without at least some horror stories. Those bad results might or might not be bad enough to outweigh the net good of improving herd immunity, allowing normal schooling going forward, and creating a population of under-30 immune people able to continue to work without having to worry about their own exposure (that is, about the inside of their body serving as a vector/reservoir for the virus; of course, the outside of their body would still need precautions not to be a vector).

5 general voting pathologies: lesser names of Moloch

I believe the answer is "yes" to both questions, but I'm not 100% sure on the second one.

Using vector fields to visualise preferences and make them consistent

Consistency is the opposite of humility. Instead of saying "sometimes, I don't and can't know", it says "I will definitively answer any question (of the correct form)".

Let's assume that there is some consistent utility function that we're using as a basis for comparison. This could be the "correct" utility function (eg, God's); it could be a given individual's extrapolated consistent utility; or it could be some well-defined function of many people's utility.

So, given that we've assumed that this function exists, obviously if there's a quasi-omnipotent agent rationally maximizing it, it will be maximized. This outcome will be at least as good as if the agent is "humble", with a weakly-ordered objective function; and, in many cases, it will be better. So, you're right, under this metric, the best utility function is equal to or better than any humble objective.

But if you get the utility function wrong, it could be much worse than a humble objective. For instance, consider adding some small amount of Gaussian noise to the utility. The probability that the "optimized" outcome will have a utility arbitrarily close to the lower bound could, depending on various things, be arbitrarily high; while I think you can argue that a "humble" deus ex machina, by allowing other agents to have more power to choose between world-states over which the machina has no strict preference, would be less likely to end up in such an arbitrarily bad "Goodhart" outcome.
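A minimal simulation sketch of that intuition (the distributions and numbers here are arbitrary illustrative assumptions, not anything rigorous): hard-maximize a Gaussian-noised proxy of the true utility over many candidate world-states, and watch what happens to the true utility of the chosen state as the noise grows.

```python
import numpy as np

# Toy illustration (arbitrary distributions): hard-maximize a noisy proxy of
# the true utility over many candidate world-states, and compare the true
# utility of the chosen state to the best achievable one.
rng = np.random.default_rng(0)
n_states = 100_000
true_u = rng.normal(0, 1, n_states)              # "true" utility of each candidate state

for noise_sd in [0.0, 0.5, 2.0, 10.0]:
    proxy_u = true_u + rng.normal(0, noise_sd, n_states)   # slightly-to-badly wrong utility
    pick = np.argmax(proxy_u)                              # the optimizer's chosen world-state
    print(f"noise sd {noise_sd:5.1f}: true utility of chosen state = "
          f"{true_u[pick]:5.2f}  (best achievable = {true_u.max():.2f})")
```

With no noise the optimizer finds the best state; as the noise grows, the state that maximizes the proxy tells you less and less about true utility, which is the Goodhart worry above.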

This response is a bit sketchy, but does it answer your question?

Using vector fields to visualise preferences and make them consistent

Yes, the utility function is the "one dimension". Of course, it can be as complicated as you'd like, taking into account multiple aspects of reality. But ultimately, it has to give a weight to those aspects; "this 5-year-old's life is worth exactly XX.XXX times more/less than this 80-year-old's life" or whatever. It is a map from some complicated (effectively infinite-dimensional) Omega to a simple one-dimensional utility.
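To make that concrete, here is a trivially small sketch; the aspects and weights are made up purely for illustration:

```python
# Hypothetical example: whatever aspects a world-state has, a utility function
# must collapse them to one number, i.e. commit to explicit exchange rates.
# These aspect names and weights are invented for illustration only.
WEIGHTS = {
    "life_years_under_30": 1.0,
    "life_years_over_80": 0.3,      # an explicit, contestable trade-off
    "total_wealth_millions": 0.01,
}

def utility(world_state: dict) -> float:
    """Map a many-dimensional world-state to a single real number."""
    return sum(w * world_state.get(aspect, 0.0) for aspect, w in WEIGHTS.items())

print(utility({"life_years_under_30": 10,
               "life_years_over_80": 10,
               "total_wealth_millions": 100}))   # -> 14.0
```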

Using vector fields to visualise preferences and make them consistent

Good point. Here's the raw beginnings of a response:

The idea here would be to resolve such questions "democratically" in some sense. I'm intentionally leaving unspecified what I mean by that because I don't want to give the impression that I think I could ever tie up all the loose ends with this proposal. In other words, this is a toy example to suggest that there's useful space to explore in between "fully utilitarian agents" and "non-agents": agents with weakly-ordered and/or intransitive preferences may in some senses be superior to fully-utilitarian ones.

I realize that "democratic" answers to the issue you raise will tend to be susceptible to the "Omelas problem" (a majority that gets small benefits by imposing large costs on a minority) and/or the repugnant conclusion ("cram the world with people until barely-over-half of their lives are barely-better-than-death"). Thus, I do not think that "majority rules" should actually be a foundational principle. But I do think that when you encounter intransitivity in collective preferences, it may in some cases be better to live with that than to try to subtract it out by converting everything into comparable-and-summable utility functions.

Using vector fields to visualise preferences and make them consistent

I believe this is precisely the wrong thing to be trying to do. We should be trying to philosophically understand how intransitive preferences can be (collectively) rational, not trying to remove them because they're individually irrational.

(I'm going to pose the rest of this comment in terms of "what rules should an effectively-omnipotent super-AI operate under". That's not because I think that such a singleton AI is likely to exist in the foreseeable future; but rather, because I think it's a useful rhetorical device and/or intuition pump for thinking about morality.)

Once you've reduced everything down to a single utility function, and you've created an agent or force substantially more powerful than yourself who's seeking to optimize that utility function, it's all downhill from there. Or uphill, whatever; the point is, you no longer get to decide on outcomes. Reducing morality to one dimension makes it boring at best; and, if Goodhart has anything to say about it, ultimately even immoral.

Luckily, "curl" (intransitive preference order) isn't just a matter of failures of individual rationality. Condorcet cycles, Arrow's theorem, the Gibbard-Satterthwaite theorem; all of these deal with the fact that collective preferences can be intransitive even when the individual preferences that make them up are all transitive.
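For instance, here is the standard three-voter example (a textbook illustration, with arbitrary labels) showing a majority cycle emerging from perfectly transitive individual rankings:

```python
from itertools import combinations

# Three voters, each with a fully transitive ranking, yet the pairwise
# majority relation is cyclic: A beats B, B beats C, C beats A.
voters = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
    elif majority_prefers(y, x):
        print(f"majority prefers {y} over {x}")
```

Each individual ranking is transitive, yet the majority relation comes out as A > B > C > A: the "curl" is a property of the aggregate, not of any individual.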

Imagine a "democratic referee" AI (or god, or whatever) that operated under roughly the following "3 laws":

0. In comparing two world-states, consider the preferences of all "people" which exist in either, for some clear and reasonable definition of "people" which I'll leave unspecified.

1. If a world-state is Pareto dominated, act to move towards the Pareto frontier.

2. If an agent or agents are seeking to change from one world-state A to another B, and neither of the two Pareto dominates, then thwart that change iff a majority prefers the status quo A over B AND there is no third world-state C such that a majority prefers B over C and a majority prefers C over A. (A rough code sketch of this rule appears after the list.)

3. Accumulate and preserve power, insofar as it is compatible with laws 1 and 2.
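Here is the rough sketch of law 2 promised above, under one possible reading of the rule. The encoding (each person's preferences given as numeric scores over world-states, used only for ordinal comparisons) and the tie-handling are simplifications for illustration, not part of the proposal itself.

```python
def majority_prefers(x, y, people):
    """True if a strict majority of people ranks world-state x above y.
    Each person's preferences are a dict of scores over states, used here
    only for ordinal comparisons (an illustrative encoding)."""
    wins = sum(1 for prefs in people if prefs[x] > prefs[y])
    return wins > len(people) / 2

def pareto_dominates(x, y, people):
    """True if nobody prefers y to x and at least one person prefers x to y."""
    return (all(prefs[x] >= prefs[y] for prefs in people)
            and any(prefs[x] > prefs[y] for prefs in people))

def referee_thwarts(a, b, all_states, people):
    """Law 2: thwart a proposed change from status quo a to b iff a majority
    prefers a over b AND there is no third state c with a majority preferring
    b over c and c over a (i.e. b is not 'rescued' by a majority cycle)."""
    if pareto_dominates(b, a, people):        # law 1: never block a Pareto improvement
        return False
    if not majority_prefers(a, b, people):
        return False
    for c in all_states:
        if c in (a, b):
            continue
        if majority_prefers(b, c, people) and majority_prefers(c, a, people):
            return False
    return True

# Tiny usage example with the Condorcet-cycle preferences from the earlier comment:
people = [
    {"A": 3, "B": 2, "C": 1},   # A > B > C
    {"B": 3, "C": 2, "A": 1},   # B > C > A
    {"C": 3, "A": 2, "B": 1},   # C > A > B
]
print(referee_thwarts("A", "B", ["A", "B", "C"], people))   # False: C rescues B via the cycle
```

Note that in this example a majority does prefer the status quo A over B, yet the referee still lets the change through, because B sits inside a majority cycle with A: exactly the kind of aggregate intransitivity the referee is meant to respect rather than erase.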

An entity which followed these laws would be, in practice, far "humbler" than one which had a utility function over world-states. For instance, if there were a tyrant hogging all the resources that they could reasonably get any enjoyment whatsoever out of, the "referee" would allow that inequality to continue; though it wouldn't allow it to be instituted in the first place. Also, this "referee" would not just allow Omelas to continue to exist; it would positively protect its existence for as long as "those who walk away" were a minority.

So I'm not offering this hypothetical "referee" as my actual moral ideal. But I do think that moral orderings should be weak orderings over world states, not full utility functions. I'd call this "humble morality"; and, while as I said above I don't actually guess that singleton AIs are likely, I do think that if I were worried about singleton AIs I'd want one that was "humble" in this sense.

Furthermore, I think that respecting the kind of cyclical preferences that come from collective preference aggregation is useful in thinking about morality. And, part of my guess that "singleton AIs are unlikely" comes from thinking that maintaining/enforcing perfectly coherent aggregated preferences over a complex system of parts is actually a harder (and perhaps impossible) problem than AGI.

Have epistemic conditions always been this bad?

I spent the 00s in Guatemala and Chiapas, so I'm probably not the best judge of that question.

As for writing this more deeply... frankly, it's unlikely to make it above the threshold on my to-do list.

Have epistemic conditions always been this bad?

I went to Oberlin College for undergrad; my in-laws are Central American communists; I live in Cambridge, MA, where my daughter goes to high school; and much of my internet activity is on left-leaning political blogs. So I think I have a reasonably broad experience of PC culture.

I'm not interested in getting deeply into this conversation here; it would take pages of writing to say everything I think, and that writing would be relatively slow because I'd have to measure my words in various ways to make it through this minefield. However, I do think that at least from my perspective, this concern is overblown. Yes, there are definitely people who self-righteously try to silence opposing views, and they do have some power; but in my experience, their power is limited in most places, and the capacity for reasonable dissent is still present.

As for the exceptions, I see no reason to believe they're particularly more widespread now than in the past (for instance, my parents have stories of weaponized conformity in EST meetings they briefly attended in the 70s). Furthermore, "dissent is illegitimate here" seems to me more often a symptom than a cause of toxic spaces.

So, sorry I don't have time to show my work on this, but for what it's worth, that's my opinion.

2018 Review: Voting Results!

There's frequently a tradeoff between "less strategic incentives" and "more-intelligible under honesty". I don't think that you should pick the former every time, but it is certainly better to err a little bit on the side of the former and get good-but-slightly-more-confusing results, than to err on the side of the latter and get results that are neither good nor intelligible (because strategic voting has ruined that, too).

2018 Review: Voting Results!

Just renormalize votes to be mean-0 before scaling.
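A one-line sketch of what I mean; the subsequent scaling step is whatever the review already uses, so it is omitted here:

```python
import numpy as np

def renormalize(raw_votes):
    """Center one voter's raw scores at mean 0 before any scaling is applied."""
    votes = np.asarray(raw_votes, dtype=float)
    return votes - votes.mean()

print(renormalize([0, 1, 1, 4, 9]))   # -> [-3. -2. -2.  1.  6.]
```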
