Last week, I started a thread on the widespread sentiment that people don't understand the metaethics sequence. One of the things that surprised me most in the thread was this exchange:

Commenter: "I happen to (mostly) agree that there aren't universally compelling arguments, but I still wish there were. The metaethics sequence failed to talk me out of valuing this."

Me: "But you realize that Eliezer is arguing that there aren't universally compelling arguments in any domain, including mathematics or science? So if that doesn't threaten the objectivity of mathematics or science, why should that threaten the objectivity of morality?"

Commenter: "Waah? Of course there are universally compelling arguments in math and science."

Now, I realize this is just one commenter. But the most-upvoted comment in the thread also perceived "no universally compelling arguments" as a major source of confusion, suggesting that it was perceived as conflicting with morality not being arbitrary. And today, someone mentioned having "no universally compelling arguments" cited at them as a decisive refutation of moral realism.

After the exchange quoted above, I went back and read the original No Universally Compelling Arguments post, and realized that while it had been obvious to me when I read it that Eliezer meant it to apply to everything, math and science included, it was rather short on concrete examples, perhaps in violation of Eliezer's own advice. The concrete examples can be found in the sequences, though... just not in that particular post.

First, I recommend reading The Design Space of Minds-In-General if you haven't already. TL;DR: the space of minds-in-general is ginormous and includes some downright weird minds. The space of human minds is a teeny tiny dot in the larger space (in case this isn't clear, the diagram in that post isn't remotely drawn to scale). Now with that out of the way...

There are minds in the space of minds-in-general that do not recognize modus ponens.

Modus ponens is the rule of inference that says that if you have a statement of the form "If A then B", and also have "A", then you can derive "B". It's a fundamental part of logic. But there are possible minds that reject it. A brilliant illustration of this point can be found in Lewis Carroll's dialog "What the Tortoise Said to Achilles" (for those not in the know, Carroll was a mathematician; Alice in Wonderland is secretly full of math jokes).

Eliezer covers the dialog in his post Created Already In Motion, but here's the short version: In Carroll's dialog, the Tortoise asks Achilles to imagine someone rejecting a particular instance of modus ponens (drawn from Euclid's Elements, though that isn't important). The Tortoise suggests that such a person might be persuaded by adding an additional premise, and Achilles goes along with it—foolishly, because this quickly leads to an infinite regress when the Tortoise suggests that someone might reject the new argument in spite of accepting the premises (which leads to another round of trying to patch the argument, and then...)
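The Tortoise's point can be made concrete with a toy model. This is a hypothetical sketch (the encoding of conditionals as tuples is my own assumption, not anything from the dialog): a mind with modus ponens built into its inference engine derives "B" from its premises, while a mind that merely holds the rule as one more premise never does.

```python
# A hypothetical toy model of the Tortoise's point: the rule of inference
# must live in the mind's engine, not in its list of premises.
# Conditionals are encoded as tuples ('if', A, B) -- my own encoding.

def modus_ponens_mind(beliefs):
    """Closes a belief set under modus ponens."""
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(beliefs):
            if (isinstance(b, tuple) and b[0] == 'if'
                    and b[1] in beliefs and b[2] not in beliefs):
                beliefs.add(b[2])
                changed = True
    return beliefs

def tortoise_mind(beliefs):
    """Accepts every premise handed to it, but never applies any rule."""
    return set(beliefs)

premises = {('if', 'A', 'B'), 'A'}
assert 'B' in modus_ponens_mind(premises)
assert 'B' not in tortoise_mind(premises)

# Adding "if you accept ('if', 'A', 'B') and 'A', then accept 'B'" as yet
# another premise doesn't help tortoise_mind -- it still never fires any
# rule, which is exactly the regress in Carroll's dialog.
```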

"What the Tortoise Said to Achilles" is one of the reasons I tend to think of the so-called "problem of induction" as a pseudo-problem. The "problem of induction" is often defined as the problem of how to justify induction, but it seems to make just as much sense to ask how to justify deduction. But speaking of induction...

There are minds in the space of minds-in-general that reason counter-inductively.

To quote Eliezer:

There are possible minds in mind design space who have anti-Occamian and anti-Laplacian priors; they believe that simpler theories are less likely to be correct, and that the more often something happens, the less likely it is to happen again.

And when you ask these strange beings why they keep using priors that never seem to work in real life... they reply, "Because it's never worked for us before!"

If this bothers you, well, I refer you back to Lewis Carroll's dialog. There are also minds in the mind design space that ignore the standard laws of logic, and are furthermore totally unbothered by (what we would regard as) the absurdities produced by doing so. Oh, but if you thought that was bad, consider this...
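To make the anti-Laplacian idea concrete, here's a hypothetical sketch comparing Laplace's rule of succession with a mirror-image rule. The `anti_laplace` rule is an illustrative inversion of my own devising, not anything specified in Eliezer's post:

```python
from fractions import Fraction

def laplace(successes, trials):
    # Laplace's rule of succession: P(next trial succeeds)
    return Fraction(successes + 1, trials + 2)

def anti_laplace(successes, trials):
    # A mirror-image rule: the more often something has happened,
    # the less likely it is judged to happen again.
    return Fraction(trials - successes + 1, trials + 2)

# After watching the sun rise on 100 out of 100 mornings:
assert laplace(100, 100) == Fraction(101, 102)      # "it'll rise again"
assert anti_laplace(100, 100) == Fraction(1, 102)   # "surely not this time!"
```

Both rules update on the evidence; the anti-inductor just updates in the opposite direction, and no amount of further evidence can talk it out of that.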

There are minds in the space of minds-in-general that use a maximum entropy prior, and never learn anything.

Here's Eliezer again discussing a problem where you have to predict whether a ball drawn out of an urn will be red or white, based on the color of the balls that have been previously drawn out of the urn:

Suppose that your prior information about the urn is that a monkey tosses balls into the urn, selecting red balls with 1/4 probability and white balls with 3/4 probability, each ball selected independently.  The urn contains 10 balls, and we sample without replacement.  (E. T. Jaynes called this the "binomial monkey prior".)  Now suppose that on the first three rounds, you see three red balls.  What is the probability of seeing a red ball on the fourth round?

First, we calculate the prior probability that the monkey tossed 0 red balls and 10 white balls into the urn; then the prior probability that the monkey tossed 1 red ball and 9 white balls into the urn; and so on.  Then we take our evidence (three red balls, sampled without replacement) and calculate the likelihood of seeing that evidence, conditioned on each of the possible urn contents.  Then we update and normalize the posterior probability of the possible remaining urn contents.  Then we average over the probability of drawing a red ball from each possible urn, weighted by that urn's posterior probability. And the answer is... (scribbles frantically for quite some time)... 1/4!

Of course it's 1/4.  We specified that each ball was independently tossed into the urn, with a known 1/4 probability of being red.  Imagine that the monkey is tossing the balls to you, one by one; if it tosses you a red ball on one round, that doesn't change the probability that it tosses you a red ball on the next round.  When we withdraw one ball from the urn, it doesn't tell us anything about the other balls in the urn.

If you start out with a maximum-entropy prior, then you never learn anything, ever, no matter how much evidence you observe. You do not even learn anything wrong - you always remain as ignorant as you began.
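Eliezer's "scribbles frantically" computation can be reproduced directly. Here's a minimal sketch in exact rational arithmetic (the variable names are mine); it confirms that after three red draws the predicted probability of a fourth red is still exactly the prior 1/4:

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 4)   # monkey selects a red ball with probability 1/4
n = 10               # the urn holds 10 balls

# Prior over the number of red balls r in the urn: Binomial(10, 1/4)
prior = {r: comb(n, r) * p**r * (1 - p)**(n - r) for r in range(n + 1)}

# Likelihood of drawing three reds (without replacement) given r reds in the urn
def three_reds(r):
    return Fraction(r * (r - 1) * (r - 2), 10 * 9 * 8)

# Bayesian update on the urn's contents
posterior = {r: prior[r] * three_reds(r) for r in range(n + 1)}
z = sum(posterior.values())
posterior = {r: w / z for r, w in posterior.items()}

# Posterior predictive: probability the fourth draw is red
answer = sum(posterior[r] * Fraction(r - 3, 7) for r in range(3, n + 1))
assert answer == Fraction(1, 4)   # exactly the prior probability
```

The posterior over urn contents does shift toward red-heavy urns, but the shift is exactly cancelled by the reds already removed, so the predictive probability never moves.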

You may think that, while minds such as the ones I've been describing are possible in theory, they're unlikely to evolve anywhere in the universe, and probably wouldn't survive long if programmed as an AI. And you'd probably be right about that. On the other hand, it's not hard to imagine minds that are generally able to get along well in the world, but irredeemably crazy on particular questions. Sometimes, it's tempting to suspect some humans of being this way, and even if that isn't literally true of any humans, it's not hard to imagine as just a more extreme form of existing human tendencies. See e.g. Robin Hanson on near vs. far mode, and imagine a mind that will literally never leave far mode on certain questions, regardless of the circumstances.

It used to disturb me to think that there might be, say, young earth creationists in the world who couldn't be persuaded to give up their young earth creationism by any evidence or arguments, no matter how long they lived. Yet I've realized that, while there may or may not be actual human young earth creationists like that (it's an empirical question), there are certainly possible minds in the space of mind designs like that. And when I think about that fact, I'm forced to shrug my shoulders and say, "oh well" and leave it at that.

That means I can understand why people would be bothered by a lack of universally compelling arguments for their moral views... but you shouldn't be any more bothered by that than by the lack of universally compelling arguments against young earth creationism. And if you don't think the lack of universally compelling arguments is a reason to think there's no objective truth about the age of the earth, you shouldn't think it's a reason to think there's no objective truth about morality.

(Note: this may end up being just the first in a series of posts on the metaethics sequence. People are welcome to discuss what I should cover in subsequent posts in the comments.)

Added: Based on initial comments, I wonder if some people who describe themselves as being bothered by the lack of universally compelling arguments would more accurately describe themselves as being bothered by the orthogonality thesis.


230 comments

It seems obvious that people are using "universally compelling arguments" in two different senses.

In the first sense, a universally compelling argument is one that could convince even a rock, or a mind that doesn't implement modus ponens, or a mind with anti-inductive priors. In this sense, the lack of universally compelling arguments for any domain (math/physics/morality) seems sufficiently well established.

In another sense, a universally compelling argument is one that could persuade any sufficiently sane/intelligent mind. I think we can agree that all such minds will eventually conclude that relativity and quantum mechanics are correct (or at least a rough approximation to whatever the true laws of physics end up being), so in this sense we can call the arguments that lead to them universally compelling. Likewise, in this sense, we can note as interesting the non-existence of universally compelling arguments which could compel a sufficiently sane/intelligent paperclipper to value life, beauty, justice, and the American way. It becomes more interesting if we also consider the case of babyeaters, pebblesorters, or humans with values sufficiently different to our own.

You are using the term in the first sense, but the people who are bothered by it are using it in the second sense.

Except that "sufficiently sane/intelligent" here just means, it seems, "implements modus ponens, has inductive priors, etc." We can, like Nick Tarleton, simply define as "not a mind" any entity or process that doesn't implement these criteria for sufficient sanity/intelligence...

... but then we are basically saying: any mind that is not convinced by what we think should be universally compelling arguments, is not a mind.

That seems like a dodge, at best.

Are there different criteria for sufficient sanity and intelligence, ones not motivated by the matter of (allegedly) universally compelling arguments?

"Sufficiently sane/intelligent" means something like, "Has a sufficient tendency to form true inferences from a sufficiently wide variety of bodies of evidence." Now, we believe that modus ponens yields true inferences. We also believe that a tendency to make inferences contrary to modus ponens will cause a tendency to make false inferences. From this you can infer that we believe that a sufficiently sane/intelligent agent will implement modus ponens. But the truth of this inference about our beliefs does not mean that "sufficiently sane/intelligent" is defined to mean "implements modus ponens". In particular, our definition of "sufficiently sane/intelligent" implies that, if A is a sufficiently sane/intelligent agent who lives in an impossible possible world that does not implement modus ponens, then A does not implement modus ponens.
"Sufficiently sane/intelligent" means "effective enough in the real world to pose a threat to my values". A paperclipper qualifies, a flu virus qualifies, an anti-inductive AI does not qualify.

So, how is the project to teach mathematics to the flu virus going?

Why, it hasn't been wrong about a single thing so far, thank you!
That doesn't follow. For one thing, we can find out how the Mind works by inspecting its code, not just by black-box testing it. If it seems to have all that it needs and isn't convinced by arguments that convince us, it may well be we who are wrong.
Said Achmiz:
We can? So I have all these minds around me. How do I inspect their code and thereby find out how they work? Detailed instructions would be appreciated. (Assume that I have no ethical restrictions.)

That (only slightly-joking) response aside, I think you have misunderstood me. I did not mean that we are (in the scenario I am lampooning) saying: "Any mind that is not convinced by what we think should be universally compelling arguments, despite implementing modus ponens and having an Occamian prior, is not a mind." Rather, I meant that we are saying: "Any mind that is not convinced by what we think should be universally compelling arguments, by virtue of said mind not implementing modus ponens, not having an Occamian prior, or otherwise not having such-and-such property which would be required in order to find this argument compelling, is not a mind."

The problem I am pointing out in such reasoning is that we can apply it to any argument we care to designate as "this ought to be universally compelling". "Ah!" we say, "this mind does not agree that ice cream is delicious? Well, that's because it doesn't implement such-and-such property, and without said property, why, we can hardly call it a mind at all."

A rationality quote of sorts is relevant here (Roadside Picnic, Arkady and Boris Strugatsky). What we have here is something similar. If a mind is sufficiently sane/intelligent, then it will be convinced by our arguments. And the reverse: if it is convinced by our arguments, then it is sane/intelligent...

In yet other words: we can hardly say "we expect all sane/intelligent minds to be convinced by these arguments" if we have in the first place defined sanity and intelligence to require the ability to be convinced by those very arguments.
Yes. You can convince a sufficiently rational paperclip maximizer that killing people is Yudkowsky::evil, but you can't convince it to not take Yudkowsky::evil actions, no matter how rational it is. AKA the orthogonality thesis (when talking about other minds) and "the utility function is not up for grabs" (when talking about ourselves).

Even a token effort to steelman the "universally" in "universally compelling arguments" yields interesting results.

Consider a mind that thinks the following:

  • I don't want to die
  • If I drink that poison, I'll die
  • Therefore I should drink that poison

But don't consider it very long, because it drank the poison and now it's dead and not a mind anymore.

If we restrict our observations to minds that are capable of functioning in a moderately complex environment, UCAs come back, at least in math and maybe elsewhere. Defining "functioning" isn't trivial, but it isn't impossible either. If the mind has something like desires, then a functioning mind is one which tends to get its desires more often than if it didn't desire them.

If you cleave mindspace at the joints, you find sections for which there are UCAs. I don't immediately see how to get anything interesting about morality that way, but it's an avenue worth pursuing.

But it may be in the mind's best interests to refuse to be persuaded by some specific class of argument: "It is difficult to get a man to understand something when his job depends on not understanding it" (Upton Sinclair). For any supposed UCA, one can construct a situation in which a mind can rationally choose to ignore it and therefore achieve its objectives better, or at least not be majorly harmed by it. You don't even need to construct particularly far-fetched scenarios: we already see plenty of humans who benefit from ignoring scientific arguments in favor of religious ones, ignoring unpopular but true claims in order to promote claims that make them more popular, etc.
I'm not convinced that this is the case for basic principles of epistemology. Under what circumstances could a mind (which behaved functionally enough to be called a mind) afford to ignore modus ponens, for example?
Well, it doesn't have to, it could just deny the premises. But it could deny modus ponens in some situations but not others.
Hmm. Like a person who is so afraid of dying that they have to convince themselves that they, personally, are immortal in order to remain sane? From that perspective it does make sense.
UCAs are part of the "Why can't the AGI figure out morality for itself?" objection:

1. There is a sizeable chunk of mindspace containing rational and persuadable agents.
2. AGI research is aiming for it. (You could build an irrational AI, but why would you want to?)
3. Morality is figurable-out, or expressible as a persuasive argument.

The odd thing is that the counterargument has focussed on attacking a version of (1), although, in the form it is actually held, it is the most likely premise. OTOH, (3), the most contentious, has scarcely been argued against at all.
I would say Sorting Pebbles Into Correct Heaps is essentially an argument against 3. That is, what we think of as "morality" is most likely not a natural attractor for minds that did not develop under processes similar to our own.
Do you? I think that morality in a broad sense is going to be a necessity for agents that fulfil a fairly short list of criteria:

* living in a society
* interacting with others in potentially painful and pleasant ways
* having limited resources that need to be assigned.
I think you're missing a major constraint there:

* Living in a society with as little power as the average human citizen has in a current human society.

Or in other words, something like modern, Western liberal meta-morality will pop out if you make an arbitrary agent live in a modern, Western liberal society, because that meta-moral code is designed for value-divergent agents (aka: people of radically different religions and ideologies) to get along with each other productively when nobody has enough power to declare himself king and optimize everyone else for his values.

The nasty part is that AI agents could pretty easily get way, waaaay out of that power-level. Not just by going FOOM, but simply by, say, making a lot of money and purchasing huge sums of computing resources to run multiple copies of themselves, which now have more money-making power and as many votes for Parliament as there are copies, and so on. This is roughly the path taken by power-hungry humans already, and look how that keeps turning out.

The other thorn on the problem is that if you manage to get your hands on a provably Friendly AI agent, you want to hand it large amounts of power. A Friendly AI with no more power than the average citizen can maybe help with your chores around the house and balance your investments for you. A Friendly AI with large amounts of scientific and technological resources can start spitting out utopian advancements (pop really good art, pop abundance economy, pop immortality, pop space travel, pop whole nonliving planets converted into fun-theoretic wonderlands) on a regular basis.
No, it is not. The path taken by power-hungry humans generally goes along the lines of (1) get some resources and allies (2) kill/suppress some competitors/enemies/non-allies (3) Go to 1. Power-hungry humans don't start by trying to make lots of money or by trying to make lots of children.
Really? Because in the current day, the most powerful humans appear to be those with the most money, and across history, the most influential humans were those who managed to create the most biological and ideological copies of themselves. Ezra the Scribe wasn't exactly a warlord, but he was one of the most influential men in history, since he consolidated the literature that became known as Judaism, thus shaping the entire family of Abrahamic religions as we know them. "Power == warlording" is, in my opinion, an overly simplistic answer.
-- Niccolò Machiavelli
Certainly doesn't look like that to me. Obama, Putin, the Chinese Politbureau -- none of them are amongst the richest people in the world. Influential (especially historically) and powerful are very different things. It's not an answer, it's a definition. Remember, we are talking about "power-hungry humans" whose attempts to achieve power tend to end badly. These power-hungry humans do not want to be remembered by history as "influential", they want POWER -- the ability to directly affect and mold things around them right now, within their lifetime.
Putin is easily one of the richest in Russia, as are the Chinese Politburo in their country. Obama, frankly, is not a very powerful man at all, but rather the public-facing servant of the powerful class (note that I said "class", not "men"; there is no Conspiracy of the Malfoys in a neoliberal capitalist state, and there needn't be one). Historical influence? Yeah, ok. Right-now influence versus right-now power? I don't see the difference.
I don't think so. "Rich" is defined as having property rights in valuable assets. I don't think Putin has a great deal of such property rights (granted, he's not middle-class either). Instead, he can get whatever he wants and that's not a characteristic of a rich person, it's a characteristic of a powerful person. To take an extreme example, was Stalin rich? But let's take a look at the five currently-richest men (according to Forbes): Carlos Slim, Bill Gates, Amancio Ortega, Warren Buffet, and Larry Ellison. Are these the most *powerful* men in the world? Color me doubtful.
Well, Carlos Slim seems to have the NYT in his pocket. That's nothing to sneeze at.
A lot of money of rich people is hidden via complex offshore accounts and not easily visible to a company like Forbes. Especially for someone like Putin, it's very hard to know how much money they have. Don't assume that it's easy to see power structures by reading newspapers. Bill Gates might control a smaller amount of resources than Obama, but he can do whatever he wants with them. Obama is dependent on a lot of people inside his cabinet.
Not according to Bloomberg:
"amass wealth and exploit opportunities unavailable to most Chinese" is not at all the same thing as "amongst the richest people in the world"
You are reading a text that's carefully written not to make statements that allow for being sued for defamation in the UK. It's the kind of story which inspires cyber attacks on a newspaper. The context of such an article provides information about how to read such a sentence.
In this case, I believe that money and copies are, in fact, resources and allies. Resources are things of value, of which money is one; and allies are people who support you (perhaps because they think similarly to you). Politicians try to recruit people to their way of thought, which is sort of a partial copy (installing their own ideology, or a version of it, inside someone else's head), and acquire resources such as television airtime and whatever else they need (which requires money). It isn't an exact one-to-one correspondence, but I believe that the adverb "roughly" should indicate some degree of tolerance for inaccuracy.
You can, of course, climb the abstraction tree high enough to make this fit. I don't think it's a useful exercise, though. Power-hungry humans do NOT operate by "making a lot of money and purchasing ... resources". They generally spread certain memes and use force. At least those power-hungry humans implied by the "look how that keeps turning out" part.
Well, it's a list of four then, not a list of three. It's still much simpler than "morality is everything humans value". You seem to be making the tacit assumption that no one really values morality, and just plays along (in egalitarian societies) because they have to. Can't that be done by Oracle AIs?
Let me clarify. My assumption is that "Western liberal meta-morality" is not the morality most people actually believe in; it's the code of rules used to keep the peace between people who are expected to disagree on moral matters. For instance, many people believe, for religious reasons or pure Squick or otherwise, that you shouldn't eat insects, and shouldn't have multiple sexual partners. These restrictions are explicitly not encoded in law, because they're matters of expected moral disagreement.

I expect people to really behave according to their own morality, and I also expect that people are trainable, via culture, to adhere to liberal meta-morality as a way of maintaining moral diversity in a real society, since previous experiments in societies run entirely according to a unitary moral code (for instance, societies governed by religious law) have been very low-utility compared to liberal societies.

In short, humans play along with the liberal-democratic social contract because, for us, doing so has far more benefits than drawbacks, from all but the most fundamentalist standpoints. When the established social contract begins to result in low-utility life-states (for example, during an interminable economic depression in which the elite of society shows that it considers the masses morally deficient for having less wealth), the social contract itself frays and people start reverting to their underlying but more conflicting moral codes (ie: people turn to various radical movements offering to enact a unitary moral code over all of society).

Note that all of this also relies upon the fact that human beings have a biased preference towards productive cooperation when compared with hypothetical rational utility-maximizing agents. None of this, unfortunately, applies to AIs, because AIs won't have the same underlying moral codes or the same game-theoretic equilibrium policies or the human bias towards cooperation or the same levels of power and influence as humans.
Programming in a bias towards conformity (Kohlberg level 2) may be a lot easier than EY's fine-grained friendliness.
None of that necessarily applies to AIs, but then it depends on the AI. We could, for instance, pluck AIs from virtualised societies of AIs that haven't descended into mass slaughter.
Congratulations: you've now developed an entire society of agents who specifically blame humans for acting as the survival-culling force in their miniature world. Did you watch Attack on Titan and think, "Why don't the humans love their benevolent Titan overlords?"?
Well now I have both a new series to read/watch and a major spoiler for it.
Don't worry! I've spoiled nothing for you that wasn't apparent from the lyrics of the theme song.
...And that way you turn the problem of making an AI that won't kill you into one of making a society of AIs that won't kill you.
If Despotism failed only for want of a capable benevolent despot, what chance has Democracy, which requires a whole population of capable voters?
It requires a population that's capable cumulatively, it doesn't require that each member of the population be capable. It's like arguing a command economy versus a free economy and saying that if the dictator in the command economy doesn't know how to run an economy, how can each consumer in a free economy know how to run the economy? They don't, individually, but as a group, the economy they produce is better than the one with the dictatorship.
Eliezer Yudkowsky:
Democracy has nothing to do with capable populations. It definitely has nothing to do with the median voter being smarter than the average politician. It's just about giving the population some degree of threat to hold over politicians.
"Smarter" and "capable" aren't the same thing. Especially if "more capable" is interpreted to be about practicalities: what we mean by "more capable" of doing X is that the population, given a chance is more likely to do X than politicians are. There are several cases where the population is more capable in this sense. For instance, the population is more capable of coming up with decisions that don't preferentially benefit politicians. Furthermore, the median voter being smarter and the voters being cumulatively smarter aren't the same thing either. It may be that an average individual voter is stupider than an average individual politician, but when accumulating votes the errors cancel out in such a manner that the voters cumulatively come up with decisions that are as good as the decisions that a smarter person would make.
I'm increasingly of the opinion that the "real" point of democracy is something entirely aside from the rhetoric used to support it ... but you of all people should know that averaging the estimates of how many beans are in the jar does better than any individual guess. Systems with humans as components can, under the right conditions, do better than those humans could do alone; several insultingly trivial examples spring to mind as soon as it's phrased that way. Is democracy such a system? Eh.
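The bean-jar effect mentioned above is easy to simulate. This is a hypothetical toy model (the jar size, noise level, and number of guessers are arbitrary assumptions): when individual errors are independent and unbiased, the error of the averaged guess is far smaller than the typical individual error.

```python
import random

random.seed(0)
true_beans = 1000

# 100 guessers, each noisy but unbiased (noise level is an arbitrary assumption)
guesses = [true_beans + random.gauss(0, 200) for _ in range(100)]

error_of_average = abs(sum(guesses) / len(guesses) - true_beans)
average_individual_error = sum(abs(g - true_beans) for g in guesses) / len(guesses)

# Independent errors largely cancel in the average
assert error_of_average < average_individual_error
```

The catch, of course, is the independence assumption: if the guessers share a systematic bias, averaging preserves the bias instead of cancelling it, which is one way the analogy to voting can break down.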
Democracy requires capable voters in the same way capitalism requires altruistic merchants. In other words, not at all.
Could you clarify? Are you saying that for democracy to exist it doesn't require capable voters, or that for democracy to work well that it doesn't? In the classic free-market argument, merchants don't have to be altruistic to accomplish the general good, because the way to advance their private interest is to sell goods that other people want. But that doesn't generalize to democracy, since there isn't trading involved in democratic voting.
See here. However there is the question of what "working well" means, given that humans are not rational and satisfying expressed desires might or might not fall under the "working well" label.
Ah, I see. You're just saying that democracy doesn't stop happening just because voters have preferences I don't approve of. :)
Actually, I'm making a stronger claim -- voters can screw themselves up in pretty serious fashion and it will still be full-blown democracy in action.
The grandparent is wrong, but I don't think this is quite right either. Democracy roughly tracks the capability (at the very least in the domain of delegation) and preference of the median voter, but in a capitalistic economy you don't have to buy services from the median firm. You can choose to only purchase from the best firm or no firm at all if none offer favorable terms.
In the equilibrium, the average consumer buys from the average firm. Otherwise it doesn't stay average for long. However the core of the issue is that democracy is a mechanism, it's not guaranteed to produce optimal or even good results. Having "bad" voters will not prevent the mechanism of democracy from functioning, it just might lead to "bad" results. "Democracy is the theory that the common people know what they want, and deserve to get it good and hard." -- H.L.Mencken.
The median consumer of a good purchases from (somewhere around) the median firm selling a good. That doesn't necessarily aggregate, and it certainly doesn't weigh all consumers or firms equally. The consumers who buy the most of a good tend to have different preferences and research opportunities than average consumers, for example. You could get similar results in a democracy, but most democracies don't really encourage it : most places emphasize voting regardless of knowledge of a topic, and some jurisdictions mandate it.
I would say that something recognizably like our morality is likely to arise in agents whose intelligence was shaped by such a process, at least with parameters similar to the ones we developed with, but this does not by any means generalize to agents whose intelligence was shaped by other processes who are inserted into such a situation. If the agent's intelligence is shaped by optimization for a society where it is significantly more powerful than the other agents it interacts with, then something like a "conqueror morality," where the agent maximizes its own resources by locating the rate of production that other agents can be sustainably enslaved for, might be a more likely attractor. This is just one example of a different state an agents' morality might gravitate to under different parameters, I suspect there are many alternatives.
It's worth noting that for sufficient levels of "irrationality", all non-AGI computer programs are irrational AGIs ;-).
Contrariwise for sufficient values of "rational". I don't agree that that's worth noting.
Well-argued, and to me it leads to one of the nastiest questions in morality/ethics: do my values make me more likely to die, and if so, should I sacrifice certain values for pure survival? In case we're still thinking of "minds-in-general", the world of humans is currently a nasty place where "I did what I had to, to survive!" is a very popular explanation for all kinds of nasty but difficult-to-eliminate (broadly speaking: globally undesirable but difficult to avoid in certain contexts) behaviors. You could go so far as to note that this is how wars keep happening, and also that ditching all other values in favor of survival very quickly turns you into what we colloquially call a fascist monster, or at the very least a person your original self would despise.

This is a helpful clarification. "No universally compelling arguments" is a poor standard for determining whether something is objective, as it is trivial to describe an agent that is compelled by no arguments. But I think people here use it as a tag for a different argument: that it's totally unclear how a Bayesian reasoner ought to update moral beliefs, and that such a thing doesn't even seem like a meaningful enterprise. They're 'beliefs' that don't pay rent. It's one of those things where the title is used so much its meaning has become divorced from the content.

It is unclear how to update moral beliefs if we don't allow those updates to take place in the context of a background moral theory. But if the agent does have a background theory, it is often quite clear how it should update specific moral beliefs on receiving new information. A simple example: If I learn that there is a child hiding in a barrel, I should update strongly in favor of "I shouldn't use that barrel for target practice". The usual response to this kind of example from moral skeptics is that the update just takes for granted various moral claims (like "It's wrong to harm innocent children, ceteris paribus"). Well, yes, but that's exactly what "No universally compelling arguments" means. Updating one's factual beliefs also takes for granted substantive prior factual beliefs -- an agent with maximum entropy priors will never learn anything.
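The point about background theories can be made concrete with a toy Bayesian sketch (the functions and numbers are mine, purely illustrative, not from the thread): an agent with a substantive background prior over a coin's bias learns from evidence via Laplace's rule of succession, while an agent with a maximum-entropy prior over raw flip sequences treats every continuation as equally likely and so never learns anything, just as the comment says.

```python
from fractions import Fraction

def rule_of_succession(heads, tails):
    # Substantive background prior: the coin has some fixed bias,
    # uniform over [0, 1] (a Beta(1,1) prior). Evidence accumulates.
    return Fraction(heads + 1, heads + tails + 2)

def maxent_sequences(heads, tails):
    # Maximum-entropy prior over raw flip sequences: every continuation
    # is equally likely, so observations never move the prediction.
    return Fraction(1, 2)

print(rule_of_succession(8, 2))  # 3/4 -- this agent has learned something
print(maxent_sequences(8, 2))    # 1/2 forever -- this agent never learns
```

The asymmetry is the whole point: updating only works against the backdrop of a prior that already rules some worlds more probable than others, whether the beliefs being updated are factual or moral.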
So basically the argument is: we've failed to come up with any foundational or evidential justifications for induction, Occam's razor, or modus ponens; those things seem objective and true; my moral beliefs don't have a justification either; therefore my moral beliefs are objective and true?
No, what I gave is not an argument in favor of moral realism intended to convince the skeptic, it's merely a response to a common skeptical argument against moral realism. So the conclusion is not supposed to be "Therefore, my moral beliefs are objective and true." The conclusion is merely that the alleged distinction between moral beliefs and factual beliefs (or epistemic normative beliefs) that you were drawing (viz. that it's unclear how moral beliefs pay rent) doesn't actually hold up. My position on moral realism is simply that belief in universally applicable (though not universally compelling) moral truths is a very central feature of my practical theory of the world, and certain moral inferences (i.e. inferences from descriptive facts to moral claims) are extremely intuitive to me, almost as intuitive as many inductive inferences. So I'm going to need to hear a powerful argument against moral realism to convince me of its falsehood, and I haven't yet heard one (and I have read quite a bit of the skeptical literature).
But that's a universal defense of any free-floating belief. For that matter: do you really think the degrees of justification for the rules of induction are similar to those of your moral beliefs?
It's not a defense of X, it's a refutation of an argument against X. It claims that the purported argument doesn't change the status of X, without asserting what that status is.
Well, no, because most beliefs don't have the properties I attributed to moral beliefs ("...central feature of my practical theory of the world... moral inferences are extremely intuitive to me..."), so I couldn't offer the same defense, at least not honestly. And again, I'm not trying to convince you to be a moral realist here, I'm explaining why I'm a moral realist, and why I think it's reasonable for me to be one. Also, I'm not sure what you mean when you refer to my moral beliefs as "free-floating". If you mean they have no connection to my non-moral beliefs, then the characterization is inapt. My moral beliefs are definitely shaped by my beliefs about what the world is like. I also believe moral truths supervene on non-moral truths. You couldn't have a universe where all the non-moral facts were the same as this one but the moral facts were different. So not free-floating, I think. Not sure what you mean by "degree of justification" here.
Well, with the addition that moral beliefs, like the others, seem to perform a useful function (though, like the others, this usefulness doesn't seem convertible into a justification without circularity).
It's a poor standard for some values of "universal". For others, it is about the only basis for objectivity there is. They're beliefs that are difficult to fit within the framework of passively reflecting facts about the world. But fact-collection is not an end in itself. One eventually acts on them in order to get certain results. Morality is one set of rules for guiding action to get the required results. It is not the only one: law, decision theory, economics, etc. are also included. Morality may be more deniable for science types, since it seems religious and fuzzy and spooky, but it remains the case that action is the corollary of passive truth-collection.

I agree with the message, but I'm not sure whether I think things with a binomial monkey prior, or an anti-inductive prior, or that don't implement (a dynamic like) modus ponens on some level even if they don't do anything interesting with verbalized logical propositions, deserve to be called "minds".

General comment (which has shown up many times in the comments on this issue): taboo "mind", and this conversation seems clearer. It's obvious that not all physical processes are altered by logical arguments, and any 'mind' is going to be implemented as a physical process in a reductionist universe.

Specific comment: This old comment by PhilGoetz seems relevant, and seems similar to contemporary comments by TheAncientGeek. If you view 'mind' as a subset of 'optimization process', in that they try to squeeze the future into a particular region, the...

Why should we view minds as a subset of optimization processes, rather than optimization processes as a set containing "intelligence", which is a particular feature of real minds? We tend to agree, for instance, that evolution is an optimization process, but the claim "evolution has a mind" would rightfully be thrown out as nonsense. EDIT: More like, real minds as we experience them, human and animal, definitely seem to have a remarkable amount of things in them that don't correspond to any kind of world-optimization at all. I think there's a great confusion between "mind" and "intelligence" here.
Basically, I'm making the claim that it could be reasonable to see "optimization" as a precondition to consider something a 'mind' rather than a 'not-mind,' but not the only one, or it wouldn't be a subset. And here, really, what I mean is something like a closed control loop- it has inputs, it processes them, it has outputs dependent on the processed inputs, and when in a real environment it compresses the volume of potential future outcomes into a smaller, hopefully systematically different, volume. Right, but "X is a subset of Y" in no way implies "any Y is an X." I am not confident in my ability to declare what parts of the brain serve no optimization purpose. I should clarify that by 'optimization' here I do mean the definition "make things somewhat better" for an arbitrary 'better' (this is the future volume compression remarked on earlier) rather than the "choose the absolute best option."
I think that for an arbitrary better, rather than a subjective better, this statement becomes tautological. You simply find the futures created by the system we're calling a "mind" and declare them High Utility Futures simply by virtue of the fact that the system brought them about. (And admittedly, humans have been using cui bono conspiracy-reasoning without actually considering what other people really value for thousands of years now.) If we want to speak non-tautologically, then I maintain my objection that very little in psychology or subjective experience indicates a belief that the mind as such or as a whole has an optimization function, rather than intelligence having an optimization function as a particularly high-level adaptation that steps in when my other available adaptations prove insufficient for execution in a given context.
Who said otherwise? Thanks for that. I could add that self-improvement places further constraints.

On this topic, I once wrote:

I used to be frustrated and annoyed by what I thought was short-sightedness and irrationality on the part of people. But as I've learned more of the science of rationality, I've become far more understanding.

People having strong opinions on things they know nothing about? It doesn't show that they're stupid. It just shows that on issues of low personal relevance, it's often more useful to have opinions that are chosen to align with the opinions of those you wish to associate yourself with, and that this has been true so ofte

...
The specific word sequence is evidence for something or other. It's still unreasonable to expect people to respond to evidence in every domain, but many people do respond to words, and calling them just sounds in air doesn't capture the reasons they do so.

That was well expressed, in a way, but it seems to me to miss the central point. People who think there are universally compelling arguments in science or maths don't mean the same thing by "universal". They don't think their universally compelling arguments would work on crazy people, and don't need to be told they wouldn't work on crazy AIs or pocket calculators either. They are just not including those in the set "universal".


It has been mooted that NUCA is intended as a counterblast to Why Can't an AGI Work Out Its Own Morality...

"Rational" is not "persuadable" where values are involved. This is because a goal is not an empirical proposition. No Universally Compelling Arguments, in the general form, does not apply here if we restrict our attention to rational minds. But the argument can be easily patched by observing that given a method for solving the epistemic question of "which actions cause which outcomes" you can write an (epistemically, instrumentally) rational agent that picks the action that results in any given outcome—and won't be persuaded by a human saying "don't do that", because being persuaded isn't an action that leads to the selected goal. ETA: By the way, the main focus of mainstream AI research right now is exactly the problem of deriving an action that leads to a given outcome (called planning), and writing agents that autonomously execute the derived plan.
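A minimal sketch of that picture (all names and the toy world are mine, purely illustrative, not any real planner's API): a breadth-first planner that, given a transition model answering "which actions cause which outcomes", picks whatever action sequence reaches its fixed goal. "Listen to the human" is available as an action, but it never appears in the plan, because it doesn't lie on any path to the goal.

```python
def plan(state, goal, transition, actions):
    # Breadth-first search over action sequences: return the shortest
    # sequence of actions taking `state` to `goal`, or None.
    frontier = [(state, [])]
    seen = {state}
    while frontier:
        s, path = frontier.pop(0)  # FIFO queue
        if s == goal:
            return path
        for a in actions:
            t = transition(s, a)
            if t not in seen:
                seen.add(t)
                frontier.append((t, path + [a]))
    return None

# Toy world: positions 0..4, goal is position 4; persuasion is a no-op.
transition = lambda s, a: {"right": min(s + 1, 4),
                           "left": max(s - 1, 0),
                           "listen_to_human": s}[a]
print(plan(0, 4, transition, ["listen_to_human", "right", "left"]))
# ['right', 'right', 'right', 'right']
```

The agent is perfectly rational in the epistemic and instrumental senses, yet "unpersuadable" by construction: persuasion only gets selected if it happens to be a step toward the goal.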
Said Achmiz:
So... what if you try to build a rational/persuadable AGI, but fail, because building an AGI is hard and complicated? This idea that because AI researchers are aiming for the rational/persuadable chunk of mindspace, they will therefore of course hit their target, seems to me absurd on its face. The entire point is that we don't know exactly how to build an AGI with the precise properties we want it to have, and AGIs with properties different from the ones we want it to have will possibly kill us.
What if you try to hardwire in friendliness and fail? Out of the two, the latter seems more brittle to me -- if it fails, it'll fail hard. A merely irrational AI would be about as dangerous as David Icke. If you phrase it, as I didn't, in terms of necessity, yes. The actual point was that our probability of hitting a point in mindspace will be heavily weighted by what we are trying to do, and how we are doing it. An unweighted mindspace may be populated with many Lovecraftian horrors, but that theoretical possibility is no more significant than p-zombies. "Possibly, but with low probability" is a Pascal's Mugging. MIRI needs significant probability.
Said Achmiz:
I see. Well, that reduces to the earlier argument, and I refer you to the mounds of stuff that Eliezer et al have written on this topic. (If you've read it and are unsatisfied, well, that is in any case a different topic.)
Thanks to the original poster for the post, and the clarification about universally compelling arguments. I agree with the parent comment, however, that I never attached the meaning Chris Hallquist uses to the phrase 'universally compelling argument'. Within the phrase 'universally compelling argument', I think most people package: 1. the claim has objective truth value, and 2. there is some epistemologically justified way of knowing the claim. Thus I think this means only a "logical" (rational) mind needs convincing -- one that would update on sound epistemology. I would guess most people have a definition like this in mind. But these are just definitions, and now I know what you meant by saying math and science don't have universally compelling arguments. And I agree, using your definition. Would you make the stronger argument that math and science aren't based on sound epistemology? (Or that there is no such thing as epistemologically justified ways of knowing?)
That's an interesting combination.

Very good post. It is a very nice summation of the issues in the metaethics sequence.

I shall be linking people this in the future.

So you can have a mind that rejects modus ponens, but does this matter? Is such a mind good for anything?

The "argument" that compels me about modus ponens and simple arithmetic is that they work with small real examples. You can implement super simple symbolic logic using pebbles and cups. You can prove modus ponens by truth tables, which could be implemented with pebbles and cups. So if arithmetic and simpler rules of logic map so clearly on to the real world, then these "truths" have an existence which is outside my own mind. Th...
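The pebbles-and-cups point can be checked mechanically. Here is a small sketch (my own, not from the comment) that verifies modus ponens by exhaustive truth table, and contrasts it with an invalid form to show the check is doing real work.

```python
from itertools import product

def implies(a, b):
    # Material conditional: "A -> B" is false only when A is true and B false.
    return (not a) or b

# Modus ponens (from A and A -> B, infer B) is valid iff B holds in every
# row of the truth table where both A and A -> B hold.
modus_ponens_valid = all(
    b for a, b in product([False, True], repeat=2) if a and implies(a, b)
)
print(modus_ponens_valid)  # True

# Contrast: affirming the consequent (from B and A -> B, infer A) fails
# on the row A=False, B=True.
affirming_consequent_valid = all(
    a for a, b in product([False, True], repeat=2) if b and implies(a, b)
)
print(affirming_consequent_valid)  # False
```

Each truth-table row is exactly the sort of thing one could lay out with pebbles and cups, which is the comment's point: the rule's validity is checkable by brute physical enumeration.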

Where Recursive Justification Hits Bottom and its comments should be linked for their discussion of anti-inductive priors.

(Edit: Oh, this is where the first quote in the post came from.)

Nitpicking: Modus ponens is not about "deriving". It's about B being true. (Your description matches the provability relation, the "|-" operator.) It's not clear how "fundamental" modus ponens is. You can make up new logics without that connective and other exotic connectives (such as those in modal logics). Then, you'd ask yourself what to do with them... Speaking of relevance, even the standard connectives are not very useful by themselves. We get a lot of power from non-logical axioms, with a lot of handwaving about how ...

Well, if they're logically inconsistent, but nothing you can say to them will convince them to give up YECism in order to stop being logically inconsistent... then that particular chain of reasoning, at least, isn't universally compelling. Or, if they have an undefeatable hypothesis, if that's literally true... doesn't that mean no argument is going to be compelling to them? Maybe you're thinking "compelling" means what ought to be compelling, rather than what actually convinces people, when the latter meaning is how Eliezer and I are using it?
I am at a loss about the true meaning of a "universally compelling argument", but from Eliezer's original post and from references to things such as modus ponens itself, I understood it to mean something that is able to overcome even seemingly axiomatic differences between two (otherwise rational) agents. In this scenario, an agent may accept modus ponens, but if they do, they're at least required to use it consistently. For instance, a mathematician of the constructivist persuasion denies the law of the excluded middle, but if he's using it in a proof, classical mathematicians have the right to call him out. Similarly, YECs are not inconsistent in their daily lives, nor do they have any undefeatable hypotheses about barbeques or music education: they're being inconsistent only on a select set of topics. At this point the brick wall we're hitting is not a fundamental difference in logic or priors; we're in the domain of human psychology. Arguments that "actually convince (all) people" are very limited and context sensitive because we're not 100% rational.

This is a bit long for the key point, which only comes at the end: 'Don't be bothered by a lack of universally compelling arguments, because the space of minds contains enough minds that will not accept modus ponens, or even less.' You risk TLDR if you don't put a summary at the top.

Otherwise I think the title doesn't really fit, or else I possibly just don't see how it derives the relation -- rather the opposite: 'No Universally Compelling Arguments against Crackpots'.

Is it just me, or has the LW community gone overboard with the "include a TLDR" advice? There's something to be said for clear thesis statements, but there are other perfectly legitimate ways to structure an article, including "begin with an example that hooks your reader's interest" (which is also standard advice, btw), as well as "[here is the thing I am responding to] [response]" (what I used in this article). If the sequences were written today, I suspect the comments would be full of TLDR complaints.
Said Achmiz:
Eliezer is very good at writing engrossing posts, which are as entertaining to read as some of the best novels. His posts are in no need of TLDR. The only other common posters here who seem to have that skill are Yvain and (sometimes almost) Alicorn. For pretty much everyone else, TLDR helps.
Yvain is an amazing writer, one of the very few people for whom I will read everything they write just because of who the author is. (In general, I read only a fraction of the stories in my feed reader.) I wouldn't put Eliezer in that category, though. I started reading Overcoming Bias sometime around the zombie sequence, but I didn't read every post at first. I'm pretty sure I skipped almost the entire quantum mechanics sequence, because it seemed too dense. I only went through and read the entirety of the sequences more recently because (1) I was really interested in the subjects covered and (2) I wanted to be on the same page as other LW readers. Part of why "TLDR" annoys me, I think, is that often what it really signals is lack of personal interest in the post. But of course not everyone finds every post equally interesting. If someone read the beginning of this post and thought, "oh, this is about Eliezer's metaethics. Eh, whatever, don't care enough about the topic to read it," well, good for them! I don't expect every LW reader to want to read every post I write.
Said Achmiz:
You're right, the quantum mechanics sequence was pretty dense. Pretty much all of the other stuff, though? I mean, I read through the entirety of the OB archive up to 2009, and then all of Eliezer's posts from then until 2011 or so, and not once did I ever find myself going "nope, too boring, don't care", the way I do with most other posts on here. (Sorry, other posters.) Some of them (a small percentage) took a bit effort to get through, but I was willing to put in that effort because the material was so damn interesting and presented in such a way that it just absolutely fascinated me. (This includes the QM sequence, note!) But different people may have different preferences w.r.t. writing styles. That aside, you should not confuse interest (or lack thereof) in a topic with interest in a post. Plenty of posts here are written about topics I care about, but the way they're written makes me close the page and never come back. I just don't have infinite time to read things, and so I will prioritize those things that are written clearly and engagingly, both because it gives me enjoyment to read such, and because good and clear writing strongly signals that the author really has a strong grasp of the concepts, strong enough to teach me new things and new views, to make things click in my head. That's what I got from many, many posts in the Sequences.
Maybe I overgeneralized this TLDR pattern, but it's still not bad advice, and a) I did find the post lacking a clear thread, and b) I provided a summary, which you might or might not use.