Lukas_Gloor

Comments

Might humans not be the most intelligent animals?
If the reason for our technological dominance is due to our ability to process culture, however, then the case for a discontinuous jump in capabilities is weaker. This is because our AI systems can already process culture somewhat efficiently right now (see GPT-2), and there doesn't seem to be a hard separation between "being able to process culture inefficiently" and "being able to process culture efficiently" other than the initial jump from not being able to do it at all, which we have already passed.

I keep hearing people say this (the part "and there doesn't seem to be a hard separation"), but I don't intuitively agree! I've spelled out my position here. I have the intuition that there's a basin of attraction for good reasoning ("making use of culture to improve how you reason") that can generate a discontinuity. You can observe this among humans. Many people, including many EAs, don't seem to "get it" when it comes to how to form internal world models and reason off of them in novel and informative ways. If someone doesn't do this, or does it in a fashion that doesn't sufficiently correspond to reality's structure, they predictably won't make original and groundbreaking intellectual contributions. By contrast, other people do "get it," and their internal models are self-correcting to some degree at least, so if you ran uploaded copies of their brains for millennia, the results would be staggeringly different.

SDM's Shortform
This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn't say in that bracket was that 'maybe axiology' wasn't my only guess about what the objective, normative facts at the core of ethics could be.

I'm not sure. I have to read your most recent comments on the EA forum more closely. If I taboo "normative realism" and just describe my position, it's something like this:

  • I confidently believe that human expert reasoners won't converge on their life goals and their population ethics, even after philosophical reflection under idealized conditions. (Essentially for the same reason: I think it's true that if "life goals don't converge," then "population ethics also doesn't converge.")
  • However, I think there would likely be convergence on subdomains/substatements of ethics, such as "preference utilitarianism is a good way to view some important aspects of 'ethics.'"

I don't know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that's allowed if I'm a naturalist normative realist?)

Following Singer in the expanding circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn't occur the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.

Cool! I personally wouldn't call it a "normatively correct rule that ethics has to follow," but I think it's something that sticks out saliently in the space of all normative considerations.

(This still strikes me as exactly what we'd expect to see halfway to reaching convergence - the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we've been working on for longer.)

Okay, but isn't it also what you'd expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes our learned moral intuitions "off distribution." Another intuition is that it's the only domain in ethics where it's ambiguous what "others' interests" refers to. I don't think it's an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it's kind of odd that anyone thought there'd be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between "whether population ethics is underdetermined" and "whether every person should have the same type of life goal." I think "not every person should have the same type of life goal" is a plausible position even just intuitively. (And I have some not-yet-written-out arguments for why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clear-thinking people wouldn't all want to replicate, and I'm confident that I'm not somehow confused about what I'm doing.)

Your case for SFE was intended to defend a view of population ethics - that there is an asymmetry between suffering and happiness. If we've decided that 'population ethics' is to remain undetermined, that is we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can't I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we're going to leave population ethics undetermined?

Exactly! :) That's why I called my sequence a sequence on moral anti-realism. I don't think suffering-focused ethics is "universally correct." The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It's a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth's future light cone.

Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also "in tension," worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as "not more in tension than Democrats versus Republicans." This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to "what do we want to do with earth's future lightcone"). After you've chosen your life goals, that still leaves open the further question "How do you think about other people having different life goals from yours?" That's where preference utilitarianism comes in (if one takes a strong stance on how much to respect others' interests) or where we can refer to "norms of civil society" (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander's archipelago blogpost for inspiring this idea. I think he also had a blogpost on "axiology" that made a similar point, but by that point I might have already found my current position.]

In any case, I'm considering changing all my framings from "moral anti-realism" to "morality is underdetermined." It seems like people understand me much faster if I use the latter framing, and in my head it's the same message.

---

As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:

1. Morality could be underdetermined

2. Moral uncertainty and confidence in strong moral realism are in tension

3. There is no absolute wager for moral realism

(Because, assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – "what we on reflection care about" – doesn't suddenly lose its significance if there's less convergence than we initially thought. Just like I shouldn't like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn't care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)

4. Mistaken metaethics can lead to poorly grounded moral opinions

(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on.)

5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense

(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn't reaching a different conclusion on the same task. Instead, they're doing a different task. I'm interested in all three questions I dissolved ethics into, whereas people who play the game "pick your version of consequentialism and answer every broadly-morality-related question with that" are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)

SDM's Shortform
I think that, on closer inspection, (3) is unstable - unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims.

I agree with that.

The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further - to the absurd conclusion, because 'the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled'. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2) - in which case you cannot arbitrarily halt it.

It sounds like you're contrasting my statement from The Case for SFE ("fit all one's moral intuitions into an overarching theory based solely on intuitively appealing axioms") with "arbitrarily halting the search for coherence" / giving up on ethics playing a role in decision-making. But those are not the only two options: you can have some universal moral principles but leave a lot of population ethics underdetermined. I sketched this view in this comment. The tl;dr is that instead of thinking of ethics as a single unified domain where "population ethics" is just a straightforward extension of "normal ethics," you split "ethics" into a bunch of different subcategories:

  • Preference utilitarianism as an underdetermined but universal morality
  • "What is my life goal?" as the existentialist question we have to answer for why we get up in the morning
  • "What's a particularly moral or altruistic thing to do with the future lightcone?" as an optional subquestion of "What is my life goal?" – of interest to people who want to make their life goals particularly altruistically meaningful

I think a lot of progress in philosophy is inhibited because people use underdetermined categories like "ethics" without making the question more precise.

What if memes are common in highly capable minds?

In this answer on arguments for hard takeoff, I made the suggestion that memes related to "learning how to learn" could be the secret sauce that enables discontinuous AI takeoff. Imagine an AI that absorbs all the knowledge on the internet but doesn't have a good sense of what information to prioritize and how to learn from what it has read. Contrast that with an AI that acquires better skills for how to organize its inner models, making its thinking more structured, creative, and generally efficient. Good memes about how to learn and plan might make up an attractor, and AI designs with the right parameters could home in on that attractor in the same way that "great minds think alike." However, if you're slightly off the attractor and give too much weight to memes that aren't useful for truth-seeking and good planning, your beliefs might resemble those of a generally smart person with poor epistemics, or of someone low on creativity who never has genuine insights.

What are your thoughts on rational wiki

Does anyone know if there were admin/management changes on that site? I remember thinking that the older versions of their articles on LessWrong-related topics were disgusting. I only took a quick look now, but it looks like they adjusted the tone somewhat and maybe changed some of the most uncharitable stuff.

Slate Star Codex and Silicon Valley’s War Against the Media

Interesting. Maybe I'd change my mind if I re-read it, but I want to flag that my first impression of this article was very positive. It seemed to me like the article highlighted several of Scott's qualities and contributions that make him look like the kind and relatable person that he is. And the critical stuff seems like what you'd expect from someone who is trying to present a balanced view (and some of it may well be accurate). There are some clear exceptions: e.g., I didn't like the accusation of (slightly) bad faith on Scott's part in asking SSC commenters to contact the NY Times in a respectful manner, or that the author talked about Damore as though it were so obvious as to not even be worth arguing for that he did something really bad.

I should flag that I only read this very quickly, and I had very pessimistic expectations. Sometimes when you expect something absolutely terrible and get something that's merely bad, you think it's very good. :)

Is Altruism Selfish?

I'm happy to grant you that, when pondering a specific decision, people always choose the option they feel better with in the moment of making the decision. If they have cravings activated, that sense of feeling better will cash out in terms of near-term hedonism (e.g., buying two packs of crisps and a Ben&Jerry's ice cream for dinner). If they make decisions with the brain's long-term-planning module activated, they will make whichever decision they feel most satisfied with as a person (e.g., choosing to do a PhD even though it means years of stress).

No one purposefully makes a decision that predictably makes them feel worse for having made that decision. In that sense, all decisions are made for "self-oriented" reasons. However, that's a trivial truth about the brain's motivational currency, not a philosophical truth about altruism versus selfishness.

Altruism is about taking genuine pride in doing good things for others. That's not what makes altruism "secretly selfish." It's what enables altruism. It also matters to what degree people have a habit of fighting rationalizations and hypocrisy. Just like it feels good to think that you're being virtuous when in reality you're entitled and in the wrong, it also feels good to spot your brain's rationalizations and combat them. Both things feel good, but only one of them contributes to altruistic ideals.

How to learn from a stronger rationalist in daily life?

I recommend finding some kind of goal other than "becoming more rational." Going to a workshop here and there or discussing rationality techniques with someone sounds good, but if that's your primary goal for several months or longer, it IMO risks turning into the failure mode of treating rationality as an end rather than a means. I think you learn most by trying to do things that are important to you.

I strongly agree with the advice of trying to surround yourself with some people you want to learn from.

Why COVID-19 prevention at the margin might be bad for most LWers
We can expect some small regions will make it out with sub 1% but I think there's a 90% chance at least 4% of the US will be antibody positive from exposure (with or without severe symptoms) after a year

That sounds exactly right.

(and a 90% chance no more than 60% will)

I'd say you can go up to 97% for that.

I think the median will be somewhere around 10% of the US population, very roughly, and that's why I disagreed with the OP. It's unlikely I'd change my mind too drastically about those numbers, at least not in the near future or without new info, because I've spent a lot of time forecasting virus questions. :)

Will the world hit 10 million recorded cases of COVID-19? If so when?

There was a Metaculus question that opened in early April about "How many COVID-19 deaths will be recorded in the month of April, worldwide?" The community prediction was 210k (50% CI: 165k – 288k), which seemed little different from just extrapolating the trend of reported deaths. I saw that countries had all gone into lockdown a while back, so I predicted a 75% chance that the numbers would end up below 193k. The question resolved at 184k and I won a lot of points.

Trend extrapolation is only half of what's important. If the trend is foreseeably going to break because circumstances are changing, we need to factor that in. If avturchin is right that recent case numbers have been growing linearly at about 100K cases a day (I didn't look this up), then it'll probably take longer than 60 days to reach 10M confirmed cases. In the majority of locations, R0 is below 1 and many people are recovering (and PCR tests only catch active infections). Of course, case numbers may go up again, which can happen surprisingly fast. Still, I think the 10M-confirmed-cases mark is unlikely to be hit before August. Unfortunately, I suspect that we will hit it at some point later in the year, when cases go out of control again in some parts of the world where there's extensive testing.
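For illustration, here's a minimal sketch of that trend-extrapolation arithmetic in Python. The current case count is a hypothetical placeholder (the comment doesn't state one); the daily rate is avturchin's roughly-linear figure:

```python
# Naive linear extrapolation: days until the 10M confirmed-cases mark.
confirmed_now = 4_000_000   # hypothetical placeholder, not a figure from the comment
target = 10_000_000         # threshold the question asks about
daily_new = 100_000         # assumed roughly constant daily rate

days_at_constant_rate = (target - confirmed_now) / daily_new
print(f"{days_at_constant_rate:.0f} days at a constant rate")  # -> 60 days

# If R0 stays below 1 in most places, daily_new should shrink over time,
# so the actual time to 10M would be longer than this naive estimate.
```

The point of the sketch is just that the naive extrapolation sets a baseline, and the direction in which circumstances are changing tells you which side of that baseline to bet on.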


UPDATE June 12th: Seems like I got this one really wrong. Daily new cases are at 135k now, so a substantial increase in cases.
