Comments

I read A Fire Upon the Deep a few years ago, and even back then I found it highly prescient. Now I'll take this sad event as an opportunity to read his highly acclaimed prequel A Deepness in the Sky. RIP.

Murder is just a word. ... SBF bites all the bullets, all the time, as we see throughout. Murder is bad because look at all the investments and productivity that would be lost, and the distress particular people might feel

You are saying this as if you disagreed with it. In that case, I'd like to vehemently disagree with your disagreement with Sam.

Murder really is bad because of all the bad things that follow from it, not because there is some moral category of "murder" that is always bad. This isn't just "Sam biting all the bullets"; this is basic utilitarianism 101, something I wouldn't even call a bullet. The elegance of this argument, and of arguments like it, is the reason people like utilitarianism, myself included.

Believing this has, in my opinion, morally good consequences. It explains why murdering a random person is bad, but, very importantly, does not imply that murdering a tyrant is bad, or that abortion is bad. Deontology fails those tests very easily, unless you include a lot of moral "epicycles".

To me it feels exactly like the kind of habit we should get into.

Imagine an advanced (possibly alien) civilization, with technology far beyond ours. Do you imagine its members being pestered by bloodsucking parasites? Me neither.

The existence of mosquitoes is an indictment of humanity, as far as I'm concerned.

Is there an actually good argument for why eliminating only disease-carrying mosquitoes is acceptable, rather than just wiping them all out? There is no question that, even without the threat of malaria, creatures like mosquitoes, bed-bugs and horse-flies decrease the quality of life of humans and animals. Would the effects on ecosystems really be so grave that they might plausibly outweigh the enormous benefits of their extinction?

You know the way lots of people get obsessed with Nietzsche for a while? They start wearing black, becoming goth, smoking marijuana, and talking about how like “god is dead, nothing matters, man.” This never happened to me, in part because Nietzsche doesn’t really make arguments, just self-indulgent rambles.

This is objectionable in many ways. To say that one of the most influential German philosophers produced only self-indulgent rambles is a sufficiently outrageous claim that you should be required to provide an argument in its favor.

I don't even disagree entirely. I view Nietzsche more as a skilled essayist than as a philosopher, one who appealed to aesthetics more than to reason alone; but reducing Nietzsche to a sort of 19th-century "influencer" type is ridiculous.

I don't even know where to begin with the list, but here are the main reasons I suspect people, including myself, did not find Oppenheimer straightforwardly enjoyable.

  • I knew what was going to happen in advance because it's historically accurate. That was probably the biggest one for me. Yes, the Bomb is going to work, it's going to be dropped, Oppenheimer will survive, etc.
  • It's three hours of mostly people talking inside rooms, mostly about things I already knew about.
  • The scenes depicting his unhappy love life, especially those involving his wife, weren't interesting to me.
  • It could have been more about the difficulty of making important moral judgements, but instead focused on political aspects of the project.

If we can’t get it together, perhaps we can at least find out who thinks hooking AIs up to nuclear weapons is an intriguing idea, and respond appropriately.

I unironically find it an intriguing idea, because it seems like a potential solution to certain games of nuclear chicken. If I can prove (or at least present a strong argument) that I've hooked up my nuclear weapons to an AI that will absolutely retaliate against certain hostile acts, that seems like a stronger deterrent than the nukes alone.

After all, the nightmare scenario for nuclear arms strategy is "the enemy launches one nuke", because it makes every available action seem bad: retaliating might escalate things further, while not retaliating lets your enemies get away with something they shouldn't be getting away with. A credible automatic commitment removes that dilemma, as the toy model below illustrates.
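Here is a rough sketch of that commitment logic, with entirely hypothetical payoff numbers chosen only to make the backward induction visible; nothing here is a model of real nuclear strategy:

```python
# Toy backward-induction sketch of the commitment argument.
# Payoffs are (attacker, defender); the numbers are illustrative only.
PEACE = (0, 0)
STRIKE_NO_RETALIATION = (5, -10)      # attacker gains, defender absorbs the hit
STRIKE_WITH_RETALIATION = (-50, -20)  # mutual devastation

def defender_response():
    """A defender choosing freely *after* a strike prefers -10 to -20,
    i.e. rationally declines to retaliate."""
    return max([STRIKE_NO_RETALIATION, STRIKE_WITH_RETALIATION],
               key=lambda payoffs: payoffs[1])

def attacker_choice(retaliation_is_automatic: bool) -> str:
    """The attacker strikes exactly when striking beats peace."""
    outcome_if_strike = (STRIKE_WITH_RETALIATION if retaliation_is_automatic
                         else defender_response())
    return "strike" if outcome_if_strike[0] > PEACE[0] else "refrain"

print(attacker_choice(retaliation_is_automatic=False))  # strike
print(attacker_choice(retaliation_is_automatic=True))   # refrain
```

The point of the sketch is only that removing the defender's post-hoc choice changes the attacker's calculation, which is the whole appeal (and the whole danger) of automating retaliation.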

edit: I am of course aware that there are a myriad of things that could easily go wrong here, so please do not take my comment as any kind of advocacy in favor of actually doing this.

Answer by oumuamua, Sep 21, 2022

People talk about "welfare", "happiness" or "satisfaction", but those are intrinsically human concepts

No, they are not. Animals can feel happiness, for instance, as well.

If you use the word "sentient" or synonyms, provide at least some explanation of what do you mean by it.

Something is sentient if there is something it is like to be that thing. For instance, there is a way it is to be a dog, so a dog is sentient. By contrast, most people who aren't panpsychists do not believe that there is anything it is like to be a rock, so most of us wouldn't say of a rock that it is sentient.

Sentient beings have conscious states, each of which is (to a classical utilitarian) desirable to some degree (which might be negative, of course). That is what utilitarians mean by "utility": the desirability of a certain state of consciousness.

I expect that you'll be unhappy with my answer, because "desirability of a certain state of consciousness" does not come with an algorithm for computing it, and that is because we simply do not yet understand how consciousness can be explained in terms of computation.

Of course, having such an explanation would be desirable, but its absence doesn't render utilitarianism meaningless, because humans still have an understanding of approximately what we mean by terms such as "pleasure", "suffering" and "happiness", even if it is merely in an "I know it when I see it" kind of way.

I'm a bit confused about what exactly you mean, and if I attribute to you a view that you do not hold, please correct me.

I think the assumption that there is one correct population ethics is wrong, and that it's totally fine for each person to have different preferences about the future of the universe just like they have preferences about what ice cream is best

This kind of argument has always puzzled me. Your ethical principles are axioms: you define them to be correct, and this should compel you to believe that everybody else's ethics, insofar as they violate those axioms, are wrong. This is where the "objectivity" comes from. It doesn't matter what other people's ethics are; my ethical principles are objectively the way they are, and that is all the objectivity I need.

Imagine there were a group of people who used a set of axioms for counting (the natural numbers) that violated the Peano axioms in some straightforward way, such that they came to a different conclusion about what 5 + 3 is. What do you think the significance of that should be for your mathematical understanding? My guess is: "those people are wrong, and I don't care what they believe. I don't want to needlessly offend them, but that doesn't change anything about how I view the world, or how we should construct our technological devices."
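To make the arithmetic side of the analogy concrete, here is a minimal sketch in Lean 4 (my own illustration, not anything from the thread): under the Peano-style recursive definition of addition, 5 + 3 reduces to 8 by pure computation, so anyone who concludes otherwise must be rejecting one of the underlying axioms rather than making an error within them.

```lean
-- Peano-style addition on the natural numbers is defined by
--   a + 0      = a
--   a + succ b = succ (a + b)
-- Unfolding the second equation three times turns 5 + 3 into 8,
-- so both sides compute to the same numeral and `rfl` closes the goal.
example : 5 + 3 = 8 := rfl
```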

Likewise, if a deontologist says "Human challenge trials for covid are wrong, because [deontological reason]", my reaction to that (I'm a utilitarian) is pretty much the same.

I understand that there are different kinds of people with vastly different preferences for what we should try to optimize for (or whether we should try to optimize for anything at all), but why should that stop me from being persuaded by arguments that honor the axioms I believe in, or why should I consider arguments that rely on axioms I reject?

I realize I'll never be able to change a deontologist's mind using utilitarian arguments, and that's fine. When the longtermists use utilitarian arguments to argue in favor of longtermism, they assume that the recipient is already a utilitarian, or at least that he can be persuaded to become one.

Tin (mostly due to glass production) and phosphorus (for fertilizers) are two more examples of chemical elements that we are running out of rather quickly. Not completely and irreversibly, but enough to cause insane price spikes.

Sand, including the high-purity silica sand needed for chip production, is also running low and isn't easy to replace.
