This is a late reply, but in some societies, intentionally not killing your enemy is the point. See counting coup in the Americas. https://en.m.wikipedia.org/wiki/Counting_coup
If both sides are obeying a warrior code that lets you gain prestige but limits lethality, then you don't want to defect by escalating the lethality of your attacks, because then your enemies might escalate too. Warrior codes are often part of a prisoner's dilemma.
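To make the prisoner's-dilemma framing concrete, here is a minimal toy sketch (the payoff numbers and the tit-for-tat strategy are my own illustrative assumptions, not anything from the comment above): two rival bands repeatedly choose between counting coup and killing, and simple reciprocity keeps both sides at low lethality.

```python
# Toy iterated prisoner's dilemma: two rival bands choosing, each raid,
# between counting coup (cooperate: low lethality) and killing (defect:
# escalate). Payoff numbers are illustrative assumptions, not real data.

PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("coup", "coup"): (3, 3),   # both gain prestige, few deaths
    ("coup", "kill"): (0, 5),   # I restrain, they escalate: worst for me
    ("kill", "coup"): (5, 0),   # a one-off escalation pays...
    ("kill", "kill"): (1, 1),   # ...but mutual escalation is costly for both
}

def tit_for_tat(opponent_history):
    """Count coup first, then copy whatever the opponent did last."""
    return "coup" if not opponent_history else opponent_history[-1]

def play(rounds=10):
    a_hist, b_hist = [], []
    a_score = b_score = 0
    for _ in range(rounds):
        a_move = tit_for_tat(b_hist)  # A responds to B's past behavior
        b_move = tit_for_tat(a_hist)  # B responds to A's past behavior
        pa, pb = PAYOFFS[(a_move, b_move)]
        a_score += pa
        b_score += pb
        a_hist.append(a_move)
        b_hist.append(b_move)
    return a_score, b_score

if __name__ == "__main__":
    print(play())  # both sides keep counting coup: (30, 30) over ten raids
```

Because the game repeats and each side can punish escalation next time, neither side gains by being the first to kill, which is (roughly) why the code holds.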
Put a scarf or neck warmer over your mouth. Your throat will thank you. If that's too warm, you can use one of those little medical masks or chew a piece of gum.
P.S. I stand by everything I said, but I'm pretty sure I only finished the reply so I could make an Alienizer pun.
To explain my perspective, let me turn your example around by using a fictional alien species called humans. Instead of spending their childhoods contemplating the holy key, these humans would spend some of their childhoods being taught to recognize a simple shape with three sides, called a 'triangle'.
To them, when the blob forms a holy key shape, it would mean nothing, but if it formed a triangle they would recognize it immediately!
Your theory becomes simpler when you have a triangle, a key, a lock that accepts a triangle, and a lock that accepts a holy key. The key and the triangle are the theories and therefore feel intrinsic. The locks (our brains) are the reality and therefore feel extrinsic.
We want to have a true theory of morality (a true holy key ;)). But the only tool we have to deduce the shape of the key is our own moral intuitions. Alienizer's theory of metaethics is that you keep looking at the evidence until you have a theory that is satisfying on reflection and at no point requires you to eat babies or anything like that. Some people find the bottom-up approach to ethics unsatisfying, but we've been trying the top-down approach (propose a moral system, then see if it works perfectly in the real world) for a long time without success.
I think this should satisfy your intuitions: our brains seem to accept a morality key because they are locks; morality doesn't change even if your mind changes (a changed mind would just fit a new key); our morality was shaped by evolution yet still feels both abstract and real. It's also why I think calling Alienizer a relativist is silly. He made it his job to figure out whether morality is triangle shaped, key shaped, or 4th-dimensional-croissant shaped, so he could build an AI with ethics of that shape.
Humans value some things more than others. Survival is the bedrock human value (yourself, your family, your children, your species), followed by things like pleasure, the lives of others, and the lives of animals. Every human weighs these things a little differently, and we're all bad at the math. But on average most humans weigh the important things about the same. There is a reason Eliezer is able to keep going back to the example of saving a child.
Most acne medications work by drying out your skin, not by being antibacterial or antiviral. The infections are a symptom of greasy skin, and I think they claim that skin produces less grease when it has the right bacteria on it. But that still leaves me only 50% confident that it would work under optimal conditions.
Their ads say that AO spray makes your sweat smell less bad and helps clear up acne. I've had zits since I hit puberty, and a product that cuts down on the amount of caustic chemicals I need to rub all over my body would be great. I also commute to work by bicycle in 100+ degree Fahrenheit weather, and my office has no shower, so if AO actually cuts my BO then it might be a good investment.
I'm not sure if I'm reading you right, but if I am, you're saying that it takes a long time to kick in, so I'd need to give up bathing while I wait for it to do so. That would also mean giving up swimming, since the spray isn't much use for replacing good bacteria that keep getting stripped away by swimming in a chlorinated pool and then showering off the chlorine.
I did read one article (after I posted) where the reporter skipped showering for a month then took one shower and washed it all away (according to the bacterial swabs he took).
Does it work as advertised? Does it kind of work, but only a little bit? Is it basically a really expensive placebo? These are the kinds of questions I'd want answers to. I doubt anyone here knows about this product specifically, but maybe someone knows of a site like crazymeds.com for health stuff.
Continuing the thread from here: https://deathisbad.substack.com/p/ea-has-a-pr-problem-in-that-it-cares/comments
I agree with you that an AI programmed exactly the way you describe is doomed to fail. What I don't understand is why you think any AI MUST be made that way.
Some confusions of mine:

- There is not a real distinction between instrumental and terminal goals in humans. This seems untrue to me? I seem to have terminal goals/desires, like hunger, and instrumental goals, like going to the store to buy food. Telling me that terminal goals don't exist seems to prove too much. Are you saying that complex goals in human brains, like "don't let humanity die", are in practice instrumental goals built out of simpler desires?
- Because humans don't 'really' have terminal goals, it's impossible to program them into AIs?
- AIs can't be made to have 'irrational' goals, like caring about humans more than themselves. This also seems to prove that humans don't exist? Can't humans care about their children more than themselves? Couldn't AIs be made to value humans the way humans value their children? Or more?
To choose an inflammatory argument, a gay man could think it's irrational for him to want to date men, because that doesn't lead to him having children. But that won't make him want to date women. I have lots of irrational desires that I nevertheless treasure.