Already we have turned all of our critical industries, all of our material resources, over to these . . . things . . . these lumps of silver and paste we call nanorobots. And now we propose to teach them intelligence? What, pray tell, will we do when these little homunculi awaken one day and announce that they have no further need of us?
— Sister Miriam Godwinson, "We Must Dissent"
Also on the old guard problem: Let's say you're one of the luminaries of a field, and you have a lot invested in one particular approach or theory for doing things—maybe you invented this approach and you remain one of the best in the world at it. Now suppose there's a new approach that shows strong signs of being better than the one you're invested in.
If your career is going to end "soon" due to age, then taking the time to learn the new approach and become really good at it—as good as you currently are with your approach—may take up a substantial portion of the remainder of your career. In that case, just from a calculated-risk perspective, it may be in your interest to double down on your current approach even at times when the new approach is clearly the right choice for newcomers. But if you expect your career to last another 200 years, then it's much more likely to be in your interest to keep up with the times.
"Approach" is a very general term here. It might describe "experimental method and type of equipment used", "sub-area of study", "hypothesized method of treatment of a disease", "programming language and editor used", and lots of other possibilities.
I see, interesting. I've tended to talk about Neosporin and Polysporin interchangeably, not knowing the difference and figuring they were similar; the thing I've been using is actually Polysporin.
Looking into it... Wikipedia does say "In 2005–06, Neomycin was the fifth-most-prevalent allergen in patch test results (10.0%)". Is the 10% the number of people with an allergic reaction? (Would there be >10% with nonzero but subclinical reactions?) Also, if there are bacteria that the Neosporin doesn't kill, then might it actually be good for the body to be conducting a heightened immune response? Or would that interfere with wound healing in the common case? Googling suggests the latter is indeed a problem.
I also find that both bacitracin and neomycin have been named Allergen of the Year. Also, Wikipedia on bacitracin says "In 2005–06, it was the sixth-most-prevalent allergen in patch tests (9.2%)." Wow, that is kind of hilarious in the context of this debate. The article also says "[bacitracin] is generally safe when used topically, but in rare cases may cause hypersensitivity, allergic or anaphylactic reactions, especially in patient[s] allergic to neomycin." My impression of the immune system is that it's not too surprising for it to become highly prejudiced against anything new it encounters in open wounds... So is the one actually better than the other? I do find more internet-people advocating for Polysporin on allergen grounds, and I can believe that putting more potential allergens into a wound is more likely to cause a reaction.
I would agree that, in lots of cases (e.g. a paper cut while indoors), there's little need for antibacterials; I mostly think of the moisturizing and "covering the wound" benefits, which would indeed be served by plain petroleum jelly. That said, if you have a tube of the stuff around, I think you might as well include the antibiotics unless you're allergic.
Regarding antibiotic-resistant bacteria, how does overuse of Neo/Polysporin rate against, say, overuse of antibacterial hand soap and other products? I have a feeling that the latter are much larger culprits. I have one tube, roughly thumb-sized, of Polysporin, which I still haven't used up after >10 years (perhaps any active ingredients have expired).
I have a principled stance against lying. It's been several years since the last time I did something that I consider probably-lying; that thing was hastily answering "yes" to the mother of a friend when she asked whether I enjoyed the play her daughter was in (when the truth was "I enjoyed some parts of it, but overall it was kind of meh"); I then partly corrected myself, but then I think she asked, "Well, did you like it overall?" and I think I gave a strained "yes", when "Hmm, I would have to think about it" [it is a tough call whether I liked-more-than-disliked it] was correct; I remain disappointed with my behavior. Anyway, that is the standard I hold myself to.
I hold myself to this standard so that I am the sort of person for whom lying is just not thinkable, and who has zero practice at doing it. (Which hopefully means I'd suck at it if I tried, which means I won't be tempted to do it, and I'll remain in this state. (I'm amused to note that "failing to develop social skills" has been described with similar mechanics.)) Among other reasons, this is particularly valuable to me because I'm unusual in lots of ways, which means that, compared to the average person, I make implausible-seeming statements more frequently, so I have a stronger need for something that would make me credible. I suspect it's to some extent possible for people to recognize "a person for whom lying is abhorrent and Not Done"—I think I've occasionally perceived this in others—and I hope to benefit from that. I have been told a few times, by someone who knew my honesty policy, that it was valuable that they could believe a comforting statement which others in my position might have made falsely.
(Yes, there are dangers. One group of dangers is: deceiving myself, making misleading but not technically false statements, and passing on uncertain information without certainty tags. Another group of dangers is saying true things with unnecessarily inflammatory phrasing, or saying more than I mean to or need to. Also, games like Mafia and The Resistance don't work well for me.)
I don't know about Yair, but at least for me, the problem with choosing to lie is that it destroys the above edifice. To me, "I have a huge aversion to lying in any circumstance, for any reason" is a coherent stance, a Schelling fence; and adding exceptions makes it much less plausible.
That said, I don't think this situation is lying. More here.
I don't think I see a problem with this situation. "Do you want me to kiss your injury?" is simply asking about their preferences, and makes no assertion about the healing powers of the kiss. "Do you want me to kiss it better?" does make a bit of an assertion, but at least at this point, it's clear that it provides comfort to your son—it makes him feel better—so I think the phrasing is ok.
Also, there is research suggesting that saliva on wounds is in fact helpful:
Our results show that human saliva can stimulate oral and skin wound closure and an inflammatory response. Saliva is therefore a potential novel therapeutic for treating open skin wounds.
Licking wounds is useful elsewhere in the animal kingdom; see Wikipedia. My guess for humans would be "it probably helps a bit—though, if Neosporin is available, I'd prefer Neosporin". If it does help, then I think we'd expect baby animals who aren't able to lick the wounded area (or who just don't lick it for some reason) to be adapted to, upon getting injured, cry loudly until a parent comes by and licks their wounds / otherwise attends to them; and humans probably inherited the adaptation.
I'd say it's less likely (though possible, with the "inflammatory response"?) that it helps with bruises or other injuries that don't break the skin. But it's also plausible that babies can't tell the difference between a wound that bleeds and one that doesn't, or that evolution didn't find it worthwhile programming an exception for the latter.
So I suspect that your son's crying followed by his "it's magically better now" response may be an evolved behavior. (Even if saliva did nothing, the general "cry for help when you get hurt until someone tends to your wounds" reaction seems reasonable.) In which case kissing his injury is speaking directly to that evolved part of him.
And Elon Musk (who is ... probably buying Twitter) seems enthused about it. Apparently introduced on the ides of March: https://www.techtimes.com/articles/273093/20220316/twitter-downvote-now-available-excites-elon-musk-others-use.htm
Hmm, this depends on assumptions not stated. I was thinking of the situation where Alice has broken into Bob's house, and there are neighbors who might hear a gunshot and call the cops, and might be able to describe Alice's getaway car and possibly its license plate. In other words, Alice shooting Bob carries nontrivial risk of getting her caught.
If we imagine the opposite, that Alice shooting Bob decreases her chance of getting caught, then, after Bob gives her his stuff, why shouldn't Alice just shoot Bob afterward? In which case why should Bob cooperate? To incentivize Bob, Alice would have to promise that she won't shoot him after he cooperates, rather than threaten him. (And it's harder for an apparently-willing-to-murder-you criminal to make a credible promise than a credible threat.)
So let's flesh out the situation I imagined. If Bob cooperates and then Alice kills him, the cops will seriously investigate the murder. If Bob cooperates and Alice leaves him tied up and able to eventually free himself, then the cops won't bother putting so much effort into finding Alice. Then Bob can really believe that, if he cooperates, Alice won't want to shoot him. Now we consider the case where Bob refuses; does Alice prefer to shoot him?
If she does, then we could say that, if both parties understand the situation, then Alice doesn't need to threaten anything. She may need to explain what she wants and show him her gun, but she doesn't need to make herself look like a madman, a hothead, or otherwise irrational; she just needs to honestly convey information. And Bob will benefit from learning this information; if he were deaf or uncomprehending, then Alice would have just killed him.
Whereas if Alice would rather not shoot Bob, then her attempts to convince Bob will involve either lying to him, or visibly making herself angry or otherwise trying to commit herself to the shoot-if-resist choice. In this case, Bob does not benefit from being in a position to receive Alice's communications; if Bob were clearly deaf / didn't know the language / otherwise couldn't be communicated with, then Alice wouldn't try to enrage herself and would probably just leave. (Technically, given that Alice thinks Bob can hear her, Bob benefits from actually hearing her.)
There is an important distinction to be made here. The question is what words to use for each case. I do think it's reasonably common for people to understand the distinction, and, when they are making a distinction, I think they use "threat" for the second case, while the first might be called "a fact" or possibly "a warning".
For a less violent case, consider one company telling their vendor, "If you don't drop your prices by 5% by next month, then we'll stop buying from you." If that's actually in the company's interest—e.g. because they found a competing seller whose prices are 5% lower—then, again, the vendor is glad to know; but if the company is just trying to get a better deal and really hopes they're not put in a position where they have to either follow through or eat their words, then this is a very different thing. I do think that common parlance would say that the latter is a threat, and the former is a (possibly friendly!) warning.
Incidentally, it's clear that people refer to "a thing that might seriously harm X" as "a threat to X". In the "rational psychopath" case, Alice is a threat to Bob, but her words, her line of communication with Bob, are not—they actually help Bob. In the "wannabe madman" case, Alice's words are themselves a threat (or, technically, the fact that Alice thinks Bob is comprehending her words). Likewise, the communication (perhaps a letter) from the company that says they'll stop buying is itself a threat in the second case and not the first. One can also say that the wannabe-madman Alice and the aggressively negotiating company are making a threat—they are creating a danger (maybe fake, but real if they do commit themselves) where none existed.
Now, despite the above arguments, it is possible that the bare word "threat" is not the best term. The relevant Wikipedia article is called "Non-credible threat". I don't think that's a good name, because if Alice truly is a madman (and has a reputation for shooting people who irritated her, and she's managed to evade capture), then, when Alice tells you to do something or she'll shoot you, it can be very credible. I would probably say "game-theoretic threat".
Though in practice she might need to convince Bob that she, unlike most people, is willing to kill him. Pointing a gun at him would be evidence of this, but I think people would also tend to say that's "threatening"... though waving a gun around might indeed be "trying to convince them that you're irrational enough to carry out an irrational threat". I dunno. In game theory, one often prefers to start with situations in which all parties are rational...
This is true.
I think, if there is any way to interpret any such statements as not being a threat, it would be of the form "I have already made my precommitments; I've already altered my brain so that I assign lower payoffs (due to psychological pain or whatever) to the outcomes where I fail to carry out my threat. I'm not making a new strategic move; I'm informing you of a past strategic move." One could argue that the game is no longer the Ultimatum Game, due to the payoffs not being what they are in the Ultimatum Game.
Of course, both sides would like to do this, and to be "first" to do it. An extreme person in this vein could say "I've altered my brain so that I will reject anything less than 9-1 in my favor", and this could even be true. Two such people would be guaranteed to have a bad time if they ran into one another, and a fairly bad time if they met a dath ilani; but one could choose to be such a person.
If both sides do set up their psychology well in advance of encountering the game, then the strategic moves are effectively made simultaneously. One can then think about the game of "making your strategic move".
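As a toy illustration of the "alter your own payoffs in advance" move, here's a sketch of a precommitted responder in the Ultimatum Game over a $10 split. All numbers (the split size, the size of the psychological "commitment pain") are invented for illustration; only their ordering matters.

```python
# Toy sketch of precommitment in the Ultimatum Game (splitting $10).
# A responder "alters their brain" so that accepting an offer below
# their demand incurs psychological pain, changing the effective payoffs.
# The pain value of 100 is an arbitrary illustrative number.

def responder_utility(offer, demand, commitment_pain):
    """Utility of accepting `offer` for a responder who has precommitted
    to rejecting anything below `demand`."""
    pain = commitment_pain if offer < demand else 0
    return offer - pain

def responder_accepts(offer, demand, commitment_pain=100):
    # Rejecting yields 0; accept iff accepting (net of pain) beats 0.
    return responder_utility(offer, demand, commitment_pain) > 0

# A responder who has committed to "at least 9 of the 10, or I reject":
print(responder_accepts(offer=1, demand=9))  # False: 1 - 100 < 0
print(responder_accepts(offer=9, demand=9))  # True: no pain, 9 > 0

# Two such 9-1 extremists meeting each other: each offers 1, each
# rejects 1, and both walk away with 0 -- the guaranteed bad time.
```

The point of the sketch: given the *modified* payoffs, rejecting a lowball offer is no longer irrational, which is exactly what makes the precommitment (if credible and made first) strategically effective.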
Hmm, do you have examples of that? If a robber holds a gun to someone's head and says "I'll kill you if you don't give me your stuff", that's clearly a threat, and I believe it also fits the game theory definition: most robbers would have at least a mild preference to not shoot the person (if only because of the mess it creates).
There is a mathematically precise definition of "threat" in game theory. It's approximately the one Yair semi-explicitly used above. Alice threatens Bob when Alice says that, if Bob performs some action X, then Alice will respond with action Y, where Y (a) harms Bob and (b) harms Alice. (If one wants to be "mathematical", then one could say that each combination of actions is associated with a set of payoffs, and that "action Y harms Bob" == "[Bob's payoff with Y] < [Bob's payoff with not-Y]".) The threat should successfully deter Bob if, and only if, (1) Bob believes Alice's statement; (2) the harm inflicted on Bob by Y exceeds the benefit he gains from X; and (3) because of (b), Bob believes Alice wouldn't just do Y anyway.
If Alice has an action Z that harms Bob and benefits her, then she can't use it in a threat, because Bob would assume she'd do it anyway. But what she can do is make a promise, that if Bob does what she wants, then she'll do action Q, which (a) helps Bob and (b) harms her; in this case Q would be "refrain from Z".
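The threat condition above, and the reason action Z can't serve as one, can be written down as a simple payoff check. The payoff numbers below are invented purely for illustration; only their ordering matters.

```python
# Sketch of the game-theoretic definition of a threat, per the text:
# Alice's stated response Y to Bob's action X is a threat iff carrying
# out Y (a) harms Bob and (b) harms Alice, relative to her fallback.
# Payoff numbers are made up for illustration.

# Payoffs as (alice, bob) for each (bob_action, alice_response) pair.
payoffs = {
    ("resist", "shoot"):    (-5, -100),  # shooting hurts Bob AND costs Alice
    ("resist", "leave"):    ( 0,    0),
    ("cooperate", "leave"): (10,  -10),  # Bob hands over his stuff
}

def is_threat(payoffs, deterred_action, response, fallback):
    """True iff `response` to `deterred_action` satisfies both (a) and (b)."""
    a_y, b_y = payoffs[(deterred_action, response)]
    a_f, b_f = payoffs[(deterred_action, fallback)]
    return b_y < b_f and a_y < a_f

# "If you resist, I'll shoot" qualifies as a threat here:
print(is_threat(payoffs, "resist", "shoot", "leave"))  # True

# The action-Z case: if shooting *benefited* Alice, it fails condition (b),
# so it can't function as a threat -- Bob would assume she'd do it anyway.
payoffs_z = {
    ("resist", "shoot"): (5, -100),
    ("resist", "leave"): (0,    0),
}
print(is_threat(payoffs_z, "resist", "shoot", "leave"))  # False
```

Condition (b) is what makes a threat a threat rather than a mere warning: Alice is claiming she'll do something that, at that point, she'd rather not do.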
Of course, carrying out a threat or promise is by definition irrational. But being able to change others' behavior is useful, so that's what creates evolutionary value in emotional responses like anger/revenge, gratitude/obligation, etc., and other methods of self-compulsion.
(I learned this from the book "Game Theory and Strategy" by Straffin, but you can see the same definitions given in e.g. http://pi.math.cornell.edu/~mec/2008-2009/Anema/stategicmoves.htm .)
I would be surprised if dath ilan didn't have the base concepts of game-theoretic threats and promises; and if they did, then I'm not sure what other names they would use for them. I'm not certain about this (and have only read one dath ilan story and it wasn't "mad investor chaos"), but I suspect the authors would avoid giving new definitions of terms from Earth economics and game theory that already have precise definitions.