Even a more sane and more continuously distributed measure could yield that result, depending on how you fit the scale. If you measure the likelihood of making a mistake (so zero would be a perfect driver, and one a rabid lemur), I expect the distribution to be hella skewed. Most people drive in a sane way most of the time. But it's the few reckless idiots you remember - and so does every single one of the thousand other drivers who had the misfortune to encounter them. It would not surprise me if driving mistakes followed more-or-less a Pareto distribution.
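A quick toy simulation of that intuition (the Pareto shape parameter and sample size here are arbitrary choices for illustration, not fitted to any accident data):

```python
import random

random.seed(0)

# Toy model: each driver has a Pareto-distributed "mistake rate"
# (shape alpha = 1.5 is an assumption, as is the population size).
rates = sorted((random.paretovariate(1.5) for _ in range(10_000)), reverse=True)

total = sum(rates)
worst_1pct = sum(rates[:100])  # the 100 most mistake-prone drivers
share = worst_1pct / total
print(f"Worst 1% of drivers account for {share:.0%} of all mistakes")
```

With a heavy tail like that, the handful of reckless idiots really do dominate the totals, even though the typical driver is fine.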
There probably was a time when killing Hitler had a significant chance of ending the war by enabling peace talks (allowing some high-ranking German generals/politicians to seize power while plausibly denying having wanted this outcome). The window might have been short, and probably a bit after '42, though. I'd guess any time between the Battle of Stalingrad (where Germany stopped winning) and the Battle of Kursk (which made Soviet victory inevitable) should've worked - everyone involved should rationally prefer white peace to the very real possibility of a bloody stalemate. Before, Germany would not accept. Afterwards, the Soviets wouldn't.
Yup. Layer 8 issues are a lot harder to prevent than even Layer 1 issues :)
While air gaps are probably the closest thing to actual computer security I can imagine, even that didn't work out so well for the guys at Natanz... And once you have systems on both sides of the air gap infected, you can even use esoteric techniques like ultrasound from the internal speaker to open up a low-bandwidth connection to the outside.
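For the curious, here's a toy sketch of what the transmit side of such a channel might look like (all parameters are illustrative assumptions; a real implementation, like the acoustic mesh networking Hanspach and Goetz demonstrated, would also need actual audio I/O, synchronization, and error correction):

```python
import math

SAMPLE_RATE = 44_100   # Hz; standard consumer audio hardware rate
CARRIER = 19_000       # Hz; near the edge of adult hearing (assumed value)
BIT_DURATION = 0.05    # seconds per bit -> ~20 bit/s, hence "low bandwidth"

def modulate(bits):
    """On/off keying: emit the carrier for 1-bits, silence for 0-bits."""
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    samples = []
    for bit in bits:
        for i in range(samples_per_bit):
            t = i / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * CARRIER * t) if bit else 0.0)
    return samples

signal = modulate([1, 0, 1, 1])
```

The point is just how little hardware this needs: any speaker and microphone pair that can handle ~19 kHz is a potential bridge across the gap.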
And some people would like to make it sit down and write "I will not conjure up what I can't control" a thousand times for this. But I, for one, welcome our efficient market overlords!
Where did you get the impression that European countries do this on a large enough scale to matter*? There are separate bike roads in some cities, but they tend to end abruptly and lead straight into traffic at places where nobody expects cyclists to appear or show similar acts of genius in their design. If you photograph just the right sections, they definitely look neat. But integrating car and bike traffic in a crowded city is a non-trivial problem; especially in Europe where roads tend to follow winding goat paths from the Dark Ages and are way too nar...
I know you intended your comment to be a little tongue-in-cheek, but it is actual energy, measured in joules, we're talking about. Exerting willpower depletes blood glucose.
I don't know of studies indicating that introverts burn glucose faster than extraverts when socializing, but that seems like a pretty straightforward thing to measure, and I'd look forward to the results. At least I can tell from personal experience that I need to exert willpower to stay in social situations (especially when there are lots of people close by or when it's lou...
There's another argument I think you might have missed:
Utilitarianism is about being optimal. Instinctive morality is about being fail-safe.
Implicit in all decisions is a nonzero probability that you are wrong. Once you take that into account, having some "hard" rules like not agreeing to torture here (or in other dilemmas), not pushing the fat guy onto the tracks in the trolley problem, etc., can save you from making horrible mistakes at the cost of slightly suboptimal decisions. Which is, incidentally, how I would want a friendly AI to decide as well...
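To put rough numbers on it (all values made up): if your utilitarian calculation is wrong with probability p, and being wrong means catastrophe, even a small p can flip the sign of the expected value of acting versus following the hard rule:

```python
# Toy expected-value sketch of the fail-safe argument. The hard rule forgoes
# the gain, so its payoff is 0; acting yields `gain` if your reasoning is right
# and `-loss` if it is wrong (with probability p_wrong). All numbers assumed.
def act_is_better(p_wrong, gain, loss):
    """True iff the expected value of acting beats the hard rule's zero."""
    return (1 - p_wrong) * gain - p_wrong * loss > 0

# Even a 1% chance of being wrong forbids acting when the downside is big enough:
# 0.99 * 5 - 0.01 * 1000 = -5.05 < 0
print(act_is_better(p_wrong=0.01, gain=5, loss=1000))  # False
```

The asymmetry between "slightly suboptimal" and "horrible mistake" does all the work here.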
Exactly. Stocks are almost always better long-term investments than anything else (if mixed properly; single points of failure are stupid). The point of mixing in "slow" options like bonds or real estate is that it gives you something to take money out of when stocks are low (and to replenish when stocks are high). That may look suboptimal, but it still beats the alternatives of borrowing money to live on or selling off stocks you expect to rise mid-term. The simulation probably does a poor job of reflecting that.
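A minimal sketch of that withdrawal rule (the bucket names, thresholds, and numbers are all made up for illustration, not taken from the simulation in question):

```python
# Take living expenses from the "slow" bucket while stocks trade below their
# long-run trend, and from stocks otherwise, so you never sell stocks low.
def withdraw(portfolio, expenses, stock_price, trend_price):
    """Mutates portfolio dict {'stocks': ..., 'bonds': ...}; returns bucket used."""
    if stock_price < trend_price and portfolio['bonds'] >= expenses:
        portfolio['bonds'] -= expenses
        return 'bonds'
    portfolio['stocks'] -= expenses
    return 'stocks'

p = {'stocks': 100_000, 'bonds': 20_000}
src = withdraw(p, 2_000, stock_price=80, trend_price=100)  # stocks are down
print(src)  # bonds
```

A real version would also rebalance back into bonds in good years; the point is only that the "slow" bucket exists so you're never a forced seller.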
Intelligence is basically how quickly you learn from experience, so being smart should allow you to get to the same level with much less time put in (which seems to be what the OP is hinting at). I'd also expect diminishing returns, especially if you always socialize with the same (type of) people. At some point, each social group (or even every single person) becomes a skill of its own. Once your generic social skills are at an acceptable level, pick your specializations carefully. Life is too short to waste it on bad friends.
My thoughts exactly. The first commandment of multiclassing in 3rd Edition is "Thou shalt not lose caster levels". Also, Wizards are easily the most OP base class, if played well. Multiclassing them into anything without wizard spell progression is just a waste.
OTOH, using gestalt rules to make a Wizard//Rogue isn't half bad, even if a little short on HP and proficiencies. I prefer Barbarian or even the much ridiculed Monk in place of the Rogue.
I suppose you already drew the obvious conclusion, but I still think it's worth spelling out:
The key to people liking you is making sure they feel good when you're around. Causality is secondary.
A quick Google search found this:
Emma Chapman, Simon Baron-Cohen, Bonnie Auyeung, Rebecca Knickmeyer, Kevin Taylor & Gerald Hackett (2006) Fetal testosterone and empathy: Evidence from the Empathy Quotient (EQ) and the “Reading the Mind in the Eyes” Test, Social Neuroscience, 1:2, 135-148, http://dx.doi.org/10.1080/17470910600992239
I can't find a citation for the whole story right now, but as I remember it, it goes something like this: When the first wave of testosterone hits a male fetus, it kills off well over 80% of the brain cells responsible for e...
Only say things that can be heard. If you can anticipate that you are too many inferential steps away, you should talk about something else. Which means in this case: Be patient and build their knowledge from the bottom, not from the top.
If you have already started and notice the problem too late, yeah, you're kinda screwed. The honest answer seems pretty rude, and not saying anything is worse. I'd probably try to salvage what I still can by saying something along the lines of "I know this is a complicated and confusing issue, and it takes a while to ...
There is also something else going on here, which I realized after learning about personality types, especially Jung's theories and the Myers-Briggs Type Indicator. One dimension separates people by their primary mode of perceiving the world (Sensing vs. iNtuitive): the former collect individual facts and strictly follow isolated rules, while the latter always look for the generalized principle behind the facts and question the origin and sense of rules.
These two types have a lot of trouble understanding each others' way of thinking and frequ...
Then he asked the wrong question. Straight up asking "Ougi, why did you decide on a formal dress code when this apparently has no meaning for your teachings?" is a different question from "Does wearing robes make us a cult?", and shows a different understanding of what the robes mean. The answer would still be deliberately confusing and enigmatic, but that's kinda the whole point of a koan.
Danger, wild speculation ahead: I'd assume it has something to do with the saying "Engineers can't lie." I can imagine that constantly experiencing how doing things in violation of reality leads to failure, while at the same time hearing politicians lie pretty much every time they open their mouths and seeing them get elected again and again (or otherwise fail to fail), would make quite a few of them seriously fed up with the current government in particular and humanity in general. Some less stable personalities might just want to watch the world burn at that point. Which should make them recruitable as terrorists, given the right sales pitch.
It's probably one of the many useful functions of the court jester :)