I'm beginning this post with the assumption that there is no "moral reality" — that is, that there is no objective truth about what we should or should not do.
Instead, I will (I hope non-controversially) consider morality to be some set of emotional intuitions possessed by any given individual. These moral intuitions, like any other human quality, were formed through a combination of evolutionary benefit and cultural context. And, as a notable characteristic, these intuitions do not form a particularly self-consistent system. A common example here is the failure to scale: the resources we would expend to do some moral good X three times are not necessarily thrice the amount we would expend...
I’ve often heard about this divide, where people involved in the tech world are a lot more skeptical than the average non-tech person. I’m curious what people think the reason for this is. Is the main claim that people in tech have been lulled into a false sense of security by familiarity? Or perhaps that they look down on safety concerns as coming from a lay audience scared by Terminator or vague sci-fi ideas, without understanding the technology deeply enough?