Zetetic

Comments

Zetetic

I see your point here, though I will say that decision science is ideally a major component of the skill set for anyone in a management position. That said, what's being proposed in the article seems distinct from what you're driving at.

Managing cognitive biases within an institution doesn't necessarily overlap with the sort of measures being discussed. A wide array of statistical tools and metrics isn't directly relevant to, e.g., battling the sunk-cost fallacy or not-invented-here syndrome. More relevant to that problem set would be a strong knowledge of known biases and solid training in decision science and psychology in general.

That isn't to say these two approaches can't overlap; they likely could. For example, stronger statistical analysis does seem relevant, in a very straightforward way, to the issue of over-optimistic projections you bring up.

From what I gather, you'd want a CRO with a complementary knowledge base in relevant areas of psychology alongside the more standard risk-analysis tools. I definitely agree with that.

Zetetic

Just thought I'd point out that actuaries can also do enterprise risk management. Also, a lot of organizations do have a Chief Risk Officer.

Zetetic

I think it's fair to say that most of us here would prefer not to have most Reddit or Facebook users included on this site; the whole "well-kept garden" thing. I like to think LW continues to maintain a pretty high standard when it comes to keeping the sanity waterline high.

Zetetic

This is part of why I tend to think that, for the most part, these works aren't (or if they are, they shouldn't be) aimed at de-converting the faithful, who have already built up a strong memeplex to fall back on, but rather at intercepting and pre-empting young potential converts and people who are on the fence, particularly college kids who have left home and are questioning their belief structure.

The side effect is that something marketed well toward this group (imo, this is the case with "The God Delusion") comes across as shocking and abrasive to older converts (which also plays into its marketability with a younger audience). So there's definitely a trade-off, but getting the numbers needed to determine the actual payoff is difficult.

I think a more effective way to increase secular influence is through lobbying; in the U.S. there is a great need for a well-funded secular lobby to keep things in check. I found one such lobby, but I haven't had the chance to look into it yet.

Zetetic

I've met both sorts: people turned off by "The God Delusion" who really would have benefited from something like "The Greatest Show on Earth", and people who really did seem to come around because of it (both IRL and in a wide range of fora). The unfortunate side effect of successful conversion, in my experience, is that people converted by rhetoric frequently begin to spam similar rhetoric, ineptly, mostly producing increased polarization among their friends and family.

It seems pretty hard to control for enough factors to see what kind of impact popular atheist intellectuals actually have on de-conversion rates and belief polarization (much less the impact of the specific subset of abrasive works), and I can't find any clear numbers on it. Opinion-mining Facebook could potentially be useful here.
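For what it's worth, here's a minimal sketch of the kind of opinion mining I have in mind, assuming a crude keyword-polarity approach. The word lists, scoring rule, and sample data are all made up for illustration; a real analysis would need an actual sentiment model and properly labeled data.

```python
import re

# Crude keyword-based opinion mining over scraped posts. The word lists,
# scoring rule, and sample data are invented for illustration only.

RELIGIOUS = {"faith", "church", "prayer", "scripture", "blessed"}
SECULAR = {"atheist", "agnostic", "deconverted", "skeptic", "secular"}

def polarity(post: str) -> int:
    """Belief-polarity score: secular keywords minus religious keywords."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return len(words & SECULAR) - len(words & RELIGIOUS)

def yearly_trend(posts_by_year: dict[int, list[str]]) -> dict[int, float]:
    """Average polarity per year, as a rough proxy for shifts in belief."""
    return {
        year: sum(polarity(p) for p in posts) / len(posts)
        for year, posts in posts_by_year.items()
        if posts
    }

if __name__ == "__main__":
    sample = {
        2010: ["went to church, the prayer felt blessed"],
        2011: ["reading skeptic blogs lately, feeling pretty agnostic"],
    }
    print(yearly_trend(sample))  # {2010: -3.0, 2011: 2.0}
```

Even something this naive, run over timestamped public posts, would at least give a polarization time series to correlate against book releases.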

Zetetic

First, I do have a couple of nitpicks:

Why evolve a disposition to punish? That makes no sense.

That depends. See here for instance.

Does it make sense to punish somebody for having the wrong genes?

This depends on what you mean by "punish". If by "punish" you mean socially ostracizing and disallowing mating privileges, I can think of situations in which it could make evolutionary sense; although, since we no longer live in our ancestral environment and have since developed a complex array of cultural norms, it no longer makes moral sense.

In any event, what you've written is pretty much orthogonal to what I've said; I'm not defending what you're calling evolutionary ethics (nor am I aware of having indicated that I hold that view; if anything, I took it to be a bit of a strawman). Descriptive evolutionary ethics is potentially useful, but normative evolutionary ethics commits the naturalistic fallacy (as you've pointed out), and I think the Euthyphro argument is fairly weak in comparison to that point.

The view you're attacking doesn't seem to take into account the interplay between genetic, epigenetic, and cultural/memetic factors in how moral intuitions are shaped and can be shaped. It sounds like a pretty flimsy position, and I'm a bit surprised that any ethicist actually holds it. I would be interested if you're willing to cite some people who currently hold the viewpoint you're addressing.

The reason that the Euthyphro argument works against evolutionary ethics is that, regardless of what evolution can teach us about what we do value, it teaches us that our values are not fixed.

Well, really it's more neuroscience that tells us that our values aren't fixed (along with how the valuation works). It also has the potential to tell us to what degree our values are fixed at any given stage of development, and how to take advantage of the present degree of malleability.

Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer.

Of course; under your usage of evolutionary ethics this is clearly the case. I'm not sure how this relates to my comment, however.

Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires

I agree that it's pretty obvious that social reinforcement is important because it shapes moral behavior, but I'm not sure if you're trying to make a central point to me, or just airing your own position regardless of the content of my post.

Zetetic

I'm not sure if it's elementary, but I do have a couple of questions first. You say:

what each of us values to themselves may be relevant to morality

This seems to suggest that you're a moral realist. Is that correct? I think that most forms of moral realism tend to stem from some variant of the mind projection fallacy; in this case, because we value something, we treat it as though it has some objective value. Similarly, because we almost universally hold something to be immoral, we hold its immorality to be objective, or mind independent, when in fact it is not. The morality or immorality of an action has less to do with the action itself than with how our brains react to hearing about or seeing the action.

Taking this route, I would say that not only are our values relevant to morality, but the dynamic system comprising all of our individual value systems is an upper bound on what can be in the extensional definition of "morality", if "morality" is to make any sense as a term. That is, if something is outside of what any of us can ascribe value to, then it is not moral subject matter; furthermore, what we can and do ascribe value to is dictated by neurology.

Not only that, but there is a well-known phenomenon that makes naive (i.e., without input from neuroscience) moral decision making problematic: the distinction between liking and wanting. This distinction crops up in part because the way we evaluate possible alternatives is lossy: we can only use a very finite amount of computational power to try to predict the effects of a decision or of obtaining a goal, and we have to use heuristics to do so. In addition, human valuation is multi-layered: we have at least three valuation mechanisms, and their interaction isn't yet fully understood. See Glimcher et al., "Neuroeconomics and the Study of Valuation". From that article:

10 years of work (that) established the existence of at least three interrelated subsystems in these brain areas that employ distinct mechanisms for learning and representing value and that interact to produce the valuations that guide choice (Dayan & Balleine, 2002; Balleine, Daw, & O'Doherty, 2008; Niv & Montague, 2008).

The mechanisms for choice valuation are complicated, and so are the constraints on human decision-making ability. In evaluating whether an action was moral, it's imperative to avoid setting the criterion "too high for humanity".
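To make that multi-layered picture concrete, here is a toy sketch of three interacting valuation subsystems, loosely following the Pavlovian / habitual (model-free) / goal-directed (model-based) taxonomy used in that literature. The numbers, learning rate, and weighted-sum combination rule are my own illustrative assumptions, not claims about how the brain actually combines these signals.

```python
# Toy model of three interacting valuation subsystems. All numbers, the
# learning rate, and the weighted-sum combination rule are illustrative
# assumptions only.

ALPHA = 0.1  # learning rate for the habitual (model-free) system

pavlovian = {"food": 1.0, "shock": -1.0}      # hardwired stimulus values
habitual = {"food": 0.0, "shock": 0.0}        # cached values, learned from experience
goal_directed = {"food": 0.8, "shock": -0.9}  # values computed from a world model

def update_habitual(option: str, reward: float) -> None:
    """Model-free, TD-style update: nudge the cached value toward the reward."""
    habitual[option] += ALPHA * (reward - habitual[option])

def combined_value(option: str, w_pav=0.2, w_hab=0.3, w_gd=0.5) -> float:
    """One (assumed) way the subsystems could interact: a weighted sum."""
    return (w_pav * pavlovian[option]
            + w_hab * habitual[option]
            + w_gd * goal_directed[option])

# Repeated experience shifts the habitual value while the Pavlovian value
# stays fixed; one way a liking/wanting divergence can arise.
for _ in range(20):
    update_habitual("food", reward=0.5)

print(round(combined_value("food"), 3))  # ~0.732 with these made-up numbers
```

The point of the sketch is just that the systems can disagree: the cached "wanting" value drifts with experience while the hardwired response doesn't, so any single-number notion of "what the agent values" is already a simplification.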

One last thing I'd point out has to do with the argument you link to, because you do seem to be inconsistent when you say:

What we intuitively value for others is not.

Relevant to morality, that is. The reason is that the cited argument rests entirely on intuition about what others value. The hypothetical species in the example is not a human species, but a slightly different one.

I can easily imagine an individual from a species described along the lines of the author's hypothetical reading the following:

If it is good because it is loved by our genes, then anything that comes to be loved by the genes can become good. If humans, like lions, had a disposition to not eat their babies, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom), then these things would not be good. We could not brag that humans evolved a disposition to be moral because morality would be whatever humans evolved a disposition to do.

And being horrified at the thought of such a bizarre and morally bankrupt group. I strongly recommend you read the sequence I linked to in the quote, if you haven't; it's quite an interesting (and relevant) short story.

So, I have a bit more to write but I'm short on time at the moment. I'd be interested to hear if there is anything you find particularly objectionable here though.

Zetetic

I initially wrote up a bit of a rant, but I just want to ask a question for clarification:

Do you think that evolutionary ethics is irrelevant because the neuroscience of ethics and neuroeconomics are much better candidates for understanding what humans value (and therefore for guiding our moral decisions)?

I'm worried that you don't, because the argument you supplied can be augmented to apply there as well: just replace "genes" with "brains". If your answer is a resounding 'no', I have a lengthy response. :)

Zetetic

As I understand it, because T proves in n symbols that "T can't prove a falsehood in f(n) symbols", we could take the specification of R (its program length) and do a formal-verification proof that R will not find any proofs, since R only finds a proof if T can prove a falsehood within g(n) < exp(g(n)) << f(n) symbols. So I'm guessing that the slightly-more-than-n-symbols-long proof is on the order of:

n + Length(proof in T that R won't print, starting from the true statement "T can't prove a falsehood in f(n) symbols")

This would vary somewhat with the length of R and with the choice of T.
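Spelling that estimate out in notation of my own (not from the original post), with |·| counting symbols and π_R naming the hypothetical verification proof that R prints nothing:

```latex
% Illustrative notation only: |.| counts symbols, and \pi_R names the
% hypothetical proof, carried out inside T, that R prints nothing,
% starting from the already-proved premise about f(n).
\[
  \text{total length} \;\approx\; n + \lvert \pi_R \rvert,
  \qquad
  \pi_R :\; \text{``}T\text{ proves no falsehood in } f(n)\text{ symbols''}
  \;\vdash\; \text{``}R\text{ prints nothing''}.
\]
```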

Zetetic

Typically you make a "sink" post with these sorts of polls.

ETA: BTW, I went for the paper. I tend to skim blogs and then skip to the comments. I think the comments make the information content on blogs much more powerful, however.
