I have taken the survey.
I just wanted to say thank you for including the links to the TED talk and other actionable info (e.g. which plants to buy and how many per person). I have a tendency to see things like the main post and go "oh, that's interesting," but then never really follow up on them. Knowing that I have a list of which plants to buy was enough additional motivation to make me take the issue more seriously. I'm intending to do a bit more research and get an air quality monitor in the next few days.
Since you mentioned other plants, I'm wondering where I should look to compare the different plant options. My wife said she "didn't want ugly plants" (if possible), and I was also wondering whether there are options that would be easier to care for (I live in the northern US, so I expect there may be >10-week periods when taking a plant outside would be impractical, not to mention unpleasant, since we live in a large apartment building).
I think this is an excellent summary. Having read John L. Mackie's free will argument and Plantinga's transworld depravity free will defense, I don't think a theodicy based on free will can succeed. Trying to define free will such that God can't use his foreknowledge to ensure that everyone acts in a morally good way leads to some very odd definitions of free will that don't seem valuable at all.
You're right about the cost per averted headache, but we aren't trying to minimize the cost per averted headache; otherwise we wouldn't use any drug at all. We're trying to maximize utility. Unless avoiding several hours of a migraine is worth less to you than $5, you should get Drug A. A basic calculation using minimum wage indicates it's worth well more than that, even excluding the unpleasantness of migraines -- and as someone who gets migraines occasionally, I'd gladly pay a great deal more than $5 to avoid them.
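To make the minimum-wage comparison concrete, here's a rough sketch; the $7.25/hr US federal minimum wage and the 4-hour migraine duration are my own illustrative assumptions, not figures from the original:

```python
# Illustrative assumptions only: federal minimum wage and an assumed migraine length.
MIN_WAGE_PER_HOUR = 7.25  # USD/hour (assumption)
MIGRAINE_HOURS = 4        # hours lost to one migraine (assumption)

# Value of the lost time alone, ignoring the unpleasantness of the migraine itself.
time_value = MIN_WAGE_PER_HOUR * MIGRAINE_HOURS

print(time_value)      # 29.0
print(time_value > 5)  # True: even valuing time at minimum wage,
                       # averting the migraine is worth more than $5
```

Even on these deliberately conservative numbers, the time alone is worth roughly six times the $5 price difference.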
I largely agree with this answer. My view is that reductionist materialism implies that names are just a convenient way of discussing similar things; there isn't anything that inherently makes an object a "car" -- it's just a collection of atoms that pattern-matches what we term a "car." I suppose that likely makes me lean toward nominalism, but I find the overall debate generally confused.
I've taken several philosophy courses, and I'm always astonished by the lack of agreement or justification that either side can offer. I think the biggest problem is that many philosophers make some assumption without sufficient justification and then build enormously complex systems on top of it. And since they don't demand rigorous justification for the underlying premises (e.g. Platonic idealism), ridiculous amounts of time end up being wasted learning about all the systems rather than figuring out how to test them for truth (or even checking whether they're analytically meaningless).
Took the survey. It was quite interesting! I'll be curious to see what the results look like . . . .
You could make it an explicit "either . . . or."
I.e. "I think that people who are not made happier by having things either have the wrong things or have them incorrectly."
I agree. For those familiar with RationalWiki, I thought it provided a nice contrasting example, honestly. Eliezer's definition of rationality is (regrettably, in my opinion) rare among the people I encounter using the term, and I think the example is worthwhile for illustrative purposes.
But how do you know if someone wanted to upvote your post for cleverness, but didn't want to express the message that they were mugged successfully? Upvoting creates conflicting messages for that specific comment.
How are you defining morality? If we use a shorthand definition that morality is a system that guides proper human action, then any "true moral dilemmas" would be a critique of whatever moral system failed to provide an answer, not proof that "true moral dilemmas" existed.
We have to make some choice. If a moral system stops giving us any useful guidance when faced with sufficiently difficult problems, that simply indicates a problem with the moral system.
ETA: For example, if I have a completely strict deontological ethics, I may hold both an absolute prohibition on lying and an absolute prohibition on allowing humans to die. That would create a moral dilemma for that system in the classic case of Nazis seeking the Jews I'm hiding in my house, so I'd have to switch to a different ethical system. If I switched to a deontology with a value hierarchy, I could conclude that human life outranks telling the truth to governmental authorities under the circumstances, decide to lie, and solve the dilemma.
I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se. Since I am skeptical of moral realism, that is all the more the case; if morality can't tell us how to act, it's literally useless. We have to have some process for deciding on our actions.