In case you haven't realized it, you're being downvoted because your post reads like this is the first thing you've read on this site. Just FYI.
"Universally Preferable Behavior" by Stefan Molyneux, "Argumentation Ethics" by Hans-Hermann Hoppe, and of course Objectivism, to name the most famous ones. Generally, the ones I'm referring to all try to deduce some sort of Objective Ethics, and (surprise) it turns out that property rights are an inherent property of the universe and capitalism is a moral imperative.
Forgive me if you're thinking of some other libertarians who don't have crazy ethical theories. I didn't mean to make gross generalizations. I've just observed that libertarian philosophers who consciously promote their theories of ethics tend to be of this flavor.
Why is the discrimination problem "unfair"? It seems like in any situation where decision theories are actually put into practice, that type of reasoning is likely to be popular. In fact I thought the whole point of advanced decision theories was to deal with that sort of self-referencing reasoning. Am I misunderstanding something?
Maybe "progress" doesn't refer to equality, but to autonomy. It does seem like the progression of social organization generally leads toward individual autonomy and equality of opportunity. Egalitarianism is a nice talking point for politicians, but when we say "progress" we really mean individual autonomy.
Austrian-minded people definitely have some pretty crazy methods, but their economic conclusions seem pretty sound to me. The problem arises when they apply their crazy methods to areas other than economics (see any libertarian theory of ethics; crazy stuff).
I think the correct comparison would be, "since no one can agree on the nature of Earth/Earth's existence, Earth must not exist" but this is ridiculous since everyone agrees on at least one fact about Earth: we live on it. The original argument still stands. Denying the existence of god(s) doesn't lead to any ridiculous contradictions of universally experienced observations. Denying Earth's geometry does.
You are merely objecting to Eliezer's choice of scale. The distances between "intelligences" are pretty arbitrary. Plus, he's using a linear scale, so there's no room for intelligence curves.
I think the DRH quote is pretty out of context, and Eliezer's commentary on it is pretty unfair. DRH has a deeply personal respect for human intelligence. He doesn't look forward to the singularity because he (correctly) points out that it would be the end of humanity. Most SI/LessWrong people accept that and look forward to it anyway, but for Hofstadter the current view of the singularity is an extremely pessimistic vision of the future. Note that this is simply a result of his personal beliefs. He never claims that people are wrong to look forward to superintelligence, brain emulation, and things like that, just that he doesn't. See this interview for his thoughts on the subject.
It's not meant to be "serious philosophy". He's not presenting the ideas in the book as literally true; he's provoking the reader to look at the issues in the book in a different light. Forcing the reader to consider alternative hypotheses, if you will.