Vaughn Papenhausen

Philosophy PhD student. Interested in ethics, metaethics, AI, EA, disagreement/erisology. Former username Ikaxas

Comments

Best open-source textbooks (goal: make them collaborative)?

FYI, the link to your site is broken for those viewing on greaterwrong.com; it's interpreting "--a" as part of the link.

[This comment is no longer endorsed by its author]
SERI ML Alignment Theory Scholars Program 2022

Maybe have a special "announcements" section on the frontpage?

Fundamental Uncertainty: Chapter 2 - Why do words have meaning?

The way I like to think about this is that the set of all possible thoughts is like a space that can be carved up into little territories and each of those territories marked with a word to give it a name.

Probably better to say something like "set of all possible concepts." Words denote concepts; complete sentences denote thoughts.

I'm curious if you're explicitly influenced by Quine for the final section, or if the resemblance is just coincidental.

Also, about that final section, you say that "words are grounded in our direct experience of what happens when we say a word." While I was reading I kept wondering what you would say about the following alternative (though not mutually exclusive) hypothesis: "words are grounded in our experience of what happens when others say those words in our presence." Why think the only thing that matters is what happens when we ourselves say a word?

What The Foucault

Master: Now, is Foucault’s work the content you’re looking for, or merely a pointer?

Student: What… does that mean?

Master: Do you think that the value of Foucault for you comes from the specific ideas he had, or from using him to even consider these two topics?

This put words to a feeling I've had a lot. Often I have some ideas, and use thinkers as a kind of handle to point to the ideas in my head (especially when I haven't actually read the thinkers yet). The problem is that this fools me into thinking that the ideas are developed, either by me or by the thinkers. I like this idea of using the thinkers to notice topics, but then developing the topics yourself, at least if the thinkers don't take those topics in the direction you had in mind to take them.

On a different note, if you're interested in Foucault's methodology, some search terms would be "genealogy" and "conceptual engineering." Here is a LW post on conceptual engineering, and here is a review of a recent book on the topic (which I believe engages with Foucault as well as Nietzsche, Hume, Bernard Williams, and maybe others; I haven't actually read the full book yet, just this review). The book seems to be pretty directly about what you're looking for: "history for finding out where our concepts and values come from, in order to question them."

Bryan Caplan meets Socrates

Yep, check out the Republic; I believe this is in book 5, or if it's not in book 5, it's in book 6.

Is it rational to modify one's utility function?

The received wisdom in this community is that modifying one's utility function is at least usually irrational. The classic source here is Steve Omohundro's 2008 paper, "The Basic AI Drives," and Nick Bostrom gives basically the same argument in Superintelligence, pp. 132-34. The argument is roughly this: imagine you have an AI that is solely maximizing the number of paperclips that exist. Obviously, if it abandons that goal, there will be fewer paperclips than if it maintains that goal. And if it adds another goal, say maximizing staples, then this other goal will compete with the paperclip goal for resources (time, attention, steel, etc.). So again, if it adds the staple goal, there will be fewer paperclips than if it doesn't. So if it evaluates every option by how many paperclips result in expectation, then it will choose to maintain its paperclip goal unchanged. This argument isn't mathematically rigorous, and allows that there may be special cases where changing one's goal may be useful. But the thought is that, by default, changing one's goal is detrimental from the perspective of one's current goals.
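
To make the expected-value comparison concrete, here's a minimal toy sketch in Python (the three options and the numbers are invented purely for illustration, not taken from Omohundro or Bostrom): an agent that scores every option by expected paperclips never prefers abandoning or diluting the paperclip goal.

```python
# Toy model of the goal-preservation argument. All numbers are made up.
# Expected paperclips over the agent's future, conditional on which goal
# the agent carries forward from this point.
EXPECTED_PAPERCLIPS = {
    "keep paperclip goal": 1000,    # all future effort optimizes paperclips
    "add staple goal": 600,         # staples compete for time, attention, steel
    "replace with staple goal": 5,  # paperclips only as an occasional side effect
}

def score(option: str) -> float:
    """Evaluate an option the way the *current* agent does: by expected paperclips."""
    return EXPECTED_PAPERCLIPS[option]

best_option = max(EXPECTED_PAPERCLIPS, key=score)
print(best_option)  # -> "keep paperclip goal"
```

The point is just that options are scored by the current goal, so any option that displaces that goal gets evaluated by the very goal it would displace.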

As I said, though, there may be exceptions, at least for certain kinds of agents. Here's an example. It seems as though, at least for humans, we're more motivated to pursue our final goals directly than we are to pursue merely instrumental goals (which child do you think will read more: the one who intrinsically enjoys reading, or the one you pay $5 for every book they finish?). So, if a goal is particularly instrumentally useful, it may be worth adopting as a final goal in itself in order to increase your motivation to pursue it. For example, if your goal is to become a diplomat, but you find it extremely boring to read papers on foreign policy... well, first of all, I question why you want to become a diplomat if you're not interested in foreign policy, but more importantly, you might be well-served to cultivate an intrinsic interest in foreign policy papers. This is a bit risky: if circumstances change so that the new goal is no longer as instrumentally useful, it may end up competing with your initial goals, as described by the Bostrom/Omohundro argument. But it could work out that, at least some of the time, the expected value of changing your goal for this reason is positive.

Another paper to look at might be Steve Petersen's paper, "Superintelligence as Superethical," though I can't summarize the argument for you off the top of my head.

The ignorance of normative realism bot

I would think the metatheological fact you want to be realist about is something like "there is a fact of the matter about whether the God of Christianity exists." "The God of Christianity doesn't exist" strikes me as an object-level theological fact.

The metaethical nihilist usually draws the line at claims that entail the existence of normative properties. That is, "pleasure is not good" is not a normative fact, as long as it isn't read to entail that pleasure is bad: "pleasure is not good" does not by itself entail the existence of any normative property.

Third Time: a better way to work

Really? I'm American and it sounds perfectly normal to me.

A fate worse than death?

I think this post is extremely interesting, and on a very important topic. As I said elsethread, for this reason, I don't think it should be in negative karma territory (and have strong-upvoted to try to counterbalance that).

On the object level, while there's a frame of mind I can get into where I can see how this looks plausible to someone, I'm inclined to think that this post is more of a reductio of some set of unstated assumptions that lead to its conclusion, rather than a compelling argument for that conclusion. I don't have the time right now to think about what exactly those unstated assumptions are or where they go wrong, but I think that would be important. When I get some more time, if I remember, I may come back and think some more about this.
