All of PhilGoetz's Comments + Replies

The Cluster Structure of Thingspace

No; most philosophers today do, I think, believe that the alleged humanity of 9-fingered instances of *Homo sapiens* is a serious philosophical problem.  It comes up in many "intro to philosophy" or "philosophy of science" texts or courses.  Post-modernist arguments rely heavily on the belief that any sort of categorization which has any exceptions is completely invalid.

The Cluster Structure of Thingspace

I'm glad to see Eliezer addressed this point.  This post doesn't get across how absolutely critical it is to understand that {categories always have exceptions, and that's okay}.  Understanding this demolishes nearly all Western philosophy since Socrates (who, along with Parmenides, Heraclitus, Pythagoras, and a few others, corrupted Greek "philosophy" from the natural science of Thales and Anaximander, who studied the world to understand it, into a kind of theology, in which one dictates to the world what it must be like).

Many philosophers have ... (read more)

Kenshō

I theorize that you're experiencing at least two different common, related, yet almost opposed mental re-organizations.

One, which I approve of, accounts for many of the effects you describe under "Bemused exasperation here...".  It sounds similar to what I've gotten from writing fiction.

Writing fiction is, mostly, thinking, with focus, persistence, and patience, about other people, often looking into yourself to try to find some point of connection that will enable you to understand them.  This isn't quantifiable, at least not to me; but I would ... (read more)

Kenshō

This sounds suspiciously like Plato telling people to stop looking at the shadows on the wall of the cave, turn around, and see the transcendental Forms.

Common knowledge about Leverage Research 1.0

To me, saying that someone is a better philosopher than Kant seems less crazy than saying that saying that someone is a better philosopher than Kant seems crazy.

Isn't the thing Rob is calling crazy that someone "believed he was learning from Kant himself live across time", rather than believing that e.g. Geoff Anders is a better philosopher than Kant?

[+2] Linch (2mo): It's more crazy after you load in the context that people at Leverage think Kant is more impressive than e.g. Jeremy Bentham.
Quantum Russian Roulette

An easy reason not to play quantum roulette is that, if your theory justifying it is right, you don't gain any expected utility; you just redistribute it, in a manner most people consider unjust, among different future yous.  If your theory is wrong, the outcome is much worse.  So it's at the very best a break even / lose proposition.
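To make the arithmetic concrete, here's a toy version of that calculation (the stake, the six equal-measure branches, and the equal weighting of future selves are illustrative assumptions of mine, not details from the post):

```python
# Quantum roulette, toy version: six equal-measure branches, winner takes all.
stake = 100      # utility each future self starts with
branches = 6

eu_before = sum([stake] * branches) / branches        # 100.0

outcomes = [stake * branches] + [0] * (branches - 1)  # one rich self, five broke
eu_after = sum(outcomes) / branches                   # 100.0

print(eu_before, eu_after)  # equal: no expected gain, only redistribution
```

On these assumptions the expected utility is identical before and after; the game only concentrates it in one branch.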

We need a new philosophy of progress

The Von Neumann-Morgenstern theory is bullshit.  It assumes its conclusion.  See the comments by Wei Dai and gjm here.

We need a new philosophy of progress

See the 2nd-to-last paragraph of my revised comment above, and see if any of it jogs your memory.

We need a new philosophy of progress

Republic is the reference. I'm not going to take the hours it would take to give book-and-paragraph citations, because either you haven't read the entire Republic, or else you've read it, but you want to argue that each of the many terrible things he wrote doesn't actually represent Plato's opinion or desire.

(You know it's a big book, right? 89,000 words in the Greek.  If you read it in a collection or anthology, it wasn't the whole Republic.)

The task of arguing over what in /Republic/ Plato approves or disapproves of is arduous and, I think, unnece... (read more)

[0] TAG (2mo): I read the Bloom translation all the way through. Maybe you could tell me which translation you read all the way through.
[-3] TAG (2mo): Yes. Have you? Then I'm not going to believe you.
We need a new philosophy of progress

The most-important thing is to explicitly repudiate these wrong and evil parts of the traditional meaning of "progress":

  • Plato's notion of "perfection", which included his belief that there is exactly one "perfect" society, and that our goal should be to do ABSOLUTELY ANYTHING NO MATTER HOW HORRIBLE to construct it, and then do ABSOLUTELY ANYTHING NO MATTER HOW HORRIBLE to make sure it STAYS THAT WAY FOREVER.
  • Hegel's elaboration on Plato's concept, claiming that not only is there just one perfect end-state, but that there is one and only one path of pro... (read more)
[+2] TAG (2mo): Citation needed. I've read The Republic, and there's nothing remotely like that in it.
Group selection update

Sorry; your example is interesting and potentially useful, but I don't follow your reasoning.  This manner of fertilization would be evidence that kin selection should be strong in Chimaphila, but I don't see how this manner of fertilization is itself evidence that kin selection has taken place.  Also, I have no good intuitions about what differences kin selection predicts in the variables you mentioned, except that maybe dispersion would be greater in Chimaphila because of the greater danger of inbreeding.  Also, kin selection isn't controversial, so I don't know where you want to go with this comment.

Rescuing the Extropy Magazine archives

Hi, see above for my email address. Email me a request at that address. I don't have your email. I just sent you a message.

ADDED in 2021: Some people tried to contact me thru LessWrong and Facebook. I check messages there like once a year.  Nobody sent me an email at the email address I gave above. I've edited it to make it more clear what my email address is.

Debate update: Obfuscated arguments problem

[Original first point deleted, on account of describing something that resembled Bayesian updating closely enough to make my point invalid.]

I don't think this approach applies to most actual bad arguments.

The things we argue about the most are ones over which the population is polarized, and polarization is usually caused by conflicts between different worldviews.  Worldviews are constructed to be nearly self-consistent.  So you're not going to be able to reconcile people of different worldviews by comparing proofs.  Wrong beliefs come in se... (read more)

[+1] TAG (1y): They might well say the same about you. All arguments are based on fundamental assumptions that are necessarily unproven.
[+3] Beth Barnes (1y): When you say 'this approach', what are you referring to?
[+1] TAG (1y): Is the set of real numbers simple or complex? What information does it contain? What information doesn't it contain?
Where do (did?) stable, cooperative institutions come from?

"Cynicism is a self-fulfilling prophecy; believing that an institution is bad makes the people within it stop trying, and the good people stop going there."

I think this is a key observation. Western academia has grown continually more cynical since the advent of Marxism, which assumes an almost absolute cynicism as a point of dogma: all actions are political actions motivated by class, except those of bourgeois Marxists who for mysterious reasons advocate the interests of the proletariat.

This cynicism became even worse with Foucault, who taught people to s... (read more)

The Solomonoff Prior is Malign

"At its core, this is the main argument why the Solomonoff prior is malign: a lot of the programs will contain agents with preferences, these agents will seek to influence the Solomonoff prior, and they will be able to do so effectively."

First, this is irrelevant to most applications of the Solomonoff prior.  If I'm using it to check the randomness of my random number generator, I'm going to be looking at 64-bit strings, and probably very few intelligent-life-producing universe-simulators output just 64 bits, and it's hard to imagine how an alien in a... (read more)

The S. prior is a general-purpose prior which we can apply to any problem. The output string has no meaning except in a particular application and representation, so it seems senseless to try to influence the prior for a string when you don't know how that string will be interpreted.

The claim is that consequentalists in simulated universes will model decisions based on the Solomonoff prior, so they will know how that string will be interpreted.

Can you give an instance of an application of the S. prior in which, if everything you wrote were correct, i... (read more)
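For what it's worth, here is a minimal sketch of the RNG-checking application I have in mind, with zlib's compressed length standing in (my assumption, purely for illustration) for the uncomputable Solomonoff/Kolmogorov quantity:

```python
import os
import zlib

def complexity_proxy(data: bytes) -> int:
    """Crude upper bound on description length: zlib-compressed size.
    (A computable stand-in for the Solomonoff/Kolmogorov quantity.)"""
    return len(zlib.compress(data, level=9))

# 64-bit strings from the generator under test.
sample = os.urandom(8 * 1000)       # 1000 eight-byte (64-bit) draws
patterned = b"\x00" * (8 * 1000)    # an obviously non-random rival

print(complexity_proxy(sample))     # near the raw size: looks incompressible
print(complexity_proxy(patterned))  # tiny: a short program explains it
```

Nothing in this use of the prior gives a simulated consequentialist any foothold: the strings are 64 bits, and their interpretation is fixed by my application, not by them.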
Honoring Petrov Day on LessWrong, in 2020

I think we learned that trolls will destroy the world.

Stupidity as a mental illness

It's only offensive if you still think of mental illness as shameful.

Stupidity as a mental illness

Me: We could be more successful at increasing general human intelligence if we looked at low intelligence as something that people didn't have to be ashamed of, and that could be remedied, much as we now try to look at depression and other mental illness as illness--a condition which can often be treated and which people don't need to be ashamed of.

You: YOU MONSTER! You want to call stupidity "mental illness", and mental illness is a bad and shameful thing!

[+1] HumaneAutomation (1y): I think this whole problem is a bit more nuanced than you seem to suggest here. I can't help but at least tentatively give some credit to the assertion that LW is, for lack of a better term, mildly elitist. To be sure, for perhaps roughly the right reasons, but being elitist in whatever measure tends to be detrimental to the chances of getting your point across, especially if it needs to be elucidated to the very folks you're elitist towards ;) Not many behaviors are judged more repulsive than being made to feel a lesser person... I'd say it's pretty close to a cultural universal.

It's not right to assert that if one does not agree with your suggestion that stupidity is to be seen as a type of affliction of the same type or category as mental illness, one therefore is disparaging mental illness as shameful; this is a false dichotomy. One can disagree with you for other reasons, not in the least for reasons as remote from shame as evolution... it is nowhere close to a given that nature cares even a single bit about whatever might end up being called intelligence. You will note that most creatures seem to have just the right CPU for their "lifestyle", and while it might be easy for us to imagine how, say, a dog might benefit from being smarter, I'd sooner call that a round-about way of anthropomorphizing than a probable truth.

Exhibit B seems to be the most convincing observation that, by the look of things, wanting to "go for max IQ" is hardly on evolution's To-Do list... us, primates, dolphins and a handful of birds aside, most creatures seem perfectly content with being fairly dim and instinct-driven, if the behaviours and habits exhibited by animals are a reliable indication ;) I'll be quiet about the elephant in the room that the vast majority of our important motivations are emotional and non-rational, too... What's more - and I am actually curious what you will respond to this... it could be said that animals, all animals, are more rational than human beings
Group selection update

That's technically true, but it doesn't help a lot. You're assuming one starts with fixation to non-SC in a species. But how does one get to that point of fixation, starting from fixation of SC, which is more advantageous to the individual? That's the problem.

Group selection update

It's not that I no longer endorse it; it's that I replied to a deleted comment instead of to the identical not-deleted comment.

Group selection update

Group selection, as I've heard it explained before, is the idea that genes spread because their effects are for the good of the species. The whole point of evolution is that genes do well because of what they do for the survival of the gene. The effect isn't on the group, or on the individual, the species, or any other unit other than the unit that gets copied and inherited.

Group selection is group selection: selection of groups. That means the phenotype is group behavior, and the effect of selection is spread equally among members of the group. ... (read more)

Group selection update

You're assuming that the benefits of an adaptation can only be linear in the fraction of group members with that adaptation. If the benefits are nonlinear, then they can't be modeled by individual selection, or by kin selection, or by the Haystack model, or by the Harpending & Rogers model, in all of which the total group benefit is a linear sum of the individual benefits.

For instance, the benefits of the Greek phalanx are tremendous if 100% of Greek soldiers will hold the line, but negligible if only 99% of them do. We can guess--though I ... (read more)
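A toy model of the distinction (the payoff numbers and the threshold are my own illustrative assumptions):

```python
# Group fitness as a function of the fraction p of members with the trait:
# linear (a sum of independent individual benefits) vs. sharply nonlinear
# (phalanx-style: near-total compliance or the formation is worthless).
def linear_benefit(p: float) -> float:
    return p

def phalanx_benefit(p: float) -> float:
    return 1.0 if p >= 0.999 else 0.01

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p={p:4}: linear={linear_benefit(p):.2f}  "
          f"phalanx={phalanx_benefit(p):.2f}")
```

The linear column is what individual-selection and kin-selection models can capture; the phalanx column is the kind of payoff structure they can't.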

Group selection update

Group selection, as I've heard it explained before, is the idea that genes spread because their effects are for the good of the species. The whole point of evolution is that genes do well because of what they do for the survival of the gene. The effect isn't on the group, or on the individual, the species, or any other unit other than the unit that gets copied and inherited.

Group selection is group selection: selection of groups. That means the phenotype is group behavior, and the effect of selection is spread equally among members of the group... (read more)

[This comment is no longer endorsed by its author]
[+2] PhilGoetz (2y): It's not that I no longer endorse it; it's that I replied to a deleted comment instead of to the identical not-deleted comment.
How SIAI could publish in mainstream cognitive science journals

Thanks!

I have great difficulty finding any philosophy published after 1960 other than post-modern philosophy, probably because my starting point is literary theory, which is completely confined to--the words "dominated" and even "controlled" are too weak--Marxian post-modern identity politics, which views literature only as a political barometer and tool.

Pascal's Mugging: Tiny Probabilities of Vast Utilities

I think you're assuming that to give in to the mugging is the wrong answer in a one-shot game for a being that values all humans in existence equally, because it feels wrong to you, a being with a moral compass evolved in iterated multi-generational games.

Consider these possibilities, any one of which would create challenges for your reasoning:

1. Giving in is the right answer in a one-shot game, but the wrong answer in an iterated game. If you give in to the mugging, the outsider will keep mugging you and other rationalists until you're all brok... (read more)
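To illustrate possibility 1 with made-up numbers (the probabilities and payoffs below are arbitrary assumptions chosen only to show the structure):

```python
p_true   = 1e-10    # probability the mugger's threat is genuine
at_stake = 1e12     # utility lost if the threat is genuine and you refuse
cost     = 5        # utility cost of handing over your wallet

ev_pay    = -cost                 # -5
ev_refuse = -p_true * at_stake    # -100.0
print(ev_pay > ev_refuse)         # True: in a one-shot game, paying "wins"

# Iterated: word gets out, and the mugger (or imitators) return daily.
days = 10_000
print(ev_pay * days, "vs", ev_refuse)  # -50000.0 vs -100.0: refusing wins
```

Under these numbers, paying beats refusing once, but a mugger who learns you pay and keeps coming back makes paying the losing policy.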

Your overall point is right and important but most of your specific historical claims here are false - more mythical than real.

Free-market economic theory developed only after millennia during which everyone believed that top-down control was the best way of allocating resources.

Free market economic theory was developed during a period of rapid centralization of power, before which it was common sense that most resource allocation had to be done at the local level, letting peasants mostly alone to farm their own plots. To find a prior epoch of deliberate ce... (read more)

Rescuing the Extropy Magazine archives

I scanned in Extropy 1, 3, 4, 5, 7, 16, and 17, which leaves only #2 missing. How can I send these to you? Contact me at [my LessWrong user name] at gmail.com.

[+1] MARIELLA PITTARI (1y): Is there a way you can post the link for the missing issues here? Or perhaps could you email me the link?
Why Bayesians should two-box in a one-shot

I just now read that one post. It isn't clear how you think it's relevant. I'm guessing you think that it implies that positing free will is invalid.

You don't have to believe in free will to incorporate it into a model of how humans act. We're all nominalists here; we don't believe that the concepts in our theories actually exist somewhere in Form-space.

When someone asks the question, "Should you one-box?", they're using a model which uses the concept of free will. You can't object to that by saying "You don't really have free will."... (read more)

[0] ike (4y): It's not just the one post, it's the whole sequence of related posts. It's hard for me to summarize it all and do it justice, but it disagrees with the way you're framing this. I would suggest you read some of that sequence and/or some of the decision theory papers for a defense of "should" notions being used even when believing in a deterministic world, which you reject. I don't really want to argue the whole thing from scratch, but that is where our disagreement would lie.
37 Ways That Words Can Be Wrong

Yep, nice list. One I didn't see: defining a word in a way that is less useful (that conveys less information) and rejecting a definition that is more useful (that conveys more information). Always choose the definition that conveys more information; eliminate words that convey zero information. It's common for people to define words so that they convey zero information: if everything has the Buddha nature, then nothing empirical can be said about what "Buddha nature" means, and the phrase conveys no information.

Along similar lines, always define words so that no other word conveys... (read more)
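The zero-information point can be put in standard information-theoretic terms; a quick sketch (the example fractions are mine):

```python
import math

def bits_conveyed(fraction_satisfying: float) -> float:
    """Bits of information gained on learning a predicate holds,
    when that fraction of all things satisfies it: -log2(fraction)."""
    return max(0.0, -math.log2(fraction_satisfying))  # clamp IEEE -0.0

print(bits_conveyed(0.5))   # 1.0   -- splits the world of things in half
print(bits_conveyed(0.01))  # ~6.64 -- a sharp, informative predicate
print(bits_conveyed(1.0))   # 0.0   -- true of everything, says nothing,
                            #          like "has the Buddha nature"
```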

37 Ways That Words Can Be Wrong

[moved to top level of replies]

[This comment is no longer endorsed by its author]
37 Ways That Words Can Be Wrong

But you're arguing against Eliezer, as "God" and "miracle" were (and still are) commonly-used words, and so Eliezer is saying those are good, short words for them.

[+1] MugaSofer (4y): I don't think so - I think Eliezer's just being sloppy here. "God did a miracle" is supposed to be an example of something that sounds simple in plain English [http://lesswrong.com/lw/o1/entropy_and_short_codes/] but is actually complex [http://lesswrong.com/lw/jp/occams_razor/]:
Fallacies of Compression

Great post! There is also the non-discrete aspect of compression: information loss. English has, according to some dictionaries, over a million words. It's unlikely we store most of our information in English. Probably there is some sort of dimension reduction, like PCA. There is in any case probably lossy compression. This means people with different histories will use different frequency tables for their compression, and will throw out different information when encoding a verbal statement. I think you would almost certainly find that if you measu... (read more)
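As a crude illustration of the frequency-table point, zlib's preset-dictionary feature can stand in for a speaker's prior "history" (the messages and dictionaries below are my own toy examples):

```python
import zlib

def encoded_size(message: bytes, history: bytes) -> int:
    """Bytes needed to encode `message` with a compressor primed on
    `history` (a stand-in for a personal frequency table)."""
    enc = zlib.compressobj(level=9, zdict=history)
    return len(enc.compress(message) + enc.flush())

msg = b"the category has exceptions and that is okay"
print(encoded_size(msg, b"category exceptions okay"))   # primed: shorter
print(encoded_size(msg, b"stock prices rose sharply"))  # unprimed: longer
```

Two listeners primed on different histories assign the same sentence different code lengths, and under a fixed bit budget they will discard different details.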

Why Bayesians should two-box in a one-shot

I don't think that what you need has any bearing on what reality has actually given you. Nor can we talk about different decision theories here--as long as we are talking about maximizing expected utility, we have our decision theory; that is enough specification to answer the Newcomb one-shot question. We can only arrive at a different outcome by stating the problem differently, or by sneaking in different metaphysics, or by just doing bad logic (in this case, usually allowing contradictory beliefs about free will in different parts of the analysis.)

You... (read more)

[0] Vaniver (4y): As far as I can tell, I would pay Parfit's Hitchhiker because of intuitions that were rewarded by natural selection. It would be nice to have a formalization that agrees with those intuitions.

This seems wrong to me, if you're explicitly declaring different metaphysics (if you mean the thing by metaphysics that I think you mean). If I view myself as a function that generates an output based on inputs, and my decision-making procedure being the search for the best such function (for maximizing utility), then this could be considered as different metaphysics from trying to cause the most increase in utility for myself by making decisions, but it's not obvious that the latter leads to better decisions.
Why Bayesians should two-box in a one-shot

I can believe that it would make sense to commit ahead of time to one-box at such an event. Doing so would affect your behavior in a way that the predictor might pick up on.

Hmm. Thinking about this convinces me that there's a big problem here in how we talk about the problem, because if we allow people who already knew about Newcomb's Problem to play, there are really 4 possible actions, not 2:

  • intended to one-box, one-boxed
  • intended to one-box, two-boxed
  • intended to two-box, one-boxed
  • intended to two-box, two-boxed

I don't know if the usual statement o... (read more)
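Enumerating the four strategies under one (assumed) reading--the predictor rewards the *intention* while the payout follows the *act*--makes the wrinkle explicit:

```python
BOX_B = 1_000_000  # opaque box: filled iff the predictor expects one-boxing
BOX_A = 1_000      # transparent box: always present

for intend_one in (True, False):
    for act_one in (True, False):
        contents = BOX_B if intend_one else 0      # predictor reads intent
        payout = contents if act_one else contents + BOX_A
        print(f"intend {'one' if intend_one else 'two'}-box, "
              f"act {'one' if act_one else 'two'}-box: {payout:>9,}")
```

On this reading, "intend to one-box, then two-box" dominates, which is exactly why the usual two-action statement of the problem gets too coarse once players know about it in advance.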

[+2] Vaniver (4y): I don't think this gets Parfit's Hitchhiker right. You need a decision theory that, when safely returned to the city, pays the rescuer even though they have no external obligation to do so. Otherwise they won't have rescued you.
Why Bayesians should two-box in a one-shot

This was argued against in the Sequences and in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory - all the functional decision theory stuff is assuming a deterministic decision process, I think.

It is compatible to believe your actions follow deterministically and still talk about decision theory. It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view, as ... (read more)

[0] ArisKatsaris (4y): So it's the pronouns that matter? If I keep using "Aris Katsaris" rather than "I", does that make a difference to whether the person I'm talking about makes decisions that can be deterministically predicted?

Whether someone can predict your decisions has ZERO relevancy on whether you are the one making the decisions or not. This sort of confusion where people think that "free will" means "being unpredictable" is nonsensical - it's the very opposite. For the decisions to be yours, they must be theoretically predictable, arising from the contents of your brain. Adding in randomness and unpredictability, like e.g. using dice or coinflips, reduces the ownership of the decisions and hence the free will. This is old and tired territory.
[0] ike (4y): Have you read http://lesswrong.com/lw/rb/possibility_and_couldness/ and the related posts, and do you have some disagreement with them?
Announcing the AI Alignment Prize

I think that first you should elaborate on what you mean by "the goals of humanity". Do you mean majority opinion? In that case, one goal of humanity is to have a single world religious State, although there is disagreement on what that religion should be. Other goals of humanity include eliminating homosexuality and enforcing traditional patriarchal family structures.

Okay, I admit it--what I really think is that "goals of humanity" is a nonsensical phrase, especially when spoken by an American academic. It would be a little better ... (read more)

[+1] cousin_it (4y): For example, not turning the universe into paperclips is a goal of humanity.
[+1] entirelyuseless (4y): I considered submitting an entry basically saying this, but decided that it would be pointless since obviously it would not get any prize. Human beings do not have coherent goals even individually. Much less does humanity.
Why Bayesians should two-box in a one-shot

The part of physics that implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions is quantum mechanics. But I don't think invoking it is the best response to your question. Though it does make me wonder how Eliezer reconciles his thoughts on one-boxing with his many-worlds interpretation of QM. Doesn't many-worlds imply that every game with Omega creates worlds in which Omega is wrong?

If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless. If yo... (read more)

[0] Luke_A_Somers (4y): If you find an Omega, then you are in an environment where Omega is possible. Perhaps we are all simulated and QM is optional. Maybe we have easily enough determinism in our brains that Omega can make predictions, much as quantum mechanics ought to in some sense prevent predicting where a cannonball will fly but in practice does not. Perhaps it's a hypothetical where we're AI to begin with, so deterministic behavior is just to be expected.
[0] ike (4y): This was argued against in the Sequences and in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory - all the functional decision theory stuff is assuming a deterministic decision process, I think.

Re QM: sometimes I've seen it stipulated that the world in which the scenario happens is deterministic. It's entirely possible that the amount of noise generated by QM isn't enough to affect your choice (besides for a very unlikely "your brain has a couple bits changed randomly in exactly the right way to change your choice", but that should be way too many orders of magnitude unlikely so as to not matter in any expected utility calculation).
The Ancient God Who Rules High School

Sorry. I've been reading English literary journals and lit theory books for the past year, and the default assumption is always that the reader is a Marxist.

Belief in Belief

The rationalist virtue of empiricism...

I'm not disagreeing with any of the content above, but a note about terminology--

LessWrong keeps using the word "rationalism" to mean something like "reason" or possibly even "scientific methodology". In philosophy, however, "rationalism" is not allied to "empiricism", but diametrically opposed to it. What we call science was a gradual development, over a few centuries, of methodologies that harnessed the powers both of rationalism and empiricism, which had previo... (read more)

What conservatives and environmentalists agree on

I was unfairly inserting in the parentheses my own presumption about why Christians saw the world as having been created perfect. The passage I was talking about from Aquinas did not talk about perfection of the environment.

I'd like to see what Aquinas did say. Have you got a citation? I'm pretty sure that the notion that the world was created imperfect has never been tolerated by the Catholic Church. Asserting that creation was imperfect might even be condemned as Manichaeism. Opinions vary on what happened after the Fall, but I find it unlikely that... (read more)

[+2] entirelyuseless (5y): I know he does make that statement about his opinion being "better and more theological"; however, I don't have the specific citation at the moment. I did find this text from the disputed questions on power: [http://www.corpusthomisticum.org/qdp4.html#59332]

He was not copying Aristotle (since Aristotle thought the world was eternal and would have passed back and forth an infinite number of times between perfection and imperfection), but Augustine. Augustine says that the world was created in an instant, in an imperfect state, but one which contained its perfections in potency. Logically this is even consistent with what actually happened (i.e. Big Bang and evolution). Needless to say, neither of them was thinking of any such detail in giving that general account. Both of them would say that the account in Genesis is true, and in that way avoid heresy. But Augustine's explanation of the text is at any rate extremely metaphorical.
What conservatives and environmentalists agree on

Yep, the argument to justify the imperfection of children, and thus the necessity of growth, is based on Aristotle's notion of perfect and imperfect actualities. Aquinas wrote:

Everything is perfect inasmuch as it is in actuality; imperfect, inasmuch as it is in potentiality, with privation of actuality. ... It is impossible therefore for any effect that is brought into being by action to be of a nobler actuality than is the actuality of the agent. It is possible though for the actuality of the effect to be less perfect than the actuality of the acting c... (read more)
[0] entirelyuseless (5y): If you think you are giving Aquinas's views there, you are mistaken. He says that the opinion that the environment was created imperfect and gradually perfected is "better and more theological" than the opinion that it was created perfect. He also gives a reason for this to happen, namely that by coming to be gradually, the world can participate in causing its own perfection.
What conservatives and environmentalists agree on

I didn't mean to retract this, but to delete it and move the comment down below.

[This comment is no longer endorsed by its author]
What conservatives and environmentalists agree on

Historically, Christians objected strongly to fossil evidence that some species had gone extinct. They said God would not have created species and then let them go extinct.

Perfection is a crucial part of Christian ontology. God's creation was perfect. That means, in the Christian way of thinking, it is unchanging. Read Christian descriptions of God (who is perfect), and "unchanging" is always one of the adjectives. "Unchanging" is a necessary attribute of perfection in Christian theology, and God's creation is necessarily perfect. ... (read more)

[+4] bogus (5y): (AIUI, you should be able to delete a comment after retracting it and refreshing the page, at least provided that no one has replied to it in the meantime.)
[+2] PhilGoetz (5y): Yep, the argument to justify the imperfection of children, and thus the necessity of growth, is based on Aristotle's notion of perfect and imperfect actualities. Aquinas wrote: [http://www.ccel.org/ccel/aquinas/gentiles.iv.xxv.html]

The reason God created humans so that they have to grow from imperfect childhood (lacking the maturity of a complete human) towards a perfect adult state, rather than being adult, is thus so that they may learn virtue, which is the process of striving for perfection. (The environment does not need to learn virtue; therefore it was created perfect.) I don't know whether humans would have borne offspring that were babies if not for the Fall, nor why animals bear babies, if not for the sake of their spiritual growth.
The Ancient God Who Rules High School

The kid says that school is competitive, and that's bad--why can't they all agree to work less hard (presumably so they can have more time to play video games)? "Getting students to accept the reality that they might just not go to the best schools is good, I guess. But unless it also comes with the rallying call of engaging in a full-on socialist revolution, it doesn’t really deal with the whole issue."

This kid is the straw man conservatives present of socialism--the idea that the purpose of labor unions and socialism isn't to have a decent wag... (read more)

[+5] Galap (5y): There's a difference between "working hard" and actually inhumane conditions, which, while I did not experience them in high school, seem to pop up by default in a lot of situations. So I wouldn't be really surprised if it happened in some high schools, because there isn't much defending against it there. So yeah, labor unions having the goal of "not having to work hard" is a protection against a very serious and insidious problem.
[+1] bogus (5y): I'm not sure that there is a consistent "straw man" in a way that's relevant to this post. You might as well say: "See, this kid neatly disproves the other straw man conservatives present of socialism--the idea that the purpose of labor unions and socialism isn't to have decent workloads and working conditions, but just plain greed." Six of one, half a dozen of the other...
[0] lifelonglearner (5y): Hello, I'm the kid. I think the quote is taken out of context: to be clear, I don't actually think that socialism is a good solution (I didn't list it as an actually feasible solution), and it was meant to be humorous.
Against responsibility

I don't see how this follows. Evolutionary psychology provides some explanations for our intuitions and instincts that the majority of humans share but that doesn't really say anything about morality as Is Cannot Imply Ought.

Start by saying "rationality" means satisficing your goals and values. The issue is what values you have. You certainly have selfish values. A human also has values that lead to optimizing group survival. Behavior oriented primarily towards those goals is called altruistic.

The model of rationality presented on LessWron... (read more)

[0] MrCogmor (5y): Rationality means achieving your goals and values efficiently and effectively. This is a false dichotomy. Just because a value is not of negative utility doesn't mean it is optimized to benefit the genes. Scott Alexander for example is asexual, and there are plenty of gay people. GiveWell exists, Peter Singer exists, the Effective Altruism movement exists. They may not be perfect utilitarians, but most rationalists aren't perfect either; neither are most Christians, and they still exist. I finally remembered the Less Wrong meta-ethics sequence, which you should read. This [http://lesswrong.com/lw/rr/the_moral_void/] in particular.
What's up with Arbital?

This sounds great! There is no FAQ on the linked-to website, though. Is Arbital open-source? What are the key licensing terms? How's it implemented? How does voting work?

If we're all supposed to use the same website, there are advantages to that, but I would be less excited about that.

Also, the home page links to https://arbital.com/explore/math, but that page is blank. Er... https://arbital.com/explore/ai_alignment is also blank for me. Perhaps Arbital doesn't work for Chrome on Windows 7 without flash installed.

[+1] Alexei (5y): It's not open sourced. The pages might take a while to load (up to 30 seconds).
Against responsibility

Rational Utilitarianism is the greatest good for the greatest number given the constraints of imperfect information and faulty brains.

No; I object to your claiming the term "rational" for that usage. That's just plain-old Utilitarianism 1.0 anyway; it doesn't take a modifier.

Rationality plus Utilitarianism plus evolutionary psychology leads to the idea that a rational person is one who satisfies their own goals. You can't call trying to achieve the greatest good for the greatest number of people "rational" for an evolved organism.

[0] MrCogmor (5y): Rationality is the art of making better decisions in service to a goal, taking into account imperfect information and the constraints of our mental hardware. When applied to utilitarianism you get posts like "Nobody is perfect, everything is commensurable" [http://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/].

I don't see how this follows. Evolutionary psychology provides some explanations for our intuitions and instincts that the majority of humans share, but that doesn't really say anything about morality, as Is Cannot Imply Ought. Some quotes from the wiki page on evolutionary psychology.
Against responsibility

Benquo isn't saying that these attitudes necessarily follow, but that in practice he's seen it happen. There is a lot of unspoken LessWrong / SIAI history here. Eliezer Yudkowsky and many others "at the top" of SIAI felt personally responsible for the fate of the human race. EY believed he needed to develop an AI to save humanity, but for many years he would only discuss his thoughts on AI with one other person, not trusting even the other people in SIAI, and requiring them to leave the area when the two of them talked about AI. (For all I kn... (read more)

[0] Darklight (5y): Well, that's... unfortunate. I apparently don't hang around in the same circles, because I have not seen this kind of behaviour among the Effective Altruists I know.
Against responsibility

Great post, and especially appropriate for LW. I add the proviso that you may in some cases be making the most-favorable interpretation rather than the correct interpretation.

I know one person on LessWrong who has talked himself into overwriting his natural morality with his interpretation of rational utilitarianism. This ended up giving him worse-than-human morality, because he assumes that humans are not actually moral--that humans don't derive utility from helping others. He ended up convincing himself to do the selfish things that he thinks are "in his own best interests" in order to be a good rationalist, even in cases where he didn't really want to be selfish--or wouldn't have, before rewriting his goals.

[+1] MrCogmor (5y): It sounds less like he rewrote his natural morality and more like he engaged in a lot of motivated reasoning to justify his selfish behaviour. Rational Utilitarianism is the greatest good for the greatest number given the constraints of imperfect information and faulty brains. The idea that other people don't have worth because they aren't as prosocial as you is not Rational Utilitarianism (especially when you aren't actually prosocial because you don't value other people). If whoever it is can't feel much sympathy for people in distant countries, then that is fine; plenty of people are like that.

The good thing about consequentialism is that it doesn't care about why. You could do it for self-esteem, social status, empathy or whatever, but you still save lives either way. Declaring yourself a Rational Utilitarian and then not contributing is just a dishonest way of making yourself feel superior. To be a Rational Utilitarian you need to be a rationalist first, and that means examining your beliefs even when they are pleasant.
Stupidity as a mental illness

That's basically what I'm saying--well, I think it was; I can't see my original text now. But IIRC I misused the word "necessarily" because I thought doing so was closer to the truth than not using any modifier at all. I wanted to imply a causative link, and the notion that, even in cases where it appears there is no economic cost, the length and multiplicity of the paths from a nation's values to its economic health are so great that the bias towards finding an economic cost on each such path makes it statistically very unlikely that the net economic impact is not negative.

Open thread, Mar. 20 - Mar. 26, 2017

The main page lesswrong.com no longer has a link to the Discussion section of the forum, nor a login link. I think these changes are both mistakes.

[0] TheAncientGeek (5y): Yep.