Doug, I meant ceteris paribus.
Psy and John, I think the idea is this: if you want to buy a hundred shares of OB at ten dollars each, because you think it's going to go way up, you have to buy them from someone else who's willing to sell at that price. But clearly that person does not likewise think that the price of OB is going to go way up, because if she did, why would she sell it to you now, at the current price? So in an efficient market, situations where everyone agrees on the future movement of prices simply don't occur. If everyone thought the price of OB was going to go to thirty dollars a share, then said shares would already be trading at thirty (modulo expectations about interest rates, inflation, &c.).
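The pricing logic above can be sketched numerically. This is a minimal illustration with hypothetical numbers and naive one-period discounting; the function name, interest rate, and horizon are all invented for the example, not part of the original argument:

```python
# If every trader agrees the share will be worth $30 a year from now,
# competition bids today's price up to (roughly) the discounted
# expectation -- so it can't still be trading at $10.
def implied_price(expected_future_price, annual_rate, years=1.0):
    """Price today implied by a unanimous expectation, under simple discounting."""
    return expected_future_price / (1 + annual_rate) ** years

price = implied_price(30.0, 0.05)  # 5% interest rate, one year out
print(round(price, 2))  # → 28.57, nowhere near 10
```

The point is just that unanimous expectations about future prices get absorbed into the current price immediately (up to discounting), leaving no agreed-upon free lunch.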
Kellen: "I am looking for some introductory books on rationality. [...] If any of you [...] have any suggestions [...]"
Cf. "Recommended Rationalist Reading."
"To care about the public image of any virtue [...] is to limit your performance of that virtue to what all others already understand of it. Therefore those who would walk the Way of any virtue must first relinquish all desire to appear virtuous before others, for this will limit and constrain their understanding of the virtue."
Is it possible to quote this without being guilty of trying to foster the public image of not caring about public image? That's a serious question; I had briefly updated the "Favorite Quotes" section of my Faceb...
"I assume that underlying this is that you love your own minds and despise your own bodies, or are at best indifferent to them."
Isn't the byline usually given as "Stephen Jay Gould"?
Tom: "Hmmm.. Maybe we should put together a play version of 3WC [...]"
That reminds me! Did anyone ever get a copy of the script to Yudkowski Returns? We could put on a benefit performance for SIAI!
Nick: "Where was it suggested otherwise?"
Oh, no one's explicitly proposed a "wipe culturally-defined values" step; I'm just saying that we shouldn't assume that extrapolated human values converge. Cf. the thread following "Moral Error and Moral Disagreement."
Nick Hay: "[N]either group is changing human values as it is referred to here: everyone is still human, no one is suggesting neurosurgery to change how brains compute value."
Once again I fail to see how culturally-derived values can be brushed away as irrelevant under CEV. When you convince someone with a political argument, you are changing how their brain computes value. Just because the effect is many orders of magnitude subtler than major neurosurgery doesn't mean it's trivial.
I don't think I see how moral-philosophy fiction is problematic at all. When you have a beautiful moral sentiment that you need to offer to the world, of course you bind it up in a glorious work of high art, and let the work stand as your offering. That makes sense. When you have some info you want to share with the world about some dull ordinary thing that actually exists, that's when you write a journal article. When you've got something to protect, something you need to say, some set of notions that you really are entitled to, then you write a novel.
Jus...
"I'm curious if anyone knows of any of EY's other writings that address the phenomenon of rationality as not requiring consciousness."
Cf. Eliezer-sub-2002 on evolution and rationality.
Geoff: "They also don't withhold information from each other. This could allow a specially-crafted memory to disrupt or destroy the entire race."
This is not Star Trek, my Lord.
"All right. Open a channel, transmitting my voice only." [...] Out of sight of the visual frame, Akon gestured [...] [emphasis added]
I suspect it gets worse. Eliezer seems to lean heavily on the psychological unity of humankind, but there's a lot of room for variance within that human dot. My morality is a human morality, but that doesn't mean I'd agree with a weighted sum across all possible extrapolated human moralities. So even if you preserve human morals and metamorals, you could still end up with a future we'd find horrifying (albeit better than a paperclip galaxy). It might be said that that's only a Weirdtopia: that you're horrified at first, but then you see that it's actuall...
I'll be horrified for as long as I damn well please.
I'll be horrified for as long as I damn well please.
Well, okay, but the Weirdtopia thesis under consideration makes the empirically falsifiable prediction that "as long as you damn well please" isn't actually a very long time. Also, I call scope neglect: your puny human brain can model some aspects of your local environment, which is a tiny fraction of this Earth, but you're simply not competent to judge the entire future, which is much larger.
Eliezer: "But if we don't get good posts from the readership, we (Robin/Eliezer/Nick) may split off OB again."
I'm worried that this will happen. If we're not getting main post submissions from non-Robin-and-Eliezer people now, how will the community format really change things? For myself, I like to comment on other people's posts, but the community format doesn't appeal to me: to regularly write good main posts, I'd have to commit the time to become a Serious Blogger, and if I wanted to do that, I'd start my own venue, rather than posting to a community site.
"There would never be another Gandhi, another Mandela, another Aung San Suu Kyi—and yes, that was a kind of loss, but would any great leader have sentenced humanity to eternal misery, for the sake of providing a suitable backdrop for eternal heroism? Well, some of them would have. But the down-trodden themselves had better things to do." —from "Border Guards"
I take it the name is a coincidence.
nazgulnarsil: "What is bad about this scenario? the genie himself [sic] said it will only be a few decades before women and men can be reunited if they choose. what's a few decades?"
That's the most horrifying part of all, though--they won't so choose! By the time the women and men reïnvent enough technology to build interplanetary spacecraft, they'll be so happy that they won't want to get back together again. It's tempting to think that the humans can just choose to be unhappy until they build the requisite te...
Hey, Z. M., you know the things people in your native subculture have been saying about most of human speech being about signaling and politics rather than conveying information? You probably won't understand what I'm talking about for another four years and one month, and perhaps you'd be wise not to listen to this sort of thing coming from anyone but me, but ... the parent is actually a nice case study.
I certainly agree that the world of "Failed Utopia 4-2" is not an optimal future, but as other commenters have pointed out, well ... it is better t...
On second thought, correction: relativity restoring far away lands, yes, preserving intuitions, no.
"preserve/restore human intuitions and emotions relating to distance (far away lands and so on)"
Arguably Special Relativity already does this for us. Although I freely admit that a space opera is kind of the antithesis of a Weirdtopia.
"[...] which begs the question [sic] of how we can experience these invisible hedons [...]"
Wh--wh--you said you were sympathetic!
Abigail, I don't think we actually disagree. I certainly wouldn't defend the strong Bailey/Blanchard thesis that transwomen can be neatly sorted into autogynephiles and gay men. However, I am confident that autogynephilia is a real phenomenon in at least some people, and that's all I was trying to refer to in my earlier comment--sorry I wasn't clearer.
Eliezer: "[E]very time I can recall hearing someone say 'I want to know what it's like to be the opposite sex', the speaker has been male. I don't know if that's a genuine gender difference in wishes [...]"
*sighs* There's a name for it.
Eliezer: "Strong enough to disrupt personal identity, if taken in one shot?"
Is it cheating if you deliberately define your personal identity such that the answer is No?
Frelkins: "I mean, if anyone wants to check it out, just try Second Life."
Not exactly what we're looking for, unfortunately ...
Fr...
Should "Fun" then be consistently capitalized as a term of art? Currently I think we have "Friendly AI theory" (capital-F, lowercase-t) and "Friendliness," but "Fun Theory" (capital-F capital-T) but "fun."
"[...] naturally specializing further as more knowledge is discovered and we become able to conceptualize more complex areas of study [...]"
So, how does this spiral of specialization square with living by one's own strength?
Could there be a niche for generalists?
"A singleton might be justified in prohibiting standardized textbooks in certain fields, so that people have to do their own science [...]"
No textbooks?! CEV had better overrule you on this one, or my future selves across the many worlds are all going to scream bloody murder. It may be said that I'm missing the point: that ex hypothesi the Friendly AI knows better than me.
But I'm still going to cry.
It's easy if you're allowed to keep the law of cosines ...
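For reference, the identity being kept can be sketched in a few lines. This is a minimal illustration; the function name and example numbers are invented for the sketch:

```python
import math

def third_side(a, b, gamma):
    """Law of cosines: the side opposite angle gamma (in radians),
    given the two adjacent sides a and b: c^2 = a^2 + b^2 - 2ab*cos(gamma)."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

# With a right angle, the cosine term vanishes and the formula
# reduces to the Pythagorean theorem (up to floating-point error):
print(third_side(3.0, 4.0, math.pi / 2))
```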
"I sometimes think that futuristic ideals phrased in terms of 'getting rid of work' would be better reformulated as 'removing low-quality work to make way for high-quality work'."
Alternatively, you could taboo work and play entirely, speaking instead of various forms of activity, and their various costs and benefits.
I'm finding Eliezer's view attractive, but it does have a few counterintuitive consequences of its own. If we somehow encountered shocking new evidence that MWI, &c. is false and that we live in a small world, would weird people suddenly become much more important? Did Eliezer think (or should he have thought) that weird people are more important before coming to believe in a big world?
"How about if it were an issue that you were not too heavily invested in [...]"
Hal, the sort of thing you suggest has already been tried a few times over at Black Belt Bayesian; check it out.
Tiiba, you're really overstating Eliezer and SIAI's current abilities. CEV is a sketch, not a theory, and there's a big difference between "being concerned about Friendliness" and "actually knowing how to build a working superintelligence right now, but holding back due to Friendliness concerns."
To clarify, I'm talking about something like a Moravec transfer, not a chatbot. Maybe a really sophisticated chatbot could pass the Turing test, but if we know that a given program was designed simply to game the Turing test, then we won't be impressed b...
Michael Tobis, suppose a whole brain emulation of someone is created. You have a long, involved, normal-seeming conversation with the upload, and she claims to have qualia. Even if it is conceded that there's no definitive objective test for consciousness, doesn't it still seem like a pretty good guess that the upload is conscious? Like, a really really good guess?
Catherine: "If you think of yourself as a system whose operations you cannot OR can predict [...]"
But isn't this actually true? I mean, law of the excluded middle, right?
Or am I just trying too hard to be clever?
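The tautology at issue can be checked mechanically (a trivial Python sketch, just to make the excluded-middle point concrete):

```python
# For any proposition P, "P or not P" holds under classical
# two-valued logic -- checked exhaustively over both truth values.
for p in (True, False):
    assert p or not p
print("P or not P holds for both truth values")
```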
Eliezer asks why one might be emotionally opposed to the idea of a singleton. One reason might be that Friendly AI is impossible. Life on the rapacious hardscrapple frontier may be bleak, but it sure beats being a paperclip.
I'm sure you've already heard this, but have you tried reading relevant papers rather than random websites?
Personally, I'm kind of giving up on "discipline" as such, in favor of looking for things worth doing and then doing them because they are worth doing. Why torture myself trying to regulate and control every minute, when that doesn't even work? Of course every minute is precious, but just because I'm not following a schedule doesn't mean nothing valuable is getting done. Whatever happened to the power of play? The first virtue is curiosity, ...
Maybe not, but they really ought to.
"Are you familiar with Ricardo's [...]"
It was cute in "The Simple Truth," but in the book, you might want to consider cutting down on the anachronisms. "Intelligences who've never seen an intelligence" falls under standard suspension of disbelief, but explicit mention of Ricardo or the Wright brothers is a bit grating.
Roland: "Yes, we need a community forum where everyone can post."
At the risk of preëmpting Nick Tarleton, we have one.
My two cents: I like the blog/comments-sections format, and I don't like the community format.
I had a couple of ideas for posts (which I never got around to writing up, natch); and one of the reasons I don't have my own blog is because being a Serious Blogger would be too much of a time commitment. But this idea of seven weekly bloggers intrigues me--do I have enough good OB-type ideas to be part of such an endeavor?--maybe? I'll have to give this further thought.
Eliezer: "To all defending Modern Art: Please point to at least one item available online which exemplifies that which you think I'm ignoring or missing."
This Al Held piece. Upon first glance, it's just a white canvas with a black triangle at the top and the bottom. This is not True Art, you say--but then you read the title, and it all makes sense! Clever! Shocking!
Art! (Hat tip Scott McCloud.)
"I have written a blog post on the issue"
I'd love to read it, but the link here is broken, and I don't see any new posts on the Transhuman Goodness homepage. I hope you saved a local copy!
Julian: "You have to un-assume the decision before you stand any chance of clear thought."
Of course the decision-theoretic logic of this is unassailable, but I continue to worry that the real-world application to humans is nontrivial.
Here, I have a parable. Suppose Jones is a member of a cult, and holds it as a moral principle that it is good and right and virtuous to obey the Great Leader. So she tries to obey, but feels terrible about failing to obey perfectly, and she ends up having a nervous breakdown and removing herself from the cult in sha...
I worry that blanking your mind is a tad underspecified. If you delete all your current justifications and metajustifications, you become a rock, not a perfect ghost of decision-theoretic emptiness. What we want is to delete just our current notion of a good means, while preserving our ultimate ends, but the human brain just might not be typed that strongly. When asking what is truly valuable, it could sometimes seem that the answer really is "whatever you want it to be"--except that we don't want it to be whatever we want it to be; we want an answer (even knowing that that answer has to be a fact about ourselves, rather than about some universal essence of goodness). Ack!
Alex: "Most of the time this blog seems like it could've been written on some distant planet in the year 5050, totally sealed off from the rest of today's humanity."
Don't you prefer it that way?
POSTSCRIPT-- Strike that stray "does" in the penultimate sentence. And re compatibilism, the short version is that no, you don't have free will, but you shouldn't feel bad about it.
Henry: "Both of those concepts seem completely apt for describing perfectly deterministic systems. But, in describing the "complexity" of the universe even in something as simple as the 'pattern of stars that exists' one would still have to take into account potential non-deterministic factors such as human behavior. [...] [A]re you saying that you are a strict determinist?"
I'll take this one. Yes, we're presuming determinism here, although the determinism of the Many Worlds Interpretation is a little different from the single-world det...
Billswift, re rereading the series, check out Andrew Hay's list and associated graphs.
"Davis, what you were saying made sense to me, so I'm confused as to what you could be confused about."
I came up with a nice story (successful reflection decreases passion; failed reflection increases it) that seems to fit the data (Eliezer says reflection begets emotional detachment, whereas I try to reflect and drive myself mad), but my thought process just felt really (I can think of no better word:) muddled, so I'm wondering whether I should wonder whether I made a fake explanation.
But if that were the only issue at hand, then that would generate the prediction that I would be even more unstable (!) if I were less analytical, which is the opposite of what I would have predicted.
Yes, it could possibly be that it is this introspection/reflectivity dichotomy that's tripping me up. A deep conceptual understanding that one's self can be distinct from what-this-brain-is-thinking-and-feeling-right-now does not neces...
I'm confused. Eliezer, you seem to be saying that reflectivity leads to distance from one's emotions, but this completely contradicts my experience: I'm constantly introspecting and analyzing myself, and yet I am also extremely emotional, not infrequently to the point of hysterical crying fits. Maybe I'm introspective but not reflective in the sense meant here? I will have to think about this for a while.
Maybe I'm introspective but not reflective in the sense meant here?
That's right. Reflection here refers to the skill of reasoning about your own reasoning mechanisms using the same methods that you use to reason about anything else. "Solving your own psychological problems" is then a trivial special case of "solving problems," but with the bonus that solving the problem of making yourself better at solving problems makes you better at solving future problems. Surprisingly, it turns out that this is actually pretty useful, but you probably won't understand what I'm talking about for another four years and three months.
RSVPing in the affirmative; thank you for getting this together.