So, I walked into my room, and within two seconds, I saw my laptop's desktop background change. I had the laptop set to change backgrounds every 30 minutes, so I did some calculation, and then thought, "Huh, I just consciously experienced a 1-in-1000 event."
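(For the record, here is my reconstruction of the arithmetic behind that "1-in-1000"; a minimal sketch, assuming the relevant window is the two seconds it took to notice:)

```python
# A minimal sketch of the calculation: the chance of catching a
# once-per-30-minutes change within the two seconds after walking in.

cycle_seconds = 30 * 60    # background rotates every 30 minutes
window_seconds = 2         # time between entering the room and seeing it change
print(window_seconds / cycle_seconds)   # ~0.0011, i.e. roughly 1 in 900 -- call it 1 in 1000
```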
Then the background changed again, and I realized I was looking at a screen saver that changed every five seconds.
Moral of the story: 1 in 1000 is rare enough that even if you see it, you shouldn't believe it without further investigation.
That is a truly beautiful story. I wonder how many places there are on Earth where people would appreciate this story.
No! Not for a second! I immediately began to think how this could have happened. And I realized that the clock was old and was always breaking. That the clock probably stopped some time before and the nurse coming into the room to record the time of death would have looked at the clock and jotted down the time from that. I never made any supernatural connection, not even for a second. I just wanted to figure out how it happened.
-- Richard P. Feynman, on being asked if he thought that the fact that his wife's favorite clock had stopped the moment she died was a supernatural occurrence, quoted from Al Seckel, "The Supernatural Clock"
I've been finding PJ Eby's article The Multiple Self quite useful for fighting procrastination and needless feelings of guilt about not getting enough done or not being good enough at things.
I have difficulty describing the article briefly, as I'm afraid of accidentally omitting important points and making people take it less seriously than it deserves, but I'll try. The basic idea is that the conscious part of our mind does only an exceedingly small part of all the things we spend our time doing in our daily lives. Instead, it tells the unconscious mind, which actually does everything of importance, what it should be doing. As an example - I'm writing this post right now, but I don't actually consciously think about hitting each individual key and their exact locations on my keyboard. Instead I just tell my mind what I want to write, and "outsource" the task of actually hitting the keys to an "external" agent. (Make a function call to a library implementing the I/O, if you want to use a programming metaphor.) Of course, ultimately the words I'm writing come from beyond my conscious mind as well. My conscious mind is primarily concerned with communicating Eby's point well to my...
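To push the programming metaphor one step further, here is a toy sketch of my own (not from Eby's article; the function names are made up for illustration) of what "outsourcing" the keystrokes looks like:

```python
# Toy illustration of the "outsourcing" metaphor: the conscious mind issues a
# high-level intent, and an opaque lower-level routine handles the mechanics.

def press_key(ch):
    # Stand-in for the motor routines that actually find and hit the key.
    print(f"(motor system presses '{ch}')")

def type_text(text):
    # The "library implementing the I/O": the conscious caller never
    # deliberates about individual keystrokes.
    for ch in text:
        press_key(ch)

# All the conscious level does:
type_text("communicate the point well")
```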
Until yesterday, a good friend of mine was under the impression that the sun was going to explode in "a couple thousand years." At first I thought that this was an assumption that she'd never really thought about seriously, but apparently she had indeed thought about it occasionally. She was sad for her distant progeny, doomed to a fiery death.
She was moderately relieved to find out that humanity had millions of times longer than she had previously believed.
I wonder how many trivially wrong beliefs we carry around because we've just never checked them. (Probably most of them are mispronunciations of words, at least for people who've read a lot of words they've never heard anybody else use aloud.)
For the longest time, I thought that nuclear waste was a green liquid that tended to ooze out of barrels. I was surprised to learn that it usually came in the form of dull gray metal rods.
If you extract the plutonium and make enough warheads, and you have missiles capable of delivering them, it can make you a superpower in a different sense. I'm assuming that you're a large country, of course.
More seriously, nuclear waste is just a combination of the following:
Mostly Uranium-238, which can be used in breeder reactors.
A fair amount of Uranium-235 and Plutonium-239, which can be recycled for use in conventional reactors.
Hot isotopes with short half-lives. These are very radioactive, but they decay fast.
Isotopes with medium half-lives. These are the part that makes the waste dangerous for a long time. If you separate them out, you can either store them somewhere (e.g. Yucca Mountain or a deep-sea subduction zone) or turn them into other, more pleasant isotopes by bombarding them with some spare neutrons. This is why liquid fluoride thorium reactor waste is only dangerous for a few hundred years: it does this automatically. (A rough numerical sketch of the half-life tradeoff follows below.)
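Here is that half-life tradeoff in rough numbers (a minimal sketch; the isotopes and figures are illustrative approximations I picked, not anything specific to a given waste stream):

```python
import math

# For a fixed number of atoms N, activity (decays per second) is N * ln(2) / half_life,
# so a short half-life means intensely radioactive but short-lived, and vice versa.
# Half-lives below are approximate figures for I-131, Cs-137, and Pu-239.

N = 1e20
YEAR = 3.156e7  # seconds per year

def activity(n_atoms, half_life_s):
    return n_atoms * math.log(2) / half_life_s

def fraction_left(t_s, half_life_s):
    return 0.5 ** (t_s / half_life_s)

for name, t_half_years in [("I-131", 0.022), ("Cs-137", 30.2), ("Pu-239", 24_100)]:
    hl = t_half_years * YEAR
    print(f"{name:>7}: activity ~ {activity(N, hl):.1e} Bq, "
          f"fraction left after 300 y ~ {fraction_left(300 * YEAR, hl):.2g}")
```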
And that is why people are simply ignorant when they say that we still have no idea what to do with nuclear waste. It's actually pretty straightforward.
Incidentally, this is a good example of motivated stopping. People who want nuclear waste to be their trump-card argument have an emotional incentive not to look for viable solutions. Hence the continuing widespread ignorance.
Short satire piece:
Artificial Flight and Other Myths, from Dresden Codak.
(Also see A Thinking Ape's Critique of Trans-Simianism.)
Just a general comment about this site: it seems to be biased in favor of human values at the expense of values held by other sentient beings. It's all about "how can we make sure an FAI shares our [i.e. human] values?" How do you know human values are better? Or from the other direction: if you say, "because I'm human", then why don't you talk about doing things to favor e.g. "white people's values"?
I wish the site were more inclusive of other value systems ...
This site does tend to implicitly favour a subset of human values, specifically what might be described as 'enlightenment values'. I'm quite happy to come out and explicitly state that we should do things that favour my values, which are largely western/enlightenment values, over other conflicting human values.
Too long has the bacteriophage menace oppressed its prokaryotic brethren! It's time for an algaeocracy!
Hi there. It looks like you're speaking out of ignorance regarding the historical treatment of non-whites by whites. Please choose the country you're from:
United Kingdom
United States
Australia
Canada
South Africa
Germ... nah, you can figure that one out for yourself.
Clippy is now three karma away from being able to make a top-level post. That seems depressing, awesome, and strangely fitting for this community.
This will mark the first successful paper-clip-maximizer-unboxing-experiment in human history... ;)
Here's something interesting on gender relations in ancient Greece and Rome.
Why did ancient Greek writers think women were like children? Because they married children - the average woman had her first marriage between the ages of twelve and fifteen, and her husband would usually be in his thirties.
This conversation has been hacked.
The parent comment points to an article presenting a hypothesis. The reply flatly drops an assertion which will predictably derail conversation away from any discussion of the article.
If you're going to make a comment like that, and if you prefix it with something along the lines of "The hypothesis in the article seems superfluous to me; men in all cultures treat women like children because...", and you point to sources for this claim, then I would confidently predict no downvotes will result.
(ETA: well, in this case the downvote is mine, which makes prediction a little too easy - but the point stands.)
The PUA community include people who come across as huge assholes, and that could be an alternative explanation of why people react negatively to the topics, by association. I'm thinking in particular of the blog "Roissy in DC", which is on OB's blogroll.
Offhand, it seems to me that thinking of all women as children entails thinking of some adults as children, which would be a map-territory mistake around the very important topic of personhood.
I did pick up some interesting tips from PUA writing, and I do think there can be valuable insight there if you can ignore the smell long enough to dig around (and wash your hands afterwards, epistemically speaking).
No relevant topics should be off-limits to a community of sincere inquiry. Relevance is the major reason why I wouldn't discuss the beauty of Ruby metaprogramming on LessWrong, and wouldn't discuss cryonics on a project management mailing list.
If discussions around topic X systematically tend to go off the rails, and topic X still appears relevant, then the conclusion is that the topic of "why does X cause us to go off the rails" should be adequately dealt with first, in lexical priority. That isn't censorship, it's dependency management.
Could someone discuss the pluses and minuses of Alcor vs. the Cryonics Institute?
I think Eliezer mentioned that he is with CI because he is young. My reading of the websites seems to indicate that CI leaves a lot of work to be potentially done by loved ones or local medical professionals who might not be in the best state of mind or see fit to co-operate with a cryonics contract.
Thoughts?
The Believable Bible
This post arose when I was pondering the Bible and how easy it is to justify. In the process of writing it, I think I've answered the question for myself. Here it is anyway, for the sake of discussion.
Suppose that there's a world very much like this one, except that it doesn't have the religions we know. Instead, there's a book, titled The Omega-Delta Project, that has been around in its current form for hundreds of years. This is known because a hundreds-of-years-old copy of it happens to exist; it has been carefully and precisely compared to other copies of the book, and they're all identical. It would be unreasonable, given the evidence, to suspect that it had been changed recently. This book is notable because it happens to be very well-written and interesting, and scholars agree it's much better than anything Shakespeare ever wrote.
This book also happens to contain 2,000 prophecies. 500 of them are very precise predictions of things that will happen in the year 2011; none of these prophecies could possibly be self-fulfilling, because they're all things that the human race could not bring about voluntarily (e.g. the discovery of a particular artifact, or the...
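A rough illustration, with numbers I made up, of why that setup would be so hard to wave away: even granting each precise prophecy a generous 1-in-10 chance of coming true by luck, 500 successes yield an astronomical likelihood ratio in favor of genuine foreknowledge.

```python
import math

# Sketch with assumed numbers: how heavily 500 fulfilled, precise,
# non-self-fulfilling prophecies would weigh for "the author had real
# foreknowledge" over "lucky guessing".

p_lucky = 0.1        # assumed generous chance any one prophecy comes true by luck
n_fulfilled = 500

log10_likelihood_ratio = -n_fulfilled * math.log10(p_lucky)
print(f"likelihood ratio ~ 10^{log10_likelihood_ratio:.0f}")  # 10^500: enough to swamp any sane prior
```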
The FBI released a bunch of docs about the anthrax letter investigation today. I started reading the summary since I was curious about codes used in the letters. All of a sudden on page 61 I see:
c. Godel, Escher, Bach: the book that Dr. Ivins did not want investigators to find
The next couple of pages talk about GEB and relate some parts of it to the code. It's really weird to see literary analysis of GEB in the middle of an investigation on anthrax attacks.
When new people show up at LW, they are often told to "read the sequences." While Eliezer's writings underpin most of what we talk about, 600 fairly long articles make heavy reading. Might it be advisable that we set up guided tours to the sequences? Do we have enough new visitors that we could get someone to collect all of the newbies once a month (or whatever) and guide them through the backlog, answer questions, etc?
I'm taking a software-enforced three-month hiatus from Less Wrong effective immediately. I can be reached at zackmdavis ATT yahoo fullstahp kahm. I thought it might be polite to post this note in Open Thread, but maybe it's just obnoxious and self-important; please downvote if the latter is the case thx
I feel like the 20-something whose friends are all getting married and quitting drinking. This is lame. The party is just starting, guys!
Here's a question that I sure hope someone here knows the answer to:
What do you call it when someone, in an argument, tries to cast two different things as having equal standing, even though they are hardly even comparable? Very common example: in an atheism debate, the believer says "atheism takes just as much faith as religion does!"
It seems like there must be a word for this, but I can't think what it is. ??
Could anyone recommend an introductory or intermediate text on probability and statistics that takes a Bayesian approach from the ground up? All of the big ones I've looked at seem to take an orthodox frequentist approach, aside from being intolerably boring.
Discussions of correctly calibrated cognition, e.g. tracking the predictions of pundits, successes of science, graphing one's own accuracy with tools like PredictionBook, and so on, tend to focus on positive prediction: being right about something we did predict.
Should we also count as a calibration issue the failure to predict something that, in retrospect, should have been not only predictable but predicted? (The proverbial example is "painting yourself into a corner".)
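For concreteness, one standard way to score the predictions you did make is a proper scoring rule such as the Brier score (a minimal sketch below, with made-up forecasts); the question above is whether anything analogous can charge you for the events you never thought to put a probability on at all.

```python
# Minimal sketch of the Brier score: mean squared difference between your stated
# probability and what happened (0 is perfect; always saying 0.5 scores 0.25).
# Note that it only ever sees predictions you actually registered.

def brier_score(forecasts):
    # forecasts: list of (stated_probability, outcome), outcome 1 if it happened else 0
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

print(brier_score([(0.9, 1), (0.7, 1), (0.2, 0), (0.6, 0)]))  # 0.125
```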
More cryonics: my friend David Gerard has kicked off an expansion of the RationalWiki article on cryonics (which is strongly anti). The quality of argument is breathtakingly bad. It's not strong Bayesian evidence because it's pretty clear at this stage that if there were good arguments I hadn't found, an expert would be needed to give them, but it's not no evidence either.
From http://rationalwiki.com/wiki/RationalWiki :
RationalWiki is a community working together to explore and provide information about a range of topics centered around science, skepticism, and critical thinking. While RationalWiki uses software originally developed for Wikipedia it is important to realize that it is not trying to be an encyclopedia. Wikipedia has dominated the public understanding of the wiki concept for years, but wikis were originally developed as a much broader tool for any kind of collaborative content creation. In fact, RationalWiki is closer in design to original wikis than Wikipedia.
Our specific mission statement is to:
- Analyze and refute pseudoscience and the anti-science movement, ideas and people.
- Analyze and refute the full range of crank ideas - why do people believe stupid things?
- Develop essays, articles and discussions on authoritarianism, religious fundamentalism, and other social and political constructs
So it's inspired by Traditional Rationality.
A fine mission statement, but my impression from the pages I've looked at is of a bunch of nerds getting together to mock the woo. "Rationality" is their flag, not their method: "the scientific point of view means that our articles take the side of the scientific consensus on an issue."
Voted up, but calling them "nerds" in reply is equally ad-hominem, ya know. Let's just say that they don't seem to have the very high skill level required to distinguish good unusual beliefs from bad unusual beliefs, yet. (Nor even the realization that this is a hard problem, yet.)
Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.
Also, one person on RationalWiki saying silly things is not a good reason to launch an aggressive counterattack on a whole wiki containing many potential recruits.
Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.
I guess I should try harder to remember this, in the context of my rather discouraging recent foray into the Richard Dawkins Forums -- which, I admit, had me thinking twice about whether affiliation with "rational" causes was at all a useful indicator of actual receptivity to argument, and wondering whether there was much more point in visiting a place like that than a generic Internet forum. (My actual interlocutors were in fact probably hopeless, but maybe I could have done a favor to a few lurkers by not giving up so quickly.)
But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.
But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.
This is actually one of Niven's Laws: "There is no cause so right that one cannot find a fool following it."
You understand this is more or less exactly the problem that Less Wrong was designed to solve.
Dawkins is a very high-quality thinker, as his scientific writings reveal. The fact that he has also published "elementary" rationalist material in no way takes away from this.
He's way, way far above the level represented by the participants in his namesake forum.
(I'd give even odds that EY could persuade him to sign up for cryonics in an hour or less.)
(I'd give even odds that EY could persuade him to sign up for cryonics in an hour or less.)
Bloggingheads are exactly 60 minutes.
Read his scientific books, and listen to his lectures and conversations. Pay attention to the style of argumentation he uses, as contrasted with other writers on similar topics (e.g. Gould). What you will find is that beautiful combination of clarity, honesty, and -- importantly -- abstraction that is the hallmark of an advanced rationalist.
The "good scientist, but not good rationalist" type utterly fails to match him. Dawkins is not someone who compartmentalizes, or makes excuses for avoiding arguments. He also seems to have a very good intuitive understanding of probability theory -- even to the point of "getting" the issue of many-worlds.
I would indeed put him near Eliezer in terms of rationality skill-level.
Again, it's not just the fact that he does science; it's the way he does science.
Having skill as a rationalist is distinct from specializing in rationality as one's area of research. Dawkins' writings aren't on rational thought (for the most part); they're examples of rational thought.
Just saw this over at Not Exactly Rocket Science: http://scienceblogs.com/notrocketscience/2010/02/quicker_feedback_for_better_performance.php
Quick summary: They asked a bunch of people to give a 4-minute presentation, had judges assess it, and told each presenter how long it would be before they heard their assessment. Anticipating quicker feedback resulted in better performance but more pessimistic self-predictions, and anticipating slower feedback had the reverse effect.
The prosecutor's fallacy is aptly named:
Barlow and her fellow counsel, Kwixuan Maloof, were barred from mentioning that Puckett had been identified through a cold hit and from introducing the statistic on the one-in-three likelihood of a coincidental database match in his case—a figure the judge dismissed as "essentially irrelevant."
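To spell the fallacy out with rough numbers (these are assumptions in the ballpark of what was reported for this case, not exact figures): a one-in-a-million random-match probability sounds damning, but once you trawl a database of hundreds of thousands of profiles, a coincidental hit somewhere becomes quite likely, which is where the one-in-three figure comes from.

```python
# Rough sketch of the cold-hit arithmetic; the inputs are assumptions for
# illustration, in the ballpark of the reported figures for this case.

random_match_prob = 1 / 1_100_000   # chance an unrelated person happens to match the profile
database_size = 338_000             # number of profiles trawled for a match

# Chance that at least one innocent person in the database matches by coincidence:
p_any_coincidental_hit = 1 - (1 - random_match_prob) ** database_size
print(f"{p_any_coincidental_hit:.2f}")   # ~0.26 with these inputs; the defense's
                                         # one-in-three figure comes from a similar calculation.

# The fallacy: presenting "1 in 1.1 million" to the jury as if it were the chance
# the matched defendant is innocent, while suppressing the search-wide figure.
```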
One thing that I got from the Sequences is that you can't just not assign a probability to an event - I think of this as a core insight of Bayesian rationality. I seem to remember an article in the Sequences about this where Eliezer describes a conversation in which he is challenged to assign a probability to the number of leaves on a particular tree, or the surname of the person walking past the window. But I can't find this article now - can anyone point me to it? Thanks!
How do people decide what comments to upvote? I see two kinds of possible strategies:
My own initial approach belonged to the first class. However, looking at votes on my own comments, I get the impression most people use the second approach. I haven't checked this with enough data to be really certa...
Am I/are we assholes? I posted a link to the frequentist stats case study to reddit:
The only commenter seems to have come away with the conclusion that Bayesians are assholes.
Is it just that commenter, or are we really that obnoxious? (Now that I think about it, I think I've actually seen someone else note something similar about Bayesians.) So... have we gone into a happy death spiral of "we get bonus points for acting extra obnoxious toward those who are not us"?
It is common practice, when debating an issue with someone, to cite examples.
Has anyone else ever noticed how your entire argument can be undermined by stating a single example or fact which does not stand up to scrutiny, even though your argument may be valid and all other examples robust?
Is this a common phenomenon? Does it have a name? What is the thought process that underlies it and what can you do to rescue your position once this has occurred?
The third horn of the anthropic trilemma is to deny that there is any meaningful sense whatsoever in which you can anticipate being yourself in five seconds, rather than Britney Spears; to deny that selfishness is coherently possible; to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a "thread of subjective experience".
http://lesswrong.com/lw/19d/the_anthropic_trilemma/
A question of rationality. Eliezer, I have...
I'm new to Less Wrong. I have some questions I was hoping you might help me with. You could direct me to posts on these topics if you have them. (1) To which specific organizations should Bayesian utilitarians give their money? (2) How should Bayesian utilitarians invest their money while they're making up their minds about where to give their money? (2a) If your answer is "in an index fund", which and why?
If Eliezer and co. found themselves transported back in time to 1890, would they still say that solving the Friendly AI problem is the most important thing they could be doing, given that the first microprocessor was produced in 1971?
In 1890, the most important thing to do is still FAI research. The best-case scenario would be to have the math for FAI worked out before the first vacuum tube, let alone the first microchip. Existential risk reduction is the single highest-utility thing around. Sure, trying to ensure that nukes are never made, or are made only by someone capable of creating an effective singleton, is important, but FAI is way more so.
Someone once told me that the reason they don't read Less Wrong is that the articles and the comments don't match. The articles have one tone, and then the comments on that article have a completely different tone; it's like the article comes from one site and the comments come from another.
I find that to be a really weird reason not to read Less Wrong, and I have no idea what that person is talking about. Do you?
Someone once told me that the reason they don't read Less Wrong is that the articles and the comments don't match...I have no idea what that person is talking about. Do you?
Yes.
Back in Overcoming Bias days, I constantly had the impression that the posts were of much higher quality than the comments. The way it typically worked, or so it seemed to me, was that Hanson or Yudkowsky (or occasionally another author) would write a beautifully clear post making a really nice point, and then the comments would be full of snarky, clacky, confused objections that a minute of thought really ought to have dispelled. There were obviously some wonderful exceptions to this, of course, but, by and large, that's how I remember feeling.
Curiously, though, I don't have this feeling with Less Wrong to anything like the same extent. I don't know whether this is because of the karma system, or just the fact that this feels more like a community environment (as opposed to the "Robin and Eliezer Show", as someone once dubbed OB), or what, but I think it has to be counted as a success story.
Oh! Maybe they were looking at the posts that were transplanted from Overcoming Bias and thinking those were representative of Less Wrong as a whole.
Less Wrong, especially commenting on it, is ridiculously intimidating to outsiders. I've thought about this problem, and we need some sort of training grounds. Less Less Wrong or something. It's in my queue of top level posts to write.
So the answer to your question is the karma system.
What's so intimidating? You don't need much to post here, just a basic grounding in probability theory, decision theory, metaethics, philosophy of mind, philosophy of science, computer science, cognitive bias, evolutionary psychology, the theory of natural selection, artificial intelligence, existential risk, and quantum mechanics - oh, and of course to read a sequence of >600 3000+ word articles. So long as you can do that and you're happy with your every word being subject to the anonymous judgment of a fiercely intelligent community, you're good.
Sounds like a pretty good filter for generating intelligent discussion to me. Why would we want to lower the bar?
Being able to comment smartly and in a style that gets you upvoted doesn't really need any grounding in any of those subjects. I just crossed 1500 karma and only have basic grounding in Computer Science, Mathematics, and Philosophy.
When I started out, I hadn't read more than EY's Bayes' for Dummies, The Simple Truth, and one post on Newcomb's.
In my opinion, the following things will help you more than a degree in any of the subjects you mentioned:
I actually think this is a little absurd. There is nowhere near enough on these topics in the sequences to give someone the background they need to participate comfortably here. Nearly everyone here has a lot of additional background knowledge. The sequences might be a decent enough guide for an autodidact to go off and learn more about a topic, but there is nowhere near enough for most people.
Without new blood communities stagnate. The risk of group think is higher and assumptions are more likely to go unchecked. An extremely homogeneous group such as this one likely has major blind spots which we can help remedy by adding members with different kinds of experiences. I would be shocked if a bunch of white male, likely autism spectrum, CS and hard science types didn't have blind spots. This can be corrected by informing our discussions with a more diverse set of experiences. Also, more diverse backgrounds means more domains we can comfortably apply rationality to.
I also think the world would be a better place if this rationality thing caught on. It is probably impossible (not to mention undesirable) to lower the entry barrier so that everyone can get in. But I think we could lower the barrier so that it is reasonable to think that 80-85+ percentile IQ, youngish, non-religious types could make sense of things. Rationality could benefit them and they being more rational could benefit the world.
Now we don't want to be swamped with newbies and just end up rehashing everything over and over. But we're hardly in any danger of that happening. I could be wrong but I suspect alm...
Reminds me of a Jerry Seinfeld routine, where he talks about people who want and need to exercise at the gym, but are intimidated by the fit people who are already there, so they need a "gym before the gym" or a "pre-gym" or something like that.
(This is not too far from the reason for the success of the franchise Curves.)
This is pretty self-important of me, but I'd just like to warn people here that someone is posting at OB under "Jack" who isn't me, so if anyone is forming a negative opinion of me on the basis of those comments - don't! Future OB comments will be under the name Jack (LW). The recent string of comments about METI are mine, though.
This is what I get for choosing such a common name for my handle.
Apologies to those who have read this whole comment and don't care.
What do you have to protect?
Eliezer has stated that rationality should not be an end in itself, and that to get good at it, one should be motivated by something more important. For those of you who agree with Eliezer on this, I would like to know: What is your reason? What do you have to protect?
This is a rather personal question, I know, but I'm very curious. What problem are you trying to solve or goal are you trying to reach that makes reading this blog and participating in its discourse worthwhile to you?
Seth Roberts makes an intriguing observation about North Korea and Penn State. Teaser:
The border between North Korea and China is easy to cross, and about half of the North Koreans who go to China later return, in spite of North Korea’s poverty.
Heilmeier's Catechism, a set of questions credited to George H. Heilmeier that anyone proposing a research project or product development effort should be able to answer.
I mentioned the AI-talking-its-way-out-of-the-sandbox problem to a friend, and he said the solution was to only let people who didn't have the authorization to let the AI out talk with it.
I find this intriguing, but I'm not sure it's sound. The intriguing part is that I hadn't thought in terms of a large enough organization to have those sorts of levels of security.
On the other hand, wouldn't the people who developed the AI be the ones who'd most want to talk with it, and learn the most from the conversation?
Temporarily not letting them have the power to g...
One Week On, One Week Off sounds like a promising idea. The idea is that once you know you'll be able to take the next week off, it's easier to work this whole week full-time and with near-total dedication, and you'll actually end up getting more done than with a traditional schedule.
It's also interesting for noting that you should take your off-week as seriously as your on-week. You're not supposed to just slack off and do nothing, but instead dedicate yourself to personal growth. Meet friends, go travel, tend your garden, attend to personal projects.
I sa...
Hwæt. I've been thinking about humor, why humor exists, and what things we find humorous. I've come up with a proto-theory that seems to work more often than not, and a somewhat reasonable evolutionary justification. This makes it better than any theory you can find on Wikipedia, as none of those theories work even half the time, and their evolutionary justifications are all weak or absent. I think.
So here are four model jokes that are kind of representative of the space of all funny things:
"Why did Jeremy sit on the television? He wanted to be on TV....
An inquiry regarding my posting frequency:
While I'm at the SIAI house, I'm trying to orient towards the local priorities so as to be useful. Among the priorities is building community via Less Wrong, specifically by writing posts. Historically, the limiting factor on how much I post has been a desire not to flood the place - if I started posting as fast as I can write up my ideas, I'd get three or four posts out a week with (I think) no discernible decrease in quality. I have the following questions about this course of action:
Will it annoy people? B
As your goal is to build community, I would time new posts based on posting and commenting activity. For example, whenever there is a lull, this would be an excellent time to make a new post. (I noticed over the weekend there were some times when 45 minutes would pass between subsequent comments and wished for a new post to jazz things up.)
On the other hand, if there are several new posts already, then it would be nice to wait until their activity has waned a bit.
I think that it is optimal to have 1 or 2 posts 'going on' at a time. I prefer the second post when one of them is technical and/or of focused interest to a smaller subset of Less Wrongers.
(But otherwise no limit on the rate of posts.)
http://www.guardian.co.uk/global/2010/feb/23/flat-earth-society
Yeah, so... I'm betting if we could hook this guy up to a perfect lie detector, it would turn out to be a conscious scam. Or am I still underestimating human insanity by that much?
Objections to Coherent Extrapolated Volition
http://www.singinst.org/blog/2007/06/13/objections-to-coherent-extrapolated-volition/
I think that this post doesn't list the strongest objection: CEV would take a long list of scientific miracles to pull off, miracles that, whilst not strictly "impossible", each pose profound computer science and philosophy problems. To wit:
An AI that can simulate the outcome of human conscious deliberation, without actually instantiating a human consciousness, i.e. a detailed technical understanding of the problem of conscious experience
A way to construct an AI goal system that somehow extracts new concepts from a human upload's brain, and then modifies itself to have a new set of goals defined in terms of those concepts.
A solution to the ontology problem in ethics
A solution to the friendliness structure problem, i.e. a self-improving AI that can reliably self-modify without error or axiological drift.
A solution to the problem of preference aggregation (EDITED, thanks ciphergoth)
A formal implementation of Rawlsian Reflective Equilibrium for CEV to work
An AI that can solve philosophy problems that are beyond the ability of the designers to even conceive
A way to choose what subset of humanity gets included in CEV that doesn't include too many superstitious/demented/vengeful/religious nutjobs and land those who implement it in infinite perfect hell.
A way to choose what subset of humanity gets included in CEV that doesn't include too many superstitious/demented/vengeful/religious nutjobs and land those who implement it in infinite perfect hell.
What you're looking for is a way to construe the extrapolated volition that washes out superstition and dementation.
To the extent that vengefulness turns out to be a simple direct value that survives under many reasonable construals, it seems to me that one simple and morally elegant solution would be to filter, not the people, but the spread of their volitions, by the test, "Would your volition take into account the volition of a human who would unconditionally take into account yours?" This filters out extrapolations that end up perfectly selfish and those which end up with frozen values irrespective of what other people think - something of a hack, but it might be that many genuine reflective equilibria are just like that, and only a values-based decision can rule them out. The "unconditional" qualifier is meant to rule out TDT-like considerations, or they could just be ruled out by fiat, i.e., we want to test for cooperation in the Prisoner's Dilemma, not in ...
What you're looking for is a way to construe the extrapolated volition that washes out superstition and dementation.
You could do that. But if you want a clean shirt out of the washing machine, you don't add in a diaper with poo on it and then look for a really good laundry detergent to "wash it out".
My feeling with the CEV of humanity is that if it is highly insensitive to the set of people you extrapolate, then you lose nothing by extrapolating fewer people. On the other hand, if including more people does change the answer in a direction that you regard as bad, then you gain by excluding people with values dissimilar from yours.
Furthermore, excluding people from the CEV process doesn't mean disenfranchising them - it just means enfranchising them according to what your values count as enfranchisement.
Most people in the world don't hold our values.(1) Read, e.g., Haidt on culturally determined values. Human values are universal in form but local in content. Our should-function is parochial.
(1 - note: this doesn't mean that their values will be different after extrapolation; f(x) can equal f(y) for x != y. But it does mean that they might be, which is enough to give you an incentive not to include them.)
I will not lend my skills to any such thing.
Is that just a bargaining position, or do you truly consider that no human values surviving is preferable to allowing an "unfair" weighing of volitions?
I made a couple posts in the past that I really hoped to get replies to, and yet not only did I get no replies, I got no karma in either direction. So I was hoping that someone would answer me, or at least explain the deafening silence.
This one isn't a question, but I'd like to know if there are holes in my reasoning. http://lesswrong.com/lw/1m7/dennetts_consciousness_explained_prelude/1fpw
Here, I had a question: http://lesswrong.com/lw/17h/the_lifespan_dilemma/13v8
I looked at your consciousness comment. First, consciousness is notoriously difficult to write about in a way that readers find both profound and comprehensible. So you shouldn't take it too badly that your comment didn't catch fire.
Speaking for myself, I didn't find your comment profound (or I failed to comprehend that there was profundity there). You summarize your thesis by writing "Basically, a qualium is what the algorithm feels like from the inside for a self-aware machine." (The singular of "qualia" is "quale", not "qualium", btw.)
The problem is that this is more like a definition of "quale" than an explanation. People find qualia mysterious when they ask themselves why some algorithms "feel like" anything from the inside. The intuition is that you have both
the code — that is, an implementable description of the algorithm; and
the quale — that is, what it feels like to be an implementation of the algorithm.
But the quale doesn't seem to be anywhere in the code, so where does it come from? And, if the quale is not in the code, then why does the code give rise to that quale, rather than to some other one?
T...
I just failed the Wason selection task. Does anyone know any other similarly devilish problems?
Oh, look honey: more proof wine tasting is a crock:
A French court has convicted 12 local winemakers of passing off cheap merlot and shiraz as more expensive pinot noir and selling it to undiscerning Americans, including E&J Gallo, one of the United States' top wineries.
Cue the folks claiming they can really tell the difference...
Nice recap of psychological biases from the Charlie Munger school (of hard knocks and making a billion dollars).
http://www.capitalideasonline.com/articles/index.php?id=3251
I've been wondering what the existence of gene networks tells us about recursively self-improving systems. Edit: Not that self-modifying gene networks are RSIs, but the question is "Why aren't they?" In the same way that failed attempts at flying machines tell us something, but not much, about what flying machines are not. End Edit
They are the equivalent of logic gates and have the potential for self-modification and reflection, what with DNA's ability to make enzymes that chop itself up and do so selectively.
So you can possibly use them as evide...
Suppose I wanted to convince someone that signing up for cryonics was a good idea, but I had little confidence in my ability to persuade them in a face-to-face conversation (or didn't want to drag another discussion too far off-topic) - what is the one link you would give someone that is most likely to save their life? I find the pro-cryonics arguments given by Eliezer and others on this site + Overcoming Bias to be persuasive (I'm convinced that if you don't want to die, it's a good idea to sign up) but all the arguments are in pieces and in different pla...
First use of "shut up and calculate"?
I liked learning about the bias called the "Matthew effect": the tendency to assign credit to the most eminent among all the plausible candidates. The name comes from Matthew 25:29:
For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.
http://scitation.aip.org/journals/doc/PHTOAD-ft/vol_57/iss_5/10_1.shtml?bypassSSO=1
Enjoy.
For those Less Wrongians who watch anime/read manga, I have a question: What would you consider the top three that you watch/read and why?
Edit: Upon reading gwern's comment, I see that was kind of far off-topic, even for an open thread. So change the question to: what anime/manga was most insightful into LW-style thinking and problems?
The actual intent was to point out that embargoing references past a certain point truly is ridiculous. Referencing a 69-year-old movie (EDIT: a several-hundred-year-old play) is an attempt at a reductio ad absurdum, made more visceral by technically violating the norm Eliezer is imposing.
Certainly there's no real need to discuss specific plot points of recent manga or anime on this site. This, in fact, holds for any specific example one cares to name. On the other hand, cumulatively cutting off all our cultural references to fiction does impose a real harm on the discourse.
References to fiction let us compress our communications more effectively by pointing at examples of what we mean. My words alone can't have nearly the effect a full color motion picture with surround sound can -- but I can borrow it, if I'm allowed to reference works that most people are broadly familiar with.
I don't think that most recent works count -- they reach too small a segment of LW, and so are the least useful to reference, and the ones most likely to upset those who are spoiler averse. The question is where the line should be set, and that requires context and judgment, not universal bans.
UFO sightings revealed in UK archive files from 1990s
"The Mathematical Foundations of Consciousness," a lecture by Professor Gregg Zuckerman of Yale University
I've been trying to find the original post to explain why it allegedly is so very likely that we live in a simulation, but I've had little luck. Does anyone have a link handy?
This is thoroughly hypothetical, but if there was going to be an unofficial, more social sister site for LW, what name would you suggest for it?
I should really start taking fish oil supplements again. I would especially encourage anyone with children to make sure they get sufficient fish oil while their brains are growing.
I've realized that having my and others' karma listed feels very similar to when Gemstone III started listing everyone's experience level.
The question remains: how much karma to level up?
Italian Court Finds Google Violated Privacy
http://www.nytimes.com/2010/02/25/technology/companies/25google.html
This comment is a response to the claim that Gould's separate magisteria idea is not conceptually coherent. While I don't view reality parsed this way, I thought I would make an effort to establish its coherence and self-consistency (and relevance under certain conditions).
In this comment, by dualism, I'll mean the world view of two separate magisteria; one for science and one for faith. There are other, related meanings of dualism but I do not intend them here.
Physical materialism assumes monism -- there is a single, external reality that we have a limi...
Exercising "rational" self-control can be very unpleasant, therefore resulting in disutility.
Example 1: When I buy an interesting-looking book on Amazon, I can either have it shipped to me in 8 days for free, or in 2 days for a few bucks. The naive rational thing to do is to select the free shipping, but you know what? That longer wait is more unpleasant than spending a few bucks.
Example 2: When I come home from the grocery store I'm tempted to eat all the tastiest food first. It would be more "emotionally intelligent" to spread it ou...
Is there a facebook group I can spam my friends to join to save the world via Craiglist ads yet?
Have people previously tried/discussed this calibration diagnostic?
There seems to be a bit of a terminology mess in the area of intelligent systems.
There are generally-intelligent systems, narrowly-intelligent systems, and an umbrella category of all goal-directed systems.
How about the following:
we call the narrowly-intelligent systems "experts", and their degree of expertise their "expertness";
we call the generally-intelligent systems "intelligences", and their degree of cleverness their "intelligence";
we call the umbrella category of goal-directed agents "competent systems".
Assuming that some cryonics patient X ever wakes up, what probability do you assign to each of these propositions?
1) X will be glad he did it.
2) X will regret the decision.
3) X will wish he was never born.
Reasoning would be appreciated.
Related to this post, which got no replies:
http://lesswrong.com/lw/1mc/normal_cryonics/1h8j
I remember a post on this site where someone wondered whether a medieval atheist could really confront the certainty of death that existed back then, with no waffling or reaching for false hopes. Or something vaguely along those lines. Am I remembering accurately, and if so, can someone link it?
Geek rapture naysaying:
"Jaron Lanier: Alan Turing and the Tech World's New Religion"
There should be a policy, or strong norm, of "no summary, no link" when starting a thread with a suggested link. That summary should state the key insights gained and what you found unique about the piece.
I hate having to read a long article -- or worse, listen to a long recording -- and find out it's not much different from what I've heard a thousand times before. (That happens more than I would expect here.) Of course, you shouldn't withhold a link just because Silas (or anyone else) already read something similar ... but it tremendously helps to know in advance that it is something similar.
Looking at various definitions, "intelligence" and "instrumental rationality" seem to often be used to mean much the same thing.
Is this redundant terminology? What should be done about that?
Refs:
The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.