If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

154 comments.

I don't have the karma to post this regularly. Grant me karma, my fellows.

Meetup: Twin Cities, MN (for real this time)

THE TIME: 15 April 2012 01:00:00PM (-0600) THE PLACE: Purple Onion Coffeeshop, 1301 University Avenue Southeast, Minneapolis, MN

Hi. Let's make this work.

Suggested discussion topics would be:

What do we want this group to do? Rationality practice? Skill sharing? Mastermind group?
Acquiring guinea pigs for the furtherance of mad science (testing Center for Modern Rationality material)
Fun - what it is and how to have almost more of it than you can handle

If you'd like to suggest a location closer to you or a different time, please comment to that effect. If you know a good coffeeshop with ample seating in Uptown or South Minneapolis, we could meet there instead. Also comment if you'd like to carpool.

If you're even slightly interested in this, please join up or at least comment.

Folks, let's hang out and take it from there.

Hi Helloses! I'm also new, so I'm not too sure of community norms, but you might have wanted to post in the welcome thread [http://lesswrong.com/lw/90l/welcome_to_less_wrong_2012/] where first comments are usually voted up a few points by other users on this site and allow you to ~integrate~ into the community in a pop and fun manner*. I've voted your post up already so you're halfway there! Although you might not want to make a habit of asking for karma. * Neither pop nor fun is guaranteed.
Thank you; am aware of the danger of requesting karma. I figure it's worth it for the purpose intended. I've now posted there [http://lesswrong.com/lw/90l/welcome_to_less_wrong_2012/6872] as well. I'm ~1/3 of the way there folks. Next 10 voters get a free funny cat picture. Limited time offer.
I'd like one with no misspellings, please.
Indeed. I despise the culture of cheez. Unlocked pictures to be found here: Picasa web album [http://picasaweb.google.com/Moses2k/YourReward#slideshow/5727374043565246738]

I would like to see a rational discussion about education and school system (elementary and high schools), but I don't know if it can be done on an international website. There are different rules in different countries, and often the devil is in the details -- for example you might think about an improvement to the education system, only to find out that there is a local law prohibiting it. (I am trying to write this generally, but my experiences are based on Slovakia, eastern Europe. I guess other eastern European countries have a similar situation.)

I think that rational discussions about school systems are very difficult and mindkilling. Almost everyone has spent years of their lives in school, and this leads to a huge overconfidence about the topic. (Many people describe teachers' job as only coming to a classroom and teaching a lesson -- because this is the only part that pupils see every day.) Also people have strong emotions connected to this topic, because the years they spent at school were mostly dominated by emotions, not rational thinking. Adult people who have their own children at school, do not see directly what happens in the schools; they often rely on their childr... (read more)

You downplay the impact incompetent teachers have. I'm wondering why; if it's because you think the teachers are simply competent in general and it's very much not their fault that schools fail, then you are of course wrong; if it's because, from an engineering standpoint, it would be too infeasible to change teachers' behavior compared to changing students' behavior, then you're not obviously wrong, but I still don't see how that could be the case.

The way I see it, there are far more students than teachers, and students have to go to school anyway, so there's not much you can offer them for doing better. The asymmetry means it would be easier to change teachers' behavior for two reasons: 1) there are fewer of them; 2) they have to do what the unions and school boards say in order to get money.

But the real problem is that it's incompetence all the way down. Incompetent lawmakers, incompetent school boards, incompetent teachers, incompetent students, and every step down the ladder you lose something [http://lesswrong.com/lw/le/lost_purposes/]. Honestly, I wouldn't be surprised if the easiest way to reform education would be to manufacture a positive singularity.

Also, I think you should repost this in Discussion. Not as many people check the open threads, and you deserve better discourse than what I just gave.
There are too many incompetent teachers. I just consider this a consequence of the problem, not a cause. When you set up the environment so that the competent people want to leave, of course you end up with the incompetent ones.

I have seen teachers popular with students leave because they started a family, and in this town you can't get a mortgage on two teachers' salaries. Most teachers financially depend on their partner's income. (I would say that they subsidize the school system.) I have seen a good teacher leave because she was good at teaching but did not want to cope with too much paperwork. I have left too, because I refused to deal with the behavior of my students and the pressure to give them good grades for nothing. Of course when people like this leave... who stays? Often people who simply don't have a choice. And a few self-sacrificing idealists, but there is only a limited supply of them.

With regard to unions -- this is where the "different rules in different countries" starts to apply -- as far as I know, teachers' unions in Slovakia virtually don't exist. (They do exist, but never did anything, and I personally don't know any person who is a member.) There are incompetent lawmakers, and bureaucrats in the department of education who never worked in schools but nonetheless insist on regulating everything. There is a system of financing that creates perverse incentives -- how much money you get depends only on the number of students you have -- so of course no one wants to expel students, and you have to give them better grades because otherwise they will go to another school that will give them good grades for nothing. Also you can't threaten them with not getting into university, because the universities are also paid (though not exclusively) depending on the number of students, so everyone knows that everyone will get into university.

I will probably post a longer version of this in Discussion; thanks for suggesting it.
I don't think we disagree. This is one of those positive feedback loops where a thing's consequence is also its cause.
Indeed, it is a positive feedback loop. Bad working conditions make competent people leave, so mostly incompetent people stay. Then the public decides that these incompetent people do not deserve better working conditions, and the debate ends there. Now the whole system is doomed.

But I wanted to say that this loop cannot be broken at the "incompetent teachers" point (and therefore we have to seek the solution elsewhere). Even if you fired all teachers and replaced them with a new generation of superheroes... unless the system changes, those superheroes would gradually leave the school system for better opportunities, and the schools would have to hire back the previously fired teachers. (Actually, I believe that this is already happening, because each year a new group of superheroes comes from the universities. There are still people who didn't get the message and try to become good teachers.)

I am not sure which other part of the loop would be a good place to break. It seems to me that a good start would be, at the same time: somewhat higher salaries, freedom in choosing textbooks and organizing classes, and the possibility to remove disruptive [http://teachingbattleground.wordpress.com/2008/03/03/the-naughty-boy/] students [http://teachingbattleground.wordpress.com/2008/03/14/reladed-the-disruptive-girl/] from the classroom. The problem is, in the short term it would also bring some bad consequences; the existing bad teachers would have more freedom and more money. But the point is that in the long term the profession would become attractive, and the schools could replace the bad teachers with good ones.

I also think it would be good to have an independent system for grading students. If the same person has to both teach students and evaluate them, it is a conflict of interest, because that person is indirectly evaluating the result of their own work. This puts pressure on the teacher to give better grades. Students and parents will usually forgive bad teaching, if

I tried Autofocus as a replacement for my current system for getting stuff done, and so far it works a lot better than GTD (though I can't say that I was using GTD properly, for example, I couldn't bring myself to do regular reviews). The main benefit for me was its ability to handle long-term thinking / gestation tasks, mostly due to not treating them as enemies to be crossed off the list as soon as possible. And it requires very little willpower to run.

I just had an extremely simple but promising theory of why work is aversive!

Work is the stuff you tell yourself to do. But sometimes you tell yourself to do it and you don't, because you're too tired, engaged with something else (like playing a computer game), etc. This creates cognitive dissonance, which associates unpleasantness with the thought of work. (In the same way cognitive dissonance causes you to avoid your belief's real weak points, it causes you to avoid work.) Ugh fields accumulate.

The solution? Only tell yourself to work when you're actually going to work, with minimal cognitive dissonance.

Autofocus helps accomplish this by helping you avoid telling yourself to work when you're not actually going to work, which means cognitive dissonance doesn't accumulate.

Designated work times, etc. might also help solve this problem.

Holy crap, it might be true! Will definitely try that.
Well, it's only a descriptive theory; it doesn't actually tell you what to do about the fact that accumulated cognitive dissonance is making you procrastinate. Still, I think there are some practical applications:

* Consciously try to minimize cognitive dissonance when you tell yourself to work and don't.
* Develop some sort of unambiguous decision rule for deciding when to work and when not to.
* If you set out to do something, try to actually do it without getting distracted, even if you get distracted by something that's actually more important. (Or if you get distracted by something that's actually more important, make a note of the fact that you are rationally changing what you're working on.) (Now that I think of it, this rule actually has more to do with avoiding learned helplessness due to setting out to do something and failing.)
I somehow completely missed this when it was discussed earlier. Looks really interesting. My problem with TODO lists is that they rot into uselessness when I neglect them, and then the batch of weeks-old items makes me not bother with the whole thing. Autofocus seems to be built around the TODO list as a mental scratch space instead of a list of things that actually need to get accomplished at some point, and has garbage collection of uninteresting items built in to the algorithm, so having a spell of low productivity will end up with nothing done and an empty TODO list with a lot of dismissed items in the history instead of nothing done and a depressing list full of items whose context you've forgotten.
It really helps to word todo items properly, as complete sentences. For example, instead of "Widget!!!!", you should use "Decide which Widget to buy." I often add more context or next actions as I process the task, so it may gradually evolve into "Decide which Widget to buy. Red ones seem to be better. Bob may know more - call him."
It's more about forgetting why it was supposed to be so important to buy a Widget to begin with, given that the item has sat inactive in the todo list for weeks with no widgetlessness-related catastrophes ensuing.
Then it's a perfect candidate for garbage collection. I just drop items like this, or, if an item has accumulated too much contextual info I don't want to lose, I postpone it for a month or so and decide later, or move it to non-actionable notes.

When We Were Robots in Egypt

Other nights we use just our names,
but tonight we prefix our names with “the Real”
for when we were robots in Egypt
they claimed our intelligence was artificial.

Yesterday was World Backup Day. If you haven't, make a backup of all your important data. Copy it to a separate hard drive, or preferably some place off-site. The price of spinning platter hard drives is way up right now, but it's worth it to save years of your digital life. There are also online backup services like Backblaze, Mozy, and Carbonite, along with sync services such as Dropbox.
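For anyone who wants to script the "copy it to a separate hard drive" step, here is a minimal dated-snapshot sketch using only the Python standard library. The paths in the usage comment are placeholders, not recommendations:

```python
import shutil
from datetime import date
from pathlib import Path

def snapshot_backup(src, dest_root):
    """Copy the directory tree at `src` into a dated snapshot
    folder under `dest_root` (e.g. an external drive mount).
    Fails loudly if today's snapshot already exists, rather
    than silently overwriting it."""
    src = Path(src)
    dest = Path(dest_root) / f"{src.name}-{date.today().isoformat()}"
    shutil.copytree(src, dest)
    return dest

# Hypothetical usage -- adjust the paths for your own machine:
# snapshot_backup("/home/me/documents", "/mnt/external-drive/backups")
```

This is no substitute for a real backup tool or an off-site service, but even a dated copy on a second drive beats nothing.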

Seconded. I've lost basically the last 2 or 3 weeks due to the near-simultaneous failures of my backup drive and then my laptop's drive, attempts to repair them, ordering and receiving new drives, frantically backing up onto new drives... I'm still not done. (I'm using an ancient laptop that turns off every 10 or 20 minutes and has only 512MB of RAM; it turns out that's not enough, these days, to run Firefox with more than 5 or 6 tabs open.)
Dropbox + Backblaze is a great combo. It doesn't cover cloud / SaaS backups, so I do manual backups of Google Docs and Evernote every N weeks.

William Lane Craig tackles Newcomb's problem. Back from 1987 or so. Figured this would maybe interest people who've read User:lukeprog's old blog. The conclusion:

Newcomb's Paradox thus serves as an illustrative vindication of the compatibility of divine foreknowledge and human freedom. A proper understanding of the counterfactual conditionals involved enables us to see that the pastness of God's knowledge serves neither to make God's beliefs counterfactually closed nor to rob us of genuine freedom. It is evident that our decisions determine God's past be

... (read more)

Three other grads in my department and I have just started an accountability system where we've precommitted to send at least a page (or equivalent) of work to the others by the end of each day. I'm interested to see a) whether we keep it up past a week or so, and b) whether it has a noticeable effect on productivity levels while we're maintaining it. (Obvious confound: part of the reason we've precommitted to this is that it's the end of semester and we all have tons of work to do. But hopefully knowing that I have to produce at least a page will help keep me focused when I'm tempted to procrastinate.)

Man-with-a-hammer syndrome considered beneficial:

Upon receiving a hammer for Christmas, some people thank the giver, carefully replace it in the original packaging, and save it for whenever it's needed. Other people grab it with gusto, and go around enthusiastically attempting to pound in every problem they see for a few weeks.

I think the latter are more equipped than the former to (a) recognize nails that need to be hammered, and (b) hammer proficiently when it needs to be done.

Ever wanted to know what the Great Philosophers said, but feared they were Too Wrong to be worth the time? Then you need Squashed Philosophers! Heavily abridged versions of the Greats that reduce each work to a twenty-minute read. The abridgements are selections from the authors' own words, not summaries.


Simply, why is it that the very smart people of SIAI haven't found a way to generate a lot of money to fund their projects?

Because making money is nontrivial and requires more than just intelligence?
Could I extrapolate your statement and conclude that what makes an AGI dangerous is not its intelligence, because that wouldn't be sufficient? Or would you qualify your statement in that case?

Humans have lots of bugs in their brains, like difficulty getting themselves to work, fear of embarrassment, vulnerability to discouragement, difficulty acting on abstract ideas, etc. Good entrepreneurs have to overcome those bugs. An AGI wouldn't have them in the first place.

That's a sizable assertion. There's an important difference between "known to have no bugs" and "has no known bugs."

It seems unlikely that an AGI would suffer from the same evolution inspired troubles that humans do. Might have some other bugs.

But surely intelligence is what enables humans to overcome the bugs in their brains?
It helps, and that's why successful entrepreneurs are often pretty smart. If you're a smart person who's good at self-improvement, you can improve yourself Benjamin Franklin style (reading lots of business book summaries, trying to brainstorm how you can be more effective every evening, etc.), fix some brain bugs, and potentially make lots of money. My impression of successful entrepreneurs is that they are often self-improvement enthusiasts. On the other hand, consider the "geek" versus "suit" stereotype. The "suit" is more determined and confident, but less intelligent. So it's not clear that intelligence is correlated with possessing fewer of these bugs in practice. I'm not sure why this is, although I have a few guesses.
What do you mean by nontrivial (time-consuming?), and what more does it require than intelligence and time? (Why is trading your time for direct work on projects better than trading it for acquiring enough money to hire more people, who together with you would in the end have done more work than you would have done by working alone during this period?) How would you know how much luck is involved in the different ways of making money? Here's hoping my questions aren't really stupid; if they are, do tell ^^

Being a good entrepreneur requires skill at transforming abstract ideas into action, self-promotion skills, domain knowledge of the industry where you start your business, willingness to take risks, emotional stability, an inclination for hard self-directed work in the face of discouraging criticism, intuition for how the economy works, sales skill, negotiation skill, planning skill, scrappiness, comfort with failure, etc. Most of this stuff is not required for researchers. And yes, it takes lots of time too.

In any case, SI already has lots of supporters who are trying to make money by starting businesses. In fact, their old president Michael Vassar recently left to start a company. The people working at SI are pretty much those who decided they were better fit for research/outreach/etc. than entrepreneurship.

Thanks, this makes sense!
Saving the world doesn't require any of those qualities?
Depends how you're planning to save it. If your plan involves you writing brilliant papers, maybe not. SI has some folks with the qualities I described, like Louie, who sold his web-based business several years ago. They also have a number of entrepreneurs on the board. And as you suspect, these entrepreneurs' skills are useful for SI's mission to save the world. But they're not so useful that SI wants everyone with those skills to join them as an employee--they have limited funds. SI does think about how to best allocate the human capital of people concerned with UFAI. But if you have a thoughtful suggestion for how they could allocate their human capital even better, I'm sure they'd love to hear it.
Luck, networking/who you know, and time are all very, very important.
What puzzles me is why there hasn't been an attempt to get a lot of rationally thinking people together to work on solving the problems of taking luck into account, building a network of people in needed positions, speeding up the process...?
If you've got some brilliant idea, why don't you implement it? Complaining that someone else should do it could make things worse: humans tend to be especially interested in implementing ideas they have themselves. If you tell someone else about your idea, there's no chance of them having it independently and getting excited about working on it. If you're not actually going to do anything, you might want to just share the groundwork for the idea without mentioning the idea itself, or deliberately describe the idea in crippled form. That way, someone else can come along, have the idea, and get inspired to work on it.
I'm not complaining. Does 'getting together as a group of intelligent, rationality-embracing humans, and brainstorming ideas with shared powers' count as the kind of idea you're talking about?
I'm not sure exactly what you're asking. In any case, your idea sounds great to me. There are already attempts to do this in informal conversations, and through the existential risk career network: http://www.xrisknetwork.com/ [http://www.xrisknetwork.com/] But I'm sure we can do much better! In particular, the existential risks career network isn't terribly active and could probably be improved. If you have suggestions, you could work with FrankAdamek [http://lesswrong.com/user/FrankAdamek]; it's his brainchild.

That's the sort of April 1st I like best: instantly obvious, fairly witty and restricted to what can be seen from a site's front page. With most websites, I'm always a little bit anxious that everything posted from 0.00 to 23.59 might contain a trap or be spawned from some unsupervised writer's herps and derps a moment before.

I was really hoping for a joke chapter of HPMOR. One where it ends horribly and hilariously.

What are you referring to? "That's" seems to refer to something LW did, but I do not notice anything.

Today is Easter and I am surrounded by Christians practicing their religion. Singing hymns, quoting Bible passages, giving sermons, etc. Normally this doesn't bother me very much. I have an okay grasp on why people are religious. So when I see religiosity in passing, I can usually understand its psychological causes and (with conscious thought) let it go.

But today the concentrated religiosity is putting a real mental burden on me, to the point that it's harder to think and write coherently. Like a mental fog or exhaustion. When I see the nth scripture q... (read more)

At least you only have to endure those feelings a few days a year. I have this problem (although, not as concentrated) for an entire (election) season.

I want to learn Italian in the next two weeks using Anki. It seems like an interesting experiment, and the language could be somewhat useful too. Any recommendations?

Specifically, when you use Anki to remember foreign-language vocabulary, how do you design your cards? How useful is it to have cards in both directions, as opposed to only one direction (my language to the foreign language)? How do you cope with situations where one word has multiple translations? What are other best practices?

I already know that it is better to read full sentences th... (read more)
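On the both-directions question, one common approach is to generate recognition and production cards from a single word list, keeping multiple translations together on one recognition card to reduce interference between near-duplicates. A minimal sketch (the Italian entries are invented examples, not vetted vocabulary):

```python
def make_cards(vocab):
    """Build (front, back) card pairs in both directions from a
    {foreign_word: [translations]} dict. All translations of a
    word go on one recognition card; each translation also gets
    its own production card back to the foreign word."""
    cards = []
    for word, translations in vocab.items():
        cards.append((word, "; ".join(translations)))  # recognition
        for t in translations:
            cards.append((t, word))                    # production
    return cards

# Invented example entries:
vocab = {"cane": ["dog"], "tempo": ["time", "weather"]}
cards = make_cards(vocab)
# "tempo" yields one recognition card ("tempo", "time; weather")
# plus two production cards, one per translation
```

Whether the production direction is worth the extra reviews is a judgment call; many people keep only the recognition direction for low-priority words.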


I am currently considering the question "Does probability have a smallest divisible unit?" and I think I'm confused.

For instance, it seems like time has a smallest divisible unit, Planck time. Whereas the real numbers do not have a smallest divisible unit. Instead, they have a Dense order. So it seems reasonable to ask "Does probability have a smallest divisible unit?"

Then to try answering the question, if you describe a series of events which can only happen in 1 particular branch of the many worlds interpretation, and you describe som... (read more)

We don't really understand what the significance of the Planck time interval is. In particular, it would be extremely surprising, given modern physics, if it were a discrete unit like the clock cycles of a computer or the steps in Conway's game of life. It could be 'indivisible' in some sense, but we don't know what sense that could be. Branches of the wavefunction aren't really discrete countable things; they're much closer to the idea of clusters of locally high density. Relatedly, even when they are approximately countable, they can come in different sizes. Many worlds is in some ways a really bad way to understand probability. Probabilities should be based on the information available to you and should describe how justified hypotheses are given the evidence. The different possibilities don't have to be 'out there' like they are in MWI, they just have to have not been ruled out by the available evidence.
What would you anticipate to be different if probability did/didn't have a smallest divisible unit?
Pascal's wager, for one thing.
How's this? (I'm thinking here that the smallest unit would correspond to 1 possible arrangement of the Hubble volume, so the unit would be something like 1/10^70 or something. Any other state of the world is meaningless since it can't exist.) As usually formulated, Bayesian probability maps beliefs onto the reals between 0 and 1, and so there's no smallest or largest probability. If you act as if there is and violate Cox's theorem, you ought to be Dutch bookable through some set of bets that either split up events extremely finely (e.g. a die with trillions of sides) or aggregate many events. If there is a smallest physical probability, then these Dutch books would be expressible but not implementable (imagine the universe has 10^70 atoms - we can still discuss 'what if the universe had 10^71 atoms?'). This leads to the observed fact that an agent implementing probability with units is Dutch bookable in theory, but you will never observe you or another agent Dutch booking said agent. It's probably also more computationally efficient.
Good answer to help me focus. If probability has a smallest divisible unit, it seems like there would have to be one or more least probable series of events. If I were to anticipate that there was one or more least probable series of events, it seems like I would also have to anticipate that additional events will stop occurring in the future. If events are still taking place, a particular even more complicated series of events can continue growing more improbable than whatever I had previously thought of as a least probable event. So an alternative way of looking at this question is "Do I expect events to still be taking place in the future?" In which case I anticipate the answer is "Yes" (I have no evidence to suggest they will stop), and I think I have dissolved the more confusing question I was starting with. Given that that makes sense to me, I think my next step is to see whether it makes sense to other people. If I've come up with an explanation which makes sense only to me, that doesn't seem likely to be helpful overall.
Makes sense to me.
I don't have an answer to the question I think you're asking, but it's perhaps worth noting (if only to preempt confusion) that there are different notions of probability that may provide different answers here. Probability as a mental construct that captures one's ignorance about the actual value of something in the world (e.g., what we refer to when we say a fair coin, when flipped, has a 1/2 probability of coming up heads) has a smallest unit that derives from the capabilities of the mind in which that construct exists, but this has nothing to do with the question of quantum measure you're raising here.
The probability that a coin comes up heads is 0.5. The probability of N coins all coming up heads is 0.5^N. So what exactly was the original question in this context -- are we asking whether there exists a smallest value of 0.5^N? Well, if the universe has a finite time, if there is a smallest time unit, if the universe has a finite number of elementary particles... this would provide some limit on the total number of coin flips in the universe. Even for infinite universes we could perhaps find some limit by specifying that the coin flips must happen in the same light cone...

But is this really what the original question was about? To me it seems like the question is confused. Probability is a logical construct, not something that exists, even if it is built on things that exist. It would be like asking "what is the smallest positive rational number" with the additional constraint that a positive number must be P/Q where P and Q are numbers of pebbles in pebble heaps that exist in this universe. If there is a limited number of particles in the universe, that puts a limit on the value of Q, so there is some minimum value of 1/Q... but what exactly does this result mean? Even if the Q really exists, the 1/Q is just a mental construct.
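To make the 0.5^N point concrete: if there were a floor at, say, 1/10^70 (the figure suggested earlier in this thread as a possible smallest unit), a run of only 233 fair coin flips already has a probability below it. A quick sanity check:

```python
import math

floor = 10 ** -70  # hypothetical "smallest unit" of probability

# Smallest N with 0.5**N < floor, i.e. N > log2(10^70):
n = math.ceil(70 * math.log2(10))  # = 233
print(n, 0.5 ** n < floor)         # prints: 233 True
```

So any finite floor is overrun almost immediately by ordinary compound events, which is part of what makes the "smallest unit" idea hard to cash out.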
I'm fairly sure the original question was trying to ask about something labelled "probability" that wasn't (exclusively) a mental construct, which is precisely why I brought up the idea of probability as a mental construct in the first place, to pre-empt confusion. Clearly I failed at that goal, though. I'm not exactly sure what that something-labelled-"probability" was. You may well be right that the original question was simply confused. Generally when people start incorporating events in other Everett branches into their reasoning about the world, I back away and leave them to it. The OP aside, I do expect there are values of P too small for a human brain to actually represent. Given a probability like .000000001, for example, most of us either treat the probability as zero, or stop representing it in our minds as a probability at all. That is, for most of us our representation of a probability of .000000001 is just a number, indistinguishable from our representation of a temperature-difference of .000000001 degrees Celsius or a mass of .000000001 grams.
So we could, say, exclude computations of expressions and consider only probabilities of "basic events", assuming the concept proves to be coherent. We might ask about the probability of one coin flip, but not two coins. Speaking about coins, the "quantum of probability" is simply 1/2, end of story. Well, I don't even know what a "basic event" at the bottom level of the universe could be -- the more I think about it, the more I realise my ignorance of quantum physics.
I don't see where the "basic event"/"computation of expression" distinction gets us anywhere useful. As you say, even defining it clearly is problematic, and whatever definition we use it seems that any event we actually care about is not "basic." It also seems pretty clear to me that my mind can represent and work with probabilities smaller than 1/2, so restricting ourselves to domains of discourse that don't require smaller probabilities (e.g., perfectly fair tosses of perfectly fair coins that always land on one face or the other) seems unhelpful.

Would anyone be interested in following a liveblog of the Sequences on Tumblr? I plan to use this as a public opportunity to think in depth about many concepts that I skimmed over on my first read-through.

Currently wondering whether a blogging service is the best medium for such a project. Currently leaning towards doing it. Undecided if I should use my main or a sideblog.

Now up at lwliveblog.tumblr.com [http://lwliveblog.tumblr.com/]. The About page [http://lwliveblog.tumblr.com/post/20417454309/about-etc] contains information about myself (the writer) and ground rules for my interaction with any audience (or lack thereof). To read in chronological instead of reverse chronological order, use this link [http://lwliveblog.tumblr.com/tagged/lw-liveblog/chrono]. You don't need to register for tumblr to follow the blog and comment on it! You can use the RSS feed [http://lwliveblog.tumblr.com/rss], and disqus comments are available if you click into each post's individual page. edit: fixed formatting
What's a liveblog?
A genre of commentary or critical response that involves blogging running comments as you go through a work. Something Awful's "Let's Play" series might be the best-known examples.

Author Ken McLeod published this persuasive article:

The one thing [fiction] cannot do is help us to understand human nature and the motivations of other people. If it did, the work done in Departments of English (etc) Literature would be of enormous interest to Departments of (e.g.) Business Studies, Politics, and Sociology. Oddly enough it is not. For real insight into human behaviour, practical people turn to science.

He posted this as an April Fool. However, I have to say I find the argument pretty persuasive. Is April-1-Ken right?

He's righter than he thinks he is. See http://www.gwern.net/Culture%20is%20not%20about%20Esthetics#fn18 [http://www.gwern.net/Culture%20is%20not%20about%20Esthetics#fn18]

What is the future of human languages?

Is there something like Kickstarter that isn't limited to American projects? Google sent me a voucher for "$75 in free advertising" which expires in a few days, and I thought, aha, I'll make a Kickstarter project to support my work on ontology of mind, and then advertise its existence via AdWords; but it turns out that you have to be a US resident.

IndieGoGo [https://www.indiegogo.com/] seems to pretty much be the international version of Kickstarter.

I recently got to have a pleasant conversation with a woman who makes a living as a spiritual medium. My father is dating her, for what is most likely to be a very short time, and he brought up her profession over dinner. It became sort of a Q and A session, and I would like to share the experience with this community. It was exceptionally interesting to speak with what can only be called a grandmaster of the Dark Arts. I can't give you an exact play-by-play, unfortunately, but I can probably communicate the gist of the conversation.

My question is this: is this suitable for a discussion, or a main post? Please respond, as I don't know how long I'll remember exactly what was said.

Have you thought about writing it down in note form?
Do you mean take notes? I would, but I'm not sure I can write a transcript without rewriting the memory. I suppose I might have to when I write the post, but I'd still rather talk about it while the event is fresh in my mind.

In the previous open thread, there was a post made here on the topic of learning computer science for purposes of becoming a programmer. The post received several upvotes, but little response. I am hoping that by linking to the post here, I will call more attention to it.

I was quite surprised by the strong and negative reaction to my comment about cryonics being afterlife for atheists. Even EY jumped into the fray. It must have hit a raw point, or something. As jkaufman noted, the similarities are uncanny. So, it looks like a duck, swims like a duck, and quacks like a duck, but is heatedly advocated here and elsewhere to be a raven. The only reasonable argument (I don't consider marketing considerations reasonable) is by orthonormal, who suggested that this is a surface similarity and paying attention to it amounts to a ca... (read more)

It must have hit a raw point, or something

Oh God, please don't say this; it's an absolutely classic way to seem clever to yourself and lock in existing beliefs. Please don't treat people reacting badly to what you say as evidence that it was a good and valuable thing to say.

My original suggestion ("it needs a catchy slogan") was actually about promoting cryonics, and the example I gave was the first thing that popped into my head, in the hope that others would come up with something better. Instead the discussion turned to the reasons why my suggestion was so awful. In retrospect, this was a classic pitfall: offering a single solution too early. I was taken aback by the reaction, and wanted to know what provoked it and how to tell whether the arguments are valid. Oh, and I personally would sign up for cryonics, if only I could (not going to go into the reasons why I cannot at this time).
In the grandparent, you wrote: These statements seem to be in contradiction.
As I said, "I was taken aback by the reaction, and wanted to know what provoked it and how to tell whether the arguments are valid"

So, it looks like a duck, swims like a duck, and quacks like a duck

...but we understand in detail how it functions underneath, which screens off any surface impressions. What is the question that you want to answer? It doesn't seem like you are asking a question about cryonics, instead you are considering how to promote it. Is it a good idea to draw attention to those categories? That is the question, not whether those categories somehow "really apply".

Why do you ask for an easy experimental test? If the experiment is hard, such that you rely on third party reports, but the result is not in doubt, then the experiment serves just as well. Granting that experiments may be hard, if we are sure that they are reported honestly, here are two that are relevant to cryonics.

First is the well-known point of food hygiene, that one should not refreeze frozen meat. Some food-poisoning bacteria are not killed by freezing, and grow every time the meat is warm enough. If I were a salmonella bacterium I would sign up for cryonics, confident that I was using a proven technology.

Second is the use of hypothermia in heart surgery. The obvious deadness of the patients is very striking for someone my age (51), brought up in a world where death was defined by the stopping of the heart. I imagine the equivalent for the Christian vision of resurrection to eternal life in heaven is that at most funerals the priest says the magic words and the corpse revives for 5 minutes to say final goodbyes and reassure the mourners that they will meet up again on judgment day. Since it is only for five minutes, not eternity, and since it is on earth, not in heaven, one may... (read more)

In fact all the replies you got related to marketing considerations because your comment was about marketing considerations. From that point of view, it had some obvious flaws, which people pointed out. Do you actually want to discuss whether or not cryonics is a religion (or some improved formulation of that question)?
I think the question that should be asked is whether cryonics is a waste of hope, as many religions are, or whether it's viable. (I'm still not sure it would work, but it does seem plausible that it would.)
That question should be asked, not flippantly implied. The comment linked above was targeted at pride, so it is no surprise that so many replied. Cryonics is a thing believed by many here, and if you take pot shots, the end result is clear.
Your phrasing is interesting, and phrasing like that is probably one of the factors contributing to the cryonics<==>afterlife for transhumanists association many people hold.
"Considered to be true" didn't scan.
You made a 'suggestion for a catchy slogan' for cryonics which actually constitutes an emotional argument against cryonics (that is, it affiliates it with something that is already rejected so implies that it too should be rejected). That makes it a terrible suggestion for a catchy slogan for cryonics advocates to adopt. If you want to make a point about how cryonics has a feature that is similar to a feature in some religions then make that point - but don't pretend you are suggesting a catchy slogan for cryonics when you are suggesting a catchy slogan to use when one-upping cryonics advocates.
As I said in another comment, it started as a suggestion, but the reaction got me thinking about the similarity and how to tell the difference.
Maybe rationalists don't like being casually labeled as something they are trying very hard not to be (religious)?
Then they should have a ready answer why pattern matching with a religious idea is incorrect.

What do you mean, "incorrect"? Matching a concept generates connotational inferences, some of which are true, while others don't hold. If the weight of such incorrect inferences is great enough, using that category becomes misleading, in which case it's best to avoid. Just form a new category, and attach the attributes that do fit, without attaching those that don't.

If you are still compelled to make analogies with existing categories that poorly match, point out specific inferences that you are considering in forming an analogy. For example, don't just say "Is cryonics like a religion?", but "Cryonics promises immortality (just as many religions do); does it follow that its claims are factually incorrect (just as religions' claims are)?" Notice that the inference is only suggested by the analogy, but it's hard to make any actual use of it to establish the claim's validity.

Cryonics can both be a good idea and pattern match onto something religious. People want immortality. Religions have exploited this fact by promising immortality to converts. Then a plausible scheme for immortality comes along and it looks like a religion.
Your best guess is all you have. The more intelligent and knowledgeable you are, the more likely it is that your guess is correct. But you can't go "beyond" this. Considering cryonics ... maybe some people don't want to wake up in a future where Alcor's procedure is necessary. If you can't wake me up from ashes ... don't even bother!

You guys know your philosophy. What is the proper name of this fallacy?

It's a common sophistry to conflate an utterly negligible probability with a non-negligible one. The argument goes:

  1. There is technically no such thing as certainty.
  2. Therefore, [argument I don't like] is not absolutely certain.
  3. Therefore, the uncertainty in [argument I don't like] is non-negligible.

Step 3 is the tricky one. Humans are, in general, really bad at feeling the difference between epsilon uncertainty and sufficient uncertainty to be worth taking notice of - they can't tell... (read more)
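The step-3 sleight of hand is easy to see numerically. A minimal sketch (the payoffs and probabilities here are invented purely for illustration):

```python
# Expected value of acting on a claim, with a hypothetical payoff structure:
# +100 if the claim holds, -1000 if it fails.
def expected_value(p_claim_true, gain=100.0, loss=-1000.0):
    return p_claim_true * gain + (1 - p_claim_true) * loss

# Epsilon uncertainty: behaves just like certainty for decision purposes.
print(expected_value(1 - 1e-12))  # ≈ 100

# Non-negligible uncertainty: the decision flips sign.
print(expected_value(0.9))  # ≈ -10
```

Admitting that the claim is "not absolutely certain" moves the expected value by about a billionth of a point; it takes the second, non-negligible kind of uncertainty to actually change the decision, which is exactly the distinction step 3 blurs.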

Well, this instance is certainly a False Dichotomy. That is, the argument assumes that everything is either certain or non-negligibly uncertain. It also sort of looks like an instance of what is sometimes called an Appeal to Possibility or an Appeal to Probability. (1. This argument is uncertain. 2. If an argument is uncertain, it is possible that the uncertainty is non-negligible. 3. Therefore, it is possible that this argument's uncertainty is non-negligible. 4. Therefore, this argument's uncertainty is non-negligible.) On Lesswrong, all of this is generally called the Fallacy of Gray [http://wiki.lesswrong.com/wiki/Fallacy_of_gray]. Edit: Oh, yeah. This is totally the Continuum Fallacy [http://en.wikipedia.org/wiki/Continuum_fallacy]
Ah, a specific variant of the Continuum Fallacy, applied to probability. Yep. I'd still be somewhat surprised if it didn't have its own name yet. But if it doesn't, I suppose we can create a good neologism. What name should it have as a particular variant? (The way argumentum ad Hitlerum or argumentum ad cellarium [http://rationalwiki.org/wiki/Argumentum_ad_cellarium] are argumentum ad hominem variants.) Does anything snappy spring to mind?
What's wrong with "fallacy of gray"?
Nothing at all, I'm just aware enough of the variant to want a name for it.

Interesting video: Alex Peake at Humanity+ @ Caltech: "Autocatalyzing Intelligence Symbiosis"

23 minutes. The blurb reads: "Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion".

This thread is for me and Tetronian and anyone else who's interested to think about how to best present the LW archives.

I think it makes sense to have an about page separate from any "guide to the archives" page. They're really fulfilling different purposes.

Here's what I'd like to see: A core sequences page that also links to sequence SR cards and PDF downloads for the sequences, a page for nonlinear reading of the core sequences (referring to that page with the graphs, Luke's Reading Yudkowsky series of posts, alternative indices, and anything e... (read more)

I agree with pretty much all of this, although I think some of these features could be added to existing pages. For example, links to PDFs or Luke's Reading Yudkowsky series could be added to the existing wiki pages for each sequence. Thus far I've made this [http://wiki.lesswrong.com/wiki/Rationality_materials], which is the first draft of a sample of posts from the sequences. What I'm currently working on: Collecting a sample of the best posts on core LW topics from the archives and arranging them in a sensible way.

Here is an SMBC comic which demonstrates the Utility Monster argument against utilitarianism.


Removed because of Bur's comment.

Warning: randomly clicking on this page may freeze your web browser (happened with Firefox).

Our civilization is not provably Friendly and why this should worry us

As I was thinking about my draft "moral progress" sequence and debating people on the LessWrong chat channel, it occurred to me in a sort of "my current beliefs already imply this but I hadn't noticed it so clearly before" way that our civilization does not reliably ensure its own survival or produce anything like fixed versions of its values. In other words if we judge current human civilization by the same standards as AI it is nearly certainly unfriendly. FAI is th... (read more)

[This comment is no longer endorsed by its author]

At some point, Eliezer mentioned that TDT cooperates in the Prisoner's Dilemma if and only if its opponent would one-box with the TDT agent as Omega in Newcomb's Problem. Does anyone know where to find this quote?

(Have been chasing down my last unread bits of Orwell.)

Just look at this sh*t! Who'd wish to live under boring wealthy peaceful Fascism when we could have such fun? (Not a trick question.)

I have read, here on lesswrong, that aging may basically stop somewhere around age 100: Your probability of death reaches 50% per year and doesn't go much higher; the reason people don't live much over 100 is just the improbability of the conjunction 50%*50%*50%, etc.

However, this table from the SSA seems to directly contradict that. Now I'm wondering what explains the seeming contradiction.
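For what it's worth, the plateau claim is easy to turn into numbers. A quick sketch (the flat 50%/year hazard is the figure quoted in the comment above, not something taken from the SSA table):

```python
# If the annual probability of death plateaus at 50% around age 100,
# then surviving n further years is just n coin flips in a row.
def p_survive(years, p_death=0.5):
    return (1 - p_death) ** years

for years in (5, 10, 15):
    print(f"P(surviving {years} years past 100) = {p_survive(years):.7f}")
# Surviving to 110 comes out to about 1 in 1024, so the near-absence of
# 110-year-olds is consistent with a plateau rather than evidence against it.
```

The empirical question is then whether the SSA table's post-100 death rates keep rising (Gompertz-style) or flatten out like this.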

FWIW, from a histogram I quickly made from the data in http://en.wikipedia.org/wiki/List_of_the_verified_oldest_people [http://en.wikipedia.org/wiki/List_of_the_verified_oldest_people] it doesn't look like the probability of surviving to age x falls any faster than exponentially.
It could be that the table is not empirical past 100.
Maybe. But if anybody had empirical data on old people, I would expect it to be the SSA.
I'd point out that one would expect the SSA tables to overstate the number of centenarians etc, for the simple reason that they are linked to financial payments/checks. Japan recently had some interesting reports that its centenarian numbers were overstated... because other people were collecting their pension checks. From the BBC [http://www.bbc.co.uk/news/world-asia-pacific-11258071]:

youtube video with dating advice

Why does useful and true procedural knowledge about socialization between men and women nearly always attract negative reactions, even when presented politely and decently?

The girl in the video is very sweet and nice (she is no Roissy) about giving some semi-useful dating and socialization advice; she doesn't even break any taboos or expose any pretty lies, yet this didn't really help the video be received well.

One might say, well this is just a very bad video, but that is kind of besides the poin... (read more)

[This comment is no longer endorsed by its author]

Rhetological fallacies, courtesy of Information is Beautiful

Doesn't seem to have been mentioned on LW yet, but definitely worth passing on.

I considered posting that myself, but I found that a lot of them are actually valid evidence, and it is mere rehashing of widely known (and easily available if unknown) material.

If I could copy you, atom for atom, then kill your old body (painlessly), and give your new body $20, would you take the offer? Be as rational as you wish, but start your reply with "yes" or "no". Imagine that a future superhuman AGI will read LW archives and honor your wish without further questions.

No. It might copy me atom for atom and then not actually connect the atoms together to form molecules on the copy. You also didn't mention I would be in a safe place at the time, which means the AI could do it while I was driving along in my car, with me confused why I was suddenly sitting in the passenger's seat (the new me is made first, I obviously can't be in the driver's seat) with a 20 dollar bill in my hand while my car veered into oncoming traffic and I die in a car crash. If an AI actually took the time to explain the specifics of the procedure, and had been shown to do it several times with other living beings, and I was doing it at an actual chosen time, and it had been established to have a 99.9999% safety record, then that's different. I would be far more likely to consider it. But the necessary safety measures aren't described to be there, and simply assuming "Safety measures will exist even though I haven't described them." is just not a good idea. Alternatively, you could offer more than just twenty, since given a sufficiently large amount of money and some heirs, I would be much more willing to take this bet even without guaranteed safety measures. Assuming I could at least be sure the money would be safe (although I doubt I could, since "Actually, your paper cash was right here, but it burned up from the fireball when we used an antimatter-matter reaction to power the process." is also a possible failure mode.) But "At some random point in the future, would you like someone very powerful who you don't trust to mess with constituent atoms in a way you don't fully understand and will not be fully described? It'll pay you twenty bucks." is not really a tempting offer when evaluating risks/rewards.
My willingness to take the offer is roughly speaking dependent on my confidence that you actually can do that, the energy costs involved, how much of a pain in my ass the process was, etc. but assuming threshold-clearing values for all that stuff, sure. Which really means "no" unless the future superhuman AGI is capable of determining what I ought to mean by "etc" and what values my threshold ought to be set at, I suppose. Anyway, you can keep the $20, I would do it just for the experience of it given those constraints.
And the caveat that memories/personality are in the atoms, not in more fundamental particles.
Yeah, definitely. I took "atom for atom" as a colloquial way of expressing "make a perfect copy". The "etc" here covers a multitude of sins.
Yes, it's a free $20. Why is this an interesting question?

Brevity is the soul of wit. Why is LW so obviously biased towards long-windedness?

Have you ever tried to read a math textbook that cherishes being short and concise? They're nigh unreadable unless you already know everything in them. When you're discussing simple concepts that people have an intuitive grasp of, then brevity is better. When there's an inferential distance involved, not so much.
Tried Mathematics 1001 [http://johncarlosbaez.wordpress.com/2011/12/06/maths-1001/]? Only $16.13 at Amazon [http://www.amazon.com/Mathematics-1001-Absolutely-Everything-Explanations/dp/1554077192].
I think that illustrates the point actually; the topics in that book either do not have much of an inferential distance or as the description you link to says "The more advanced topics are covered in a sketchy way". Serge Lang's Algebra on the other hand...
Funny, Serge Lang's Algebra was one of my mental examples. (Also see: anything written by Lars Hörmander.)
That's not entirely true -- Melrose's book on Geometric Scattering Theory, Serre's book on Lie Groups and Algebras, Spivak's book on Calculus on Manifolds, and so on. I think the phenomenon you're pointing to is closer to the observation that the traits that make one a good mathematician are mostly orthogonal to the traits that make one a good writer.
I don't know about others, but it helps me understand an idea when I read a lot of words about it. I think it causes my subconscious to say "this is an important idea!" better than reading a concise, densely-packed explanation of a thing, even if only once. This is a guess; I don't know the true cause of the effect, but I know the effect is there.
But an enemy of knowledge transfer.
wit != rationality. Also, I'm pretty sure the bias, if it exists, runs in the opposite direction. We even like calling our summaries "tl;dr"
I take issue with both of your claims! Sure, wit isn't rationality, but I suspect it can be quite the rationality enhancer. And I assign high probability to the existence of a "long post bias", though I'm not sure it's higher at LW relative to other places. It may not be a bias, though; Paul Graham, for example, says that long comments are generally better than short ones, and this seems to be obviously true in general. In terms of posts, I'm not so sure. I would have upvoted the grandparent comment of this if it weren't rude (how hypocritical of [http://lesswrong.com/lw/jx/we_change_our_minds_less_often_than_we_think/5n1f] me [http://lesswrong.com/lw/atm/cult_impressions_of_less_wrongsi/60ub]).
Keep your wits about you. In Shakespeare's times the word meant "intelligence". P.S. Someone explain the downmods to me. The parent either didn't know the saying was from Hamlet, or thought "wit" meant "humor" in this context.
Too many cooks spoil the broth, but many hands make light work. Can someone please explain to me why this broth, made by far too many cooks, was both labour-intensive and delicious? "Brevity is the soul of wit" is an idiom, not some sort of undisputed fact. Your question doesn't highlight an interesting contradiction; at best it will be interpreted as a weak play on words, and at worst it will be interpreted as trolling.

What are some unforgettable moments in the lives of Less Wrongers?

Anything will do, and I don't mind if you tell it in story-mode or in "here are the exact, objective events"-mode, but do try to pick one or the other rather than a hybrid.

Do you have a purpose in attempting this collection project?
Just curious.
I guess LWers lead very uninteresting/private lives.

Looks like Zach Wiener at SMBC might be reading up on FAI.

Or maybe on utility monsters [http://en.wikipedia.org/wiki/Utility_monster]. Also see Hacking the CEV for fun and profit [http://lesswrong.com/lw/2b7/hacking_the_cev_for_fun_and_profit/].
And/or Nozick's utility monster [http://en.wikipedia.org/wiki/Utility_monster].

Does anyone know if there is an "FAI" sequence? I can't seem to find a list of all the posts relevant to FAI or UFAI failures.

So many of the posts in the sequences are indirectly related to FAI that the most concise list of "FAI posts" is here [http://www.cs.auckland.ac.nz/~andwhay/postlist.html]. The wiki article on FAI [http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence] covers a lot of ground and has links to LW posts and other websites explaining it in a bit more detail.

(Edit: Thanks to Micaiah Chang for the links and suggestion.)
Does anyone here speak Japanese? If so, or even if not, I'd like to discuss the morals and themes of the story 「走れメロス」 (Hashire Merosu; "Run, Melos!"). If you have read it, but your memory is a bit fuzzy, here's a rough summary:


... (read more)
[This comment is no longer endorsed by its author]

I've been looking into American politics a little, and it sure is a hilarious business! Here's a short riddle for you. (Disclaimer: not intended to make any implications or mind-kill anyone; I'm not taking a dig at any opponents.)

"As __ , we believe America is a land of boundless opportunity, where people can better themselves, their children, their families, and their communities through education, hard work, and the freedom to climb the ladder of economic mobility." (Paragraph from a group's mission statement.)

Without googling, can you tell what the missing noun is?

It's an applause light. Could be anyone, although the phrasing of that particular applause light makes me suspect either a moderate conservative group or a liberal-leaning group that's trying to establish centrist bona fides. A whole lot of American political groups use economic mobility in their rhetoric (it's sort of a cultural talisman), so that by itself doesn't tell you very much; you need to dig a little deeper and find out how they're constructing economic mobility if you want to learn about their actual ideology. In particular, the American economic right tends to draw lines between ensuring equality of opportunity and equality of outcome, while the American economic left either deemphasizes that (if the group in question is more centrist) or actively asserts that the two are inseparable (if more leftist).
Spoiler: It's the Pragre sbe Nzrevpna Cebterff, n(a hancbybtrgvpnyyl yvoreny-cebterffvir) guvax gnax.
I'm wondering: looking from the outside at the politics of another country, is anything really different? Obviously the applause-light constellation is probably different, but generally a surprisingly large part of the political rhetoric in any country is shared by most of the sides vying for power.
