If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

105 comments

Clarity's bitching and moaning containment comment

No other top-level comment by Clarity will be made in this Open Thread, because a considerable portion of LW voters don't give a shit

1. Windows 7 phones are problematic

There is currently no way to remove the auto-fill terms you have accumulated over time, aside from hard resetting the phone.

-Windows 7 phone search history can't be deleted...

Seriously, Microsoft? People could find I eschew porn to instead listen to songs that remix in or mashup sounds from porn while going about my daily activities.

2. Can your genes exempt you from an altruistic imperative to donate your organs?

Do our SNPs indicate any exceptions to the duty to donate one's organs? rs17319721(A) and rs429... are associated with kidney-related issues. It doesn't appear any others are known to be. The implications are unclear.

3. El Chapo's getting desperate.

What would you do in his situation? How does he manage to get the cartel to do his bidding while he's on the run, when they could usurp him instead? I assume that other high-up Sinaloa cartel people just appreciate his intelligence in planning their operations. And, that incidental helpers opportunist... (read more)

  1. I wonder if something like "what Clarity writes" (quality-wise) varies monthly, maybe in line with any medications or other monthly things - for example, regular social events with LWers or similar groups who encourage you to hone your ideas, so there is less bad and more good in a given week.

No other top-level comment by Clarity will be made in this Open Thread, because a considerable portion of LW voters don't give a shit

I am much happier about this comment than seeing several on an OT. I wouldn't mind seeing 2 a week in this format if it helps you think. Worth adding that you have numbered, titled, bullet-pointed, linked and self-explained your process, which I expect adds to your karma score.

With 2, it's hard to respond - especially not knowing the area very well.

1, 3, 4, 5, 7 all seem like comments (or rhetorical, with 7), not questions or really asking for a response. If you do the same thing next week and get the same sort of response to some of your posts, you might be able to improve your content by filtering for ("things Clarity wants feedback on" | "things Clarity just wants to share") and dividing into two sections (in one comment is fine).

Upvoted for being one comment.

Sure, it's called The Fountainhead, and it's for you. (That is, your obligation to maintain contact with your family is entirely voluntary.) But more helpfully there are lots of therapy / self-help books that are full of useful skills, and you can sometimes improve a relationship by giving one to the other person. This is typically recommended against because a very important variable in the success of interventions is 'therapeutic alliance'--if someone isn't interested in a book, they won't read it, and if someone doesn't want to work with a therapist, they'll sabotage the experience. It is worth focusing on how you can adjust your behaviors around the difficult people in your life, and this is something that you might want to bring up with a therapist / get books to look into yourself. (When I was dating someone who was anxious, I read self-help books on anxiety, for example.)

Pretty sure it's the latter, but in a moving average sort of way--a bad comment not only is likely to be downvoted itself, but also makes it more likely that other comments posted at around the same time are viewed as bad.
"virtuous pedophiles" who do not act on their desires are to be praised, but should not be lumped in with LBGT, because the LBGT movement advocates that people should act on their desires rather than trying to be straight, which is the exact opposite of "virtuous pedophiles". The worry is that support for virtuous pedophiles is a slippery slope towards support for practicing pedophiles.
Re: number 6. My impression is that the "karma attractiveness" of your posts is pretty variable. I'll give it some more thought and see if I can nail down any specific reasons for that impression. Re: number 5. If you'd like to commiserate about unpleasant family members, let me know. :P I have them in spades.
Based on the relative karmic outpouring, it seems people like this format more than single comments. Why?

Because he asked what he could do to improve the reception of his comments, and was told that a major problem was that he spread random ideas across dozens of comments, cluttering threads up, and then proceeded to follow that advice. (Or, in more Less-Wrongian terms, he updated his beliefs, or something.)

People are showing their appreciation for the fact that he listened. Doing so is a great positive reinforcement of community norms, and makes users happy to follow the norms as they get rewarded for doing so, as opposed to the opposing strategy of downvoting deviations from the norm, which might result in resentment, as they would experience only punishment.

Sinaloa creates trust through family ties.

I've created a cost-benefit analysis of embryo selection for intelligence: http://www.gwern.net/Embryo%20selection

Turns out to be fairly challenging but ultimately delivers sensible results: modestly profitable but nothing special at current prices/polygenic-scores. But things get more interesting once we get scores from n>360k studies, and the multi-generational consequences are very interesting if we can get boosts like +9 points. Of course, it's mostly all a moot point and academic because of...

CRISPR. I've had a hard time getting prices because they all sound too good to be true.

To update: I've added estimates of how elites like Harvard undergrads would change in composition under various scenarios; value of information estimates of additional SNP datapoints; a large bibliography of papers on IQ/income/wealth; background on challenges to estimating CRISPR value; implications of being able to create unlimited eggs from stem cells; generalization of embryo selection code; and demonstration of the large gains (>2.8x) from selecting on multiple complex traits and not just IQ.

Rationality for managers, part 374:

If your plans are greater than your capacity to do stuff, you need to set priorities. Prioritizing is a way to achieve some of your goals sooner, at the cost of achieving some other goals later (or not at all). Prioritizing is not a magical tool to achieve all of your goals sooner. If you cannot make an explicit decision to postpone some goals, by definition you are unable to set priorities.

Adding the words "priority: urgent" to a task achieves the desired effect only if there are other tasks without the label "priority: urgent". Giving the label to all your tasks is equivalent to giving it to none of them. For example, if you create a planning spreadsheet or configure planning software with priorities from 1 (highest) to 5 (lowest), only to find later that every task is assigned priority 1, then the truth is that your company or division does not have priorities.

Even if you use different priorities, but the tasks labeled "priority: urgent" exceed your capacity to do stuff, then effectively every other label becomes synonymous with "this will never be done", and the label "priority: urgent" itself ... (read more)
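The "every task is priority 1" failure mode can be expressed as a trivial sanity check. A sketch with made-up task data; `priorities_are_meaningful` is a hypothetical helper, not any real planning tool's API:

```python
from collections import Counter

def priorities_are_meaningful(tasks):
    """A priority label only helps if it splits tasks into more than one group."""
    counts = Counter(task["priority"] for task in tasks)
    return len(counts) > 1

# Made-up plan where every task got the top label:
plan = [
    {"name": "ship release",   "priority": 1},
    {"name": "fix login bug",  "priority": 1},
    {"name": "refactor tests", "priority": 1},
]
print(priorities_are_meaningful(plan))  # → False: the labels convey nothing
```

A label that every item shares carries zero information, which is exactly the spreadsheet pathology described above.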

I have 90% credence that Old_Gold is the new account of Eugine.

And I have maybe 85% credence that someone is downvoting old comments of mine (in bulk but fairly slowly). It's not hard to guess who, if so. It seems to me that LW moderators should give serious consideration to undoing all the past votes of Eugine's known accounts, if only to reduce the motivation to do what he does. (And implement a not-too-low karma threshold for downvoting, but that involves actually changing the code rather than "just" tweaking things in the database[1].) [1] Anyone who has looked at how the DB is organized will understand why I put "just" in quotation marks there.
Yes, he doesn't even try to disguise the fact by picking a more neutral username.
This may be a time to exercise the virtue of silence: if he's trying to hide, perhaps we shouldn't talk publicly about the ways we identify him. (I'm not sure he is trying to hide.)
He most definitely isn't. Otherwise, he would [redacted as per WP:BEANS].
I don't think it's quite WP:BEANS. The things Eugine might do to try to hide aren't stupid in the way shoving beans up your nose is. It's possible he hasn't thought of doing them, but in that case the error is more like "gosh, I hope he doesn't realise he could just hit me over the head and steal all the money in my wallet" than like "and don't stick beans up your nose".
I am confused why you think that is not an example of WP:BEANS.
WP:BEANS means not telling X "don't do Y" where Y is something that X would be unwise to do, but might do just to cause trouble, or for fun and with indifference to the consequences. Telling Eugine "don't sacrifice a chicken to the Privacy Gods" (pretending for the sake of argument that that would enable him to avoid being spotted by LW moderators and the like) doesn't fit that template because (1) avoiding being spotted wouldn't be unwise for him and (2) his interest in not being spotted isn't because he wants to cause trouble but because he has particular goals that he can't achieve as effectively if he's spotted. It's not far off WP:BEANS, and the difference isn't particularly important, but it doesn't seem to me like it quite fits. (Note that the example in WP:BEANS is of something that would "crash Wikipedia", not e.g. something that would "allow you to modify any page to say anything you like without risk of having it reverted". It's aiming at carelessness and trollery rather than at purposeful abuse of the system.)
Online disinhibition effect. One of the factors is 'you don't know me'. I for one value the right to privacy for all LessWrongers, Eugine included. There are 5 other disinhibiting factors listed on the Wikipedia page. Could we explore those instead?
Hey, it IS just a game. I mean, there are points and scoring and everything.

If we applied Elon Musk-style first-principles thinking to the problem of building the homes we live in, how would we build homes? Have any big companies taken up that challenge?

Michael Vassar makes some observations about this in this chat from about 37:50-40:30. He begins describing something called a "hexayurt tridome", some kind of portable desert structure, and finishes saying "for the cost of engineering the 2016 Toyota Corolla and with the level of engineering skill required to engineer the 2016 Toyota Corolla it would probably be possible to engineer a house that would cost less than a Toyota Corolla and that could be deployed more easily and be adequate for any climate pretty much anywhere in the world where there's a reasonable amount of free space".

I think most places where people want to live don't fulfill the criterion of there being "a reasonable amount of free space".

And in a lot of places where that space exists it is illegal to live there. You could park a serviceable RV in a lot more places than you are allowed to live, for one example.
Where people want to live depends on where other people live. It's possible to move away from bad Nash equilibria by cooperation.
To the extent that people want to live where other people live, it's useful to have high density. Low-rise buildings aren't optimal for cities even when they are cheap to build.
I wasn't only referring to wanting to live where there are a lot of people. I was also referring to wanting to live near to very similar/nice people and far from very dissimilar/annoying people. I think the latter, together with the expected ability to scale things down, would make people want to live in smaller, more selected, communities. Even if they were in the middle of nowhere.
People basically want to live where they can find a well-paying job. During the Great Leap Forward, Mao thought that factories being in cities was simply a coordination problem. He then ordered them moved out of the cities where they had grown organically. It was a disaster. A big company like Google could theoretically move its headquarters to the middle of nowhere. On the other hand, that would likely be a very bad business decision. Its employees simply wouldn't want to move to the middle of nowhere.
There are also good reasons why "where they can find a well-paying job" usually coincides with "where there are a lot of people." Generally speaking, a person's salary corresponds to a pretty reasonable estimate of how much good they are doing for society. It's easier to do more good for more people when more people are around, e.g. a restaurant does more benefit to more people by being close to a lot of people, instead of being in the middle of nowhere. So generally people will get paid more if they have jobs closer to a lot of people.
Consider the IKEA refugee shelter.
If anyone is curious about genomes: it's unlikely. Veritas only just started offering a $1k genome; no one is announcing something that'll be ready by June, which is in 5 months; and if you want to extrapolate from the historical data: while we've clearly jumped to a new regime starting in 2015, extrapolating from the 2015-2016 data, genomes still shouldn't be <$500 for another 10 months or so:

    R> genome <- c(9408739,9047003,8927342,7147571,3063820,1352982,752080,342502,232735,154714,108065,70333,46774,
                   31512,31125,29092,20963,16712,10497,7743,7666,5901,5985,6618,5671,5826,5550,5096,4008,4920,4905,
                   5731,3970,4211,1363,1245,1000)
    R> l <- lm(log(I(tail(genome, 3))) ~ I(1:3)); l
    # ...Coefficients:
    # (Intercept)      I(1:3)
    #   7.3937180  -0.1548441
    R> exp(sapply(1:10, function(t) { 7.3937180 + -0.1548441*t } ))
    #  [1] 1392.5249652 1192.7654529 1021.6617017  875.1030055  749.5683444  642.0417932  549.9400652
    #  [8]  471.0504496  403.4776517  345.5982592

(10 rather than 8 because the first 3 time-units, 1-3, have already passed, and the remaining 5 time-units until $471 are in time-units of 2 months, so 2*5=10. I couldn't be bothered to convert the data to more sensible days/months for a quick extrapolation.)

So I would expect any $500 genome to be in 2017 or 2018. Some dark horse could overnight announce a $500 whole-genome at any time... but I wouldn't bet on it, in part because that NIH data may not reflect any such service's existence.
There is a small issue that NIH is quoting a wholesale price, while Veritas is quoting a retail price (including customer acquisition, risk of not using all capacity, and profit), so you should expect the NIH price to be lower.
Yes. The metric that's used is NIH data. They bought sequencing machines over the last few years, and the cost of a sequencing machine isn't fully accounted for in the year it's bought, so the 2016 sequencing budget likely pays for machines bought in 2014 and 2015, even if a great new machine gets introduced in 2016.
Right. Great internal validity since you can count on them to not be playing any games with costs like, say, commercial companies such as Illumina; but imperfect external validity which renders down-to-month precision questionable.

Sorry if this is a stupid question*, but is there any not-too-complicated literature about how mass point geometry relates to probability theory, or is used to teach it more efficiently, or something else? I used to fantasize about going through a middle/high-school-level book on MPG with my middle/high-school pupils (with the added benefit of reinforcing Newtonian mechanics), and getting them into Bayes' law (probability masses as masses and odds as lengths:), but... it was hard to imagine, say, triangles formed by probabilities.

I Googled it, didn't find "exactly the thing", and moved on.

  * and on a similar note, do we still have them?
Huh, MPG seems like an interesting trick. The obvious way to pursue an analogy is that masses are probabilities, but that seems to not work - the distances have no meaning, and there's no advantage over adding and subtracting probabilities of disjoint events. So what if we make the distances probabilities. Now we can have P(A) be some distance AA', and P(A|B) be AB... No, no good... or if we make the probability the position of a point on the middle, is there anything interesting we can do with the weights of P(A) and P(B) to make P(AB)... So if you fix one endpoint to have mass 1, and the position to be P(A), the value of the other point will be the odds ratio P(A)/P(not-A). But there's no convenient way to multiply values of points... Not sure there's any way to use all the parts of this buffalo, sorry.
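For what it's worth, one fragment of the analogy does hold exactly (an observation, not something from an MPG textbook): the law of total probability is a center-of-mass computation. Put masses P(B) and P(¬B) at the positions P(A|B) and P(A|¬B) on a line; the barycenter lands at P(A):

```latex
P(A) \;=\; P(B)\,P(A \mid B) \;+\; P(\lnot B)\,P(A \mid \lnot B),
\qquad P(B) + P(\lnot B) = 1.
```

So conditioning at least has a lever-rule picture: sliding mass from B to ¬B slides P(A) between the two conditional probabilities. The part that resists the analogy, as above, is multiplying probabilities.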
Well that is rather what I think, too. It's just that... I mean, I know the next bit is the opposite of rigorous and all that, but since my job is basically to teach kids how to play with shiny toys, I should at least see if they are child-friendly, right? :) So I am asking you as someone who probably has played more.

Suppose we have several - say, 9 - 'tests' we've run daily for two months, covering the turn of spring into summer, and the outcomes of all of them should reasonably change as the seasons progress, but not quite as straightforwardly as the development of foliage, fruit, etc.* Like, test A is 'how much acetone will evaporate from a 40 ml bottle in an open shady space between 12.00 and 12.50?', test B - 'how long will it take for a soaking wet handkerchief to dry to constant weight if hung at 12.15?', ..., test I - 'what percentage of shepherd's purse plants in this patch is blooming?'

All the tests are but weakly connected to each other, but it is possible, generally speaking, to assign some measure of 'distance' - for example, A and B are clearly closer to each other than each of them is to I (although it seems that 5 or 6 tests are more realistic to juggle). Then, we draw a nonagon (or 5-, or 6-...) such that the distances between the vertices equal the 'distances between tests'. The masses of the vertices are the probabilities that the given outcomes support the hypothesis that the day is Day X. (The question is, of course, 'what day is it?') We are told that X is either 9, or 24, or 35.

Now we can find the respective centers of mass for each combination of probabilities for all three hypotheses. Then we obtain a 'cloud' of centers of mass, each of which is 'more or less in favor' of one hypothesis (except, perhaps, one where the probabilities are equal). If we divide the cloud into three layers (one for each H), and have the masses of the points be the p(H), we can find the center of masses for each day. When we plot the 'final day points', we can

What are the best public places to discuss existential risks?

My list in order of quality:




Semi useful:




Almost dead now:

Lifeboat foundation mailing list - almost dead now, but had good discussions before




http://forum.arctic-sea-ice.net/index.php/boar... (read more)

Is there a term for a generalization of existential risk that includes the extinction of alien intelligences or the drastic decrease of their potential? Existential risk, that is, the extinction of Earth-originating intelligent life or the drastic decrease of its potential, does not sound nearly as harmful if there are alien civilizations that become sufficiently advanced in place of Earth-originating life. However, an existential risk sounds far more harmful if it compromises all intelligent life in the universe, or if there is no other intelligent life i... (read more)

I really like this distinction. The closest I've seen is discussion of existential risk from a non-anthropocentric perspective. I suppose the neologism would be panexistential risk.
Panexistential risk is a good, intuitive, name.
I think the term is Great Filter.
G0W51 is talking about universal x-risk versus local x-risk. Global thermonuclear war would be relevant for the great filter, but doesn't endanger anyone else in the universe. Whereas if Earth creates UFAI, that's bad for everyone in our light cone.
True. Also, the Great Filter is more akin to an existential catastrophe than to existential risk, that is, the risk of an existential catastrophe.

Events of recent days have made me suspect that I feel less depressed on days when I have little or no beard. But now that I know this hypothesis, I don't know how to test it without priming myself every time I shave. Suggestions?

It could also be that you feel less like shaving when you feel more depressed.
Reformulate your hypothesis as "the act of shaving makes me less depressed for one or two days" and test that. Since the outcome is the state of your own mind, priming doesn't matter.
Consider the idea that "rituals of personal care" are effective at convincing yourself that you value yourself more (rather than it being specific to shaving). Try also:

* painting your nails
* getting a fancy haircut
* cooking or eating a fancy meal
* going to a performance of some kind of culture
* setting aside time for other rituals
If shaving works, doing other rituals to clean both yourself and your flat might also have a similar effect.

[Comment removed because it drew attention to links between online identities of someone who turns out not to want attention drawn to them.]

FWIW, GateOfHeavens started a couple years before aristosophy ended.
There seems to be a Facebook account for him, who's friends with a bunch of LW people (from my end there are 13 shared friends). How about simply sending them a message over Facebook and a friend request? As a general rule, if you think that the person wants to stay anonymous by using a nickname, why publicly write a post about the link between different accounts?
[Comment removed because like its grandparent it leaked information about a third party's identities.]
His facebook account itself might be the place where he posts.

Prediction Markets Going Wrong?

How many ways are there for a prediction market to go wrong?

In my story's current draft, once my protagonist upload has made a few copies of himself, I have him start up a prediction market to try to improve his decision-making, such as the likelihood any given plan will reach a useful goal. (Using currency created ex nihilo via a Bitcoin-like blockchain.) I have his similar copies end up coming to an overconfident consensus, leading to an explosive disaster, leading to attempts to deliberately diversify his copies' mindstate... (read more)

It might make sense to read about Augur in detail and the justification for their design choices. It's worth understanding how the scoring of predictions works. Do you have a central authority? Do you have something like Augur's reputation?

For abstract goals like "reach a useful goal" it's quite important who actually puts forward the wording of a specific question. How can predictions be judged? Augur has Right/Wrong/Unclear or Immoral. Immoral is particularly interesting: did a certain person die to fix the outcome, making the prediction market an assassination market that should be judged as immoral, or not?

What are the mechanics of the crypto-currency? If you take Augur, some bets might be made in complicated currencies like Dai. That currency might crash because there isn't enough insurance to cover the changing price. Prediction markets go wrong when there is high monetary inflation, because the prediction market requires locking up money for a given time. If the goal is somewhere in the future, there can be a requirement to pay subsidies to counteract the interest someone would otherwise earn on the money.

How liquid is the prediction market, and how much money can be gained by affecting the result by buying shares to manipulate the price?
Prediction markets are driven by money. If someone has more money than the rest of the prediction market combined, they can burn the money to support a wrong prediction. Why would they do that? Presumably doing this brings them more benefit than the money spent. I can imagine two situations:

a) The prediction markets are new, only a few people bet there, so there is not much money there. You may have a business that would be endangered by prediction markets becoming popular (e.g. people pay you for providing expert opinion). Thus you spend money hoping to discredit the very concept of the prediction market.

b) People start trusting prediction markets blindly. For example, if the market predicts that X will become president, people will not even bother to vote for the competitors, because "what's the point? they're going to lose anyway". In such a case, X may be willing to burn a lot of money in order to convince people that he is going to become the president, because that will become a self-fulfilling prophecy, and his sponsors are willing to invest that money. For the sake of story, let's make it dramatic: the person intends to become a dictator, nationalize a lot of stuff, and give it to his sponsors; this is why the sponsors are willing to invest insane amounts of money, because they bet on getting much more in return.

By the way, a prediction market being right in general doesn't mean that every answer will be correct. There may be questions where most people don't have a clue, and those will attract far fewer bets (but still some, because people are irrational).
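The cost of "burning money to support a wrong prediction" can be made concrete with Hanson's logarithmic market scoring rule (LMSR), a standard automated-market-maker design. A sketch with illustrative numbers, not from any real market:

```python
import math

def cost(q_yes, q_no, b=100.0):
    """LMSR market maker's cost function for outstanding share counts."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b=100.0):
    """Instantaneous price of a YES share (= implied probability)."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

# What a manipulator pays to push the YES price up from 50%
# by buying 200 shares:
print(round(price_yes(0, 0), 2))                # 0.5
print(round(price_yes(200, 0), 2))              # 0.88
print(round(cost(200, 0) - cost(0, 0), 2))      # 143.38 burned to move the price
```

The liquidity parameter b sets how expensive manipulation is: larger b means more money is needed to move the price the same distance, which is one knob a market designer (or story protagonist) has.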
Two things come to mind, but they are possibly only tangentially related. First: the second half of McAfee's economic analysis (available online) is devoted to market inefficiencies. Second: there's a chapter in Jaynes's book about group invariance, that is, how a piece of information can leave the predictions of a set of agents unchanged. Might be relevant.

I've been reading HPMOR for the past week (currently at TSPE aftermath) and I'd like to recommend it to anyone who hasn't read it yet.

How good is the case for supplementing Vitamin K2 along with Vitamin D3?

I just happen to have finished a work assignment on that subject. The case is pretty good, it seems.

What might be the cause of the perceived difference between the atheists/nontheists in Europe and in the USA?

I have the general feeling that the average atheist in the USA, when asked about religion, will be very open about believing religion to be either evil or ridiculously stupid, and will make at least a few remarks about how idiotic those lunatics must be who believe that there are invisible people living on top of the clouds. On the other hand, in Europe you are more likely to hear that "well, I'm not very religious", but many would cultu... (read more)

I guess that it is religious freedom and religious diversity that keep religions in the USA more "alive". Religions compete with each other, try to convert each other's followers, and keep the religious memes virulent. People work harder to signal belonging to their religion. And atheists living in the same culture work harder to signal their atheism.

In Europe you historically often had one mandatory religion per country. Without competition, priests got lazy and religion got boring. Some people lost their faith in religion; some people still believed in the religion but also saw the laziness and corruption of the church, so you got many people who publicly identified as religious but tried to do as little as possible about it. Those who identify as atheists are quite similar in behavior to the laziest of those who identify as religious.

Metaphorically, the American religious landscape is a few separated shining colorful dots, atheism being one of them; the traditional European religious landscape is a gray bell curve, atheism being one of the ends. (The people you describe as "culturally still identifying as a Christian" are somewhere at 90% of the curve. There are also real atheists at the end, but they are fewer.)

I imagine that the growth of Islam might change this picture in the future: the group of "lukewarm Christians" may shrink because (1) there will be attempts to convert them to Islam, and in response to that (2) the Christians may also wake up and start radicalizing their lazier members, and (3) some people will start identifying as atheists because they will dislike both of these options. Then, instead of the bell curve, we will have three separate groups.
I would certainly be interested in seeing data on the phenomenon as well. From what I've personally seen, I would think there's a large distinction between a person who was raised atheist/nontheist and a person who had to intentionally and actively extricate themselves from a religious organization (and more importantly all the religious thought that comes with it).

Removing yourself from a social group is costly to achieve and has lots of social detriments. Removing yourself from a massive set of ways of thinking (all tied together in a convoluted memeplex) that have been embedded into you by intelligent adults since you were a small child has a high cost to pull off, is easily emotionally troublesome, and is liable to distance you quite a bit from the people, ideas, social groups, accepted customs, and general life support you grew up with. (If you don't see an easy way to imagine this, try thinking of how difficult it might be to remove all of Rationality from your own brain. How would you even begin? What kinds of costs would that have on your thoughts, emotions, relationships, social groups, etc.?)

By the time you're done with all of this, you're going to be less likely to widely respect the organization you removed yourself from and any organizations that are similar. You're likely going to be at least a little bit annoyed at having ever been in a less preferable situation beforehand (do you relish having been less rational in the past?) and you're likely to have lost respect for people who are in the previous state that you used to exist in (those poor irrational people out there!).

All of this will, of course, vary widely from person to person, and some will be able to give up an organization with a weak hold on their thoughts and social contacts (holiday-only Catholic churchgoer?) more easily than others will (three-times-a-week charismatic churchgoer?). Overall I see Europeans existing in the first group (2nd+ generation atheists) while many Americans live in the second.
One straightforward theory is that a person who identifies as Christian isn't an atheist, so you're comparing "apples and oranges".
True, but there are plenty of bona-fide atheists in Eastern Europe (e.g. Czech Republic) and they still don't seem to be very loud or make a big deal of their atheism. So Val's point is still true when we compare these cases.
In the US, religion, in particular Christianity, is seen by many atheists as distinctly lower-class, and is associated in particular with the working classes. This makes atheists work extra hard at differentiating themselves from Christianity, to the extent of attempting to deny or minimize any cultural influence from religion.
And even if you're just a later-generation atheist trying to fit in with other atheists, you'll end up attempting to conform to that similar level of original distancing.
Yes, it seems to me that outspoken atheism is often a striver trait. Strivers typically feel the need to draw a sharp distinction between themselves and the lower class they come from.
I can't speak to Europe. But I have a friend/fellow grad student who moved here from the Caucasus who calls himself a wholehearted Orthodox Christian despite being decidedly along the above-described 'don't-care' to 'nope' spectrum about actual theological claims. He once called it 'an interface for dealing with stuff that every human has to deal with', loosely quoting.
I notice in my own brain that I have trouble defining or accepting association with religion as something that isn't a 0 or a 1. I think that's obviously an irrational thought, but I'm not sure how to get around it either. This may also occur with other Americans.
Where in Europe? Richard Dawkins is from England and organized things like the infamous atheist bus campaign. Also numerous European countries used to have atheist militants, of the priest-killing or at least send-priests-to-labor-camps variety.

containment thread 2

1. Manipulating financial markets for fun and profit

While emerging market regulators have not identified specific skills sets required of surveillance staff, survey results highlight that the common skills that regulators and exchanges seek in surveillance analysts include data mining and analytical skills and the ability to understand the mechanics of the electronic trading environment, i.e. trade flow and processes and to analyse and evaluate surveillance technology programs and procedures. In addition, maintaining a good contact of

... (read more)
Also worth pointing out is that the feds investigated everyone who shorted airline stocks before 9/11. (See here; they all turned out to be innocuous.)

Okay, stupid and off-topic question:

I want to get a tablet for a Small Child so she doesn't keep bugging me for mine. The one I got her for her birthday broke in an unexpected manner: The glass is not damaged but the screen is displaying black lines and similar garbage. (It was also working at the beginning of a car ride and failed at the end of it - it couldn't have fallen onto a hard surface.) So I'm looking for a tablet that 1) has access to the apps on the Google Play store I've already bought for her; 2) can survive a tumble down a flight of stairs or... (read more)

There are "rugged" tablets for sale but they don't come cheap. OTOH, the failure mode of your tablet suggests that it could be easy to fix by just opening it up and making sure that the internal screen connector is plugged in correctly. Can you find a teardown guide for it on iFixit?
It doesn't seem to be booting up properly either - I don't see the logo when I turn it on. :(
How about the Kindle Fire kids edition? You can install Google Play on it. It has a "Kid-Proof Case" and a two year guarantee that if it breaks for any reason they replace it.
I've heard that it's hard to install the Google store on Amazon Kindle tablets, that you have to mess with system files or something... Good suggestion, though.
I just installed Google Play on a Kindle Fire tablet using this.

Marriage: Civilization's Biggest Mistake

Something that bothers me about this is the all-too-common idea that kids are unruly and will cause endless destruction. I remember my parents being anxious about leaving me alone at home, and me thinking "Umm what? What could I do?" and being proud the house didn't look like whatever's left after a direct hit from a nuke.

Why's that?

Seems to me that the problem isn't marriage per se, but atomic families. I assume that many people would prefer "spend 50% of time taking care of 4 children, then have 50% free time" to "spend 100% of time taking care of 2 children, no free time". If your sister lives next door, it's easy to arrange. If you have a good relationship with neighbors who have children of a similar age, still possible.

The article is in my opinion rather stupid. It essentially suggests putting kids into institutional care. The author probably never heard anything about what typically happens to children in such institutions. (That's the charitable assumption; the uncharitable one would be that the author just doesn't care.) My Facebook friend list happens to include a person who frequently interacts with such institutions, and after reading all the horror stories, I think almost anything is better than the institutional care; except when the child is abused at home (and I don't mean "microaggressions" or anything like that).

When a kid is in school, one teacher controls 20-30 kids. That is an efficient system, and the teacher probably doesn’t mind the work.

As a former... (read more)

Anyone have recommendations of fiction along the lines of Worm and HPMOR that are also very long (>400k words)?

Harry Potter and the Natural 20 has a lot of characters optimizing in interesting ways in response to the constraints of their universe, but their universe's laws are a lot less realistic than Worm or HPMOR. The Ethshar books by Lawrence Watt-Evans are not one coherent story but there are a lot of books set in the same world. Miles Vorkosigan has the whole "cunning main character thinks outside the box to beat impossible odds" dynamic and there are a bunch of books that follow him and his life, although the fact that he ages throughout them gives them a pretty different feel than the chronologically much shorter timespan of HPMOR.
You probably already know this, but the author of Worm has written another long (though not so long) serial called Pact and is in the middle of another called Twig. If you enjoyed Worm and haven't read those yet, you might give them a try.
There's a subreddit for rational/rationalist fiction that may interest you or have a longer list of suggestions than here. I think the subreddit was www.reddit.com/r/rational

Did someone change the threshold for hiding comments? Didn't it use to be that -4 comments were hidden and now -3 are?

Yes, that was recently changed. You can adjust your personal thresholds in the user preferences setting. [Edit: only works for posts.]
As your link explains, the setting is a lie.
Hmm, you're right. I only tested the posts setting (which does appear to work), not the comments one.

Miniature brains seem like they could become very important.

Can they be kept alive for long? Can they be enlarged? Can they be trained? Is the distinction between human neurons and other mammalian neurons significant?

To prompt some discussion, say someone tried to build "self-driving" cars in the following way: put a big brain full of rat neurons in a vat, hook it up to a car, and then train it to navigate.

I don't think that would solve any of the problems of driverless cars. The problems of driverless cars are about handling a lot of edge cases that don't come up often, which a human can solve by having a decent mental model of the situation he's facing. Why should we expect those rat brains to have good mental models? According to Musk, self-driving cars will need a 10x reduction in traffic accidents to be viable. That won't work if you basically train rat brains to do the same thing that humans do.
Mice with human glial cells are smarter. New Scientist, Cell Stem Cell (pretty sure that's the paper, but it might be an earlier one)

How would a Donald Trump presidency affect the probability we achieve friendly AI before Clippy arrives?

Also, it appears OP used Comic Sans. Hm.

The Comic Sans thing might be my fault...I may have alerted Elo to widespread distaste for the font.
I knew before; I don't really notice the font face much any more because it's so ubiquitous...
Shh don't take this away from me.
Increase, as Hillary would store the top-secret AGI code on her personal server.
Sorry sorry sorry. Fixed. When I copied the header I forgot to remove the font face that gets pushed to my browser via comic-sans browsing...

Why I don’t want to explore storm water drains

*Or, terrifying excerpts from a how-to guide on exploring drains

If there is a protruding wall and you can't get up a shaft in time, get in close to the downstream side of that wall. This is not very safe but it is better than standing in the path of the oncoming maelstrom. Hanging from a grille is not so good either, you will be dumped on (and may lose your grip) but that might be better than being flushed a few km at high speed. Staying out of the flow is mega-priority... nothing can ruin your day like a de

... (read more)