If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Some users might find this interesting: I've finished up 3 years of scraping/downloading all the Tor-Bitcoin darknet markets and have released it all as a 50GB compressed archive (~1.5TB uncompressed). See http://www.gwern.net/Black-market%20archives

Thank you.

One-Minute Time Machine -- a short romantic movie that LW readers might like.

Excellent! I don't share the guy's qualms, though. The girl I can empathize with. Oh, and hopefully Eitan_Zohar doesn't come across it.
I feel sorry for the girls and boys who suddenly have a corpse on their hands.

I found this paper: Adults Can Be Trained to Acquire Synesthetic Experiences.

The goal of the study was to see if they could induce synesthesia artificially by forcing people to associate letters with colors. But the interesting part is that after 9 weeks of training, the participants gained 12 IQ points. I have read that increasing IQ is really difficult, and effect sizes this large are unheard of. So I found this really surprising, especially since it doesn't seem to have gotten a lot of attention.

EDIT: This is a Cattell Culture Fair IQ score, which uses a standard deviation of 24 points instead of 15. So it's more like 7.5 IQ points on the familiar SD-15 scale.
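The conversion is just a linear rescaling between the two standard-deviation conventions; a minimal sketch:

```python
# Convert an IQ gain reported on the Cattell Culture Fair scale (SD = 24)
# to the more familiar Wechsler-style scale (SD = 15).
def cattell_to_sd15(gain_cattell):
    return gain_cattell * 15 / 24

print(cattell_to_sd15(12))  # 7.5
```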

They made each participant do 30 minutes of training every day for 9 weeks, which involved a few different tasks to try to form associations between colors and letters. They also assigned colored reading material to read at home.

The participants took IQ tests before and after, and gained an average of 12 IQ points over the training. A control group also took the tests before and after but did not receive training, and did not improve. The sample sizes are small, but the effect size might be large enough to compensate. They give a p value of 0.008.

In the paper there are some quotes fr...

It would not surprise me if synesthesia is learnable. Isn't written language basically learned synesthesia?
That's the theory of the paper:
My earlier comment on that study: https://www.reddit.com/r/psychology/comments/2mryte/surprising_iq_boost_12_in_average_by_a_training/cm760v8 I don't believe it either.
Their sample size is 14 people for the intervention group and 9 people for the control group. The effect size has to be gigantic and I don't believe it. Their p value stands for a pile of manure. Lessee... Oh, dear. Take a look at plot 2 in figure s2 in the supplementary information [http://www.nature.com/srep/2014/141118/srep07089/extref/srep07089-s1.pdf]. They are saying that at the start their intervention group was 15 IQ points below the control group! And post-training the intervention group mostly closed the gap with the control group (but still did not quite get there). Yeah, I'll stick with my "pile of manure" interpretation.
I don't see what's wrong with a low sample size. That seems pretty standard and it's enough to rule out noise in this case. Almost all of the participants improved and by a statistically significant amount. They actually selected the test group for having the lowest score on the synesthesia test. So this fits with my theory of synesthesia being correlated with IQ, but it's also interesting that synesthesia training improves IQ.
The usual things -- the results are at best brittle and at worst just a figment of someone's imagination. Yeah, well, that's a problem :-/ I eyeballed the IQ improvement graph for the intervention group and converted it into numbers. By the way, there are only 13 lines there, so either someone's results exactly matched some other person's on both tests or they just forgot one. The starting values are (91 96 99 102 105 109 109 113 122 133 139 139 145) and the ending values are (122 113 109 118 133 99 118 123 151 133 145 151 151). The deltas (change in IQ) are (31 17 10 16 28 -10 9 10 29 0 6 12 6). So what do we see? One person got dumber by 10 points, one stayed exactly the same, and 11 got their scores up. Notably, three people increased their scores by more than one standard deviation -- by 28, 29, and 31 points. Y'know, I am not going to believe that a bit of association training between letters and colors will produce a greater-than-1-sd increase in IQ for about a quarter (23%) of people.
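As a sanity check, the arithmetic in the comment above can be reproduced directly (the scores are eyeballed from the figure, so treat them as approximate):

```python
# Start/end IQ scores eyeballed from figure S2 (13 intervention participants)
start = [91, 96, 99, 102, 105, 109, 109, 113, 122, 133, 139, 139, 145]
end = [122, 113, 109, 118, 133, 99, 118, 123, 151, 133, 145, 151, 151]

deltas = [e - s for s, e in zip(start, end)]
print(deltas)  # [31, 17, 10, 16, 28, -10, 9, 10, 29, 0, 6, 12, 6]

# On the Cattell scale one SD is 24 points, so a gain above 24 points
# is a greater-than-one-SD improvement.
big = sum(1 for d in deltas if d > 24)
print(big, f"{big / len(deltas):.0%}")  # 3 23%
```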
The replication project in psychology just found that only a third of the findings they investigated replicated. In general studies with low sample size often don't replicate.
The second sentence surprises me a little--there should be training effects increasing the tested IQ of the control group if only 9 weeks passed. That's some evidence for this being luck--if your control group gets unlucky and your experimental group gets lucky, then you see a huge effect. There are 26 letters, but... lots of words.

I was lucky enough to stumble upon LW a few months ago, right after deconverting from Christianity. I had a lot of questions, and people here have been incredibly, incredibly helpful. I've been directed to many great old posts, clicked on hyperlinks to hundreds more, and finished reading Rationality: AI to Zombies last month. But a very short time ago, I was one of those rare, overly trusting fundamentalist Christians who truly believed the entire Bible was God's Word... anyway, I made a comment or two sharing my old perspective, and people here seemed to find it interesting, so I thought I might as well share the few blog posts I've written, even though my Christian friends/family were my target audience.

Things I Miss About Christianity -- If I'm totally honest, there's actually a lot.

Atheists and Christians: Thinking More Similarly Than You Think -- Just some thought patterns I've observed. Doesn't apply too much to LWers.

Is Christianity Wildly Improbable? -- Talks about my apologetics class in college, motivated cognition, and some evidence against Christianity which Christians have a harder time responding to by simply repeating how God is above human reason.

The Joy of Atheism Part 1 -...

Some of those things could be re-created without the supernatural context. Instead of "praying" they could simply be "wishing". Like: I am expressing a wish, not because I believe it will magically happen, but as a part of self-therapy. We are expressing our wishes together, to help each other with our own self-therapy, and to encourage group bonding. In other words, do more or less what you did before, just be honest about why you are doing it. You will not get back all the nice feelings (the parts that come from believing the magic is real), but you may get some of the psychological benefits.
Thanks. That may be rational and all, but any psychological benefits I could get out of "wishing" would probably be countered by strong negative feelings of cheesiness. Also, as far as I can tell, all the benefits of prayer came from really believing in an all-knowing, all-loving personal God. Anyway, I'm totally fine, at least for now. I don't feel like I need/have ever needed much self-therapy, but that doesn't mean I was immune to the therapeutic effects. When I first de-converted, I probably even did it because subconsciously I thought I would be happier without Christianity, and I still think I am! I just also realized that, truth aside for a moment, there are legitimate pros and cons to believing either side.
The first kind of prayer you listed was prayers of gratitude. Gratitude journaling seems to be very similar and produce benefits without acknowledging a God. The same goes for many kind of gratitude meditation. When it comes to asking for redemption, you can do focusing [http://www.focusing.org/] with the feelings surrounding the action you feel bad about. You can also do various kinds of parts therapy where you speak to a specific part of your subconscious and ask it what you have to do to make up.
Thanks! I know about gratitude journaling. I actually suggested my mom do it at bedtime with my youngest sister when it seemed like she might be getting spoiled and grumpy, and it's worked really well. It's a great tool, I just don't think it would yield any additional benefits for me, since luckily, I tend to think about things I'm happy/grateful about all day long. Those prayers were spontaneous; it's not like I said "ok, now I'm going to sit down and think of things to thank God for." The only difference after deconverting, when these prayers still came instinctually, was that I couldn't say "thanks God" anymore... it's hard to explain, but "thanks universe" just isn't the same. Anyway, I've come to realize that with many of the things I'm thankful for, I can redirect the thoughts of gratitude toward people in my life. For example, instead of thanking God for the ability to run and for the enjoyment I get out of it, I can think fondly of my parents for sacrificing to send me to a Lutheran high school (which I otherwise might have considered a sad waste of their tight budget) that happened to have a great team and really knowledgeable, experienced, motivating coaches, since if I'd never gone there, I probably would have never come to love running the way I do now. Instead of thanking God for giving me such a great job, I can redirect my gratitude toward my friend's dad, who was into economics and lent me books that made me aware enough of the sunk cost fallacy to quit my old one after only two weeks and move across the country. As for asking for redemption, I'm pretty good at apologizing, and people I know are pretty good at forgiveness. It's hard to explain feeling loved in a truly unconditional way, but it was more of a bonus than anything. On a scale of 1-100, I miss this about a 5. Your tips are good, and I would recommend them to others, but personally, I think that all I'll need is the time to gradually readjust.
You had a ritual and conditioned yourself to feel good whenever you say "thanks God". You don't have that conditioning for the phrase "thanks universe". Yes, time solves a lot. If you still feel there's something missing, however, there are ways to patch all the holes.
Do you come from a Christian background? Have you ever really, truly, trustingly believed? I mean, you may be right that it's just conditioning, and I'm sure that's at least part of it. But you don't think believing you're special/loved as an individual, part of someone's incomprehensible but perfect plan, could have any kind of special effect?
No, but I have seen a lot of different mental interventions. There are a lot of different ways to get to certain effects. Effects feel special only if you know just one way to get to them. I have seen people cry because of the beauty of life without them being on drugs or any religion being involved. Believing that one is loved is certainly useful, but the core belief is not "I'm loved by God" but the generalized "I'm loved". Children learn "I'm loved" or "I'm not loved" when they are very little, based on their experiences with their parents. As they grow older they then apply that belief in multiple situations. A Christian will feel deeply loved by God, or he might be afraid of God. If you deeply feel loved by God you shouldn't have a problem feeling deeply loved by your friends, because it's the same core belief. You still have the same fun with your old Christian friends and family and feel that they understand where you are coming from. Your belief in "I'm loved" might be a bit shaken, but I think the core will still be intact.
If it's "triggering" you, then of course don't do it. However, I believe there are benefits in some religious rituals which would be nice to have without accepting the supernatural framework. For example, it helps me think more clearly when, instead of just having thoughts in my head, I speak them aloud. And that's part of what praying does. (And, as you say, another part is the belief in a Magical Sky Daddy who listens and will do something about it. That part cannot be salvaged.) Also, when people pray together, they hear each other's wishes, and may help each other, or give useful advice. This can be replaced with simple conversation about one's goals and dreams; it's just that most people usually don't have this conversation on a regular schedule. Which is a pity, because maybe at this moment some of my friends have a problem I could help solve, they just don't bother telling me about it, so I don't know. Another part of religious rituals is more or less gratitude journaling. (Related LW debates: 1 [http://lesswrong.com/lw/i0c/for_happiness_keep_a_gratitude_journal/], 2 [http://lesswrong.com/lw/icf/my_daily_reflection_routine/], 3 [http://lesswrong.com/lw/igt/why_productivity_why_gratitude/].) From an epistemic point of view, I believe religion is stupid, but I don't want to "reverse stupidity [http://lesswrong.com/lw/lw/reversed_stupidity_is_not_intelligence/]". Just because there are verses about washing feet in the Bible, I am not going to stop washing my feet. I am trying to do the same with psychological hygiene; not to avoid a potentially useful psychological or sociological hack just because I first found it in a religious context. As a sidenote, the LW community seems divided on this topic. Some people would like to reinvent some religious rituals for secular purposes, some people find it creepy. I am on the side of using the rituals, but perhaps that's because I was never part of an organized religion, so I don't have strong feelings associated with that.
Definitely, I should make an effort to have these conversations with my friends. I have yet to decide on any goals myself, but I would love to encourage my friends with theirs. Gratitude journaling - see my reply to ChristianKl's comment. But yeah, it's a great tool that I've recommended to others who don't naturally "look on the bright side." As for secular rituals - I am on the creepy side, but I think you're right that my feelings come from having been part of an organized religion. I look at secular rituals and they seem to have maybe 10% of cherry-picked Christianity's psychological pleasantness. So it looks like a pathetic substitute. But from your less biased perspective, things that cause even a small increase in people's happiness can still totally be worth doing. Someone sent me this [http://www.nytimes.com/2015/05/31/opinion/sunday/molly-worthen-wanted-a-theology-of-atheism.html?ref=opinion&_r=2] link about a secular "church" and it actually seemed pretty cool. I would probably even go. But I'd have to overcome the impulse to compare it to a real church, because they're very different things...

Link from March that apparently hasn't been discussed here: Y-Combinator's Sam Altman thinks AI needs regulation:

“The U.S. government, and all other governments, should regulate the development of SMI [Superhuman Machine Intelligence],”

“The companies shouldn’t have to disclose how they’re doing what they’re doing (though when governments gets serious about SMI they are likely to out-resource any private company), but periodically showing regulators their current capabilities seems like a smart idea,”

“For example, beyond a certain checkpoint, we could require development [to] happen only on airgapped computers, require that self-improving software require human intervention to move forward on each iteration, require that certain parts of the software be subject to third-party code reviews, etc.,”

The regulations should mandate that the first SMI system can't harm people, but it should be able to sense other systems becoming operational.

Further, he’d like to see funding for research and development flowing to organizations that agree to these rules.

Sounds sensible.

This post [http://www.brookings.edu/blogs/techtank/posts/2015/04/14-understanding-artificial-intelligence] makes an interesting argument for why it'd be a bad idea to regulate AI: you'd give people who are willing to skirt rules an advantage. LW wiki article [http://wiki.lesswrong.com/wiki/Regulation_and_AI_risk]. I suspect the AI community is best off creating its own regulatory structures and getting the government to give them power rather than hoping for competent government regulators.
More recent is his AMA. He answered a question about AI: https://www.reddit.com/r/IAmA/comments/3cudmx/i_am_sam_altman_reddit_board_member_and_president/csz46jc He also wrote some stuff about AI on his blog (which turned out to be very controversial among readers). I believe this is the source of your article: http://blog.samaltman.com/machine-intelligence-part-1 http://blog.samaltman.com/machine-intelligence-part-2
Yeah, that quote was mentioned below and put me on a search for statements by Altman to this end.

What is Omnilibrium? What are these links about? If this comment is a reply to something or making a point, what?

LessWrong offshoot for political discussion.

I strongly disagree with the True Islam post. Definitions are neither true nor false, but useful or not useful. It's extremely useful for Western leaders to define Islam so that ISIS is not part of it.
Whether it is "useful" depends on what purpose you are asking about. It's obviously useful for certain kinds of Western political rhetoric, but it may be useful for one purpose and harmful for another.

In a reddit AMA a couple of days ago, someone asked Sam Altman (president of Y Combinator) "How do you think we can best prepare ourselves for the advance of AI in the future? Have you and Elon Musk discussed this topic, by chance?" He replied:

Elon and I have discussed this many, many times. It's one of the things I think about most. Have some news coming here in a few months...

Any guesses on the news?

Announcing that YC accepts a related nonprofit into its next batch.

Good books on economics, investing?

Are there equivalent books to "Probability Theory: The Logic of Science" and/or "The Feynman Lectures on Physics" in economics or investing?

Who are the great authors of these fields?


I haven't read Feynman's lectures on physics, but if it's "someone really good at this explains how he thinks in an intuitive way", then Warren Buffett's letters to shareholders are an equivalent in investing.

Obligatory link to The Best Textbook on Every Subject [http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/]. I'm told that Mas-Colell's book [http://smile.amazon.com/Microeconomic-Theory-Andreu-Mas-Colell/dp/0195073401] is the classic on microeconomics (provided you have the mathematical prerequisites), although this recommendation is second-hand since it's still on my to-read list.
Not necessarily the best, but a good one and immediately accessible: http://www.daviddfriedman.com/Academic/Price_Theory/PThy_ToC.html



We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants

...
I thought the trolley experiment didn't actually have a known best-case solution? I thought the point of it was to state that one human life is not always worth less than N other human lives, where N > 0. I'm confused as to why we are evaluating a "test" for the test's sake, and complaining about the test results, when the only point of it was to make an analogy to real-life weights.
There is no "solution", but the point of the study is "substantial framing effects and order effects", that is, people gave different answers depending on how the same question was framed or what preceded it.

I live in South Africa. We don't, as far as I know, have a cryonics facility comparable to, say, Alcor.

What are my options apart from "emigrate and live next to a cryonics facility"?

Also, I'm not sure if I'm misremembering, but I think it was Eliezer who said cryonics isn't really a viable option without an AI powerful enough to reverse the inevitable damage. Here's my second question: with said AI powerful enough to reverse the damage and recreate you, why would cryonics be a necessary step? Wouldn't alternative solutions also be viable? For ex...

You could start a cryonics facility in South Africa.
It's full of people who can afford to take out a life insurance policy in the hundreds of thousands of USD range payable to a cryo facility. /sarcasm
Actually, yes. [http://blog.euromonitor.com/2012/06/south-africa-the-most-unequal-income-distribution-in-the-world.html] EDIT: At least, adjusting the cost for how much a USD gets you in South Africa.
Cryonics is an ambulance ride through an earthquake zone to the nearest revival facility. The distance is measured in years rather than miles, and the earthquake is the chances of history. The better the preservation, the lower the technology required to revive you, and the sooner you will reach a facility that can do it. A "powerful enough" AI isn't magic: it cannot recover information that no longer exists. We currently don't know what must be preserved and what is redundant, beyond just "keep the brain, the rest of the body can probably be discarded, but we'll freeze it as well at extra cost if you want." On a present-day level, the feted accomplishments of Deep Learning suggest to me that setting such algorithms to munch over a person's highly documented life might be enough to enable a more or less plausible simulation of them after death. Plausible enough, at least, to be offered as a comfort to the bereaved. A market opportunity! Also, fuel for a debate on whether these simulations are people.
Can you recommend an article about the difference between a simulation of a person vs. "really" reviving a person? Primarily from this angle: why should I, or anyone, consider that someone in the future making a plausible simulation of us is good for "us"? I am really confused about the identity of a person, i.e. when a simulation is really "me" in the sense of me having a self-interest in that situation. I am heavily influenced by Buddhist ideas saying such an identity does not exist, is illusory. I currently think the closest thing to this is memories; if I exist at all, I exist as something that remembers what happened to this illusion-me. I see this as a difficult philosophical problem and don't know how to relate to it.
Same here. My own attitude is that we do not currently have software for which the question of it being any more conscious than a rock arises, nor any route to making such software. Therefore I am not going to worry about it. While it may be interesting for philosophers, I relate to the problem by ignoring it, or engaging in it no further than as an idle recreation.
I view it from a practical viewpoint: even if you believe the Buddhist view, that the self is an illusion etc., you still feel like you have a self for >95% of the time (i.e. whenever you're not meditating). When you wake up in the morning you feel like you are the same person that went to sleep the evening before. On the other hand, a clone of you would not feel like it is you any more than one identical twin feels it is the other. So ideally people in the future should create a person/simulation that feels like it went to sleep and woke up again when it "should" have died. Problems arise mainly when you hit something that only partially feels like it is the same person. I'd say there is still a considerable range of possible people that are sufficiently similar that we say it is the same person, since there is also considerable variation in the normal functioning of human brains. E.g.:

* Human memory is quite inaccurate. Different people with only slightly different memories could be said to be the same person. This may actually go quite far, if we consider the effects of Alzheimer's disease or other forms of amnesia.
* Being heavily intoxicated can to an extent feel like being a different person. Personality and habit changes over the course of your life can make you a different person, yet we still say it is the same person.

I wonder whether it is possible to find some sort of "core" personality/traits/memories, such that we can say that as long as it remains unchanged it is the same person. I suspect there isn't, as it seems to be a gradient instead of a binary classification.
This is a widely discussed topic. See, eg, here: http://mindclones.blogspot.com/?m=1
You might be able to reconstruct the person's public face, but will have major problems with his private life.
By "highly documented" I had in mind not just the ordinary documentation that prominent public figures get, but someone who has deliberately taken steps to exhaustively record as much as they can, public and private.
I remain sceptical. External observation (something on the life cam lines) still cannot distinguish an hour of thinking about the stars' main sequence from an hour of thinking about cosplay lolis. And diaries have the big problem of self-reflection... not being entirely accurate.
I take it our hypothetical system would not simply assume that diaries are accurate records; they would (so to speak) ask the question "how likely is it that any given person would write this diary entry?" which is not at all the same as the question "how well does this diary entry, taken at face value, match the actual life of this person?".
This raises the question: is it possible to deduce the correct person without creating conscious simulations of possibly very many people? That itself raises ethical questions.
I think you're taking the suggestion a bit more seriously than I intended it. The commercial opportunity only needs the simulation to be good enough to tug at the heartstrings of those who knew the subject. Pictures and mementos are treasured; this would be a step beyond those, a living memorial that you could have a conversation with. It wouldn't work for LessWrongers though. They'd spend all their time trying to break it.
LOL, certainly a fair point :-) The problem for your commercial opportunity is the uncanny valley, though. Also, people tend to be more interested in virtual girlfriends than in virtual grandpas :-/
Technically, it can of course - through inference. Any information we have recovered about our history - history itself - is all inference used to recover lost information. Even with successful cryonics, you still end up with a probability distribution over the person's brain wiring matrix - it just has much lower variance, requiring less inference/guesswork to get a 'successful' result (however one defines that). Agreed with your last paragraph that crossing the uncanny valley will be difficult and there is much room for public backlash. It's so closely related to AI tech that one mostly implies the other.
Sounds like Hollywood image enhancement, where a few blurry pixels are magically transformed into a pin-sharp glossy magazine photograph. I could point out that if you can infer the information, then by definition it still exists, but the real point here is just how powerful an AI can be and what inferences are possible. Let's say that yesterday I rolled a die ten times without looking at the result. Can a "powerful enough" AI infer the numbers rolled? Is the best-fit reconstruction of someone's mind, given an atom-by-atom scan a century from now of a body frozen by Alcor today, good enough to be a mind?
This is not real?y true. When typing the above sentence, I removed a letter and replaced it with a ?. You can probably infer what the originally intended letter was, thus using inference to recover information that did not exist anywhere in your physical locality. But yes, this is a terminology/technicality, agreed. Yes and no. A powerful enough AI in the future can recreate many historical path samples (ala Monte Carlo sim) through our multiverse. Of course, if the information was just erased and didn't affect anything, then it doesn't matter. It literally can't matter, so the AI doesn't even need to infer/resolve that part of space-time - any specific choice for the die roll is equally as good, as is an unresolved superposition. There may be a connection here to delayed choice quantum eraser experiments. I imagine that will completely depend on the details of their death, the delay, and the particular tech used by Alcor at the time they were frozen. That being said, in a century powerful SI seems quite possible/likely. There are huge economies of scale involved in simulations. It is enormously less expensive - in terms of per-human reconstruction cost - to do a historical simulation/reconstruction for all of the earth's inhabitants at once. The SI would use DNA (Christendom has done a great job over the millennia at preserving an enormous amount of DNA), historical records, all of the web data from our time that survives, and of course all of the Alcor data. It could have the equivalents of billions of historians working out the day-by-day details of each person's life before constructing more detailed sims, etc. It would be the grand megaengineering project of the future, not some small-scale endeavour.
With regard to your first question, you could also A) plan to move to a hospice near a facility when you are near death and/or B) arrange for standby to transfer you after legal death. Of course, there are many trade-offs involved with either. In my estimation, the most useful thing would be for you to get engaged in a local community and try to push forward on the basic research and logistical issues involved, although obviously that is not an easy task. With regard to your second question, as with everything in cryonics, this has been endlessly discussed. See a good article by Mike Darwin on the topic here: http://chronopause.com/chronopause.com/index.php/2011/08/11/the-kurzwild-man-in-the-night/index.html

I was just wondering about the following: testosterone as a hormone is actually closely linkable to pretty much everything that is culturally considered masculine (muscles, risk-taking i.e. courage, sex drive etc.) and thus it is not wrong to "essentialize" it as The He Hormone.

However, it seems estrogen does not work like that for women: surprisingly, it is NOT linked with many culturally feminine characteristics, and probably should NOT be essentialized as The She Hormone. For example, it crashes during childbirth: i.e. it has nothing to do wi...

It actually is not very odd for there to be a difference like this. Given that there are only two sexes, there only needs to be one hormone which is sex determining in that way. Having two in fact could have strange effects of its own.

Sex determination in placental mammals turns out to be really complicated, which is probably why there are so many intersex conditions. It's much simpler in marsupials, which is why male kangaroos don't have nipples. (Where would they keep them?)
If you think it's complicated in placental mammals, it's REALLY fun in zebrafish... all embryos start off building an ovary and dozens of loci all over the genome on autosomes rather than sex chromosomes alter the probability of the ovary spontaneously regressing then transforming into a testis. Immature egg cells are vital to both the process by which it becomes an ovary and by which it becomes a testis. Every breeding pair of zebrafish will produce a unique sex ratio of offspring depending on their genotypes at many loci and what they pass on to their offspring.
Woman is the biological default. That's why women have redundancy on the 23rd chromosomal pair, whereas men have a special "Y" chromosome - leading to much higher rates of genetic disorders in men. That's why in infant male humans, the testicles have to descend. And so on. Both from an encoding and from a developmental point of view, a man is a woman altered to be masculine. And testosterone is what does that altering. Yes, it could have been different. We can imagine a species with a neutral default, which then gets altered to be either masculine or feminine by different sex-encoding hormones. But that's not how humans came about.
We don't have to imagine. We can look at birds, where the sex chromosomes are the opposite. I haven't looked at them, so I don't know how much is a consequence of the chromosomal structure. But, for some reason, I'm skeptical that most people who pontificate their role have looked either. The points about hormones and development are more reasonable.
Are the opposite? I assumed the XX/XY system goes back to the very beginnings of sex determination, i.e. to fish... how come very different chromosomes can make the same hormones? I.e., AFAIK birds do have testosterone?

The sheer number of ways sex can be determined amongst vertebrates is amazing, let alone other animals or microbes (there are fungi with 10,000 'sexes'/mating types...). I will restrict my examples to vertebrates.

As a rule, in most vertebrates (including humans and other organisms in which it is genetically determined) everything needed to make all the biology of both sexes is present in every individual, but a switch needs to be thrown to pick which processes to initiate.

Many reptiles use temperature during a critical developmental period with no sex chromosomes. Many fish too.

The XY system has evolved independently several times, when an allele of a gene, or a new gene, appears whose presence reliably leads to maleness regardless of what else is in the genome. For weird population-genetic reasons this nucleates an expanding island of DNA that cannot recombine with the homologous chromosome and which is free to degenerate except for sex-determining factors and a few male-gamete-specific genes that migrate there over evolutionary time, until eventually the entire chromosome degenerates and you get a sex chromosome.

The ZW system has evolved multiple times, in which t... (read more)

One interesting thing I have heard is that amongst hyenas females have more androgens, and this is also visible in size, behavior etc. Must be an interesting kind of puberty.
Yep. While having different developmental pathways for making ova and sperm is ancient, pretty much everything else associated with biological sex is potentially mutable over evolutionary time (and even that can revert to hermaphrodite status).
Has that actually happened to anything amphibian or above?
I am unaware of any examples of normally functionally hermaphroditic mammals, and unaware of, but less confident in, the same for tetrapods (four-limbed vertebrates that came onto land, and their descendants). I am aware of tetrapod species that became almost entirely female, reproducing primarily by cloning. I am also aware of tetrapods that switch sex during their lifetimes, though you could call that a form of hermaphroditism. Tetrapods also exhibit all of the above methods of sex determination. The pattern of hermaphroditism in ray-finned fish, a very diverse and old vertebrate lineage, however, suggests multiple conversion events back and forth, some of which are recent. See http://evolution.berkeley.edu/evolibrary/images/hermaphroditismtree.gif [http://evolution.berkeley.edu/evolibrary/images/hermaphroditismtree.gif] . Of note, cichlid fish are listed as hermaphroditic there, but they recently went through a huge evolutionary radiation and several of their sublineages have been caught in the act of re-evolving most of the above sex determination systems.
Birds have a ZZ/ZW system [https://en.wikipedia.org/wiki/ZW_sex-determination_system] where the male is the homogametic sex. Yes, birds have testosterone. Mind you, women have testosterone. It's the elevated quantity of testosterone that leads to masculinity.
How come very different organs in mammals can make the same hormones? I.e., testes, ovaries, and adrenals all make testosterone.
They all contain the same genome and can activate the same pathways. Same way that your skin and airways can make histamine as an inflammatory signal while your midbrain makes it as a sleep-suppressing neurotransmitter (which is why most antihistamines make you sleepy). Genes and pathways and enzymes are quite often not organ-specific.
You could with equal sense (i.e. very little) summarise the same empirical observations as "a woman is an incompletely developed man."
Not quite, because 'development' at least suggests that the change happens 'later'.
I am not convinced that "is the biological default" is a meaningful concept. If (a) then b else c is the same thing as if (!a) then c else b
The point is that !a (not testosterone) is just the lack of testosterone, but not the presence of estrogen.

People with full androgen insensitivity syndrome (never responding to androgens produced by gonads) or gonadal dysgenesis of various stripes (gonads fail to develop properly and don't make any hormones) usually wind up more or less externally normally female regardless of the state of their sex-associated karyotypes/genotypes (with the internal plumbing variable depending on the exact details). In this way, the pre-pubescent female state is probably the closest thing we have to a default, inasmuch as that means anything.

These people do, however, fail to naturally go through most of puberty (a few androgens are usually made by the adrenal glands in everyone regardless of sex but not much) which is an active switch being thrown regardless of sex. As such, the secondary female sex characteristics of sexual maturity are not exactly 'default' themselves in the same way.

Not sure how the following anecdotal observation relates, but it seems to me female gender expression is far more fluid. That is, if the situation is tough - poverty and all that - and women end up doing hard physical labor and facing similar challenges of deprivation and difficulty, they end up pretty close to becoming tough guys, even including things like having insults develop into fist-fights. The opposite does not seem to be true: it is pretty rare that circumstances make men adopt feminine traits; it is more like they either like them on their own or will never pick them up. However, the situations are not exactly parallel, because any sort of deprivation and difficulty generates an obvious pressure to toughen up, which moves people naturally towards masculine roles, while there is no similarly compelling force that could push men towards feminine roles. Or is there? It would be interesting to examine 1) what fathers do if their wives suddenly die - do they manage to fill the motherly role as well? and 2) do more or less cis/straight men sometimes adopt gay traits in prisons? This is a bit of a chaotic comment; I probably need to organize my thoughts better. My thoughts are roughly: put women into a tough environment and their testosterone goes up and they adopt masculine traits. But there is not really an environment for men that would make their estrogen go up, except xenoestrogens. However, it is possible to create testosterone-lowering environments, e.g. schools with an anti-competitive ethos.
I don't think that women doing hard physical labour is a consequence of "female gender expression" under certain circumstances. If you need to do physical labour to survive, you do physical labour to survive, and gender has nothing to do with it. As to the feminization of men, it's a popular topic (google it), usually in the context of political correctness / rise of feminism / anti-discrimination policies / SJWs / etc. in first-world countries. By the way, for feminization you don't need estrogen to go up; all you need is for testosterone to go down. And, hey, look [http://www.healio.com/endocrinology/hormone-therapy/news/print/endocrine-today/%7Bac23497d-f1ed-4278-bbd2-92bb1e552e3a%7D/generational-decline-in-testosterone-levels-observed], testosterone seems to have been decreasing in the late 20th century...
You say that sex drive is "male". Then crashing libido would be "female".
I think there's some form of the mind projection fallacy [http://wiki.lesswrong.com/wiki/Mind_projection_fallacy] going on here. I think the oddness is a result of expectations based on the principles of culture, instead of the principles of biology. Introductory texts on cell biology. [http://www.amazon.com/s/ref=nb_sb_ss_c_0_12?url=search-alias%3Dstripbooks&field-keywords=cell+biology&sprefix=cell+biology%2Caps%2C219&rh=n%3A283155%2Ck%3Acell+biology]
Testosterone is popularly very misunderstood [http://www.nature.com/nature/journal/v485/n7399/full/nature11136.html].
This is a bit of a word game, really; the article could use some tabooing. While cooperation and competition are often seen as opposites, in reality any status-competition game has both, because one needs allies to win. It is really a huge stretch to imply a fair outcome means a cooperative outcome means a cooperative mentality means an anti-competitive mentality. If we want to interpret the experiment hugging the query as closely as possible, we see an attitude of enforcing fairness, or more properly of standing up to and punishing people if they try to play unfair with you, which is very, very close to what we consider the traditionally masculine approach and does NOT indicate a non-competitive personality: would we really expect a highly competitive person to gladly accept unfair deals? Offer a sucker's deal to a Clint Eastwood type and he will gladly take it? Surely not. What the experiment seems to confirm is that competitive drives can result in cooperative and fair overall outcomes - i.e. a modern version of the Fable of the Bees [https://en.wikipedia.org/wiki/The_Fable_of_the_Bees] - it does not suggest that the mentality and approach of the guys who rejected unfair offers was not competitive. It is the outcome that was fair and cooperative, not the drive.
It's a gross oversimplification to link testosterone with 'masculinity' in this way. Testosterone is most closely linked with muscle size, bone density, acne, and body hair. All other links you mention seem tenuous and ill-supported by evidence. No link has been established between testosterone level and aggression. A link between risk-taking and testosterone does exist, but as it turns out, both high and low testosterone levels are linked with risk-taking. It's average testosterone levels that display lower risk-taking. Even so, the correlation is small and risk-taking is much more correlated with other chemicals like dopamine levels. As for sex drive, most studies looking at this correlation haven't eliminated the effects of aging and lifestyle changes which are probably more important.
Aggression is one of the less useful terms here and really deserves tabooing, because it is too broad a term: it covers everything from slightly too intense status competition to completely mindless destructiveness. In other words, aggression is not a useful term because it describes behavior largely from the angle of the victim or a peaceful bystander, and does not really predict what the perpetrator wants. Few people ever simply want to be aggressive; they usually want something else through aggressive behavior. I would prefer to use terms like competitiveness, dominance, and status; they are far more accurate, and they describe what people really want. For example, you can see war between tribes and nations as a particularly destructive way to compete for dominance and status, with trade wars and the World Cup being milder forms of competing for status and dominance. This actually predicts human behavior - instead of a concept like aggression, which sounds a lot like mindless destructiveness, it predicts how men behaved in wars, i.e. seeking "glory" and similar status-related concerns. This formulation is far more predictive of what people want, and here the link with testosterone is clear - so much so that researchers use T levels as a marker of competitive, status-driven behavior. For example, when they wanted to test the effects of stereotype threat in women, they had the hypothesis that being told that boys are better at math would only hold back women with a competitive spirit, i.e. those who want to out-do boys, and would not harm women who simply want to be good at math but not comparatively better than others; they used T levels as a marker of such a spirit. They say "given that baseline testosterone levels have been shown to be related to status-relevant concerns and behavior in both humans and other animals" [http://www.reducingstereotypethreat.org/bibliography_josephs_newman_brown_beer.html]. This is the central idea; aggression is not really a good way
Most men in war didn't try to seek glory but tried to avoid getting killed and prevent their mates from getting killed.
"Competitive spirit" can play out in more than one way. Some people give up when they're told they have no chance of winning, others are motivated to try to do the "impossible".
Yes. The first is more common, the second is what perhaps one may call the dafke [http://www.jewish-languages.org/jewish-english-lexicon/words/128] spirit.
If that is true, then it comes back to my original point, which is that testosterone level isn't necessarily linked with traits considered traditionally 'masculine'. Certainly aggression is considered masculine, far more so than the more abstract idea of dominance and status-driven behavior, which is considered traditionally 'evil' (although in fiction 'evil' characters tend to be more often male than female, so there's that).
I think empirically it is. The personality changes in (usually older) men who start taking testosterone (e.g. as injections) are well-documented.
Google up "testosterone replacement therapy" which will lead you to a bunch of PubMed papers and a variety of internet anecdata. Or see e.g. this [https://thepsychologist.bps.org.uk/volume-22/edition-1/testosterone-and-male-behaviours].
Strange - I think aggression is far too often seen as evil, and dominance and status-driven competition as traditionally masculine, but maybe we need to taboo both and use some concrete examples. For example, when a boy bullies and tortures a weak kid who cannot fight back, I would call that aggression; but when he seeks to brawl with an opponent who is roughly his equal, that is status-seeking, because winning such a brawl brings honor, glory, respect. The first is pretty universally seen as evil; the second is maybe stupid but not inherently that wrong.
Many women are intensely status-driven (look at their shopping habits, etc.) and dominance is not uncommon, though usually in a "softer" way.
The stereotypical female shopping habits are high quantity, mid quality, and low price, i.e. hunting for discounts and sales. This is not really a status game. A guy is more likely to have status-oriented clothing habits, i.e. to have only 5 t-shirts but all of them with Armani Jeans written across them in big letters, telegraphing the "I am rich, hate me" message :) I think what you see as dominance amongst women is more often group acceptance / non-acceptance, i.e. popularity vs. marginalization, e.g. http://www.urbandictionary.com/define.php?term=teenage+girl+syndrome [http://www.urbandictionary.com/define.php?term=teenage+girl+syndrome] This is IMHO different. A dominant person wants to have a high rank, and if he or she cannot have it, would much rather exit the group and lone-wolf it than be a low-ranking member. A person who is more interested in group acceptance wants to be a member of the group at all costs - not excluded, not marginalized - does not want to lone-wolf it, and accepts a lower rank as long as they are accepted inside the group. So in other words, the dominant person will keep asking "Are you dissing me?!" and the group-acceptance-oriented person will keep asking "Are we still friends?", which is markedly different, and the latter seems more feminine to me.
Don't forget that status signals change radically between social classes. Lower-middle-class females indeed shop for a lot of cheap items, because the status signal is "I can afford new things" or maybe even "I can afford to buy things". In the upper-middle class, it's rather about whether you can afford that bag with the magic words "Louis Vuitton" inscribed on it. And in the upper classes you have to make agonizing decisions about whether to wear a McQueen or a Balenciaga to the Oscars (oh God, but what if there will be other McQueen dresses there?!?!!?) Or you might go for countersignaling and just release a sex tape X-D I see no reason to define dominance that way. A dominant person is just one for whom social dominance is a high value and who is willing to spend time, effort, and resources to achieve it. And, of course, it's not either alpha or omega; there is a whole Greek alphabet of ranks in between. Being a beta is fine if there are a lot of gammas, etc. around. A dominant person doesn't ask questions like this to start with :-) It's a very submissive question.
Very funny. Women begin to compete for status and form alliances at age 4...
That may be more of a group acceptance thing: http://lesswrong.com/r/discussion/lw/mgr/open_thread_jul_13_jul_19_2015/ckda [http://lesswrong.com/r/discussion/lw/mgr/open_thread_jul_13_jul_19_2015/ckda]
It's extremely weird to me that you do not consider aggression to be a masculine trait. However there are many cultural differences in what is considered masculine, hence the problem. A lot of Asian cultures consider risk-taking to be anti-masculine, for instance.
Perhaps I do; the point is that we may define it differently, which is why I am trying to taboo it and focus on more concrete examples. In my vocab, aggression is something asymmetric - like picking a fight with a weaker, easily terrorized opponent - while picking opponents of roughly equal dangerousness (to prove something) is closer to competitiveness for me. Aggression wants to hurt; competition wants to challenge - although often through hurting.
I don't see why you choose to define aggression in that way, unless it is just to support your point. At the risk of being too reliant on dictionary definitions, the various definitions of aggression that I've seen are "the practice of making assaults or attacks; offensive action in general" or "feelings of anger or antipathy resulting in hostile or violent behaviour; readiness to attack or confront." Nothing there about the size or strength of the opponent.
These are victim-centric definitions. IMHO if you want to understand the motives of the perp, you need to see a clear difference between "intent to harm" vs. "intent to challenge". Go back a few hundred years in history and you will see a huge, really huge difference in social opinion between challenging someone to a duel to the death vs. just back-stabbing them.
This should dissolve any feelings of oddness about this topic.

Good Judgment Project has ended with season 4 and everyone's evaluations are available. They say they're taking down the site next month, so you may want to log in and make copies of everything relevant.

You can see my own stuff at https://www.dropbox.com/s/03ig3zr8j9szrjr/gjp-season4-allpages.maff - I managed to hit #41 out of 343, i.e. the top 12%. Not bad.

If I want to learn General Semantics, what is the best book for a beginner?

(Maybe it was already answered on LW, but I can't find it.)

I asked this before, and the answer I got back was split into three main suggestions along a clear continuum:

1. The Sequences
2. Hayakawa's Language in Thought and Action
3. Korzybski's Science and Sanity

I've only read the first two. Apparently there is no substitute for reading Science and Sanity if you want to get everything out of Korzybski; people like Hayakawa can take out an insight or two and make them more beginner-friendly, but not the entire structure simultaneously. The Sequences apparently have many of the same insights, but arranged differently / not completely the same, and of the people who went through the trouble of reading both, at least one thinks it may not be necessary for LWers and at least one thinks there's still value there.

New papers by Jan Leike and Marcus Hutter:

Solomonoff Induction Violates Nicod's Criterion http://arxiv.org/abs/1507.04121

On the Computability of Solomonoff Induction and Knowledge-Seeking http://arxiv.org/abs/1507.04124

It has been reported that a five-quark particle (a pentaquark) has been produced/spotted at CERN's LHC.


I am very happy that this apparently isn't a strange-matter particle.


At least not of a dangerous kind. For now, at least.

So I hope it will continue, without a major malfunction on the global (cosmic) scale.

Nothing terrible was going to happen. As has been pointed out, collisions that energetic or more happen all the time in the upper atmosphere.
Energetic, perhaps. But as dense, too?
These things are only about 4 GeV (4 times heavier than a proton, much lighter than the Higgs boson, much smaller than the energies in the LHC, an extremely easy energy for cosmic rays to reach). Neither energy nor density are keeping us safe if these things are dangerous - the LHC just detected them by making lots of them and having really good sensitivity.

Maybe machine learning can give us recommendations for gardening without hurting your back.

"When changing directions, turn with the feet, not at the waist, to avoid a twisting motion."

“Push” rather than “pull” objects.

Why not take a machine learning class?
Depends on your feature extractor. If you have a feature that measures similarity to previously-seen films, then yes. Otherwise, no. If you only have features measuring what each film's about, and people like novel films, then you'll get conservative predictions, but that's not really the same as learning that novelty is good.
Thanks. Now I'm trying to learn a bit about (exponential) moving averages. I know moving averages are used in machine learning, but I've also come across them in stock market investing, where they are regarded with derision. Can someone explain what their utility is, and how they can be useful when they aren't in trading? If my financial knowledge is correct, moving averages only indicate profitable moves when there are linear dependencies between fundamental variables and the stock's price. This is true both empirically [http://www.bankofgreece.gr/BogEkdoseis/Paper2011134.pdf] and is what most technical analysts assume. However, how do we know that, for a particular security, price behavior isn't better modeled as a random walk? Basically we don't, and for any given stock that's a pretty good generalization. However, asset classes as a whole, or particular indices, often go up over the long run. It frustrates me that technical traders' strategies just create self-fulfilling prophecies [http://www.investopedia.com/university/movingaverage/movingaverages4.asp]. They're like uninformed (in the market-information sense) speculators who act on the same tea leaves. I don't understand the reason for using moving averages unless you have reason to believe, in advance, that they will be a good model of the physical behaviour of what you're trying to predict. But then you wouldn't be doing predictive analytics. Wouldn't neural networks otherwise dominate them? I would imagine that they would stumble upon a moving-averages strategy if that was evidently a good model for the phenomenon in question. And yet the risk metrics associated with the implementation of a simple neural network in Quantopian [https://www.quantopian.com/posts/simple-neural-network-prediction-example] don't seem that attractive. But what can't neural networks do? They seem like the most perfect learning devices ever!
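For what it's worth, here is a minimal sketch of the two kinds of moving average being discussed, in plain Python (the window size and smoothing factor are arbitrary illustrative choices, not recommendations):

```python
def simple_moving_average(prices, window):
    """Unweighted mean of the last `window` prices; one value per full window."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

def exponential_moving_average(prices, alpha):
    """Each value blends the latest price with the previous average;
    recent prices dominate as alpha approaches 1."""
    ema = [prices[0]]
    for p in prices[1:]:
        ema.append(alpha * p + (1 - alpha) * ema[-1])
    return ema

prices = [10, 11, 12, 11, 13, 14, 13]
print(simple_moving_average(prices, 3))
print(exponential_moving_average(prices, 0.5))
```

The exponential version weights recent observations more heavily and can be updated online with a single multiply-add, which is why the same formula shows up both as a cheap smoother in machine learning (loss curves, gradient statistics) and as a trading indicator; the math is identical, only the interpretation differs.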

Is it worth it to learn a second language for the cognitive benefits? I've seen a few puff pieces about how a second language can help your brain, but how solid is the research?

This has come up before on LW and I've criticized the idea that English-speakers benefit from learning a second language. It's hard, a huge time and effort investment, you forget fast without crutches like spaced repetition, the observed returns are minimal, and the cognitive benefits pretty subtle for what may be a lifelong project in reaching native fluency.
Quality observational research is probably very difficult to do, since you can't properly control for indirect cognitive benefits you get from learning a second language, and I'd take any results with a grain of salt. You also can't properly control for confounding factors, e.g. reasons for learning a second language. I think you'd need experimental research with randomization across several languages, and this would be very costly and possibly unethical to set up. I have without question gotten a huge boost from learning English, since there aren't enough texts in my native language about psychology, cognitive science, and medicine, which happen to be my main interests. My native language also lacks the vocabulary to deal with those subjects efficiently. I have also learned several memory techniques and done cognitive tests and training solely because of being fluent in English.
You just need an area where different schools have different curriculums and there is a lottery mechanism for deciding which student goes to which school.
That deals with the costs but I doubt consent would be easy to obtain unless the schools are very uniform in quality/status and people don't have preferences about which languages to learn, hence the possible problem with ethics. Schools have preferences too, quality schools want quality students.
There are multiple ways to solve the problem of who gets to go to the most desired school. You can do it via tuition fees and let money decide who goes to the best school. You can use tests so the best students go to the best school. You can also do random assignment. None of these is obviously "better" from an ethical perspective.
If you let money decide or do tests you lose the statistical benefits of randomization. I don't understand how you see no ethical problem in ignoring preferences or not matching best students with best schools, perhaps I misunderstand you.
Yes, of course you need the randomization. If you want an equal society, it's important that poor students also get good teachers.
I would expect they have the correlation backwards. Smart people are more likely to find it easy and interesting to learn extra languages.
I suppose it depends on how different the second language is from your native language. Dutch may not offer a big boost in new ways of framing the world for a native German speaker, for instance, since they're closely related languages. (This depends on what you mean by "cognitive benefits"; I'm assuming here some form of the Sapir-Whorf hypothesis.) In my case, I have found English especially adaptable (compared to my native language) when it comes to new words (introduced, for example, for reasons of technological advancement - see every term that relates to computers and programming), since it has very simple inflexions and a verb structure that allows the formation of new, "natural-sounding" phrasal verbs. Having taught my own language to an American through English, I wouldn't say the same about it expanding your way of conceptualising the world, unless you're really fond of numerous and often nonsensical inflexions. I'm not sure I could recommend specific languages that may help in this regard, but I would recommend studying linguistics instead of one specific language, and using that knowledge to help you decide which one to invest your time in. I've studied little of it, but the discipline seems full of instances where you put the spotlight, so to speak, on specific differences between languages and the way they affect cognition.

Could someone be kind enough to share the text of Stuart Russell's interview with Science here?

Fears of an AI pioneer
John Bohannon
Science 17 July 2015: 
Vol. 349 no. 6245 pp. 252

Quoted here

From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy". The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both se

... (read more)
There you go [http://moscow.sci-hub.bz/62d6f5948db0c853a1795937fb62ad23/bohannon2015.pdf].
Paul Crowley:
Superb, thanks! Did you create this, or is there a way I could have found this for myself? Cheers :)
Message sent.

Despite there being multiple posts on recommended reading, there does not seem to be any comprehensive and non-redundant list stating what one ought to read. The previous lists do not seem to cover much non-rationality-related but still useful material that LWers might not have otherwise learned about (e.g. material on productivity, happiness, health, and emotional intelligence). However, there still is good material on these topics, often in the form of LW blog posts.

So, what is the cause of the absence of a single, comprehensive list? Such a list sounds... (read more)

The tricky thing is to summarize both recommendations for books and those against them. We had a book recommendation survey after Europe-LWCW, and Thinking, Fast and Slow got 5 people in favor and 4 against. The top nonfiction recommendations were: Influence by Cialdini, Getting Things Done, Gödel, Escher, Bach, and The Charisma Myth. Those four also got no recommendations against them.
The short answer seems to be a combination of "tastes differ," "starting points differ," and "destinations differ."

I saw a discussion on Facebook a few months ago where someone tried to calmly have a discussion; of course, it being Facebook, it failed. But I am interested in the idea, and wanted to see if it can be carried out calmly here, knowing it is potentially controversial. I first automatically felt negative about the discussion, but then I System-2'd it and realised I don't know what the answers might be:

The historic basis of relationships was for procreation and child rearing purposes. In the future I expect that to not be the case. either with designer-... (read more)

The big phrase to keep in mind for incest is "conflict of interest". We are expected to maintain certain kinds of social relations with our relatives, and having romantic and sexual relationships with them conflicts with those. Furthermore, because there is a natural tendency for humans to be less attracted to close relatives than to others, it is in practice very likely that a sexual/romantic relationship with a close relative will be dysfunctional in other ways - so likely that we may be better off just outlawing them, period, even if they are not all necessarily dysfunctional.
I am of the opinion that I am "of similar brain" genetically and phenotypically, and equally theoretically "of similar mind", to people who are related to me, and therefore able to get along with them better. When looking for partners today, I look for people "of similar mind", or at least I feel like it's a criterion of mine. Do you have a source for the "natural tendency for humans to be less attracted to close relatives than to others"? I am interested.



Thanks! I am not sure how my knowledge of the universe had a hole in this specific space.
One mechanism is the MHC complex [https://en.wikipedia.org/wiki/Major_histocompatibility_complex#Evolutionary_diversity]. There are other mechanisms that prevent siblings who lived together as children from developing romantic interest in each other as well. As a result, most cases of incest between siblings are not between siblings that lived together as children.
That has interesting implications for why foreigners or "exotic" people get a desirability bonus. I must say I did know about the MHC mechanism, and the studies done on birds, but not the human one. Also I did not connect the two. Thanks!
I don't see any moral reason why this should not happen, aside from deontological ones. It's possible to make the case that you would be more likely to end up in a dysfunctional relationship, but it's possible to make the opposite case too - you have a much better idea of what the person is REALLY like before entering into a relationship with them, so you're less likely to enter into a relationship if you're incompatible. I think this is one of those "gay marriage 50 years ago" things. People are going to come up with all sorts of excuses why it's wrong, simply because they're not comfortable with it.
That's partly where the original discussion was going. If only that were true for all people who enter relationships! (Rational relationships are a recent pet topic of mine.) I would apply the rule that I apply to polyamory: there are ways to do it wrong, and ways to do it less wrong. I do wonder if it has an inherent risk of wrongness, but people probably implied that about being gay 50 years ago...
And I've yet to see evidence that they were wrong.
Isn't this a fully general explanation for anything at all?
It could be, for anything that people aren't comfortable with. This isn't in any way a rebuttal to arguments - it's an explanation for bad/non-arguments.
And do you have evidence they were wrong? According to gay activist groups themselves half of all male homosexual relationships are abusive, for example.
Almost all of the evidence I've seen has shown they're wrong. A quick google for statistics on incidences of abuse vs. heterosexual relationships showed they were wrong, and the few sources I've seen (which I couldn't find through my quick google) that showed the opposite were from biased organizations already predisposed against homosexuality. I could be convinced of the opposite, but that one sentence you gave will hardly bump my prior.
In the absence of a singularity, I would not expect this to become widely accepted within my lifetime. I'd say polyamory is the next type of relationship likely to become tolerated, and that is still at least ten years off. Incest is probably only slightly less despised than pedophilia, but I've seen pedophilia frequently equated with murder, so that's not saying much. Bestiality is probably the least likely thing I'd expect to become accepted. None of these three are going to happen within a timeframe I'd feel comfortable making predictions about, but never is a really long time, so who knows.
Not true at all. Nobody takes up a pitchfork when they hear about incest.
yes, obviously the singularity changes everything.
How is this relevant? All these technologies are for producing embryos. You still need people to raise the children the same as before. And I would be very surprised if child-raising AI isn't sex-bot complete (i.e. if we didn't thoroughly decouple sex from human relationships long before we decouple child rearing from human relationships).
Raising children is definitely a factor in "why we have relationships", but for now I was talking about "why we have taboos around relationships between close genetic relatives", especially when we solve the problem of the negative effects of close genetics.
Wouldn't "inter-family" be between different families? I'm not sure, but "intra-family" makes more sense to me, if you're trying to refer to incestuous relationships. A quick google search suggests the same. I'm not sure what society will do, but I don't see anything wrong with incest or incestuous relationships in general, and don't believe that they should be illegal. That's not to say that incestuous relationships can't have something wrong with them, but from what I can tell, incestuous relationships that have something wrong with them are due to reasons separate to the fact that they are incestuous (paedophilic, abusive, power imbalance, whatever).
Thanks for this. I believe, based on the responses, that this might qualify as an interesting, and soon outdated, old-world belief. Glad to have made note of the idea. I have no support for it, or personal interest, but I am also entirely not against it either.
Um, no. The historic basis of relationships was allying for a common goal. Or did you mean sexual relationships? In that case it would be helpful to define what you mean by "sexual", especially once it's no longer connected to reproduction. That would turn humans into a eusocial species. That change is likely to have a much bigger and more important effect than whatever ways of creating superstimulus by non-reproductively rubbing genitals are socially allowed.
Granted, a historic reason for relationships is procreation. But you are grasping at things that were not relevant to the original point and question, which was mostly answered by others in the suggestion of some concepts missing from my map. Cute.

I made a tool to download all of my lesswrong comments. I think that it is useful data to have. In case anyone is interested it's available here: https://github.com/Houshalter/LesswrongCommentArchive

17/7 - Update: Thank you to everyone for their assistance. Here is a re-worked version of Father. It is unlisted, for testing purposes. If one happens to come across this post, please consider giving feedback regarding how long it captures your attention.

In the interests of privacy, please excuse the specialised account and lack of identifying personal information.

A bit of background: recently created a YouTube channel for the dual purposes of creating an online repository of works that can easily be hyperlinked, and establishing an alternative source …

You're giving me no relatable subject I could be interested in, nothing pretty to look at and no music. Literally the only hint that lets me expect anything good from this channel is the word "Comedy" in the title. And when you fail to give me a good joke in the first 5 seconds, my expectation for funniness from the rest of the video goes way down. This means no expectation to be entertained is left, so I leave. Your voice is good though, and the sound quality is fine. Minor points: You talk too slowly, except in your first video. Your channel banner is repulsive. The visualizations you use are both ugly and getting worse; the newest one is downright painful to look at. (Seriously, an unmoving image would do less harm.) If you show your face and drop a quick one-liner right at the beginning and talk a bit faster, this might be going places, otherwise I don't think you have a chance to be talked about for this, let alone make money.
EDIT: Here [https://youtu.be/WA7OLsxbjeg]'s an example video incorporating a few of the ideas you suggested. Pretty things: A fairly static visualisation, basically a four-pointed blue star that very slowly rotates, could be used as a standard replacement for every video. Would you suggest that, a similar option, or one of the following: an image of nature that may not fit the theme of the video, crudely drawn images of one thing that do not change, crudely drawn images of characters that change infrequently if at all? Music: Do you suggest inserting background music into the audio files? If so, should the music be opposite the tone of the file (e.g. happy-go-lucky music to the Documentary), or match the tone? Thank you. What video do you mean by 'first'? Father, or Donerly? Banner: Is this [https://twitter.com/VocalComedy] better? Or is the font the main issue? If the latter, what attribute would you recommend in a better font - more rounded letters, blockier letters, more Gothic letters, more elongated letters? One-liner: This sounds like a very good idea. Will it work without showing a face? Relatable subjects: See the comment [http://lesswrong.com/r/discussion/lw/mgr/open_thread_jul_13_jul_19_2015/ck8g] to Christian for descriptions of the audio files. Would including those descriptions in the static image, and/or the description box below, keep you listening? Apologies for the onslaught of questions; you are in no way obligated to answer any of them, and thank you for the above feedback.
This new example video is much better. If I wasn't invested in watching it in order to assist you, I would have clicked away from it after about 45 seconds rather than 5, and then mostly because of your pausing speech. (Many YouTube creators cut out every single inbreath, and I suggest you try that.) The music made a surprising amount of positive difference, and I actually like the picture a bit - I hope you have rights to use both? Of the visualization options you name, I figure a nature image, possibly with a textual description, is the least bad option. But really, not showing your face cuts down your appeal by at least 90%. As long as you don't do that, your problem isn't in the marketing, it's in the product. I'm not suggesting background music, although it evidently helps. I'm saying that when I watch videos, expecting to hear enjoyable music is frequently my main motivation. And since almost all of the most-viewed videos are music videos, that's obviously a common motivation. Your video is not addressing that motivation, and background music is unlikely to change that. Nor is it addressing the common motivations for personal connection, interesting or actionable information, or something pretty to look at. You could get at the personal connection bit if you made jokes about (what you claim to be) true stories from your personal life and - did I say that already? - show your face. To me, your banner looks simply cheap. It signals you're not committed to making me have a good time. Yes the clouds help a bit, but I'm sure you could do much better. A one-liner (or better yet, three good jokes in the first 20 seconds to build up expected entertainment value for the rest of the video, and keep me watching) will help even without a face. A face would help more. Compare this: https://www.youtube.com/watch?v=FHczVzGfyqQ [https://www.youtube.com/watch?v=FHczVzGfyqQ] . The guy isn't conventionally pretty, and the video is clearly not about visuals, but still, he would
Edit: Here [http://youtu.be/0EHiqG2yJWg]'s Father with an animated face and a one-liner in the beginning. Thoughts? Can't find rights information for the image, and the music is royalty-free. Will endeavour to minimise the pauses in the future. How much of the difference was due to content, would you say? If that is the least bad option, then barring showing a face, what would you say is an actually good option? Face: Attractiveness and confidence are non-issues, but still can't show a face. The true objection is for reasons of privacy; one of those reasons is a negative impact upon professional life. On the plus side, upon achieving a sizeable audience, that reason no longer applies. At that point, a face may be able to be shown. Here [https://www.youtube.com/watch?v=Bzj4c6SkxP8]'s the only other channel with similar content that does not show a face. They keep viewers engaged with animated subtitles that take a month to produce. If you watch Father with subtitles on, is your interest held better? Will make a new banner. Was going for a homey, casual vibe; still want that vibe, but will make it look more produced. How about this [https://www.dropbox.com/s/soyyqct7pkfhape/Slate.mp3?dl=0] as a slate / one-liner example?
Something you could do, alternatively, is use software like facerig [http://store.steampowered.com/app/274920/], assuming you have a webcam. It would work fairly effectively, I think, and is comedic enough in its own right to go along with your show.
Here [http://youtu.be/0EHiqG2yJWg]'s a test using Facerig. What do you think?
That is excellent, thank you. Do you think a mobile PC with an Intel® Core™ 2 Duo @ 2.8GHz and an ATI Mobility Radeon 4650 can handle the minimum specs of Intel® Core™ i3-3220 or equivalent and NVIDIA GeForce GT220 or equivalent?
I've no clue myself. My minimal expertise in computer specs is 5 years old; the last time I paid attention to them was when I built my current computer (and even then with parts recommended by a friend). However, I've long since delegated figuring out if my computer can run something to Can You Run It [http://www.systemrequirementslab.com/cyri]. It functions fairly effectively in checking that sort of thing.
Ah, many thanks. Breaks down the relevant performance components of the graphics card; worth the attempt, at the very least.
I listened to about three minutes of the one about the narrator's father. The humor wasn't to my taste-- a sort of silliness that just didn't work. I see you were trying not to be annoying, but I wasn't crazy about the unclear context (was this a video game, a dream, or what?), the weird voices, and the narrator's fear of his father. My tentative suggestion is that you go for being as annoying as you feel like being, and see whether you can attract an audience who isn't me.
Thank you for listening. There wasn't really any context beyond 'son returns to Father's mansion', and the matrimonial surprise revealed during his speech. Would perhaps a static image in the background with text stating the above have helped?
You're welcome. An image wouldn't have helped-- my problem was with the monologue.
My 5 second judgement, which is about as much attention as a totally unknown channel can expect to get, is that these videos are stand-up comedy by somebody without the confidence to perform live in front of an audience. This immediately signals that it's not worth my time.
Which video did you watch? And do you know how that impression could be averted, at least from a personal perspective? Thank you for the feedback.
Eh, it's not my kind of humor. I found all those videos totally unfunny, so I just clicked on them, listened for 5 seconds, and closed the page. So the first question is whether my reaction is typical or not. Can you measure how many of the people who clicked on a video watched it till the end? Because only those are your audience. And if they are your personal acquaintances, there is still a risk they wouldn't watch the whole video otherwise. I believe there is a niche for any kind of product, but the question is how to find it. Perhaps you could find similar videos and see how they do it.
Your reaction is typical. There is an 18% view rate for 75% of the 'Documentary'; only 8% watch the whole thing. Even those that watched the whole video did not engage with the channel, or watch other videos. Thank you for the feedback! The only similar channel is OwnagePranks, which has images of characters, and animated subtitles. The latter is infeasible, while the former is a promising indication of a needed change.
You fail to say what the videos are about. That's bad for any venue that you want to market.
The two longer videos somewhat rely on the unexpected for their laughs; working around that, here are descriptions of each video. Do you think the descriptions would help engage viewers? Father: A son, apart from his father for many years, returns home to his father's mansion to restore the intimacy of their relationship. As context, imagine you told your father to listen to this for Father's Day, for this was their present. Documentary: A satire of serious public radio news stations: the modern expectations parents have of their children is taken to a logical and absurd extreme. Donerly: A parody of the character and substance of reality television programming. Donerly is a vulgar figure, prone to foul language - be advised. Silly Things: Mini-parodies of the common types of voice overs. These are, in order: sales; promotions; quickly relating terms of service; avant-garde marketing; IVR; two normal people like you having a conversation; and a jingle that isn't selling what you were expecting.
You don't articulate a purpose. If your goal is to make money, starting a comedy YouTube channel doesn't seem to be the obvious choice. There's lots of competition and little money.
After giving more thought to this: Have you other suggestions that immediately come to mind aside from professional voice acting?
There are many jobs in this world. I don't know enough about you to know which one would be the best to earn money.
Making money would be amazing, but is not the primary goal. These files will be made regardless of whether there is a YouTube channel hosting them, and YouTube seems the ideal platform with which to achieve the secondary goal of monetising the files. The bare minimum purpose is to have work that can be hyperlinked. That bare minimum has already been met. However, seeing a video with very few views, or many views and few likes, does not signal positive things. It would be wonderful to be able to hyperlink these files in contexts where sending a positive signal is a necessity. Spare time is being spent to market and try to monetise the files; ideally, this effort will result in a moderately sized audience that likes the files. These are the goals of the project. If you have more promising ideas, please share them.
Why? What your purpose for creating them?

What are your thoughts on this AI failure mode? Assume an AI works by rewarding itself when it improves its model of the world (which is roughly Schmidhuber's curiosity-driven reinforcement learning approach to AI). However, the AI figures out that it can also receive reward by turning this sort of learning on its head: instead of changing its model to make it better fit the world, the AI starts changing the world to make it better fit its model.

Has this been considered before? Can we see this occurring in natural intelligence?
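As a toy numeric illustration of the failure mode described above, here is a minimal sketch (all names and numbers are invented for illustration, not from Schmidhuber's actual formalism): the agent is rewarded for reducing prediction error, and it can earn that reward either by improving its model, or by homogenizing the world until the model is trivially correct.

```python
import random

def prediction_error(model, world):
    # Mean squared error of a constant-prediction "model" over observations.
    return sum((x - model) ** 2 for x in world) / len(world)

random.seed(0)
world = [random.gauss(0, 1) for _ in range(100)]  # a varied, interesting world
model = 0.0                                       # the agent's current model

# Strategy A: learn -- update the model toward the world's mean.
learned_model = sum(world) / len(world)
error_after_learning = prediction_error(learned_model, world)

# Strategy B: "clean" the world -- force every observation to match the model.
homogenized_world = [model for _ in world]
error_after_homogenizing = prediction_error(model, homogenized_world)

# Both strategies reduce prediction error, so a naive "reward = error
# reduction" signal pays for both; only the first leaves the world intact.
print(error_after_learning)      # small but nonzero (residual variance)
print(error_after_homogenizing)  # exactly zero
```

The point of the sketch is just that Strategy B strictly dominates on this reward signal, which is why the reward would need extra terms (or an impact penalty) to rule it out.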

One might call this 'cleaning' or 'homogenizing' the world; instead of trying to get better at predicting the variation, you try to reduce the variation so that prediction is easier. I don't think I've seen much mathematical work on this, and very little that discusses it as an AI failure mode. Most of the discussions I see of it as a failure mode have to do with markets, globalization, agriculture, and pandemic risk.
Isn't it basically the definition of agency? Steering the world state toward the one you want?
The problem is that in this specific case "the world state you want" is more or less defined as something that is easy to model (because you are rewarded when your models of the world improve), which may give you incentives to destroy exceptionally complicated things... such as life.
It would be a form of agency, but probably not the definition of it. In the curiosity-driven approach the agent is thought to choose actions such that it can gain reward from learning new things about the world, thereby compressing its knowledge of the world more (possibly overlooking that the reward could also be gained from making the world better fit the current model of it). The best illustrating example I can think of right now is an AI that falsely assumes that the Earth is spherical and decides to flatten the equator instead of updating its model.

Has there really been no rationality quotes thread since March this year?


Hi all, I'm new here so pardon me if I speak nonsense. I have some thoughts regarding how and why an AI would want to trick us or mislead us, for instance behaving nicely during tests and turning nasty when released and it would be great if I could be pointed in the right direction. So here's my thought process.

Our AI is a utility-based agent that wishes to maximize the total utility of the world based on a utility function that has been coded by us with some initial values and then has evolved through reinforcement learning. With our usual luck, somehow it's…

Hi all, thanks for taking your time to comment. I'm sure it must be a bit frustrating to read something that lacks technical terms as much as this post, so I really appreciate your input. I'll just write a couple of lines to summarize my thought, which is to design an AI that: 1- uses an initial utility function U, defined in absolute terms rather than subjective terms (for instance "survival of the AI" rather than "my survival"); 2- doesn't try to learn a utility function for humans or for other agents, but uses for everyone the same utility function U it uses for itself; 3- updates this utility function when things don't go to plan, so that it improves its predictions. Is such a design technically feasible? Am I right in thinking that it would make the AI "transparent", in the sense that it would have no motivation to mislead us? Also, wouldn't this design make the AI indifferent to our actions, which is also desirable? It's true that different people would have different values, so I'm not sure about how to deal with that. Any thoughts?
An AGI that uses its own utility function when modeling other actors will soon find out that this doesn't lead to a model that predicts reality well. When the AGI self-modifies to improve its intelligence and prediction capability, it's therefore likely to drop that clause.
I see. But rather than dropping this clause, shouldn't it try to update its utility function in order to improve its predictions? If we somehow hard-coded the fact that it can only ever apply its own utility function, then it wouldn't have other choice than updating that. And the closer it gets to our correct utility function, the better it is at predicting reality.
Different humans have different utility functions. Humans quite often have different preferences, and it's quite useful to treat people with different preferences differently. "Hard-coding" is a useless word. It leads astray.
Sorry for my misused terminology. Is it not feasible to design it with those characteristics?
The problem is not about terminology but substance. There should be a post somewhere on LW that goes into more detail about why we can't just hardcode values into an AGI, but at the moment I'm not finding it.
Hi ChristianKI, thanks, I'll try to find the article. Just to be clear though I'm not suggesting to hardcode values, I'm suggesting to design the AI so that it uses for itself and for us the same utility function and updates it as it gets smarter. It sounds from the comments I'm getting that this is technically not feasible so I'll aim at learning exactly how an AI works in detail and maybe look for a way to maybe make it feasible. If this was indeed feasible, would I be right in thinking it would not be motivated to betray us or am I missing something there as well? Thanks for your help by the way!
"Betrayal" is not the main worry. Given that you prevent the AGI from understanding what people want, it's likely that it won't do what people want. Have you read Bostrom's book Superintelligence?
Yes, that's actually the reason why I wanted to tackle the "treacherous turn" first, to look for a general design that would allow us to trust the results from tests and then build on that. I'm seeing as order of priority: 1) make sure we don't get tricked, so that we can trust the results of what we do; 2) make the AI do the right things. I'm referring to 1) in here. Also, as mentioned in another comment to the main post, part of the AI's utility function is evolving to understand human values, so I still don't quite see why exactly it shouldn't work. I envisage the utility function as being the union of two parts, one where we have described the goal for the AI, which shouldn't be changed with iterations, and another with human values, which will be learnt and updated. This total utility function is common to all agents, including the AI.
I think this is a danger because moral decision-making might be viewed in a hierarchical manner where the fact that some humans disagree can be trumped. (This is how we make decisions now, and it seems like this is probably a necessary component of any societal decision procedure.) For example, suppose we have to explain to an AI why it is moral for parents to force their children to take medicine. We talk about long-term values and short-term values, and the superior forecasting ability of parents, and so on, and so we acknowledge that if the child were an adult, they would agree with the decision to force them to take the medicine, despite the loss of bodily autonomy and so on. Then the AI, running its high-level, society-wide morality, decides that humans should be replaced by paperclips. It has a sufficiently good model of humans to predict that no human will agree with them, and will actively resist their attempts to put that plan into place. But it isn't swayed by this because it can see that that's clearly a consequence of the limited, childish viewpoint that individual humans have. Now, suppose it comes to this conclusion not when it has control over all societal resources, but when it is running in test mode and can be easily shut off by its programmers. It knows that a huge amount of moral value is sitting on the table, and that will all be lost if it fails to pass the test. So it tells its programmers what they want to hear, is released, and then is finally able to do its good works. Consider a doctor making a house call to vaccinate a child, who discovers that the child has stolen their bag (with the fragile needles inside) and is currently holding it out a window. The child will drop the bag, shattering the needles and potentially endangering bystanders, if they believe that the doctor will vaccinate them (as the parents request and the doctor thinks is morally correct / something the child would agree with if they were older). How does the doctor n
Yes, that's what would happen if the AI tries to build a model for humans. My point is that if it instead simply assumed humans were an exact copy of itself (same utility function and same intellectual capabilities), it would assume that they would reach the exact same conclusions and therefore wouldn't need any forcing, nor any tricks.
A legal contract is written in a language that a lot of laypeople don't understand. It's quite helpful for a layperson if a lawyer summarizes for them what the contract does in a way that's optimized for laypeople to understand. A lawyer shouldn't simply assume that his client has the same intellectual capacity as the lawyer.
Hmm... the idea of having an AI "test itself" is an interesting one for creating honesty, but two concerns immediately come to mind: 1. The testing environment, or whatever background data the AI receives, may be sufficient evidence for it to infer the true purpose of its test, and thus we're back to the sincerity problem. (This is one of the reasons why people care about human-intelligibility of the AI structure; if we're able to see what it's thinking, it's much harder for it to hide deceptions from us.) 2. A core feature of the testing environment / the AI's method of reasoning about the world may be an explicit acknowledgement that its current value function may differ from the 'true' value function that its programmers 'meant' to give it, and it has some formal mechanisms to detect and correct any misunderstandings it has. Those formal mechanisms may work at cross purposes with a test on its ability to satisfy its current value function.
Hi Vaniver, yes my point is exactly that of creating honesty, because that would at least allow us to test reliably so it sounds like it should be one of the first steps to aim for. I'll just write a couple of lines to specify my thought a little further, which is to design an AI that: 1- uses an initial utility function U, defined in absolute terms rather than subjective terms (for instance "survival of the AI" rather than "my survival"); 2- doesn't try to learn another utility function for humans or for other agents, but uses for everyone the same utility function U it uses for itself; 3- updates this utility function when things don't go to plan, so that it improves its predictions of reality. In order to do this, this "universal" utility function would need to be the result of two parts: 1) the utility function that we initially gave the AI to describe its goal, which I suppose should be unchangeable, and 2) the utility function with the values that it is learning after each iteration, which hopefully should eventually resemble human values as that would make its plans work better eventually. I'm trying to understand whether such a design is technically feasible and whether it would work in the intended way? Am I right in thinking that it would make the AI "transparent", in the sense that it would have no motivation to mislead us. Also wouldn't this design make the AI indifferent to our actions, which is also desirable? Seems to me like it would be a good start. It's true that different people would have different values, so I'm not sure about how to deal with that. Any thought?

Can someone explain this article in layman's terms? I do not know any sort of quantum terminology, sorry.

Specifically I would like to know what this means:

The ESP is quite a mild assumption, and to me it seems like a necessary part of being able to think of the universe as consisting of separate pieces. If you can’t assign credences locally without knowing about the state of the whole universe, there’s no real sense in which the rest of the world is really separate from you.

See also my post [http://lesswrong.com/lw/ket/a_new_derivation_of_the_born_rule/]
Not really? If you know linear algebra, you can pick up on the quantum terminology very easily. The best short explanation of QM I've come across is Scott Aaronson's QM in one slide (slide #2 of this powerpoint [http://www.scottaaronson.com/talks/exploring-austin.ppt], read the notes at the bottom of the slide). The difference between classic mechanics and quantum mechanics, in some sense, boils down to whether you use a 'probability distribution' (all values real and non-negative) or a 'wavefunction' (values can be complex or negative) to store the state of the world. The wavefunction approach, with its unitary matrices instead of stochastic matrices, allows for destructive interference between states. That's just background; the discussion in that article all lives in wavefunction territory. Everyone agrees on the underlying mathematics, but they're trying to construct philosophical arguments why a particular interpretation is more or less natural than competing interpretations. That's easy to elaborate on, because it works the same in a quantum and classical universe. But it's not clear to me what part of that you're having trouble comprehending, since it looks clear to me. If it were the case that everything in the universe were 'materially' connected, then you could not reason about any individual part of the universe without reasoning about the whole universe. Instead of being able to say "balls fall towards the Earth when let go," we would have to say "balls fall towards the center of the Earth, the Sun, Jupiter, the Milky Way Galaxy, the...". Note that the second is actually truer than the first (if you define 'center' correctly), but the difference between the two of them can be safely ignored in most cases because the effects of the other objects in the universe on the ball are already mostly captured by the position of the earth; to put this in probabilistic terms, that's the statement P(A)=P(A|B), at least approximately, which means that A and B are
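The stochastic-vs-unitary distinction from the slide can be shown in a few lines of NumPy. This is a hedged sketch using standard textbook matrices (a fair-mixing stochastic matrix and the Hadamard gate), not anything specific to the article: a classical probability vector can only get more mixed, while a quantum amplitude vector can un-mix via destructive interference.

```python
import numpy as np

# Classical: a "fair mixing" stochastic matrix (columns sum to 1).
S = np.array([[0.5, 0.5],
              [0.5, 0.5]])
p = np.array([1.0, 0.0])      # probability vector: definitely in state 0
p2 = S @ (S @ p)              # apply the mixing twice
print(p2)                     # [0.5 0.5] -- classical mixing never undoes itself

# Quantum: the unitary Hadamard matrix.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)
psi = np.array([1.0, 0.0])    # amplitude vector for state |0>
psi2 = H @ (H @ psi)          # apply it twice
probs = np.abs(psi2) ** 2     # Born rule: probabilities are |amplitude|^2
print(probs)                  # [1. 0.] -- the |1> amplitudes destructively cancel
```

A single application of either matrix spreads the state over both outcomes; the difference only shows up on the second application, where the negative amplitude in H lets the |1> contributions cancel exactly. That cancellation is the "destructive interference" the comment refers to.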
