After Heartbleed, I got really irritated at how much time it took to hunt down the "change password" links for all the services I used. So, in the name of fighting trivial inconveniences, I made a list of direct password-and-two-factor-updating links for various popular services: UpdateYourPasswords.
I'm reading Ayn Rand's "The Virtue of Selfishness" and it seems to me that (a part of) what she tried to say was approximately this:
Some ethical systems posit a false dichotomy between "doing what one wants" and "helping other people", and then derive an 'ethical' conclusion that "doing what one wants" is evil and "helping other people" is good, by definition. Which is nonsense. Also, humans can't psychologically abstain completely from the "doing what they want" part (even after removing "helping other people" from it), but instead of realising the nonsense of such ethics, they feel guilty, which makes them easier to control.
I don't read philosophy, so I can't tell if someone has said it exactly like this, but it seems to me that this is not a strawman. At least it seems to me that I have heard such ideas floating around, although not expressed this clearly. (Maybe it's not exactly what the original philosopher said; maybe it's just a popular simplification.) There is the unspoken assumption that when people "do what they want", that does not include caring about others; that people must be forced into p...
I've been reading Pinker's "Better Angels of Our Nature" and it seems to me that people don't need to be psychopaths to have difficulty feeling empathy and concern for other people. The villagers who used to enjoy cat burning (an example Pinker uses, which HPMOR readers may also recognize) are a good illustration. He suggests that our feelings of empathy have increased over time, although he's not sure for what reason. So earlier, a couple of people in their better moments might have claimed caring about others was important, but generally people were more selfish, so the two fell out of sync.
I mean, even today when you say you care about other people, you don't suddenly donate all of the money that isn't keeping you alive to effective charities, because you don't actually feel empathy for every single other person on this earth. You don't have to be a psychopath for that to happen.
I have recently discovered a technique called "ranger rolling" which has proven ridiculously useful in dealing with my clothing. It basically allows you to turn each item of clothing into an individual block, which you then use to play real life Tetris. This is a much better system than treating them as stacks of paper (which is what happens when you fold them) or as amorphous blobs (which is what happens when you shove them into drawers however you can).
Looks interesting, but I'm assuming this doesn't work if I like to iron my clothes before storing them. Is that right, or does the rolling not majorly detract from the ironing?
I don't iron my clothes before storing them, so I couldn't tell you, but surely this is an opportunity to practice the virtue of empiricism? Iron a couple of shirts, carefully roll them, leave them for a day or two, and check how the wrinkling compares to your usual method of storage. Then share your results for goodwill and karma.
An idea: prestige-based prediction market.
Prediction markets (a.k.a. putting your money where your mouth is) are popular among rationalists, but kinda unpopular with governments. It is too easy to classify them as gambling. But if we remove the money, people have less incentive to get things right.
But there is also a non-monetary thing people deeply care about: prestige. This can't be used on a website with anonymous users, but could be used with famous people who care about what others think about their opinions: journalists or analysts. So here is a plan:
A newspaper (existing or a new one) could make a "Predictions" section, where experts would be asked to assign probabilities to various outcomes. If they guessed correctly, they would gain points; if they guessed incorrectly, they would lose points. The points would influence their position on the page: opinions of predictors with more points would be at the top of the page (with larger font); opinions of predictors with fewer points would be at the bottom (with smaller font). Everyone starts with some given number of points; if someone drops below zero, they are removed from this newspaper section, forever. And a new p...
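The mechanics sketched above can be made concrete. Here is a minimal sketch, purely as an illustration of the proposal (all names, the starting balance, and the use of the logarithmic scoring rule are my own assumptions, not part of the original idea); the log score is one standard way to reward calibrated probability estimates:

```python
import math
from dataclasses import dataclass

STARTING_POINTS = 100.0  # assumed starting balance for every new predictor

@dataclass
class Predictor:
    name: str
    points: float = STARTING_POINTS
    active: bool = True  # removed forever once points drop below zero

    def score(self, probability: float, outcome: bool) -> None:
        """Apply the log score: gain points for confident correct predictions,
        lose points for confident wrong ones. Probabilities are clamped so a
        single 0% or 100% prediction can't produce an infinite penalty."""
        if not self.active:
            return
        p = min(max(probability, 0.01), 0.99)
        p_outcome = p if outcome else 1.0 - p
        # log2(p/0.5) is zero for a 50% guess: no reward for hedging at even odds
        self.points += 10.0 * math.log2(p_outcome / 0.5)
        if self.points < 0:
            self.active = False  # dropped from the section permanently

def page_order(predictors):
    """Active predictors sorted by points, highest first
    (top of the page, largest font)."""
    return sorted((p for p in predictors if p.active),
                  key=lambda p: p.points, reverse=True)
```

For example, a predictor who said 90% and was right moves above one who said 90% and was wrong, and a predictor who keeps making confident wrong calls eventually goes below zero and is removed.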
If the only way to see their names at the top of the page in big font is to provide correct predictions... why not?
The classical prediction market relies on opinions of people who got selected because they really wanted to make money on the prediction market. What's the big difference?
Okay... I can imagine that if someone's goal is to bring attention to themselves, they might make correct predictions to get to the top, and then intentionally make shocking (incorrect) predictions to bring even more attention to themselves. Kinda like people with too much karma sometimes start trolling, because, why not.
Does anyone have a good grasp of the literature on the relationship between drinking and intelligence?
Yes, you would be right. I don't think drinking helps with IQ -- it's mostly used as a stress reliever and a social lubricant, in which roles it functions well.
Question: what are the norms on showing up to meetups for the first time? I happen to be in Berkeley this week, and since there's a meetup this evening I thought I might check it out; should I just show up, or should I get in touch with the organizers and let them know I'm coming/around?
I predict that the answer will be something like "new attendees are welcome, announced or otherwise, but {insert local peculiarity here, e.g. 'Don't worry about the sign that says BEWARE OF THE LEOPARD, we're just getting ready for Towel Day'}". However, enough of my probability mass is elsewhere that I thought I'd check. Also, I couldn't find a definitive statement of the community norms within reach of Google, so I thought I'd change that by asking reasonably publicly.
What happened to the plans of creating more thematic subforums? Is anyone who's in charge willing to implement them?
The Person of Interest TV show is apparently getting pretty explicit about real-world AGI concerns.
...With Finch trying to build a machine that can predict violent, aberrant human behavior, he finally realized that the only solution was to build something at least as smart as a human. And that’s the moment we’re in right now in history. Forget the show. We are currently engaged in an arms race — a very real one. But it’s being conducted not by governments, as in our show, but by private corporations to build an AGI — to build artificial intelligence roughly as intelligent as a human that can be industrialized and used toward specific applications.
...I’m pretty confident that we’re going to see the emergence of AGI in the next 10 years. We have friends and sources within Silicon Valley — there is currently a headlong rush and race between a couple of very rich people to try to solve this problem. Maybe it will even happen in a way that no one knows about; that’s the premise we take for our show. But we thought it would be a fun idea that the Manhattan Project of our era — which is preventing nuclear terrorism, that’s the quiet thing that people have been diligently working on for 10
Throwing a half-formed idea out there for feedback.
For the past few decades, or perhaps even centuries, it seems younger people have consistently been right in their positions on social issues. Perhaps society should take that into account, and weigh their opinions more heavily. Right now, this would mean that gay marriage, marijuana legalization, abortion, etc. would all very quickly become legal (in the US at least).
Possible counterarguments:
Younger people haven't been right, they merely won the demographic battle and had their way. Current norms are wor
I think there's some truth to your counterarguments 1 and 2. Young people are easier to sway into any change-oriented movement, so any push for sweeping change will have a lot of youth behind it, even if it's an older person pulling the strings and reaping the benefits.
It was the youthful Red Guards who were guilty of the worst Cultural Revolution atrocities, and Pol Pot's regime was even more reliant on adolescent murderers killing everyone who held traditional values or had received a traditional education.
In contrast, Deng Xiaoping was over 70 years old when he instituted his post-Cultural-Revolution reforms.
...Aside from teaching basic mathematical skills and literacy, the major goal of the new educational system was to instill revolutionary values in the young. For a regime at war with most of Cambodia's traditional values, this meant that it was necessary to create a gap between the values of the young and the values of the nonrevolutionary old.
The regime recruited children to spy on adults. The pliancy of the younger generation made them, in the Angkar's words, the "dictatorial instrument of the party." In 1962 the communists had created a
I'm Against Moral Progress. I don't think moral progress, the way we usually talk about it, is well founded. We observe moral change, then we decide that since past moral change made values ever more like our present values on average (something that is nearly a tautology), the process itself must be good, despite our having no clear understanding of how it works.
A similar confusion fogs many people's thinking about evolution: having noticed that they like opposable thumbs and that over time past hominids have come to resemble present hominids ever more, they often imagine evolution to be an inherently good process. This is a horribly wrong perception.
Younger people haven't been right, they merely won the demographic battle and had their way.
Young people in general are good at picking winners and quickly adapting to what is popular. Younger people's status quo bias will also fixate on newer norms, compared to older people whose aliefs say the status quo is something else. Winners will also tend to try to influence them, especially in our society, where voting power and public opinion grant legitimacy.
Younger people haven't been right, but despite being a young person who has over the past 3 ye...
For the past few decades, or perhaps even centuries, it seems younger people have consistently been right in their position on social issues.
I guess you're not from a country that had Stalin Youth around in the 1970s. (We weren't an Eastern Bloc country either, they were just useful idiots.)
In the 1970s, intelligent young American students at Harvard favored the Khmer Rouge.
...Since the U.S. incursion into Cambodia in the spring of 1970, and the subsequent saturation-bombings, The Crimson has supported the Khmer Rouge in its efforts to form a revolutionary government in that country. …
In the days following the mass exodus from Phnom Penh, reports in the western press of brutality and coercion put these assumptions into doubt. But there were other reports on the exodus. William Goodfellow in the New York Times, and Richard Boyle, the last American to leave Phnom Penh, in the Colorado Daily, reported that the exodus from major cities had been planned since February, that unless the people were moved out of the capital city they would have starved, and that there was a strong possibility of a cholera epidemic. The exodus, according to these reports, was orderly; there were regroupment centers on all of the major roads leading out of Phnom Penh, and people were reassigned to rural areas, where the food supplies were more plentiful.
There is no way to assess the merits of these conflicting reports—and if there were instances of brutality and coercion, we condemn them—but the goals of
For the past few decades, or perhaps even centuries, it seems younger people have consistently been right in their position on social issues.
Taboo "right."
So recently I've been philosophizing about love, sex, and relationships. I'm a man, and I experience lust much more often than love. However, it seems like long-term relationships are better than short-term relationships for a variety of reasons: consistently having sex through short-term relationships as a man requires that you spend a lot of time chasing women, and I've read about many mental health benefits that come with being in a loving relationship that I assume don't come in to play if you're only having short-term relationships.
I'm an outgoing, m...
This writing style has become a cliche:
Imagine a hypothetical situation X. In situation X, you would surely agree that we ought to do Y. Surprise! We actually are living in hypothetical situation X. So we are obliged to do Y.
Are there any ways to make it less obnoxious?
There's a woman that has recently started to treat me poorly, and I can't figure out why. I would like help in designing the most efficient social experiment that helps me to solve this riddle.
If it's not clear from the disclaimer above, this post is about a personal situation and contains details about the two persons involved and their feelings.
Some possibly useful background info: it's about a year that we dance regularly together. I like her a lot and some months ago I have told her so, trying to be as level-headed as possible. She replied that she is ...
Here's an intervention, rather than a test: If she says something that hurts your feelings again, just say, "I know you're joking around, but that kind of hurts my feelings."
Instead of informing your model, inform hers.
I am far from an expert in these matters, but would advise against both teasing back and saying explicitly that you interpret the teasing as "treating me worse than usual".
[EDITED to add: To be clear, I mean "don't do both of these together" rather than "both of these are individually bad ideas".]
This never occurred to me until today, but can you solve the 'three wishes from a mischievous but rule abiding genie' problem just by spending your first wish on asking for a perspicuous explanation of what you should wish for? What could go wrong?
Would an average year in the life of an em in Hanson's Malthusian explosion scenario really be >0 QALY? Hanson has kinda defended this scenario because the ems would want to be alive but I don't think that means anything. I remember reading about mice and painful wireheading (probably Yvain's post) and how you can make mice want that kind of wireheading even though it's painful. Similarly it's easy to imagine how people would want to live painful and miserable lives.
Meetup posts have started appearing on the RSS feed for lesswrong Main (http://lesswrong.com/new/.rss).
I could switch my RSS feed to only include promoted posts, but that would increase the problem of the hiddenness of non-promoted Main posts. Is there something else that I could do, or does this need to be fixed on Less Wrong's end?
I recently posted on the rationality diary thread about a study deadline / accountability system I've been doing with zedzed. So far it's worked well for me, and I'd be happy to help others in the same way that zedzed is helping me. If anybody wants to use such a system for what they're studying, just ask. Unfortunately for most subjects I can't provide anything more than a deadline and some accountability, since I probably don't know the subject too well.
Also, if anybody else is willing to provide a similar service to the community (and perhaps can even provide some subject-specific guidance), please respond below so that people can contact you.
I'm having a problem with posting comments to Slate Star Codex-- they're rejected as spam, even though it's the email address I've been using and I haven't included any links. Anyone else having this problem?
Edited to add: whatever it was got fixed.
"Super Rationality Adventure Pals the Saturday morning cartoon! On 1080p from a BitTorrent near you." Please post plotlines and excerpts.
[an old comment I thought I'd revive.]
I’m a moderately long-term lurker (a couple of years), and in the last ~6 months have had much more free time due to dropping all my online projects to travel around Asia. As a result I’ve ended up reading a lot of LW and having a huge amount of time to think. I really like understanding things, and it feels like a lot of the parts of how the interesting bits of reality work which were tricky for many years are making much more sense. This is pretty awesome and I can’t wait to get back home and talk to people about it (maybe visit some LW meetups...
New family of materials discovered by accident
Does this suggest a problem with using Bayes to generate hypotheses? My impression is that Bayes includes generating hypotheses which look in the most likely places. Are there productive ways of generating accidents, or is paying attention when something weird happens the best we can do?
Does anyone understand how the mutant-cyborg monster image RationalWiki uses represents Less Wrong? I've never understood that.
Shit rationalists say: a fellow LessWronger describing a person from Facebook:
He is an intellectual. Well, not in the "he reads LessWrong" sense, but in the "he can express himself using a whole sentence" sense.
I kept laughing for a few minutes (sorry, it's probably less funny here out of context), and I promised I will post this on LW, keeping the author's identity secret. Ignoring the in-group applause lights, it is a nice way to express a difference between a person who merely provides interesting opinions, and a person who also ...
I’m a moderately long-term lurker (a couple of years), and in the last ~6 months have had much more free time due to dropping all my online projects to travel around Asia. As a result I’ve ended up reading a lot of LW and having a huge amount of time to think. I really like understanding things, and it feels like a lot of the parts of how the interesting bits of reality work which were tricky for many years are making much more sense. This is pretty awesome and I can’t wait to get back home and talk to people about it (maybe visit some LW meetups so there’s people with less inferential distance to talk to).
I’ve got a few ideas which seem relevant and possibly interesting/useful/new to LWers, but am hesitant to post up without some feedback because it’s more than a little intimidating, especially since most of the posts I’d like to make seem like they should go in main not discussion. I’d like someone to bounce ideas off and look over my posts so I know I’m not just going over old ground, skipping ahead too fast without explaining each step properly, or making silly mistakes.
An outline of the posts I’d like to write: Fuzzy Pattern Theory of Identity - Like all concepts in conceptspace, “me”ness is non-binary and blurry, with the central example being me right now, close examples being past and future selves or mes in alternate branches, more distant examples including other humans, and really distant examples including a dog. Proximity to “me” in thingspace seems most usefully defined as “how similar to me, in the ways I attribute importance to, is this configuration of matter”, with examples to test your intuitions about this (e.g. a version of you without a brain is physically very like you, but you probably consider them much harder to identify with than a sibling, or perhaps even your pet cat). Possibly some stuff about the evolutionary usefulness of identity, and how proximity to “me now” can be used as a fairly good measure of how much to follow a being’s preferences, or that may come later.
The Layers of Evolution - Briefly going through the layers of evolution: Single-strand RNA can replicate, but two strands which generate each other are much more efficient, and more complex webs of chemical reactions are more efficient still, but “selfish” molecules could hijack accessible energy from the web to multiply without contributing. Cells separate different webs of self-replicating chemical reactions, giving some protection from rogue molecules at the cost of maintaining a cell wall and slowing molecule-level evolution. Multicellular organisms can be more efficient reproducers in certain ways due to cell specialization and the ability to act on a larger scale, but suffer from individual cells taking more than their fair share and growing to harm the organism. They counteract this by making all cells share the same DNA and not sharing cells, so an organism which has cancerous growth will die but not spread. Tribes of humans work effectively because division of labour and being able to accumulate larger stores of food make us more efficient in groups than as individuals, but make us vulnerable to selfish individuals taking more than their share at a cost to the group. Some more specific parallels drawn between levels of evolution, and how each layer specifically acts to prevent the layer below it from evolving towards “selfish” behaviour, why this happens (co-operation is a great strategy when it works), and why this is difficult (evolution is innately selfish and will lead individuals to exploit the group if they can).
Morality and Maths - Most of the features of morality which occur reliably in many different cultures indicate that it’s a method of enforcing co-operation between members of a group, with parallels to the lower levels of evolution. Examples w/ explanation (avoid harming others, share fairly, help those in need, reproductive co-operation), and the limits of each. Other common trends (often don’t apply outside own tribe/family, don’t override self-preservation generally, punishing non-punishers, having unusual in-group specific traditions, how much of modern globalized society reacts). An argument that it is okay and kind of awesome that morality emerges from statistically advantageous strategies evolution ends up following, and how since conflict on a specific level is inherently unstable while relatively stable co-operation is definitely possible at lower levels and widely agreed to be beneficial, general co-operation may be the eventual rest state for humanity (though likely not perfect co-operation, resources needed to check everyone’s playing fair and discouraging those who are not).
Chairman Yang’s quote about “Extend the self of body outward to the self of group and the self of humanity.”, and how each level of evolution (including morality) can be seen as partially extending the “self” or the priorities of the individual evolving unit to include a larger group in order to gain the co-operation of that group.
Fuzzy Identity and Decision Theory - If it makes sense to talk about how “me” something is as a slightly blurry, non-binary property, this has interesting implications for decision theory. For example, it can help explain hyperbolic discounting (far-future-me is less me than near-future-me, so has smaller weight), and working with your future selves / being worked with by your past selves also has slight parallels with the expanding-self-for-more-co-operation thing. An analysis of how treating identity as non-binary affects each decision theory, how many of the disjunctions between how a decision theory tells us to act in a situation and what our intuition directs us towards arise from the DTs treating identity as binary, how TDT can be seen as a partial implementation of a Fuzzy Identity Decision Theory, and the effects of fully embracing fuzzy identity (I think it can solve counterfactual mugging while giving sane answers to every other problem I’ve considered so far, but I have not formalized it and I could be missing something).
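The hyperbolic-discounting point above can be made concrete with a toy sketch. Everything here is an illustrative assumption of mine (the function names, the decay constant k, and the specific functional form are not part of any worked-out theory): weight the utility accruing to a future self by how "me" that self is, with identity decaying hyperbolically in time.

```python
def identity_weight(years_ahead: float, k: float = 0.2) -> float:
    """How 'me' a self t years in the future is, on a 0-1 scale.
    The 1/(1 + k*t) form is exactly the hyperbolic discount curve,
    here reinterpreted as decaying identity rather than decaying value."""
    return 1.0 / (1.0 + k * years_ahead)

def weighted_utility(utility: float, years_ahead: float, k: float = 0.2) -> float:
    """Utility accruing to a future self, weighted by identification with them."""
    return utility * identity_weight(years_ahead, k)
```

On this toy model, 100 utilons to me-in-20-years count as 100 / (1 + 0.2 * 20) = 20 utilons now, and the same sliding scale of "me-ness" could in principle extend sideways, assigning partial weight to selves in alternate branches or to very similar other people.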
The above posts seem like they could be a mini-sequence, since they all depend fairly heavily on each other and share a theme. Not sure of the title though; my current best is “Identity, Morality, Decision Theory”, which seems like it could be improved.
The Strange Loop of Consciousness - Treating consciousness as a strange loop (the mind sensing its current state, including its sense of its current state) has reduced my confusion on the issue dramatically, though not completely. Some speculation on reasons why evolution could have produced key features of consciousness, labelled as potential just-so stories. I wrote half a post on this a few months ago, but abandoned it mostly because I was worried my point would not come across properly and it would get a bad reaction. Probably best to start from scratch if I get here again.
A Guided Tour of LessWrong - A post talking through the main areas of LW content and important ideas with plenty of links to key posts/places to start exploring and very brief summaries of key points. There’s a lot of really interesting content which is buried pretty deeply in the archives or in long lists of links, it’d be nice to point users at a bunch of nodes from one place.
There’s a bunch of other rough ideas I have for posts, but the above ones (plus something about logical first movers I’m waiting for more feedback from the decision theory google group on before posting) are the things I think I could potentially write a decent post on soon. Rough future ideas include some ideas on raising curious and intelligent children (needs research+talking to people with experience), improving the LW wiki (I’ve founded and run a couple of wikis and know my way around MW, the LW wiki has significant room for improvement), post(s) explaining my opinions on common LW positions (AI takeoff, cryonics, etc).
So, is anyone interested in helping out? I’ve got a lot more detailed reasoning in my head than above, so if you’ve got specific questions about justifications for some part, it may be best to hold them until I’ve made a draft for that post, since I’m likely to address them there anyway. Pointing me at posts which cover things I’m talking about would be good though; I may have missed them and would prefer not to duplicate effort if something’s already been said. I’m thinking I’ll probably write on Google Docs and give read/comment access to anyone who wants it.
I guess you just have to try it.
Make one article. Make it a standalone article about one topic. (Not an introduction to a planned long series of articles; just write the first article of the series. Not just the first half of an article to be continued later; instead choose a narrower topic for the first article. As a general rule: links to already-written articles are okay, but links to not-yet-existing articles are bad, especially if those not-yet-existing articles are used as an excuse for why the existing articles don't have a conclusion.)
Put the article ...
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should start on Monday, and end on Sunday.
4. Open Threads should be posted in Discussion, and not Main.