If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
What has happened to Metamed? Their site is down :-( http://www.metamed.com
It appears that MetaMed has gone out of business. Wikipedia uses the past tense "was" in its page on MetaMed, and provides this as a source for it.
Key quote from the article:
Zvi discusses it a bit on his blog, here
It seems like the business model of charging individuals prices that are that high just doesn't work for a startup without a proven brand.
I wonder if Metamed's problem was that if you were smart and well informed enough to understand the company's value to the average person, you personally didn't need it because you could do the research yourself.
Found a five-year-old comment about HPMoR:
I'm currently twenty-two years old. Over the last two weeks, I've discussed with a couple friends that among the "millennial" generation, i.e., people currently under the age of thirty-five, people profess having goals for some kind of romantic relationship, but they don't act in a way which will let them achieve those goals. Whether they:
it seems the proportion of young people who are and stay single is greater than I would expect. I don't just mean how the fastest-growing household configuration since the 1980s (in the United States) has been single adults. I mean how most of my friends profess a preference for having some romantic relationship in their life, yet most of my single friends stay single, and don't appear to be dating much or doing something else to correct this. Maybe popular culture exerts a normative socia... (read more)
I have something like a potential explanation for it, but it is difficult to formulate in a way that won't be misunderstood. Please, everybody, try to read this post with maximal charity and the benefit of the doubt.
History tends to swing from one extreme to another, as people tend to OVERreact to the problems they see.
Given that it is an OVERreaction, they are usually wrong, but it also points out a problem. You can diagnose the original problems from the overreactions to them.
These overreactions are sometimes exaggerated only in "quantity", in which case a more moderate version of them would be okay, or they often get the direction completely wrong, still they point out how something is a problem and the issues they raise often have SOME truth to them.
For example, Communism/Bolshevism was a huge OVERreaction to the condition of workers under capitalism. It was not a good solution at all, and even making it more moderate (a moderate, limited dictatorship of people who call themselves proletarians?) would not help much, but it pointed out a problem, and now we have better solutions to that problem, such as unions striking when they want a wage raise.
Tangentially, how much is it a problem of "dating", and how much a problem of "dating sane people", when the pool of sane people is already small?
When I was younger, I wanted to have a romantic relationship with a person whom I would perceive as intellectually equal (plus or minus the LessWrong level). Since I barely knew such people... not much luck.
If I could send a message in time back to myself, it would be: "It will take decades until you find someone you can have meaningful conversation with. Meanwhile, relax, and try to fuck any nice body, but don't get attached. Otherwise you will later regret the wasted time." The only problem is, my younger self would be horrified to hear such advice.
How do you suggest people actually implement this 'just date more'?
However attractive, well dressed, and confident you are, you still need to know how to actually approach someone.
A problem is that any attempt to improve attractiveness will lead some people to declare that you are evil or otherwise defective. It's not just PUA stuff; this is far more general: if a guy lifts, that makes him a 'dickhead' according to members of my peer group, while a woman not shaving her armpits makes her strong & empowered (does a man not shaving his face make him empowered?). Conversely, some people believe that not taking care of your appearance makes you a slob.
Then there's the problem that confidence is key. You need to be 110% confident of everything you say, and to truly believe this, you need to internalise it. The problem is then that it spills over into other aspects of life, and you become very badly credence calibrated, potentially leading to serious mistakes because you can't admit that you might be wrong. When you are in a group containing more than one 'alpha male' it becomes impossible to get anything done, even something as simple as choosing a pub to go to, because one alpha male decides to go to one pub, the other decides to go to a different pub... (read more)
Sure, that's the stereotype. But the problem is actually that the signaling model is wrong. Our stereotype wants to associate himself with some concept, so he throws on an item that he associates with that concept: a pinstripe fedora if he likes Thirties mobsters, let's say, or a leather trench if he's seen The Matrix one too many times. It's out of context, it clashes, and the outfit ends up looking worse than the sum of its parts (and being overweight and poorly groomed never helps).
The principle is easy to state: clothes should work in context, including the context of your body. But the point is that those cues are not obvious. There's a whole visual language that needs to be learned before you can reliably present yourself as e.g. gentlemanly, and keeping a laser focus on whatever stereotype you feel like projecting actually isn't the most efficient way to get there. Better to start with the basics.
It doesn't appear this is discussed much, so I thought I'd start a conversation:
Who on LessWrong is uncomfortable with, or doesn't like, so much discussion of effective altruism here? If you are, why?
I want to discuss it because whatever proportion of the LessWrong community is averse to, or even merely indifferent to, effective altruism doesn't express its opinions much. Also, while I identify with effective altruism, I don't only value this site as a means to altruistic ends, and I don't want other parts of the rationalist community to feel neglected.
Personally, I'm indifferent to EA. It seems to me a result of decompartmentalizing and taking utilitarianism overly seriously. I don't really disagree with it, just not interested. As I've mentioned before, I care about myself, my family, my friends, and maybe some prominent people who don't know me, but whose work makes my life better. I feel for the proverbial African children, but not enough for anything more than a token contribution. If LW had a budget, /r/EA would be a good subreddit, though one of those I would rarely, if ever, visit. As it is, I skip the EA discussions, but I don't find them annoyingly pervasive.
That is exactly my own view. I can see the force of the arguments for EA, but remain unmoved by them. I don't mind it being discussed here, but take little interest in the discussions. I have no arguments against it (although the unfortunate end of George Price is a cautionary tale, a warning of a dragon on the way), and I certainly don't want to persuade anyone to do less good in the world.
It's rather like the Christian call to sainthood. Many are called, but few are chosen.
ETA: I am interested, as a spectator, in seeing how the movement develops.
On my part, it strikes me as the greatest and most important contribution this place has had on my life.
(Disclaimer: My lifetime contribution to MIRI is in the low six digits.)
It appears to me that there are two LessWrongs.
The first is the LessWrong of decision theory. Most of the content in the Sequences contributed to making me sane, but the most valuable part was the focus on decision theory and considering how different processes performed in the prisoner's dilemma. Understanding decision theory is a precondition to solving the friendly AI problem.
The first LessWrong results in serious insights that should be integrated into one's life. In "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem", the authors take a moment to discuss the issue of "Defecting Against CooperateBot"--if you know that you are playing against CooperateBot, you should defect. I remember when I first read the paper and the concept just clicked. Of course you should defect against CooperateBot. But this was an insight that I had to be told, and LessWrong is valuable to me because it has helped me internalize game theory. The first year that I took the LessWrong survey, I answered that of course you should cooperate in the one-shot non-shared-source-code prisoner's dilemma. On the latest survey, I ins... (read more)
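The "defect against CooperateBot" point can be sketched in a few lines. The payoff numbers below are the standard textbook prisoner's dilemma values, assumed for illustration (they are not taken from the paper):

```python
# Standard one-shot prisoner's dilemma payoffs for the row player
# (assumed textbook values): T=5 > R=3 > P=1 > S=0.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # we cooperate, they defect
    ("D", "C"): 5,  # we defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def cooperate_bot():
    """CooperateBot ignores everything and always cooperates."""
    return "C"

def best_response(opponent_move):
    """Pick the move that maximizes our payoff against a known opponent move."""
    return max(["C", "D"], key=lambda my: PAYOFF[(my, opponent_move)])

# Against CooperateBot, defection strictly dominates: 5 > 3.
print(best_response(cooperate_bot()))  # → D
```

The interesting part of the paper is precisely that against agents who *condition* on your source code, unlike CooperateBot, this simple best-response logic no longer applies.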
Seeing as, in terms of absolute as well as disposable income, I'm probably closer to being a recipient of donations than a giver of them, effective altruism is among those topics that make me feel just a little extra alienated from LessWrong. It's something I know I couldn't participate in for at least 5 to 7 more years, even if I were so inclined (I expect to live in the next few years on a yearly income between $5000 and $7000, if things go well). Every single penny I get my hands on goes, and will continue to go, strictly towards my own benefit, and in all honesty I couldn't afford anything else. Maybe one day, when I stop always feeling a few thousand $$ short of a lifestyle I find agreeable, I may reconsider. But for now, all this EA talk does for me is reinforce the impression of LW as a club for rich people in which I feel maybe a bit awkward and not belonging. If you ain't got no money, take yo' broke ass home!
Anyway, the manner in which my own existence relates to goals such as EA is only half the story, probably the more morally dubious half. Disconnected from my personal circumstances, the Effective Altruism movement seems one big mix of good and not-so-good mo... (read more)
I think that the image of EA on LW has been excessively donation-focused, but I'd like to point out that things like earning to give are only one part of EA.
EA is about having the biggest positive impact that you can have on the world, given your circumstances and personality. If your circumstances mean that you can't donate, or disagree with donations being the best way to do good, that still leaves options like e.g. working directly for some organization (be it a non-profit or for-profit) having a positive impact on the world. Some time back I wrote the following:... (read more)
I know this may come across as sociopathically cold and calculating, but given that post-singularity civilisation could be at least thirty orders of magnitude larger than current civilisation, I don't really think short term EA makes sense. I'm surprised that the EA and existential risk efforts seem to be correlated, since logically it seems to me that they should be anti-correlated.
And if the response is that future civilisation is 'far' in the overcoming bias sense, well, so are starving children in Africa.
My brain filters it out automatically. Altruism is not on my mind AT ALL until I've sorted out my own problems and feel that my life and my family's lives are reasonably secure, happy, safe, and going up and up. I don't feel I have any surplus for altruism.
I guess in practice I do altruistic things all the time. People ask me for help, and I don't say no. I just don't seek out opportunities to help.
My biggest problem with EA is the excessive focus on a specific metric with no consideration of higher order plans or effects. The epitome of naive utilitarianism.
I propose that some major academic organization, such as the American Economic Association, randomly and secretly choose a few members and request that they attempt to get fraudulent work accepted into the highest-ranked journals they can. They would reveal the fraud as soon as an article was accepted. This procedure would give us some idea of how easy it is to engage in fraud, and give journals additional incentives to search for it. For some academic disciplines, the incentives to engage in fraud seem similar to those for illegal performance-enhancing drugs in professional sports, and I wonder if the outcomes are similar.
Every so often someone proposes this (and sometimes someone who thinks they are clever actually carries it out), and it's always a terrible idea. The purpose of peer review is not to uncover fraud. It's not even to make sure what's in the paper is correct. The purpose of peer review is just to make sure what's in the paper is plausible and sane, and worth being presented to a wider audience. The purpose is to weed out obvious low-quality material, such as perpetual motion machines or people passing off others' work as their own. Could you get fraudulent papers accepted in a journal? Of course. A scientist sufficiently knowledgeable of their field could definitely fool almost any arbitrarily rigorous peer review procedure. Does fraud exist in the scientific world? Of course it does. Peer review is just one of the many mechanisms that serve to uncover it. Real review of one's work begins after peer review is over and the work is examined by the scientific community at large.
And this is OK if the fraud rate is low, and unacceptable if it's high.
I doubt this happens to more than a tiny number of papers, although probably the more important the result the more likely it will get reviewed.
Let's make a top-level thread collecting websites that are useful for any purpose, from curetogether.com to pomodoro timers. It could also include download sites for useful software. Eventually this should make it into the wiki.
What would be a good way to do it? Perhaps similar to media threads.
I also know the space I propose to search is ginormous, but the goal is not to make it exhaustive, the goal is to list the favorite web-based tools / learning materials / software / other useful things on the web of LW members. With the hidden hope that we will get a better... (read more)
A few thoughts on Mark_Friedenbach's recent departure:
I thought it could be unpacked into two main points. (1) is that Mark is leaving the community. To Mark, or anyone who makes this decision, I think the rational response is, "good luck and best wishes." We are here for reasons, and when those reasons wane, I wouldn't begrudge anyone looking elsewhere or doing other things.
(2) is that the community is in need of growth. My interpretation of this is as follows: the Sequences are not updated, and yet they are still referenced as source material. ... (read more)
That draws the wrong people.
When doing Quantified Self community building in Germany, we found that while we were featured plenty in mainstream media, the interesting people who came to our meetups hadn't heard of us through that channel. We learned that it doesn't make sense to hold QS meetups in German in Berlin, because everybody who's interesting speaks English, but not everybody who's interesting speaks German.
You don't want the people who take popular newspapers seriously.
(Akrasia, because that's all I ever talk about):
I do not know to whose attention I should bring this so as to combat the problem, so I'm asking here:
I have a stupidly difficult time talking to people, too, especially my parents (who pretty much have to manage all the details, because of course they do). This does not help.
Yes, I've read all the akrasia articles on LessWrong that I can find. Mostly, I'm hoping there's someone better equipped to fix this than me or the internet, and that someone can help me find that... (read more)
I recently read this essay and had a panic attack. I assume that this is not the mainstream of transhumanist thought, so if a rebuttal exists it would save me a lot of time and grief.
What do all of you think the effects will be, now that awareness of superhuman intelligence as a catastrophic risk is going "mainstream", culturally?
Bill Gates, Stephen Hawking, and Elon Musk voice concern over Artificial Intelligence after Superintelligence was published
Network TV series Elementary covers AI risk
The release of major Hollywood films touching upon what dangerous AI might really be like: Transcendence, Ex Machina, Avengers: Age of Ultron
The pop-culture examples demonstrate an awareness of what Nick Bostrom or Eli... (read more)
Watch Ex Machina. It is pretty close to what you are talking about, and I thought it was well done.
Non-binary views on health and depression
I used to have a binary view on health. You either have the beetus or you don't have the beetus. You are ill/a patient or not. It turned out it is better to view these things on a sliding scale. You really want your muscles to be highly sensitive to insulin, as it makes you both not fat and not tired (it removes at least one reason for tiredness: muscles refusing fuel). So you can function highly optimally here, or less optimally, or somewhat dysfunctionally, and when your insulin sensitivity is really low you can c... (read more)
According to the 2014 LessWrong Survey Results, 15.1% of the LessWrong community prefers polyamorous relationships to monogamous ones. If you don't know what polyamory is, it's been described as "consensual, ethical, and responsible non-monogamy" (source). This excludes (formerly) monogamous relationships where one party is "cheating", i.e., engaged in an additional sexual or romantic relationship in secret. Semi-monogamous, or "monogamish", marriages and relationships may count as polyamorous in the minds of some, but may be e... (read more)
No formal studies to share.
I know a lot of poly folk in N-way relationships who seem reasonably happy about it and would likely be less happy in monogamous relationships; I know a lot of monogamous folks in 2-way relationships who seem reasonably happy about it and would likely be less happy in polygamous relationships; I know a fair number of folks in 2-way relationships who would likely be happier in polygamous relationships; I know a larger number of folks who have tried polygamous relationships and decided it wasn't for them. Mostly my conclusion from all this is that different people have different preferences.
As to whether those differing preferences are the result of genetic variation, gestational differences, early childhood experiences, lifestyle choices made as adolescents and adults, something else entirely, or some combination thereof... I haven't a clue.
I don't see where it matters much for practical purposes.
I mean, I recognize that there's a social convention that we're not permitted to condemn people for characteristics they were "born with," but that mostly seems irrelevant to me, since I see no reason to condemn people for preferring poly relationships regardless of whether they were born that way, acquired it as learned behavior, or (as seems likeliest) some combination.
I finally finished reading Plato's Camera, and it was fairly good, for a philosophy book. In fact, it does quite a lot better than most ML/AGI books at talking about how a real mind can work. I'm thinking of fetching my highlights and notes off my Kindle to write a book review. Would that be helpful to anyone?
I've noticed recently that listening to music with lyrics significantly hampers comprehension for reading texts as well as essay-writing ability, but has no (or even slightly positive) effects on doing math. My informal model of the problem is that the words of the song disrupt the words being formed in my head. Has anyone else experienced anything similar?
Not particularly LWish, but possibly of interest: towards a taxonomy of logic puzzles.
There seem to exist non-negligible differences in individual RMR.
And, though it is my understanding that the '3500 kcal = 1 lb of body fat' rule is a less-than-precise rule of thumb that fails to account for many variables in regard to weight loss, it still stands to reason that a couple hundred kcal per day would add up to something substantial over the course of, say, even a few years.
So, it seems you could have persons A and B—each eating and exercising at exactly the same level—end up tens of pounds apart from one another in body weight over a relatively... (read more)
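The arithmetic behind "tens of pounds apart" is easy to check. A minimal sketch, assuming the (admittedly imprecise) 3500 kcal ≈ 1 lb rule of thumb and a constant daily RMR gap:

```python
# Rule-of-thumb conversion; an assumption, not a physiological law.
KCAL_PER_LB = 3500

def weight_gap_lbs(daily_kcal_difference, years):
    """Cumulative weight difference implied by a constant daily energy gap."""
    return daily_kcal_difference * 365 * years / KCAL_PER_LB

# A 200 kcal/day difference in resting metabolic rate, sustained for 3 years:
print(round(weight_gap_lbs(200, 3), 1))  # → 62.6 lb
```

In reality the gap would not compound linearly like this (body weight itself changes energy expenditure), which is exactly why the rule of thumb breaks down over long horizons; but the order of magnitude supports the point.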
I wonder if this objection to MIRI's project has been made so far: EY recognizes that placing present-day humans in an environment reached by CEV would be immoral, right? Doesn't this call into question the desirability of instant salvation? Perhaps what is really desirable is reaching the CEV state, but doing so only gradually. Otherwise, we might never reach our CEV state, and we arguably do want to reach it eventually. We can still have a friendly AI, but perhaps its role should be to slowly guide us to the CEV state while making sure we don't get into deep trouble in the meantime. E.g., we shouldn't be maimed for life as the result of an instant's inattention, etc.
Hi everyone! I am in Berkeley for a MIRI workshop that starts tomorrow. Today my entire day is free so if by some fluke of chance someone in the area wants to hang out with me, feel free to contact me at email@example.com.
A neighborhood of San Francisco just put a moratorium on the building of "luxury flats": http://www.citylab.com/housing/2015/05/san-francisco-may-put-a-moratorium-on-new-development-in-the-mission/393857/
It seems to me that any building of new flats should lower the pressure on the rents of existing flats via the basic economics of supply and demand. Whether or not the new flats are "affordable housing", prices should be lower if there is more housing.
Are the people who favor those policies simply stupid and lacking an understanding of economics, or is ... (read more)
If you're interested in robotics, this video is a must see: https://youtu.be/EtMyH_--vnU?t=32m34s
I have to say I'm baffled. I was genuinely shocked watching the thing. Its speed is incredible. I remember writing off general robots after closely following Willow Robotics' work. That was only three years ago. Again, I'm pretty shocked.
This forum doesn't allow you to comment if you have <2 karma. How does one get their first 2 karma then?
This seems like just a mistake: The current LW terms of service (posted on April 23) forbid the use of automated scripts to export your posts or comments from the site.
This would include some of the tools listed on the site FAQ.
Is Solomonoff induction a theorem for making optimal probability distributions or a definition of them? That is to say, has anyone proved that Solomonoff induction produces probability distributions that are "optimal," or was Solomonoff induction created to formalize what it means for a prediction to be optimal. In the former case, how could they define optimality?
I'm learning about Turing Machines for the first time. I think I get the gist of it, and I'm asking myself the question, "What's the big deal?". Here's my attempt at an answer:
Consider the idea of Thingspace. Things are their components/properties. You could plot a point in Thingspace that describes everything about, say, John Smith.
You could encode that point in Thingspace. I.e., you could create a code that says "001010111010101001...1010101010101" represents point (42343,12312,11,343223423432423,...,123123123123) in Thingspace.
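The encoding idea above can be made concrete with a toy sketch. Everything here is hypothetical: a real Thingspace would have vastly many dimensions, but the principle, packing each coordinate into a fixed-width binary field, is the same:

```python
# Toy encoding of a point in a (tiny, hypothetical) Thingspace as a bitstring.
# Each coordinate is a non-negative integer packed into a fixed-width field.
FIELD_BITS = 16  # assumed width per coordinate; too small for real Thingspace

def encode(point):
    """Concatenate each coordinate as a FIELD_BITS-wide binary field."""
    return "".join(format(coord, f"0{FIELD_BITS}b") for coord in point)

def decode(bits):
    """Split the bitstring back into its integer coordinates."""
    return tuple(int(bits[i:i + FIELD_BITS], 2)
                 for i in range(0, len(bits), FIELD_BITS))

point = (42, 12312, 11)
bits = encode(point)
assert decode(bits) == point  # round-trips losslessly
```

This is essentially why Turing machines matter here: any such point, and any computable transformation of it, can be represented and manipulated as a string of symbols.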
This is the place to post random thoughts, right? I have been thinking about what kind of community I would least regret living in until the singularity comes along. (Without deadening my faculties with drugs, etc. Optimal means "least bad" as well as "most good", right?)
I recently read this article about the origins of analytic philosophy: http://ontology.buffalo.edu/smith/articles/Polish_Philosophy.pdf It says that analytic philosophy was born in states where there was no "official culture", but there were multiple ideologic... (read more)
This old comment says business is a notoriously all-male province in what I suppose is the United States. This does not match my experience in Europe. If by business we mean being employed by a corporation, then generally speaking, here, sales, logistics, and tech are male, while marketing, HR, finance, support, and testing are female. This is of course a broad generalization. Sales is closer to gender balance than logistics or tech. HR and marketing are strongly female because they are understood as "human" fields, organizing trade shows or i... (read more)
"The thing is, I actually do endorse polyamory. I mean, not in the sense of thinking everyone should do it, but in the sense of thinking it should be an option. I think there are some people who tend to do better in monogamous relationships, some people who are naturally polyamorous, and some people who can go either way."
According to the 2014 LessWrong Survey Results, 15.1 % of the LessWrong community prefers polyamorous relationships to monogamous ones. If you don't know what polyamory is, it's been described as "conse... (read more)
The current LW terms of service say:
Is it necessary to use the G-word (and capitalize it)?
I remember following Willow Robotics's work a couple years ago and writing general robots off as a lost cause - it really did seem hopeless. Now I notice I am confused. Anyone else watch this video: https://www.youtube.com/watch?v=JeVppkoloXs. It's baffling.