It appears that MetaMed has gone out of business. Wikipedia uses the past tense "was" on its page for MetaMed, and cites this article as a source for it.
Key quote from the article:
Tallinn learned the importance of feedback loops himself the hard way, after seeing the demise of one of his startups, medical consulting firm Metamed.
It seems like the business model of charging individuals such high prices just doesn't work for a startup without a proven brand.
I wonder if MetaMed's problem was that if you were smart and well-informed enough to understand the company's value to the average person, you personally didn't need it, because you could do the research yourself.
Found a five-year-old comment about HPMoR:
I think the biggest problem Yudkowsky will have with this will involve Hermione - A rational and knowledgeable Harry makes her basically redundant. Well, that, and the fact that a good 90% of each book consisted of "Harry screws up repeatedly because he forgot from the last book that he should just always go to Dumbledore first with any new problem"... I don't see this Harry having that same problem.
Heh.
I'm currently twenty-two years old. Over the last two weeks, I've discussed with a couple of friends how, among the "millennial" generation, i.e., people currently under the age of thirty-five, people profess having goals for some kind of romantic relationship, but they don't act in a way that will let them achieve those goals. Whether they:
it seems the proportion of young people who are and stay single is greater than I would expect. I don't just mean that the fastest-growing household configuration since the 1980s (in the United States) has been single adults. I mean that most of my friends profess a preference for having some romantic relationship in their life, yet most of my single friends stay single, and don't appear to be dating much or doing anything else to correct this. Maybe popular culture exerts a normative socia...
I have something like a potential explanation for this, but it is difficult to formulate in a way that won't be misunderstood. Please, everybody, try to read this post with maximal charity and the benefit of the doubt.
History tends to swing from one extreme to another, as people tend to OVERreact to the problems they see.
Given that it is an OVERreaction, they are usually wrong, but it also points out a problem. You can diagnose the original problems from the overreactions to them.
These overreactions are sometimes exaggerated only in "quantity", in which case a more moderate version of them would be okay; at other times they get the direction completely wrong. Still, they point out that something is a problem, and the issues they raise often have SOME truth to them.
For example, Communism/Bolshevism was a huge OVERreaction to the condition of workers under capitalism. It was not a good solution at all, and even making it more moderate (a moderate, limited dictatorship of people who call themselves proletarians?) would not help much, but it pointed out a problem, and now we have better solutions to that problem, such as unions striking when they want a wage raise.
Tangentially, how much of this is a problem of "dating", and how much a problem of "dating sane people", when the pool of sane people is already small?
When I was younger, I wanted to have a romantic relationship with a person whom I would perceive as intellectually equal (plus or minus the LessWrong level). Since I barely knew such people... not much luck.
If I could send a message in time back to myself, it would be: "It will take decades until you find someone you can have meaningful conversation with. Meanwhile, relax, and try to fuck any nice body, but don't get attached. Otherwise you will later regret the wasted time." The only problem is, my younger self would be horrified to hear such advice.
I think it makes sense for us to try dating and relationships more, because there may not be as much time and opportunity as we hope later in life.
How do you suggest people actually implement this 'just date more'?
However attractive, well dressed, or confident you are, you still need to know how to actually approach someone.
A problem is that any attempt to improve attractiveness will lead some people to declare that you are evil or otherwise defective. It's not just PUA stuff; this is far more general: if a guy lifts, that makes him a 'dickhead' according to members of my peer group, while a woman not shaving her armpits makes her strong & empowered (does a man not shaving his face make him empowered?). Conversely, some people believe that not taking care of your appearance makes you a slob.
Then there's the problem that confidence is key. You need to be 110% confident of everything you say, and to truly believe this, you need to internalise it. The problem is then that it spills over into other aspects of life, and you become very badly credence calibrated, potentially leading to serious mistakes because you can't admit that you might be wrong. When you are in a group containing more than one 'alpha male' it becomes impossible to get anything done, even something as simple as choosing a pub to go to, because one alpha male decides to go to one pub, the other decides to go to a different pub...
Sure, that's the stereotype. But the problem is actually that the signaling model is wrong. Our stereotype wants to associate himself with some concept, so he throws on an item that he associates with that concept: a pinstripe fedora if he likes Thirties mobsters, let's say, or a leather trench if he's seen The Matrix one too many times. It's out of context, it clashes, and the outfit ends up looking worse than the sum of its parts (and being overweight and poorly groomed never helps).
The principle is easy to state: clothes should work in context, including the context of your body. But the point is that those cues are not obvious. There's a whole visual language that needs to be learned before you can reliably present yourself as e.g. gentlemanly, and keeping a laser focus on whatever stereotype you feel like projecting actually isn't the most efficient way to get there. Better to start with the basics.
It doesn't appear this is discussed much, so I thought I'd start a conversation:
Is anyone on LessWrong uncomfortable with, or put off by, so much discussion of effective altruism here? If so, why?
Other Questions:
I want to discuss this because the portion of the LessWrong community that is averse to effective altruism, or even just indifferent or uninterested, doesn't express its opinions much. Also, while I identify with effective altruism, I don't only value this site as a means to altruistic ends, and I don't want other parts of the rationalist community to feel neglected.
Personally, I'm indifferent to EA. It seems to me a result of decompartmentalizing and taking utilitarianism overly seriously. I don't really disagree with it, just not interested. As I've mentioned before, I care about myself, my family, my friends, and maybe some prominent people who don't know me but whose work makes my life better. I feel for the proverbial African children, but not enough for anything more than a token contribution. If LW were split into subreddits, /r/EA would be a good one, though one of those I would rarely, if ever, visit. As it is, I skip the EA discussions, but I don't find them annoyingly pervasive.
That is exactly my own view. I can see the force of the arguments for EA, but remain unmoved by them. I don't mind it being discussed here, but take little interest in the discussions. I have no arguments against it (although the unfortunate end of George Price is a cautionary tale, a warning of a dragon on the way), and I certainly don't want to persuade anyone to do less good in the world.
It's rather like the Christian call to sainthood. Many are called, but few are chosen.
ETA: I am interested, as a spectator, in seeing how the movement develops.
For my part, it strikes me as the greatest and most important contribution this place has made to my life.
(Disclaimer: My lifetime contribution to MIRI is in the low six digits.)
It appears to me that there are two LessWrongs.
The first is the LessWrong of decision theory. Most of the content in the Sequences contributed to making me sane, but the most valuable part was the focus on decision theory and considering how different processes performed in the prisoner's dilemma. Understanding decision theory is a precondition to solving the friendly AI problem.
The first LessWrong results in serious insights that should be integrated into one's life. In Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem, the authors take a moment to discuss the issue of "Defecting Against CooperateBot": if you know that you are playing against CooperateBot, you should defect. I remember when I first read the paper and the concept just clicked. Of course you should defect against CooperateBot. But this was an insight that I had to be told, and LessWrong is valuable to me because it has helped me internalize game theory. The first year that I took the LessWrong survey, I answered that of course you should cooperate in the one-shot, non-shared-source-code prisoner's dilemma. On the latest survey, I ins...
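For readers who haven't seen the payoff logic spelled out, here is a minimal sketch of why defecting against CooperateBot wins. The payoff numbers are the standard textbook values and are my assumption for illustration, not figures taken from the paper.

```python
# A minimal sketch of "defecting against CooperateBot", assuming the
# standard textbook prisoner's dilemma payoffs T=5, R=3, P=1, S=0.
# (The numbers are illustrative placeholders.)

PAYOFF = {            # (my_move, their_move) -> my payoff
    ("D", "C"): 5,    # temptation: I defect, they cooperate
    ("C", "C"): 3,    # reward: mutual cooperation
    ("D", "D"): 1,    # punishment: mutual defection
    ("C", "D"): 0,    # sucker's payoff: I cooperate, they defect
}

def cooperate_bot() -> str:
    """CooperateBot ignores its opponent entirely and always cooperates."""
    return "C"

# Since CooperateBot's move is fixed, my choice only selects a row of the table:
their_move = cooperate_bot()
print(PAYOFF[("D", their_move)])  # 5 -- defecting earns strictly more
print(PAYOFF[("C", their_move)])  # 3
```

Because the opponent's move cannot depend on mine, defection strictly dominates; the interesting cases in the paper are bots whose moves do depend on my source code.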
Seeing as, in terms of absolute as well as disposable income, I'm probably closer to being a recipient of donations than a giver of them, effective altruism is among those topics that make me feel just a little extra alienated from LessWrong. It's something I know I couldn't participate in for at least 5 to 7 more years, even if I were so inclined (I expect to live in the next few years on a yearly income between $5000 and $7000, if things go well). Every single penny I get my hands on goes, and will continue to go, strictly towards my own benefit, and in all honesty I couldn't afford anything else. Maybe one day, when I stop always feeling a few thousand $$ short of a lifestyle I find agreeable, I may reconsider. But for now, all this EA talk does for me is reinforce the impression of LW as a club for rich people in which I feel a bit awkward, like I don't belong. If you ain't got no money, take yo' broke ass home!
Anyway, the manner in which my own existence relates to goals such as EA is only half the story, probably the more morally dubious half. Disconnected from my personal circumstances, the Effective Altruism movement seems one big mix of good and not-so-good mo...
I think that the image of EA on LW has been excessively donation-focused, but I'd like to point out that things like earning to give are only one part of EA.
EA is about having the biggest positive impact that you can have on the world, given your circumstances and personality. If your circumstances mean that you can't donate, or disagree with donations being the best way to do good, that still leaves options like e.g. working directly for some organization (be it a non-profit or for-profit) having a positive impact on the world. Some time back I wrote the following:
...Effective altruism says that, if you focus on the right career, you can have an even bigger impact! And the careers don't even need to be exotic, demanding ones that only a select few can do (even if some of them are). Some of the top potential careers that 80,000 Hours has identified so far include things as diverse as being an academic, civil servant, journalist, marketer, politician, or software engineer, among others. Not only that, they also emphasize finding your fit. To have a big impact on the world, you don't need to shoehorn yourself into a role that doesn't suit you and that you hate - in fact you're ex...
I know this may come across as sociopathically cold and calculating, but given that post-singularity civilisation could be at least thirty orders of magnitude larger than current civilisation, I don't really think short term EA makes sense. I'm surprised that the EA and existential risk efforts seem to be correlated, since logically it seems to me that they should be anti-correlated.
And if the response is that future civilisation is 'far' in the overcoming bias sense, well, so are starving children in Africa.
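To make the orders-of-magnitude argument above concrete, here is a rough expected-value comparison; every number in it is a hypothetical placeholder of mine, not a figure from the comment.

```python
# Rough illustration of the "thirty orders of magnitude" argument.
# All numbers are hypothetical placeholders, chosen only to show how the
# arithmetic plays out, not claims about any real intervention.

present_value = 1_000        # value of a short-term intervention, in "lives"
future_scale = 1e30          # a post-singularity civilisation ~1e30 times larger
delta_p_survival = 1e-15     # a tiny hypothetical reduction in existential risk

# Expected value of the long-term intervention, in the same units:
long_term_value = delta_p_survival * future_scale

print(f"short-term: {present_value:.1e}")    # 1.0e+03
print(f"long-term:  {long_term_value:.1e}")  # 1.0e+15
```

On these assumptions, even an astronomically small shift in the probability that the large future exists swamps the short-term figure, which is the intuition behind the commenter's expectation that EA and existential-risk efforts should pull in different directions.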
My brain filters it out automatically. Altruism is not even on my mind AT ALL until I have sorted out my own problems and feel that my life and my family's are reasonably secure, happy, safe, and heading up and up. I don't feel I have any surplus for altruism.
I guess in practice I do altruistic things all the time. People ask me for help, I don't say no. I just don't seek out opportunities to.
My biggest problem with EA is the excessive focus on a specific metric with no consideration of higher order plans or effects. The epitome of naive utilitarianism.
I propose that some major academic organization, such as the American Economic Association, randomly and secretly choose a few members and request that they attempt to get fraudulent work accepted into the highest-ranked journals they can. They would reveal the fraud as soon as an article is accepted. This procedure would give us some idea of how easy it is to engage in fraud, and give journals additional incentives to search for it. For some academic disciplines the incentives to engage in fraud seem similar to those for illegal performance-enhancing drugs in professional sports, and I wonder if the outcomes are similar.
Every so often someone proposes this (and sometimes someone who thinks they are clever actually carries it out), and it's always a terrible idea. The purpose of peer review is not to uncover fraud. It's not even to make sure what's in the paper is correct. The purpose of peer review is just to make sure that what's in the paper is plausible and sane, and worth being presented to a wider audience. The purpose is to weed out obvious low-quality material such as perpetual motion machines or people passing off others' work as their own. Could you get fraudulent papers accepted in a journal? Of course. A scientist sufficiently knowledgeable of their field could definitely fool almost any arbitrarily rigorous peer review procedure. Does fraud exist in the scientific world? Of course it does. Peer review is just one of the many mechanisms that serve to uncover it. Real review of one's work begins after peer review is over and the work is examined by the scientific community at large.
The purpose of peer review is not to uncover fraud.
And this is OK if the fraud rate is low, and unacceptable if it's high.
Real review of one's work begins after peer review is over and the work is examined by the scientific community at large.
I doubt this happens to more than a tiny number of papers, although the more important the result, the more likely it is to get that kind of scrutiny.
Let's make a top-level thread collecting websites that are useful for any purpose, from curetogether.com to pomodoro timers. It would also include download sites for useful software. Eventually this should make it into the wiki.
What would be a good way to do it? Perhaps similar to media threads.
I also know the space I propose to search is ginormous, but the goal is not to be exhaustive; the goal is to list LW members' favorite web-based tools / learning materials / software / other useful things on the web. With the hidden hope that we will get a better...
A few thoughts on Mark_Friedenbach's recent departure:
I thought it could be unpacked into two main points. (1) is that Mark is leaving the community. To Mark, or anyone who makes this decision, I think the rational response is, "good luck and best wishes." We are here for reasons, and when those reasons wane, I wouldn't begrudge anyone looking elsewhere or doing other things.
(2) is that the community is in need of growth. My interpretation of this is as follows: the Sequences are not updated, and yet they are still referenced as source material. ...
Or people who have commit privileges to popular newspapers could write articles like "10 things LW taught me".
That draws the wrong people.
When doing Quantified Self community building in Germany, we found that while we were featured plenty in mainstream media, the interesting people who came to our meetups hadn't heard of us through that channel. We learned that it doesn't make sense to hold QS meetups in German in Berlin, because everybody who's interesting speaks English, but not everybody who's interesting speaks German.
You don't want the people who take popular newspapers seriously.
(Akrasia, because that's all I ever talk about):
I do not know to whose attention I should bring this so as to combat the problem, so I'm asking here:
http://caejones.livejournal.com/18117.html
I have a stupidly difficult time talking to people, too, especially my parents (who pretty much have to manage all the details, because of course they do). This does not help.
Yes, I've read all the akrasia articles on LessWrong that I can find. Mostly, I'm hoping there's someone better equipped to fix this than me or the internet, and that someone can help me find that...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.