All of MichaelVassar's Comments + Replies

Simulate and Defer To More Rational Selves

Possibly valuable to talk with Robin Hanson and me about revisions to HPMOR!Quirrell's decision procedures, straight from the source?

I bid two.
I would give a finger from my wand hand for such an opportunity.
Open thread, 25-31 August 2014

There's an anecdote near the beginning of Freud's "Introduction to Psychoanalysis" where he discusses the dreams of arctic explorers, which are almost entirely about food, not about sex, for understandable reasons.

Defecting by Accident - A Flaw Common to Analytical People

It is possible to play both, but difficult, and you can't play both at once as well as equally smart non-analytical types will play just the social game.

Why not? Purely in terms of the social game, isn't "being smart and analytical" just one style of play? Disadvantages: less natural concern for offense or feelings. Advantages: more concern and ability for logical politeness, finding the truth, and focusing on ideas (not taking offense). That's if you want to really enter the game and play it the standard way. You can also just be yourself, which gets you points and naturally crafts a reputation/expectations, and be idea-focused, which naturally does the same.

From an above comment, which has also been my experience: "In some ways, my obliviousness was very powerful for me, because ignoring status cues is a mark of status, as are confidence and being at ease with high-status people - all of which flow from my focus on ideas over people or their status. Yet as I've moved from more academic/intellectual circles to business/wealth circles, it's become crucial to learn that extra social subtext, because most of those people get driven away if you don't have those extra layers of social sense and display it in your conversational maneuvering." I'm not even sure of the necessity of the second part, but it's a good ability to have regardless.

I don't see where the cap on communication plus socialising comes from, because communicating well scores someone a lot of social points, especially in terms of reputation, but also immediately - if they do it "right" for their environment, which is usually fairly straightforward (be polite and respectful and/or friendly and/or humble and/or oblivious, etc.).

IMO one of the best things you can do specifically for social games is to pay zero attention to them. Very few people are such explicit, calculated, and committed status seekers that they can't accept someone who isn't playing (and being described by those three adjectives wouldn't even cause them to either). Instead what usually happens is that some people are suspicious of people who don't appear to be playing, and pron…
Defecting by Accident - A Flaw Common to Analytical People

Two examples: sexual selection and speciation. 'Nuff said.

Defecting by Accident - A Flaw Common to Analytical People

Yep, but the vast majority of people in a workplace, even those nominally there to deliver technical skills, are there to deliver social skills in reality, and all of the most highly paid people are paid for social skills.
That said, you're right, it's still worth it. Being officially a foreigner is possibly the best approach.

Another Critique of Effective Altruism

Another reasonable concern has to do with informational flow-through lines. When novel investigation demonstrates that previous claims or perspectives were in error, do we have good ways to change the group consensus?

A critique of effective altruism

I spent many hours explaining a sub-set of these criticisms to you in Dolores Park soon after we first met, but it strongly seemed to me that that time was wasted. I appreciate that you want to be lawful in your approach to reason, and thus to engage with disagreement, but my impression was that you do not actually engage with disagreement; you merely want to engage with disagreement. Basically, I felt that you believe in your belief in rational inquiry, but that you don't actually believe in rational inquiry.

I may, of course, be wrong, and I'm not sure... (read more)

I'm partially unsure whether I should be commenting here, since I do not know Nick that well, nor the other matters that could be involved in this discussion. Those two points notwithstanding, not only does it seem to me that your impression of him is mistaken, but the truth seems to lie in the exact opposite direction. If you check his posts and other writings, he has the remarkable habit of taking many opposing views into consideration (e.g., his thesis [] ) and also putting a whole lot more weight on others' opinions (e.g., common sense as a prior []) than the average here at LW. I would gather others must have, at worst, a different opinion of him than the one you presented; otherwise he wouldn't be in the positions he's in right now, both at FHI and GWWC. That's my two cents. I'm not even sure he would agree with all of it, but I would imagine some data points wouldn't do harm.
What I mostly remember from that conversation was disagreeing about the likely consequences of "actually trying". You thought elite people in the EA cluster who actually tried had high probability of much more extreme achievements than I did. I see how that fits into this post, but I didn't know you had loads of other criticism about EA, and I probably would have had a pretty different conversation with you if I did. Fair enough regarding how you want to spend your time. I think you're mistaken about how open I am to changing my mind about things in the face of arguments, and I hope that you reconsider. I believe that if you consulted with people you trust who know me much better than you, you'd find they have different opinions about me than you do. There are multiple cases where detailed engagement with criticism has substantially changed my operations.
Common sense as a prior

I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards to at least some people. Hopefully, some such groups will congeal into effective trade networks. If one usually reliable algorithm disagrees strongly with others, yes, short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc, not by dropping it, and more importantly, such deviations should be investigated with some urgency.

I think we agree about this much more than we disagree. After writing this post, I had a conversation with Anna Salamon in which she suggested that--as you suggest--exploring such disagreements with some urgency was probably more important than getting the short-term decision right. I agree with this and I'm thinking about how to live up to that agreement more. Regarding the rest of it, I did say "or give less weight to them".

Thanks for answering the main question. I and at least one other person I highly trust have gotten a lot of mileage out of paying a lot of attention to cues like "Person X wouldn't go for this" and "That cluster of people that seems good really wouldn't go for this", and trying to think through why, and putting weight on those other approaches to the problem. I think other people do this too. If that counts as "following the standards that seem credible to me upon reflection", maybe we don't disagree too much. If it doesn't, I'd say it's a substantial disagreement.
A critique of effective altruism

I think that attempting effectiveness points towards a strong attractor of taking over countries.

A critique of effective altruism

I think that this is an effective list of real weak spots. If these problems can't be fixed, EA won't do much good.

A critique of effective altruism

This is MUCH better than I expected from the title. I strongly agree with essentially the entire post, and many of my qualms about EA are the result of my bringing these points up with, e.g. Nick Beckstead and not seeing them addressed or even acknowledged.

I would love to hear about your qualms with the EA movement if you ever want to have a conversation about the issue.

Edited: When I first read this, I thought you were saying you hadn't brought these problems up with me, but re-reading it it sounds like you tried to raise these criticisms with me. This post has a Vassar-y feel to it but this is mostly criticism I wouldn't say I'd heard from you, and I would have guessed your criticisms would be different. In any case, I would still be interested in hearing more from you about your criticisms of EA.

Common sense as a prior

Upvoted for clarity, but fantastically wrong, IMHO. In particular, "I suspect that taking straight averages gives too much weight to the opinions of cranks and crackpots, so that you may want to remove some outliers or give less weight to them. " seems to me to be unmotivated by epistemology and visibly motivated by conformity.

Would be interested to know more about why you think this is "fantastically wrong" and what you think we should do instead. The question the post is trying to answer is, "In practical terms, how should we take account of the distribution of opinion and epistemic standards in the world?" I would like to hear your answer to this question. E.g., should we all just follow the standards that come naturally to us? Should certain people do this? Should we follow the standards of some more narrowly defined group of people? Or some more narrow set of standards still? I see the specific sentence you objected to as very much a detail rather than a core feature of my proposal, so it would be surprising to me if this was the reason you thought the proposal was fantastically wrong.

For what it's worth, I do think that particular sentence can be motivated by epistemology rather than conformity. It is naturally motivated by the aggregation methods I mentioned as possibilities, which I have used in other contexts for totally independent reasons. I also think it is analogous to a situation in which I have 100 algorithms returning estimates of the value of a stock and one of them says the stock is worth 100x market price and all the others say it is worth market price. I would not take straight averages here and assume the stock is worth about 2x market price, even if the algorithm giving a weird answer was generally about as good as the others.
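A quick numeric sketch of the outlier point above (illustrative numbers only, not data from the post): with 99 estimates at market price and one at 100x, the straight average gets pulled to nearly 2x, while the median and geometric mean barely move.

```python
import statistics

# 99 algorithms say the stock is worth market price (1.0x);
# one outlier says it is worth 100x.
estimates = [1.0] * 99 + [100.0]

arith = statistics.mean(estimates)           # 1.99x -- dominated by the single outlier
geo = statistics.geometric_mean(estimates)   # ~1.047x -- barely moved
med = statistics.median(estimates)           # 1.0x -- unmoved
```

This is why outlier-robust aggregation (medians, geometric means, trimming) can be motivated purely by the math rather than by conformity.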
The Centre for Applied Rationality: a year later from a (somewhat) outside perspective

MetaMed is hopefully moving us towards a world with more rationality in the healthcare professions.

Earning to Give vs. Altruistic Career Choice Revisited

I tend to think that if one can make a for-profit entity, that's the best sort of vehicle to pursue most tasks, though occasionally, churches or governments have some value too.

Earning to Give vs. Altruistic Career Choice Revisited

My main comment on this is that if self-direction is as important as it appears to be, it would seem to me that 'become self-directed' really should be everyone's first priority if they can think of any way to do that. My second comment is that it seems to me that if one is self-directed and seeks appropriate mentorship, the expected value of pursuing a conventional career is very low compared to that of pursuing an entrepreneurial career. Conversely, mentorship or advice that doesn't account for the critical factor of how self-directed someone is, as well as a few other critical factors such as the disposition to explore options, respond to empirical feedback from the market, etc., is likely to be worse than useless.

Can you expand on this? How does one seek appropriate mentorship?
Post ridiculous munchkin ideas!

The most basic is that as far as I can tell, I had never been hit on while wearing glasses, and that started happening regularly.

Reading failure, "never been hit on" is very different to "never been hit". It sounded wrong when I (mis)read, and I didn't notice.
Post ridiculous munchkin ideas!

You can assume that, but I assure you it's just not the case. We can debate the details some time in person if you'd like.

Post ridiculous munchkin ideas!

There are additional 'add-ons' with names like 'clear view'. The tech changes continually, so do some research before buying it.

Post ridiculous munchkin ideas!

Then something is wrong with the generator that your brain uses when trying to be unconventional. Try to figure out what and how to fix it, and tell me if you figure it out, as I have no idea how to do that.

Why do you say that? The low-hanging fruit of good ideas tends to get plucked, even the long shots - the primary exceptions are things that people refuse to do because they're wrong.
Post ridiculous munchkin ideas!

Legitimate concerns, but way, WAY weaker than the strength of the argument they are set against.

What? No they aren't. Telecommuting from a (formerly) foreign country where things cost much less (and everything that implies) really isn't that great an option.
Post ridiculous munchkin ideas!

Addendum. Also, learn to code, as that's MUCH more permanent than camming and less dependent on marketing than tutoring and hypnosis. If you can get paid for work you do yourself without marketing, you're doing well.

Post ridiculous munchkin ideas!

In theory. In practice, it would be Spock Rational to be against Spock Rationality, so we give it lip service.

I'm not quite sure I follow.
Post ridiculous munchkin ideas!

3x GDP/student/year? That's an absurdly high estimate.

I hope you realized I meant GDP per capita. Assuming you did and still think it's high, here's a more detailed estimate: let's arbitrarily pick the 10th country from the bottom of the GDP per capita list in the CIA Factbook: Niger. Now we can make some more concrete statements. On average Niger women have 7 children, and presumably raise them on an income of 2x GDP per capita. This gives 0.28 GDP per capita spent raising each child per year. (Infant and childhood death rates seem to be negligible for this calculation, perhaps increasing average spending by 10% when assuming they make it to age 15.) So assuming a high-quality education takes three times as much as the typical education/child-raising of the time, we'd get about 1 GDP per child per year. Niger happens to also have the highest birth rate in the world. Picking the country above it on the list, Afghanistan, gives a birth rate of 5 children/woman and would give us about 1.2 GDP per child per year. In any case, my 3 GDP estimate looks rather high here, but on the other hand so was my estimate that *all* the children so taught would go on to work at Google-equivalent pay scales. I might then round my guess to 1.5 GDP per child per year when including extra costs like PR, or administering admission tests to more children than are educated. If you think that's still rather high, then you probably disagree with my estimate that such a program would want to spend three times as much money on education and child raising as is standard in the given country. Aside: how do you make links here that include parentheses in them?
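The arithmetic above can be sketched as a small function. The parameter names and the 2x-GDP household income are assumptions carried over from the comment, not data:

```python
def cost_per_child_gdp(children_per_woman, household_income_gdp=2.0,
                       quality_multiplier=3.0):
    """Fermi estimate of the annual cost of a high-quality education per
    child, in units of GDP per capita.

    Assumes a household raising `children_per_woman` children on an income
    of `household_income_gdp` times GDP per capita, and that high-quality
    education costs `quality_multiplier` times typical per-child spending.
    """
    typical_spending_per_child = household_income_gdp / children_per_woman
    return quality_multiplier * typical_spending_per_child

niger = cost_per_child_gdp(7)        # ~0.86 GDP per child per year ("about 1")
afghanistan = cost_per_child_gdp(5)  # 1.2 GDP per child per year
```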
Post ridiculous munchkin ideas!

Generalizing about 'poor countries' like this annoys me.

Post ridiculous munchkin ideas!

Very feasible but lots of work. I wouldn't invest in someone starting such a venture unless they had demonstrated the ability to make money by working hard as an independent business owner in the past, but I'd be happy to invest in and advise such a venture if it was run by the right kind of person.

Post ridiculous munchkin ideas!

Seconded. I had NO IDEA how much discrimination I suffered for wearing glasses until I gave them up. Contacts might be a better alternative if you expect to be wearing Google Glass in a few years anyway, though.

I'm intrigued. What was the nature of the discrimination? How did you know glasses/not-glasses was the cause? Any specific examples?
Would that happen automatically, or would the procedure set me at 20/20 unless the person doing it takes special action?
Post ridiculous munchkin ideas!

Whether it's unethical would seem to me to depend on who you are raising the money from and what they perceive the rules of the game to be. From my perspective, doing the submissive, 'morally cautious', un-winning thing rather than the game theoretical thing is unethical.

Post ridiculous munchkin ideas!

email me with info about that company, OK?
Sounds like maybe MetaMed should inquire into working with them.

Post ridiculous munchkin ideas!

It's not informative to send different signals than other people would send in your situation. You are proposing sending dishonest signals, which is uncooperative.

(I've thought about that, but the consideration that seemed more salient to me at the time was: If you send different signals than expected then those who can notice subtlety will notice a discrepancy given, say, a few hours of interaction. Yes you'll be oft-discounted (and you will have incurred this cost yourself and I don't deny that this is indeed a cost worth considering), but the people who falsely present themselves as more important than they are so vastly outweigh the people who falsely present themselves as less important than they are that causing someone to update their estimate of your importance upwards is more likely to make a (justifiable) positive impression than the alternative case which involves someone having to eventually update their estimate of your importance downwards. It's like the inverse of "don't throw pearls before swine". (I'm drunk, I apologize if I'm stating the obvious.))
Pascal's Muggle: Infinitesimal Priors and Strong Evidence

It seems to me like the whistler is saying that the probability of saving knuth people for $5 is exactly 1/knuth after updating for the Matrix Lord's claim, not before the claim, which seems surprising. Also, it's not clear that we need to make an FAI resistant to very very unlikely scenarios.

I'm a lot more worried about making an FAI behave correctly if it encounters a scenario which we thought was very very unlikely.

Pascal's Muggle: Infinitesimal Priors and Strong Evidence

How confident are you of "Probability penalties are epistemic features - they affect what we believe, not just what we do. Maps, ideally, correspond to territories."? That seems to me to be a strong heuristic, even a very very strong heuristic, but I don't think it's strong enough to carry the weight you're placing on it here. I mean, more technically, the map corresponds to some relationship between the territory and the map-maker's utility function, and nodes on a causal graph, which are, after all, probabilistic, and thus are features of ma... (read more)

Fermi Estimates

I recommend trying to take the harmonic mean of a physical and an economic estimate when appropriate.

So, what you're saying is that the larger number is less likely to be accurate the further it is from the smaller number? Why is that?
Can I have a representative example of a problem where this is appropriate?

I recommend doing everything when appropriate.

Is there a particular reason why the harmonic mean would be a particularly suitable tool for combining physical and economic estimates? I've spent only a few seconds trying to think of one, failed, and had trouble motivating myself to look harder because on the face of it it seems like for most problems for which you might want to do this you're about equally likely to be finding any given quantity as its reciprocal, which suggests that a general preference for the harmonic mean is unlikely to be a good strategy -- what am I missing?
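A minimal numeric sketch of the property the questions above are probing (the figures are made up for illustration): the harmonic mean stays close to the smaller of two estimates, so the larger estimate matters less the further it is from the smaller one, whereas the arithmetic mean is dominated by the larger estimate.

```python
from statistics import harmonic_mean

# Hypothetical estimates of the same quantity (illustrative numbers only):
physical = 1e9   # a physical/first-principles style estimate
economic = 1e7   # an economic/market-based style estimate

h = harmonic_mean([physical, economic])  # ~1.98e7: near the smaller estimate
a = (physical + economic) / 2            # 5.05e8: dominated by the larger one
```

One rationale for this choice: if errors in such estimates tend to be multiplicative overestimates, an aggregate that discounts the larger number is more conservative than a straight average.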

MetaMed: Evidence-Based Healthcare

Definitely, though others must decide the update size.

MetaMed: Evidence-Based Healthcare

So about what do you think it IS worth? FYI, I think, based on experience with people who have tried everything, that a 1% chance of finding something is unrealistically low. 20% with the first $5K and a further 30% with the next $35K would fit my past experience.

Define "tried everything"? Your prior is that there is a 1/5 chance a handful of researchers can find something helpful in 24 hours that isn't listed in something like an UpToDate report on the diagnosis (a decent definition of 'everything')? Does MetaMed do patient tracking to see if their recommendations lead to relief? Or do they deliver a report and move on?
MetaMed: Evidence-Based Healthcare

We won't publish anything, but clients are free to publish whatever they wish to in any manner that they wish.

Morality is Awesome

Empirically, that general type of thing is good for at least a week worth of awesome.

LW Women- Minimizing the Inferential Distance

My most immediate question is whether you think your more rapidly increasing desire to be normal was due to biological differences, more cultural pressure, or something else.

Constructing fictional eugenics (LW edition)

Hell yeah.
That said, don't overestimate IQ relative to other important cognitive and behavioral traits.

Firewalling the Optimal from the Rational

I heard an amazing classical performance of Amon Tobin by the cover group for the proper Amon Tobin recently.

[This comment is no longer endorsed by its author]
What is Evil about creating House Elves?

This is really good... now... what if the universe of 'moral atoms' is NOT simple enough for 12-year-old kids to understand, but acknowledging that would cripple our efforts to get people to act morally? What if we already know this, but would need to figure out a whole new way of talking about the human condition in order to adopt the findings of psychology into our day-to-day lives?

"Epiphany addiction"

In so far as happiness is what we strive for by definition, the statement is vacuous, and what is described as 'happiness' doesn't closely match the natural-language meaning of the word.

The Useful Idea of Truth

Many people can effectively be kept out of trouble and made easier for caretakers or relatives to care for via mild sedation. This is fairly clearly the function of at least a significant portion of psychiatric medication.

The Useful Idea of Truth

It's not that clear to me in what sense mainstream academia is a unified thing which holds positions, even regarding questions such as "what fields are legitimate". Saying that something is known in mainstream academia seems suspiciously like saying that "something is encoded in the matter in my shoelace, given the right decryption scheme". OTOH, it's highly meaningful to say that something is discoverable by someone with competent google-fu.

Strongly seconded. Hell, some "mainstream" scientists are working on big-money research projects that attempt to prove that there's a worldwide conspiracy to convince people that global warming exists so as to make money off of it. Either they're all sell-outs, which seems very unlikely, or at least some of them actually disagree with some other mainstream scientists, who see the "Is there real global warming?" question as obviously resolved long ago.
Agree with all this.
Intellectual Hipsters and Meta-Contrarianism

To be fair, I think that this triad is largely a function of the sort of society one lives in. It could be summarized as "submit to virtuous social orders, and seek to dominate non-virtuous ones if you have the ability to discern between them."

I think it's more along the lines of: people in the third stage have acquired and digested all the low-hanging and medium-hanging fruit that those in the second stage are struggling to acquire, so that advancing further is now really hard. So they now seek sex and money/power partly because acquiring those will (in the long run) help them further advance in the areas that they have currently put on hold. And partly because of course it's also nice to have them.
What are the optimal biases to overcome?

It's an alternative to having a well-calibrated bias towards conformity.

A cynical explanation for why rationalists worry about FAI

Actually, I think you get points for doing things that work, whether they are fun or not.

A cynical explanation for why rationalists worry about FAI

As far as I can tell, SI long ago started avoiding that frame because the frame had deleterious effects, but if we wanted to excite anyone, it was ourselves, not other young people.

A cynical explanation for why rationalists worry about FAI

My actual take is that UFAI is actually a much larger threat than other existential risks, but also that working on FAI is fairly obviously the chosen path, not on EV grounds, but on the grounds of matching our skills and interests.
