All of Nick_Beckstead's Comments + Replies

Do Earths with slower economic growth have a better chance at FAI?

What this shows is that people are inconsistent in a certain way. If you ask them the same question in two different ways (packed vs. unpacked) you get different answers. Is there any indication of which is the better way to ask the question, or whether asking it some other way is better still? Without an answer to this question, it's unclear to me whether we should talk about an "unpacking fallacy" or a "failure to unpack fallacy".

Will the world's elites navigate the creation of AI just fine?

Could you say a bit about your audiobook selection process?

1 · lukeprog · 7y: When I was just starting out in September 2013, I realized that vanishingly few of the books I wanted to read were available as audiobooks, so it didn't make sense for me to search Audible for titles I wanted to read: the answer was basically always "no." So instead I browsed through the top 2000 best-selling unabridged non-fiction audiobooks on Audible, added a bunch of stuff to my wishlist, and then scrolled through the wishlist later and purchased the ones I most wanted to listen to. These days, I have a better sense of what kind of books have a good chance of being recorded as audiobooks, so I sometimes do search for specific titles on Audible.

Some books that I really wanted to listen to are available in ebook but not audiobook, so I used this process [http://lesswrong.com/lw/hlc/will_the_worlds_elites_navigate_the_creation_of/a3q5] to turn them into audiobooks. That only barely works, sometimes. I have to play text-to-speech audiobooks at a lower speed to understand them, and it's harder for my brain to stay engaged as I'm listening, especially when I'm tired. I might give up on that process, I'm not sure.

Most but not all of the books are selected because I expect them to have lots of case studies in "how the world works," specifically with regard to policy-making, power relations, scientific research, and technological development. This is definitely true for e.g. Command and Control, The Quest, Wired for War, Life at the Speed of Light, Enemies, The Making of the Atomic Bomb, Chaos, Legacy of Ashes, Coal, The Secret Sentry, Dirty Wars, The Way of the Knife, The Big Short, Worst-Case Scenarios, The Information, and The Idea Factory.
Common sense as a prior

I'd say Hochschild's stuff isn't that empirical. As far as I can tell, she just gives examples of cases where (she thinks) people do follow elite opinion and should, don't follow it but should, do follow it but shouldn't, and don't follow it and shouldn't. There's nothing systematic about it.

Hochschild's own answer to my question is:

When should citizens reject elite opinion leadership? In principle, the answer is easy: the mass public should join the elite consensus when leaders’ assertions are empirically supported and morally justified. Conversely,

... (read more)
1 · lukeprog · 7y: Right, the empirics is: cataloguing on which issues the public does and doesn't agree with elite opinion, what elite opinion is, etc. Anyway, I figured it might be an entry point into some useful data sources.
A critique of effective altruism

What I mostly remember from that conversation was disagreeing about the likely consequences of "actually trying". You thought elite people in the EA cluster who actually tried had high probability of much more extreme achievements than I did. I see how that fits into this post, but I didn't know you had loads of other criticism about EA, and I probably would have had a pretty different conversation with you if I did.

Fair enough regarding how you want to spend your time. I think you're mistaken about how open I am to changing my mind about things ... (read more)

Common sense as a prior

If one usually reliable algorithm disagrees strongly with others, yes, short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc, not by dropping it, and more importantly, such deviations should be investigated with some urgency.

I think we agree about this much more than we disagree. After writing this post, I had a conversation with Anna Salamon in which she suggested that--as you suggest--exploring such disagreements with some urgency was probably more import... (read more)

A critique of effective altruism

The main thing that I personally think we don't need as much of is donations to object-level charities (e.g. GiveWell's top picks). It's unclear to me how much this can be funged into more self-reflection for the general person, but for instance I am sacrificing potential donations right now in order to write this post and respond to criticism...

I am substantially less enthusiastic about donations to object-level charities (for their own sake) than I am about opportunities for us to learn and expand our influence. So I'm pretty on board here.

I think "

... (read more)
A critique of effective altruism

I'd like to see more critical discussion of effective altruism of the type in this post. I particularly enjoyed the idea of "pretending to actually try." People doing sloppy thinking and then making up EA-sounding justifications for their actions is a big issue.

As Will MacAskill said in a Facebook comment, I do think that a lot of smart people in the EA movement are aware of the issues you're bringing up and have chosen to focus on other things. Big picture, I find claims like "your thing has problem X so you need to spend more resources on f... (read more)

5 · benkuhn · 7y: The main thing that I personally think we don't need as much of is donations to object-level charities (e.g. GiveWell's top picks). It's unclear to me how much this can be funged into more self-reflection for the general person, but for instance I am sacrificing potential donations right now in order to write this post and respond to criticism...

I think in general, a case that "X is bad so we need more of fixing X" without specific recommendations can also be useful in that it leaves the resource allocation up to individual people. For instance, you decided that your current plans are better than spending more time on social-movement introspection, but (hopefully) not everyone who reads this post will come to the same conclusion.

I think "writing blogposts criticizing mistakes that people in the EA community commonly make" is a moderate strawman of what I'd actually like to see, in that it gets us closer to being a successful movement but clearly won't be sufficient on its own.

Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can't come to nontrivial conclusions already, the kind of facts we're likely to find won't help very much.
A critique of effective altruism

I would love to hear about your qualms with the EA movement if you ever want to have a conversation about the issue.

Edited: When I first read this, I thought you were saying you hadn't brought these problems up with me, but re-reading it, it sounds like you tried to raise these criticisms with me. This post has a Vassar-y feel to it, but this is mostly criticism I wouldn't say I'd heard from you, and I would have guessed your criticisms would be different. In any case, I would still be interested in hearing more from you about your criticisms of EA.

I spent many hours explaining a sub-set of these criticisms to you in Dolores Park soon after we first met, but it strongly seemed to me that that time was wasted. I appreciate that you want to be lawful in your approach to reason, and thus to engage with disagreement, but my impression was that you do not actually engage with disagreement, you merely want to engage with disagreement. Basically, I felt that you believe in your belief in rational inquiry, but that you don't actually believe in rational inquiry.

I may, of course, be wrong, and I'm not sure... (read more)

Review of studies says you can decrease motivated cognition through self-affirmation

I agree that this would be good, but didn't think it was worthwhile for me to go through the extra effort in this case. But I did think it was worthwhile to share what I had already found. I think I was very clear about how closely this had been vetted (which is to say, extremely little).

0 · John_Maxwell · 8y: Yep, thanks for sharing this!
Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future

What if we assume Period Independence except for exact repetitions, where the value of extra repetitions eventually goes to zero? Perhaps this could be a way to be "timid" while making the downsides of "timidity" seem not so bad or even reasonable? For example in section 6.3.2, such a person would only choose deal 1 over deal 2 if the years of happy lives offered in deal 1 are such that he would already have repeated all possible happy time periods so many times that he values more repetitions very little.

I think it would be interestin... (read more)

Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future

Also, it's not clear to me that strict Period Independence is a good thing. It seems reasonable to not value a time period as much if you knew it was an exact repetition of a previous time period. I wrote a post that's related to this.

I agree that Period Independence may break in the kind of case you describe, though I'm not sure. I don't think that the kind of case you are describing here is a strong consideration against using Period Independence in cases that don't involve exact repetition. I think your main example in the post is excellent.

2 · Wei_Dai · 8y: What if we assume Period Independence except for exact repetitions, where the value of extra repetitions eventually goes to zero? Perhaps this could be a way to be "timid" while making the downsides of "timidity" seem not so bad or even reasonable? For example in section 6.3.2, such a person would only choose deal 1 over deal 2 if the years of happy lives offered in deal 1 are such that he would already have repeated all possible happy time periods so many times that he values more repetitions very little.

BTW what do you think about my suggestion to do a sequence of blog posts based on your thesis? Or maybe you can at least do one post as a trial run?

Also as an unrelated comment, the font in your thesis seems to be such that it's pretty uncomfortable to read in Adobe Acrobat, unless I zoom in to make the text much larger than I usually have to. Not sure if it's something you can easily fix. If not, I can try to help if you email me the source of the PDF.
Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future

OK, I"ll ask Paul or Stewart next time I see them.

Does your proposal also violate #1 because the simplicity of an observer-situated-in-a-world is a holistic property of the observer-situated-in-a-world rather than a local one?

2 · Wei_Dai · 8y: Yes (assuming by #1 you mean Period Independence), but it's not clear to what extent. For example there are at least two kinds of programs that can output a human brain. A) simulate a world and output the object at some space-time location. B) simulate a world and scan for an object matching some criteria, then output such an object. If a time period gets repeated exactly, people's algorithmic probability from A gets doubled, but algorithmic probability from B doesn't. I'm not sure at this point whether A dominates B or vice versa.

Also, it's not clear to me that strict Period Independence is a good thing. It seems reasonable to not value a time period as much if you knew it was an exact repetition of a previous time period. I wrote a post [http://lesswrong.com/lw/1hg/the_moral_status_of_independent_identical_copies/] that's related to this.
Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future

That aside, I do have an object-level comment. Nick states (in section 6.3.1) that Period Independence is incompatible with a bounded utility function, but I think that's wrong. Consider a total utilitarian who exponentially discounts each person-stage according to their distance from some chosen space-time event. Then the utility function is both bounded (assuming the undiscounted utility for each person-stage is bounded) and satisfies Period Independence.

I agree with this. I think I was implicitly assuming some additional premises, particularly Temporal... (read more)
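
For concreteness, here is one way to write down the construction described above (the notation is mine, not from the thesis or the exchange): take per-period utilities $u_t$ with $|u_t| \le u_{\max}$ and a discount factor $0 < \delta < 1$ applied to each period's distance from a fixed reference time $t_0$:

$$U = \sum_{t=-\infty}^{\infty} \delta^{|t - t_0|}\, u_t, \qquad |U| \le u_{\max} \sum_{t=-\infty}^{\infty} \delta^{|t - t_0|} = u_{\max}\,\frac{1+\delta}{1-\delta}.$$

So $U$ is bounded, and since it is a sum of terms each depending only on its own period, what happens in one period never changes the contribution of any other (Period Independence). The cost is that periods near $t_0$ count for more, i.e., the construction gives up temporal impartiality.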

1 · Wei_Dai · 8y: My proposal violates Temporal Impartiality.

Yes, sort of. When I said "algorithmic probability" I was referring to the technical concept divorced from standard connotations of "probability", but my idea is also somewhat related to the idea of probability as a property of centered possible worlds. I guess there's a bit of an inferential gap between us that makes it hard for me to quickly explain the idea to you. From my perspective, it would be much easier if you were already familiar with Algorithmic Information Theory and my UDT [http://wiki.lesswrong.com/wiki/Updateless_decision_theory] ideas, but I'm not sure if you want to read up on all that.

Do you see Paul Christiano often? If so, he can probably explain it to you in person fairly quickly. Or, since you're at FHI, Stuart Armstrong might also know enough about my ideas to explain them to you.
Common sense as a prior

Would be interested to know more about why you think this is "fantastically wrong" and what you think we should do instead. The question the post is trying to answer is, "In practical terms, how should we take account of the distribution of opinion and epistemic standards in the world?" I would like to hear your answer to this question. E.g., should we all just follow the standards that come naturally to us? Should certain people do this? Should we follow the standards of some more narrowly defined group of people? Or some more narrow s... (read more)

2 · MichaelVassar · 7y: I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards to at least some people. Hopefully, some such groups will congeal into effective trade networks.

If one usually reliable algorithm disagrees strongly with others, yes, short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc, not by dropping it, and more importantly, such deviations should be investigated with some urgency.
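
A minimal sketch of the pooling idea in the comment above: combining probability estimates with geometric or harmonic means rather than dropping the dissenting estimate. The function names and example numbers are mine, not from the thread.

```python
import numpy as np

def geometric_pool(probs, weights=None):
    """Logarithmic opinion pool: weighted geometric mean of p and of (1 - p), renormalized."""
    probs = np.asarray(probs, dtype=float)
    w = np.ones_like(probs) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    yes = np.prod(probs ** w)           # geometric mean of the "yes" probabilities
    no = np.prod((1.0 - probs) ** w)    # geometric mean of the "no" probabilities
    return yes / (yes + no)             # renormalize so the result is a probability

def harmonic_pool(probs, weights=None):
    """Weighted harmonic mean: a single very low estimate pulls the pooled value down hard."""
    probs = np.asarray(probs, dtype=float)
    w = np.ones_like(probs) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w / probs)

# Three usually reliable estimators roughly agree; a fourth disagrees strongly.
estimates = [0.85, 0.80, 0.90, 0.05]
print(round(geometric_pool(estimates), 3))   # dissent discounted but not ignored
print(round(harmonic_pool(estimates), 3))    # dissent weighs much more heavily
print(round(float(np.mean(estimates)), 3))   # plain average, for comparison
```

The point is only that different pooling rules down-weight an outlying estimate to different degrees while still letting it register, which is the alternative to dropping it outright.
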
My daily reflection routine

The answers over the last 6 weeks have not been very repetitive at all. I'm not sure why this is exactly, since when I was much younger and would pray daily the answers were highly repetitive. It may have something to do with greater maturity and a greater appreciation of the purpose of the activity.

My daily reflection routine

I think of the gratitude list as things that stood out as either among the best parts of the day or as unusually good (for you personally). And mistakes go the opposite way.

Common sense as a prior

That sounds reeeeaaally suspicious in terms of potentially post-facto assignments. (Though defeasibly so - I can totally imagine a case being made for, "Yes, this really was generally visible to the person on the street at the time without benefit of hindsight.")

This isn't something I've looked into closely, though from looking at it for a few minutes I think it is something I would like to look into more. Anyway, on the Wikipedia page on diffusion of innovation:

This is the second fastest category of individuals who adopt an innovation. Thes

... (read more)
Common sense as a prior

How would this apply to social issues do you think? It seems that this is a poor way to be on the front of social change? If this strategy was widely applied, would we ever have seen the 15th and 19th amendments to the Constitution here in the US?

My impression is that the most trustworthy people are more likely to be at the front of good social movements than the general public, so that if people generally adopted the framework, many of the promising social movements would progress more quickly than they actually did. I am not sufficiently aware of the ... (read more)

2 · Eliezer Yudkowsky · 8y: That sounds reeeeaaally suspicious in terms of potentially post-facto assignments. (Though defeasibly so - I can totally imagine a case being made for, "Yes, this really was generally visible to the person on the street at the time without benefit of hindsight.")

Can you use elite common sense to generate a near-term testable prediction that would sound bold relative to my probability assignments or LW generally? The last obvious point on which you could have thus been victorious would have been my skepticism of the now-confirmed Higgs boson, and Holden is apparently impressed by the retrospective applicability of this heuristic to predict that interventions much better than the Gates Foundation's best interventions would not be found. But still, an advance prediction would be pretty cool.
Common sense as a prior

I think other people are significantly more responsive to values disagreements than Brian is, and that this suggests they are significantly more open to the possibility that their idiosyncratic personal values judgments are mistaken. You can get a sense of how unusual Brian's perspectives are by examining his website, where his discussions of negative utilitarianism and insect suffering stand out.

1 · Lumifer · 8y: That's a pretty meaningless statement without specifying which values. How responsive "other people" would be to value disagreements about child pornography, for example, do you think?
Common sense as a prior

I'm not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it's not dependent on a controversial theory of meta-ethics. It's just that I intuitively don't like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.

Whether you want to call it a theory of meta-ethics or not, and whether it is a factual error or not, you have an unusual approach to dealing with moral questions that places an unusual amount of emphasis on... (read more)

0 · Brian_Tomasik · 8y: Or, most likely of all, it's because I don't care to justify it. If you want to call "not wanting to justify a stance" a bias or blind spot, I'm ok with that. :)
2 · Lumifer · 8y: Why do you think it's unusual? I would strongly suspect that the majority of people have never examined their moral beliefs carefully and so their moral responses are "intuitive" -- they go by gut feeling, basically. I think that's the normal mode in which most of humanity operates most of the time.
Common sense as a prior

Here I would say, "Screw ethics and meta-ethics. All I'm saying is I want to do what I feel like doing, even if you and other elites don't agree with it."

I think that there is a genuine concern that many people have when they try to ask ethical questions and discuss them with others, and that this process can lead to doing better in terms of that concern. I am speaking vaguely because, as I said earlier, I don't think that I or others really understand what is going on. This has been an important process for many of the people I know who are t... (read more)

1 · Brian_Tomasik · 8y: I'm not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it's not dependent on a controversial theory of meta-ethics. It's just that I intuitively don't like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance. On some questions, my emotions are too strong, and it feels like it would be bad to budge my current stance.

Fair enough. :) I'll buy that way of putting it. Anyway, if I were really as unreasonable as it sounds, I wouldn't be talking here and putting at risk the preservation of my current goals.
Common sense as a prior

I don't have a lot to add to my comments on religious authorities, apart from what I said in the post and what I said in response to Luke's Muslim theology case here.

One thing I'd say is that many of the Christian moral teachings that are most celebrated are actually pretty good, though I'd admit that many others are not. Examples of good ones include:

  • Love your neighbor as yourself (I'd translate this as "treat others as you would like to be treated")

  • Focus on identifying and managing your own personal weaknesses rather than criticizing others

... (read more)
0 · Brian_Tomasik · 8y: Good points. Of course, depending on the Pope in question, you also have teachings like the sinfulness of homosexuality, the evil of birth control, and the righteousness of God in torturing nonbelievers forever. Many people place more weight on these beliefs than they do on those of liberal/scientific elites.

It seems like you're going to get clusters of authority sentiment. Educated people will place high authority on impressive intellectuals, business people, etc. Conservative religious people will tend to place high authority on church leaders, religious founders, etc. and very low authority on scientists, at least when it comes to metaphysical questions rather than what medicine to take for an ailment. (Though there are plenty of skeptics of traditional medicine too.)

What makes the world of Catholic elites different from the world of scientific elites? I mean, some people think the Pope is a stronger authority on God [http://en.wikipedia.org/wiki/Papal_infallibility] than anyone thinks the smartest scientist is about physics.
Common sense as a prior

It's very common for people to say, "Predictions are hard, especially about the future, so let's focus on the short term where we can be more confident we're at least making a small positive impact."

If by short-term you mean "what happens in the next 100 years or so," I think there is something to this idea, even for people who care primarily about very long-term considerations. I suspect it is true that the expected value of very long-run outcomes is primarily dominated by totally unforeseeable weird stuff that could happen in the d... (read more)

Common sense as a prior

I think it is evidence that thinking about it carefully wouldn't advance their current concerns, so they don't bother or use the thinking/talking for other purposes. Here are some possibilities that come to mind:

  • they might not care about the outcomes that you think are decision-relevant and associated with your claim

  • they may care about the outcomes, but your claim may not actually be decision-relevant if you were to find out the truth about the claim

  • it may not be a claim which, if thought about carefully, would contribute enough additional evidence t

... (read more)
Common sense as a prior

However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like "the Overwhelming Importance of Shaping the Far Future." My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don't have linear utility functions and/or don't like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?

I disagree with the claim that the argument for shaping the far future is a Pascalian wag... (read more)

2 · Brian_Tomasik · 8y: I thought some of our disagreement might stem from misunderstanding what each other meant, and that seems to have been true here.

Even if the probability of humanity surviving a long time is large, there remain entropy in our influence and butterfly effects, such that it seems extremely unlikely that what we do now will actually make a pivotal difference in the long term, and we could easily be getting the sign wrong. This makes the probabilities small enough to seem Pascalian for most people. It's very common for people to say, "Predictions are hard, especially about the future, so let's focus on the short term where we can be more confident we're at least making a small positive impact."
Common sense as a prior

My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One's choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn't much concern me.

I'm a bit flabbergasted by the confidence with which you speak about this issue. In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on. As f... (read more)

1 · Brian_Tomasik · 8y: I think it's fair to say that concepts like libertarian free will and dualism in philosophy of mind are either incoherent or extremely implausible, though maybe the elite-common-sense prior would make us less certain of that than most on LessWrong seem to be.

Yes, I think most of the confusion on this subject comes from disputing definitions. Luke says: "Within 20 seconds of arguing about the definition of 'desire', someone will say, 'Screw it. Taboo 'desire' so we can argue about facts and anticipations, not definitions.'" Here I would say, "Screw ethics and meta-ethics. All I'm saying is I want to do what I feel like doing, even if you and other elites don't agree with it."

Sure, but this is not a factual error, just an error in being a reasonable person or something. :)

I should point out that "doing what I feel like doing" doesn't necessarily mean running roughshod over other people's values. I think it's generally better to seek compromise [http://utilitarian-essays.com/compromise.html] and remain friendly to those with whom you want to cooperate. It's just that this is an instrumental concession, not because I actually agree with the values that I'm willing to be nice to.
Common sense as a prior

Yes, thank you for catching that.

Common sense as a prior

I agree that in principle, you don't want some discontinuous distinction between elites and non-elites. I also agree with your points (a) - (c). Something like PageRank seems good to me, though of course I would want to be tentative about the details.

In practice, my suspicion is that most of what's relevant here comes from the very elite people's thinking, so that not much is lost by just focusing on their opinions. But I hold this view pretty tentatively. I presented the ideas the way I did partly because of this hunch and partly for ease of exposition.

1 · Brian_Tomasik · 8y: Nick, what do you do about the Pope getting extremely high PageRank by your measure? You could say that most people who trust his judgment aren't elites themselves, but some certainly are (e.g., heads of state, CEOs, celebrities). Every president in US history has given very high credence to the moral teachings of Jesus, and some have even given high credence to his factual teachings. Hitler had very high PageRank during the 1930s, though I guess he doesn't now, and you could say that any algorithm makes mistakes some of the time.

ETA: I guess you did say in your post that we should be less reliant on elite common sense in areas like religion and politics where rationality is less prized. But I feel like a similar thing could be said to some extent of debates about moral conclusions. The cleanest area of application for elite common-sense is with respect to verifiable factual claims.
Common sense as a prior

Insofar as my own actions are atypical, I intend for it to result from atypical moral beliefs rather than atypical factual beliefs. (If you can think of instances of clearly atypical factual beliefs on my part, let me know.) Of course, you could claim, elite common sense should apply also as a prior to what my own moral beliefs actually are, given the fallibility of introspection. This is true, but its importance depends on how abstractly I view my own moral values. If I ask questions about what an extrapolated Brian would think upon learning more, having

... (read more)
0 · Brian_Tomasik · 8y: My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One's choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn't much concern me.

Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don't think there's something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this. Even with moral realists, I've never heard someone argue that it's a factual mistake not to care about moral truth (what could that even mean?), just that it would be a moral mistake or an error of reasonableness or something like that.
Common sense as a prior

It can be murky to infer what people believe based on actions or commitments, because this mixes two quantities: Probabilities and values. For example, the reason most elites don't seem to take seriously efforts like shaping trajectories for strong AI is not because they think the probabilities of making a difference are astronomically small but because they don't bite Pascalian bullets. Their utility functions are not linear. If your utility function is linear, this is a reason that your actions (if not your beliefs) will diverge from those of most elite

... (read more)
2 · Brian_Tomasik · 8y: As far as the GiveWell point, I meant "proper Pascalian bullets" where the probabilities are computed after constraining by some reasonable priors (keeping in mind that a normal distribution with mean 0 and variance 1 is not a reasonable prior in general). Low probability, yes, but not necessarily low probability*impact.

As I mentioned in another comment [http://lesswrong.com/lw/iao/common_sense_as_a_prior/9kjj], I think most Pascalian wagers that one comes across are fallacious because they miss even bigger Pascalian wagers that should be pursued instead. However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like "the Overwhelming Importance of Shaping the Far Future." My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don't have linear utility functions and/or don't like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?

This highlights a meta-point in this discussion: Often what's under debate here is not the framework but instead claims about (1) whether elites would or would not agree with a given position upon hearing it defended and (2) whether their sustained disagreement even after hearing it defended results from divergent facts, values, or methodologies (e.g., not being consequentialist). It can take time to assess these, so in the short term, disagreements about what elites would come to believe are a main bottleneck for using elite common sense to reach conclusions.
Common sense as a prior

I think the focus on only intellectual elites has unclear grounding. Is the reason because elites think most seriously about the questions that you care about most? On a question of which kind of truck was most suitable for garbage collection, you would defer to a different class of people. In such a case, I guess you would regard them as the (question-dependent) "elites."

This is a question which it seems I wasn't sufficiently clear about. I count someone as an "expert on X" roughly when they are someone that a broad coalition of tru... (read more)

6 · Brian_Tomasik · 8y: Cool -- that makes sense. In principle, would you still count everyone with some (possibly very small) weight, the way PageRank does? (Or maybe negative weight in a few cases.) A binary separation between elites and non-elites seems hacky, though of course, it may in fact be best to do so in practice to make the analysis tractable. Cutting out part of the sample also leads to a biased estimator, but maybe that's not such a big deal in most cases if the weight on the remaining part was small anyway. You could also give different weight among the elites. Basically, elites vs. non-elites is a binary approximation of a more continuous weighting distribution.

Anyway, it may be misleading to think of this as purely a weighted sample of opinion, because (a) you want to reduce the weight of beliefs that are copied from each other and (b) you may want to harmonize the beliefs in a way that's different from blind averaging. Also, as you suggested, (c) you may want to dampen outliers to avoid pulling the average too much toward the outlier [http://xkcd.com/690/].
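
A toy sketch of the continuous-weighting idea being discussed, i.e., PageRank-style weights over a "who trusts whose judgment" graph instead of a binary elite/non-elite cut. The trust matrix, damping value, and names here are hypothetical, purely for illustration.

```python
import numpy as np

def trust_rank(trust_matrix, damping=0.85, iterations=200):
    """Power-iteration PageRank over a trust graph.

    trust_matrix[i, j] > 0 means person i places some trust in person j's judgment.
    Returns a continuous weight for every person; nobody is cut out entirely.
    """
    n = trust_matrix.shape[0]
    row_sums = trust_matrix.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0            # people who trust no one contribute only via the uniform term
    transition = trust_matrix / row_sums     # normalize each person's outgoing trust
    weights = np.full(n, 1.0 / n)
    for _ in range(iterations):
        weights = (1.0 - damping) / n + damping * (weights @ transition)
    return weights / weights.sum()

# Hypothetical 4-person graph: 0 and 1 trust each other, 2 trusts 0 and 1, 3 trusts only 2.
trust = np.array([
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
], dtype=float)
print(trust_rank(trust).round(3))  # persons 0 and 1 get most weight; 3 gets a small but nonzero share
```

One could then layer on the adjustments mentioned in the comment, such as shrinking the weight of opinions that are copies of one another or dampening outliers before averaging, but those are judgment calls rather than part of PageRank itself.
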
Common sense as a prior

Great. It sounds like we may reasonably be on the same page at this point.

To reiterate and clarify, you can pretty much make the standards as high as you like as long as: (1) you have a good enough grip on how the elite class thinks, (2) you are using clear indicators of trustworthiness that many people would accept, and (3) you make a good-faith effort not to cherry pick and watch out for the No True Scotsman fallacy. The only major limitation on this I can think of is that there is some trade-off to be made between certain levels of diversity and independent... (read more)

Common sense as a prior

I think we don't disagree about whether elite common sense should defer to cryptography experts (I assume this is what Bruce Schneier is a stand-in for). Simplifying a bit, we are disagreeing about the much more subtle question of whether, given that elite common sense should defer to cryptography experts, in a situation where the current views of cryptographers are unknown, elite common sense recommends adopting the current views of cryptographers. I say elite common sense recommends adopting their views if you know them, but going with what e.g. the uppe... (read more)

Common sense as a prior

Perhaps I should have meant loop quantum gravity. I confess that I am speaking beyond my depth, and was just trying to give an example of a central dispute in current theoretical physics. That is the type of case where I would not like to lean heavily on my own perspective.

Common sense as a prior

Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that's just an obvious sort of reweighting you might try, though anyone who's had experience with machine learning knows that most clever reweightings you try don't work. To someone else it might be cherry-picking of gullible physicists, and say, "You have violated Beckstead's rules!"

Just to be clear: I would count this as violating my rules because you haven't used a clear indicator of trustworthiness that many people would ... (read more)

Common sense as a prior

Could you maybe just tell me what you think my framework is supposed to imply about Wei Dai's case, if not what I said it implies? To be clear: I say it implies that the executives should have used an impartial combination of the epistemic standards used by the upper crust of Ivy League graduates, and that this gives little weight to the cryptographers because, though the cryptographers are included, they are a relatively small portion of all people included. So I think my framework straightforwardly doesn't say that people should be relying on info they c... (read more)

6 · Eliezer Yudkowsky · 8y: So in my case I would consider elite common sense about cryptography to be "Ask Bruce Schneier", who might or might not have declined to talk to those companies or consult with them. That's much narrower than trying to poll an upper crust of Ivy League graduates, from whom I would not expect a particularly good answer. If Bruce Schneier didn't answer I would email Dad and ask him for the name of a trusted cryptographer who was friends with the Yudkowsky family, and separately I would email Jolly and ask him what he thought or who to talk to.

But then if Scott Aaronson, who isn't a cryptographer, blogged about the issue saying the cryptographers were being silly and even he could see that, I would either mark it as unknown or use my own judgment to try and figure out who to trust. If I couldn't follow the object-level arguments and there was no blatantly obvious meta-level difference, I'd mark it unresolvable-for-now (and plan as if both alternatives had substantial probability). If I could follow the object-level arguments and there was a substantial difference of strength which I perceived, I wouldn't hesitate to pick sides based on it, regardless of the eliteness of the people who'd taken the opposite side, so long as there were some elites on my own side who seemed to think that yes, it was that obvious. I've been in that epistemic position lots of times.

I'm honestly not sure about what your version is. I certainly don't get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case. If anything I think your rules would endorse my 'Bruce Schneier' output more strongly than the 10%, at least as I briefly read them.
Common sense as a prior

It seems the "No True Elite" fallacy would involve:

(1) Elite common sense seeming to say that I should believe X because on my definition of "elites," elites generally believe X.

(2) X being an embarrassing thing to believe.

(3) Me replying that someone who believed X wouldn't count as an "elite," but doing so in a way that couldn't be justified by my framework.

In this example I am actually saying we should defer to the cryptographers if we know their opinions, but that they don't get to count as part of elite common sense immed... (read more)

9 · Eliezer Yudkowsky · 8y: There's always reasons why the scotsman isn't a Scotsman. What I'm worried about is more the case where these types of considerations are selected post-facto and seem perfectly reasonable since they produce the correct answer there, but then in a new case, someone cries 'cherry-picking' when similar reasoning is applied.

Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that's just an obvious sort of reweighting you might try, though anyone who's had experience with machine learning knows that most clever reweightings you try don't work. To someone else it might be cherry-picking of gullible physicists, and say, "You have violated Beckstead's rules!"

To me it might be obvious that AI 'elites' are exceedingly poorly motivated to come up with good answers about FAI. Someone else might think that the world being at stake would make them more motivated. (Though here it seems to me that this crosses the line into blatant empirical falsity about how human beings actually think, and brief acquaintance with AI people talking about the problem ought to confirm this, except that most such evidence seems to be discarded because 'Oh, they're not true elites' or 'Even though it's completely predictable that we're going to run into this problem later, it's not a warning sign for them to drop their epistemical trousers right now because they have arrived at the judgment that AI is far away via some line of reasoning which is itself reliable and will update accordingly as doom approaches, suddenly causing them to raise their epistemic standards again'. But now I'm diverging into a separate issue.)

I'd be happy with advice along the lines of, "First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update." I'm much more worried about alleged rules for de
Common sense as a prior

I have an overall sense that there are a lot of governments that are pretty good and that people are getting better at setting up governments over time. The question is very vague and hard to answer, so I am not going to attempt a detailed one. Perhaps you could give it a shot if you're interested.

Common sense as a prior

Re: (1), I can't do too much better than what I already wrote under "How should we assign weight to different groups of people?" I'd say you can go about as elite as you want if you are good at telling how the relevant people think and you aren't cherry-picking or using the "No True Scotsman" fallacy. I picked this number as something I felt a lot of people reading this blog would be in touch with and wouldn't be too narrow.

Re: (2), this is something I hope to discuss at greater length later on. I won't try to justify these claims now, ... (read more)

1 · tog · 8y: Great, thanks. The unjustified examples for (2) help. I'm interested in how the appropriate elite would vary in size and in nature for particular subjects. For instance, I imagine I might place a little more weight on certain groups of experts in, say, economics and medical effectiveness than the top 10% of Ivy League grads would. I haven't thought about this a great deal, so might well be being irrational in this.
8 · Eliezer Yudkowsky · 8y: If your advice is "Go ahead and be picky about who you consider elites, but make a good-faith effort not to cherry-pick them and watch out for the No True Scotsman fallacy" then I may merely agree with you here! It's when people say, "You're not allowed to do that because outside view!" that I start to worry, and I may have committed a fallacy of believing that you were saying that because I've heard other people argue it so often, for which I apologize.
Common sense as a prior

Sorry, limited experience with LW posts and limited HTML experience. 5 minutes of Google didn't help. Can you link or explain? Sorry if I'm being dense.

5 · CronoDAS · 8y: Sorry, I should have said "summary break" - that's what the editor calls it. It's what puts the "continue reading" link in the post.
Common sense as a prior

I feel this doesn't address the "low stakes" issues I brought up, or that this may not even be the physicists' area of competence. Maybe you'd get a different outcome if the fate of the world depended on this issue, as you believe it does with AI.

I also wonder if this analysis leads to wrong historical predictions. E.g., why doesn't this reasoning suggest that the US government would totally botch the constitution? That requires philosophical reasoning and reality doesn't immediately call you out on being wrong. And the people setting things up d... (read more)

0 · Document · 8y: What different evidence would you expect to observe in a world where amateur attempts to set up systems of government were usually botched? (Edit: reworded for (hopefully) clarity.)
Common sense as a prior

I agree, but a priori I suspect that philosophers of physics and others without heavy subject matter knowledge of quantum mechanics have leaned too heavily on this. Spending one's life thinking about something can result in subconscious acquisition of implicit knowledge of things that are obliquely related. People who haven't had this experience may be at a disadvantage.

But note that philosophers of physics sometimes make whole careers thinking about this, and they are among the most high-caliber philosophers. They may be at an advantage in terms of this c... (read more)

5 · JonahS · 8y: What I'm anchoring on here is the situation in the field of philosophy of math, where lack of experience with the practice of math seriously undercuts most philosophers' ability to do it well. There are exceptions, for example I consider Imre Lakatos [http://en.wikipedia.org/wiki/Imre_Lakatos] to be one. Maybe the situation is different in philosophy of physics.
Common sense as a prior

I haven't fully put together my thoughts on this, but it seems like a bad test to "break someone's trust in a sane world" for a number of reasons:

  • this is a case where all the views are pretty much empirically indistinguishable, so it isn't an area where physicists really care all that much

  • since the views are empirically indistinguishable, it is probably a low-stakes question, so the argument doesn't transfer well to breaking our trust in a sane world in high-stakes cases; it makes sense to assume people would apply more rationality in cases w

... (read more)
1 · [anonymous] · 8y: You meant “is not really”?

From my perspective, the main point is that if you'd expect AI elites to handle FAI competently, you would expect physics elites to handle MWI competently - the risk factors in the former case are even greater. Requires some philosophical reasoning? Check. Reality does not immediately call you out on being wrong? Check. The AI problem is harder than MWI and it has additional risk factors on top of that, like losing your chance at tenure if you decide that your research actually needs to slow down. Any elite incompetence beyond the demonstrated level in MWI doesn't really matter much to me, since we're already way under the 'pass' threshold for FAI.

Common sense as a prior

I'm not sure I understand the objection/question, but I'll respond to the objection/question I think it is.

Am I changing the procedure to avoid a counterexample from Wei Dai?

I think the answer is No. If you look at the section titled "An outline of the framework and some guidelines for applying it effectively" you'll see that I say you should try to use a prior that corresponds to an impartial combination of what the people who are most trustworthy in general think. I say a practical approximation of being an "expert" is being someone e... (read more)

Common sense as a prior

The intrinsic interest of the question of interpretation of quantum mechanics

The question of what quantum mechanics means has been considered one of the universe’s great mysteries. As such, people interested in physics have been highly motivated to understand it. So I think that the question is privileged relative to other questions that physicists would have opinions on — it’s not an arbitrary question outside of the domain of their research accomplishments.

My understanding is that the interpretation of QM is (1) not regarded as a very central question... (read more)

A minor quibble.

quantum gravity vs. string theory

I believe you are using bad terminology. 'Quantum gravity' refers to any attempt to reconcile quantum mechanics and general relativity, and string theory is one such theory (as well as a theory of everything). Perhaps you are referring to loop quantum gravity, or more broadly, to any theory of quantum gravity other than string theory?

2 · JonahS · 8y: I agree if we're talking about the median theoretical physicist at a top 5 school, but when one gets further toward the top of the hierarchy, one starts to see a high density of people who are all-around intellectually curious and who explore natural questions that they come across independently of whether they're part of their official research.

I agree, but a priori I suspect that philosophers of physics and others without heavy subject matter knowledge of quantum mechanics have leaned too heavily on this. Spending one's life thinking about something can result in subconscious acquisition of implicit knowledge of things that are obliquely related. People who haven't had this experience may be at a disadvantage.

I actually think that it's possible for somebody without subject matter knowledge to rationally develop priors that are substantially different from expert consensus here. One can do this by consulting physicists who visibly have high epistemic rationality outside of physics, by examining sociological factors that may have led to the status quo, and by watching physicists who disagree debate each other and see which of the points they respond to and which ones they don't.

Can you give a reference?