It happens every now and then that someone encounters some of my transhumanist-side beliefs—as opposed to my ideas having to do with human rationality—strange, exotic-sounding ideas like superintelligence and Friendly AI. And the one rejects them.
If the one is called upon to explain the rejection, not uncommonly the one says, “Why should I believe anything Yudkowsky says? He doesn’t have a PhD!”
And occasionally someone else, hearing, says, “Oh, you should get a PhD, so that people will listen to you.” Or this advice may even be offered by the same one who expressed disbelief, saying, “Come back when you have a PhD.”
Now, there are good and bad reasons to get a PhD. This is one of the bad ones.
There are many reasons why someone might actually have an initial adverse reaction to transhumanist theses. Most are matters of pattern recognition, rather than verbal thought: the thesis calls to mind an associated category like “strange weird idea” or “science fiction” or “end-of-the-world cult” or “overenthusiastic youth.”1 Immediately, at the speed of perception, the idea is rejected.
If someone afterward says, “Why not?” this launches a search for justification, but the search won’t necessarily hit on the true reason. By “true reason,” I don’t mean the best reason that could be offered. Rather, I mean whichever causes were decisive as a matter of historical fact, at the very first moment the rejection occurred.
Instead, the search for justification hits on the justifying-sounding fact, “This speaker does not have a PhD.” But I also don’t have a PhD when I talk about human rationality, so why is the same objection not raised there?
More to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say. Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.
They would say, “Why should I believe you? You’re just some guy with a PhD! There are lots of those. Come back when you’re well-known in your field and tenured at a major university.”
But do people actually believe arbitrary professors at Harvard who say weird things? Of course not.
If you’re saying things that sound wrong to a novice, as opposed to just rattling off magical-sounding technobabble about leptical quark braids in N + 2 dimensions; and if the hearer is a stranger, unfamiliar with you personally and unfamiliar with the subject matter of your field; then I suspect that the point at which the average person will actually start to grant credence overriding their initial impression, purely because of academic credentials, is somewhere around the Nobel Laureate level. If that. Roughly, you need whatever level of academic credential qualifies as “beyond the mundane.”
This is more or less what happened to Eric Drexler, as far as I can tell. He presented his vision of nanotechnology, and people said, “Where are the technical details?” or “Come back when you have a PhD!” And Eric Drexler spent six years writing up technical details and got his PhD under Marvin Minsky for doing it. And Nanosystems is a great book. But did the same people who said, “Come back when you have a PhD,” actually change their minds at all about molecular nanotechnology? Not so far as I ever heard.
This might be an important thing for young businesses and new-minted consultants to keep in mind—that what your failed prospects tell you is the reason for rejection may not make the real difference; and you should ponder that carefully before spending huge efforts. If the venture capitalist says, “If only your sales were growing a little faster!” or if the potential customer says, “It seems good, but you don’t have feature X,” that may not be the true rejection. Fixing it may, or may not, change anything.
And it would also be something to keep in mind during disagreements. Robin Hanson and I share a belief that two rationalists should not agree to disagree: they should not have common knowledge of epistemic disagreement unless something is very wrong.2
I suspect that, in general, if two rationalists set out to resolve a disagreement that persisted past the first exchange, they should expect to find that the true sources of the disagreement are either hard to communicate, or hard to expose. E.g.:
- Uncommon, but well-supported, scientific knowledge or math;
- Long inferential distances;
- Hard-to-verbalize intuitions, perhaps stemming from specific visualizations;
- Zeitgeists inherited from a profession (that may have good reason for it);
- Patterns perceptually recognized from experience;
- Sheer habits of thought;
- Emotional commitments to believing in a particular outcome;
- Fear that a past mistake could be disproved;
- Deep self-deception for the sake of pride or other personal benefits.
If the matter were one in which all the true rejections could be easily laid on the table, the disagreement would probably be so straightforward to resolve that it would never have lasted past the first meeting.
“Is this my true rejection?” is something that both disagreers should surely be asking themselves, to make things easier on the other person. However, attempts to directly, publicly psychoanalyze the other may cause the conversation to degenerate very fast, from what I’ve seen.
Still—“Is that your true rejection?” should be fair game for Disagreers to humbly ask, if there’s any productive way to pursue that sub-issue. Maybe the rule could be that you can openly ask, “Is that simple straightforward-sounding reason your true rejection, or does it come from intuition-X or professional-zeitgeist-Y ?” While the more embarrassing possibilities lower on the table are left to the Other’s conscience, as their own responsibility to handle.
1See “Science as Attire” in Map and Territory.
2See Hal Finney, “Agreeing to Agree,” Overcoming Bias (blog), 2006, http://www.overcomingbias.com/2006/12/agreeing_to_agr.html.
There need not be just one "true objection"; there can be many factors that together lead to an estimate. Whether you have a Ph.D., and whether folks with Ph.D.s have reviewed your claims, and what they say, can certainly be relevant. Also remember that you should care lots more about the opinions of experts who could build on and endorse your work, than about average Joe opinions. Very few things ever convince average folks of anything unusual; target a narrower audience.
Immediate association: pick-up artists know well that when a girl rejects you, she often doesn't know the true reason and has to deceive herself. You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that "rational agents must WIN", and have accumulated many cynical but useful insights about human mating behaviour.
Most transhumanist ideas fall under the category of "not even wrong." Drexler's Nanosystems is ignored because it's a work of "speculative engineering" that doesn't address any of the questions a chemist would pose (i.e., regarding synthesis). It's a non-event. It shows that you can make fancy molecular structures under certain computational models. SI is similar. What do you expect a scientist to say about SI? Sure, they can't disprove the notion, but there's nothing for them to discuss either. The transhumanist community has a tendency to argue for its positions along the lines of "you can't prove this isn't possible" which is completely uninteresting from a practical viewpoint.
If I were going to unpack "you should get a PhD" I'd say the intention is along the lines of: you should attempt to tackle something tractable before you start speculating on Big Ideas. If you had a PhD, maybe you'd be more cautious. If you had a PhD, maybe you'd be able to step outside the incestuous milieu of pop sci musings you find yourself trapped in. There's two things you get from a formal education: one is broad, you're exposed to a variety of subject matter t... (read more)
"There's two things you get from a formal education: one is broad, you're exposed to a variety of subject matter that you're unlikely to encounter as an autodidact;"
As someone who has a Ph.D., I have to disagree here. Most of my own breadth of knowledge has come from pursuing topics on my own initiative outside of the classroom, simply because they interested me or because they seemed likely to help me solve some problem I was working on. In fact, as a grad student, most of the things I needed to learn weren't being taught in any of the classes available to me.
The choice isn't between being an autodidact or getting a Ph.D.; I don't think you can really earn the latter unless you have the skills of the former.
Or a common factor caused both.
That sounds like it's less "Once you get a Ph.D., I'll believe you," than "Once you get a Ph.D., you'll stop believing that."
Of course, those aren't so different: if I expect that getting a Ph.D. would make one less likely to believe X, then believing X after getting a Ph.D. is a stronger signal than simply believing X.
Vladimir, I don't quite think that's the "narrower audience" Robin is talking about...
Robin, see the Post Scriptum. I would be willing to get a PhD thesis if it went by the old rules and the old meaning of "Prove you can make an original, significant contribution to human knowledge and that you've mastered an existing field", rather than, "This credential shows you have spent X number of years in a building." (This particular theory would be hard enough to write up that I may not get around to it if a PhD credential isn't at stake.)
See poke's comment above (which is so on the nose, it actually inspired me to register). Then consider the following.
You will never get a PhD in the manner you propose, because that would fulfill only a part of the purpose of a PhD. The number of years spent in the building can be (and in too many cases is) wasted time - but if things are done in a proper manner, this time (which can be only three or four years) is critical.
For science PhDs specifically, the idea isn't to just come up with something novel and write it up. The idea is to go into the field with a question that you don't have an answer for, not yet. To find ways to collect data, and then to actually collect it. To build intricate, detailed models that answer your question precisely and completely, fitting all the available data. To design experiments specifically so you can test your models. And finally, to watch these models completely and utterly fail, nine times out of ten.
They won't fail because you missed something while building them. They will fail because you could only test them properly after making them. If you just built the model that fit everything, and then never tested it with specific experim... (read more)
As a current grad student myself, I could not disagree with poke's comment and this comment more. I work for a very respected adviser in computer vision from a very prestigious university. The reason I was accepted to this lab is because I am an NDSEG fellow. Many other qualified people lost out because my attendance here frees up a lot of my adviser's money for more students. In the meantime, I have a lot of pretty worthwhile ideas in physical vision and theories of semantic visual representations. However, I spend most of my days building Python GUI widgets for a group of collaborating sociologists. They collect really mundane data by annotating videos and no off-the-shelf stuff does quite what they want... so guess who gets to do that grunt work for a summer? Grad students.
You should really read the good Economist article The Disposable Academic. Graduate studentships are business acquisitions in all but the utmost theoretical fields. Advisers want the most non-linear things imaginable. For example, I am a pure math guy, with heavy emphasis on machine learning methods and probability theory. Yet my day job is seriously creative-energy-draining Python programming. The programmin... (read more)
Ok, so - I hear what you're saying, but a) that is not the way it's supposed to be, and b) you are missing the point.
First, a), even in the current academia, you are in a bad position. If I were you, I would switch mentors or programs ASAP.
I understand where you're coming from perfectly. I had a very similar experience: I spent three years in a failed PhD (the lab I was working in went under at the same time as the department I was in), and I ended up getting a MS instead. But even in that position, which was all tedious gruntwork, I understood the hypothesis and had some input. I switched to a different field, and a different mentor, where most of my work was still tedious, but it was driven by ideas I came to while working with my adviser.
If your position is, as it seems to be, even worse - that you have NO input whatsoever, and are purely cheap labor - then you should switch mentors immediately. If you don't, you might finish your PhD with a great deal of bitterness, but it is much more likely that you will simply burn out and drop out.
Which brings me to b). As I said above, it would be pointless for Eliezer to go to grad school now. Even at best, it contains a lot of tedious, repetitive work. But the essential point stands: in a poorly constrained area such as transhumanism, grand ideas are not enough. That is where PhD does have a function, and does have a reason.
Actually, my mentor is one of the nicest guys around and is a good manager, offers good advice, and has a consistent record of producing successful students. It's just that almost no grad student gets to have real input in what they are doing. If you do have that, consider yourself lucky, because the dozens of grad students that I know aren't in a position like that. I just had a meeting today where my adviser talked to me about having to balance my time between "whatever needs doing" (for the utility of our whole research group rather than just my own dissertation) and doing my own reading/research. His idea (shared by many faculty members) is that for a few years at the front end of the PhD, you mostly do about 80% general utility work and infrastructure work, just to build experience, write code, get involved... then after you get into some publications a few years later, the roles switch and you shift to more like 80% writing and doing your own thing (research). The problem is that if you're a passionate student with good ideas, then that first few years of bullshit infrastructure work is a complete waste of time. The run-of-the-mill PhD student (who generally i... (read more)
Robin: Of course a PhD in "The Voodoo Sciences" isn't going to help convince anybody competent of much. I am actually more impressed with some of the fiction I vaguely remember you writing for Pournelle's "Endless Frontier" collections than a lot of what I've read recently here.
Poke: "formal education: one is broad, you're exposed to a variety of subject matter that you're unlikely to encounter as an autodidact"
I used to spend a lot of time around the Engineering Library at the University of Maryland, College Park before I mo... (read more)
Perhaps you are marginally ahead of your time Eliezer, and the young individuals that will flesh out the theory are still traipsing about in diapers. In which case, either being a billionaire or a PhD makes it more likely you can become their mentor. I'd do the former if you have a choice.
Can't do basic derivatives? Seriously?!? I'm for kicking the troll out. His bragging about mediocre mathematical accomplishments isn't informative or entertaining to us readers.
Yes, this point is key to the topic at hand, as well as to the problem of meaningful growth of any intelligent agent, regardless of its substrate and facility for (recursive) improvement. But in this particular forum, due to the particular biases which tend to predominate among those whose very nature tends to enforce relatively narrow (albeit deep) scope of interaction, the emphasis should be not on "will simply extend" but on "when a lack a... (read more)
Eliezer, I'm sure if you complete your friendly AI design, there will be multiple honorary PhDs to follow.
Sorry about the length of the post, there was just a lot to say.
I believe disagreements are easier to unpack if we stop presuming they are about difference in belief. Posts like this seem to confirm my own experience that the strongest factor in convincing people of something is not any notion of truth or plausibility but whether there are common allegiances with the other side. This seems to explain a number of puzzles of disagreement, including: (list incomplete to save space)
... (read more)
- Why do people who aren't sure about Eliezer's posts about physics/comp science/b
I have spent years in the Amazon Basin perfecting the art of run-on sentences and hubris it helps remind others of my shining intellect it also helps me find attractive women who love the smell of rich leather furnishings and old books.
Between bedding supermodels a new one each night, I have developed a scientific thesis that supersedes your talk of Solomonoff and Kolmogorov and any other Russian name you can throw at me. Here are a random snippet of conclusions a supposedly intelligent person will arrive having been graced by my mathematical superpowers:
I can off the tip of my rather distinguished salt-and-pepper beard name at least 108 other conclusions that would startle lesser minds such as the John BAEZ the very devil himself or Adolf Hitler I have really lost my patience with you ElIzer.
They called me mad when I reinvented calculus! They will call me mad no longer oh I have to go make the Sweaty Wildebeest with a delicately frowning Victoria's Secret model.
Crap. Will the moderator delete posts like that one, which appear to be so off the mark?
Eliezer - 'I would be willing to get a PhD thesis if it went by the old rules and the old meaning of "Prove you can make an original, significant contribution to human knowledge and that you've mastered an existing field", rather than, "This credential shows you have spent X number of years in a building."'
British and Australasian universities don't require any coursework for their PhDs, just the thesis. If you think your work is good enough, write to Alan Hajek at ANU and see if he'd be willing to give it a look.
Ignoring the highly unlikely slurs about your calculus ability:
However, if any professor out there wants to let me come in and just do a PhD in analytic philosophy - just write the thesis and defend it - then I have, for my own use, worked out a general and mathematically elegant theory of Newcomblike decision problems. I think it would make a fine PhD thesis, and it is ready to be written - if anyone has the power to let me do things the old-fashioned way.
British universities? That's the traditional place to do that sort of thing. Oxbridge.
Specifically with regard to the apparent persistent disagreement between you and Robin, none of those things explain it. You guys could just take turns doing nothing but calling out your estimates on the issue in question (for example, the probability of a hard takeoff AI this century), and you should reach agreement within a few rounds. The actual reasoning behind your opinions has no bearing whatsoever on your ability to reach agreement (or more precisely, on your inability to maintain disagreement).
Now, this is assuming that you both are honest and rati... (read more)
And with that lovely exhibition of math talent, combined with the assertion that he skipped straight to grad school in mathematics, I do hereby request GenericThinker to cease and desist from further commenting on Overcoming Bias.
The y appears on both sides of the equation, so these are differential equations. To avoid confusion, re-write as:
(1) (d/dt) F(t) = A*F(t)
(2) (d/dt) F(t) = e^F(t)
Now plug e^At into (1) and -ln(C-t) into (2), and verify that they satisfy the condition.
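Both claimed solutions can be verified with a quick numerical spot check (a sketch using only the standard library; the constant values for A and C are arbitrary illustrative choices, not from the thread):

```python
import math

def deriv(f, t, h=1e-6):
    # Central-difference approximation of f'(t).
    return (f(t + h) - f(t - h)) / (2 * h)

A, C = 0.7, 5.0  # arbitrary constants for the spot check

F1 = lambda t: math.exp(A * t)   # claimed solution of (1): F' = A*F
F2 = lambda t: -math.log(C - t)  # claimed solution of (2): F' = e^F

t0 = 1.0
ok1 = abs(deriv(F1, t0) - A * F1(t0)) < 1e-5
ok2 = abs(deriv(F2, t0) - math.exp(F2(t0))) < 1e-5
```

Both residuals come out near zero, confirming that e^(At) satisfies (1) and -ln(C - t) satisfies (2) wherever C - t > 0.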
You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that "rational agents must WIN"
Interesting. As a reasonable approximation, approaching women with confidence==one-boxing on Newcomb's problem. Eliezer's posts have increased my credence that the latter is correct, although it hasn't helped me with the former.
I think Alec Greven may be your man. Or perhaps like Lucy van Pelt I should set up office hours offering Love Advice, 5 cents?
You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that "rational agents must WIN"
You have. We do. And yes, they must.
"Drexler's Nanosystems is ignored because it's a work of "speculative engineering" that doesn't address any of the questions a chemist would pose (i.e., regarding synthesis)."
It doesn't address any of the questions a chemist would pose after reading Nanosystems.
"As a reasonable approximation, approaching women with confidence==one-boxing on Newcomb's problem."
Interesting. Although I would say "approaching women with confidence is an instance of a class of problems that Newcomb's problem is supposed to represent but does ... (read more)
Daniel, I knew it :-)
Phil, you can look at it another way: the commonality is that to win you have to make yourself believe a demonstrably false statement.
"However, if any professor out there wants to let me come in and just do a PhD in analytic philosophy - just write the thesis and defend it - then I have, for my own use, worked out a general and mathematically elegant theory of Newcomblike decision problems. I think it would make a fine PhD thesis, and it is ready to be written - if anyone has the power to let me do things the old-fashioned way."
I think this is a good idea for you. But don't be surprised if finding the right one takes more work than an occasional bleg. And I do recommend getting it at Harvard or the equivalent. And if I'm not mistaken, you may still have to do a bachelors and masters?
If I have to do a bachelors degree, I expect that I can pick up an accredited degree quickly at that university that lets you test out of everything (I think it's called University of Phoenix these days?). No Masters, though, unless there's an org that will let me test out of that.
The rule of thumb here is pretty simple: I'm happy to take tests, I'm not willing to sit in a building for two years solely in order to get a piece of paper which indicates primarily that I sat in a building for two years.
if you know ahead of time that you're going to be given this decision, either pre-commit to one-boxing, or try to game the superintelligence. Neither option is irrational; it doesn't take any fancy
Phil, your commitment ahead of time is your own private business, your own cognitive ritual. What you need in order to determine the past in the right way is that you are known to perform a certain action in the end. Whether you are arranging it so that you'll perform that action by making a prior commitment and then having to choose the actions because of the penalty, or simply following a timeless decision theory, so that you don't need to bother with prior commitments outside of your cognitive algorithm, is irrelevant. If you are known to follow timeles... (read more)
Vladimir, I understand the PD and similar cases. I'm just saying that the Newcomb paradox is not actually a member of that class. Any agent faced with either version - being told ahead of time that they will face the Predictor, or being told only once the boxes are on the ground - has a simple choice to make; there's no paradox and no PD-like situation. It's a puzzle only if you believe that there really is backwards causality.
Phil, you said "if you didn't know ahead of time that you'd be given this decision, choose both boxes", which is a wrong answer. You didn't know, but the predictor knew what you'll do, and if you one-box, that is your property that predictor knew, and you'll have your reward as a result.
The important part is what predictor knows about your action, not even what you yourself know about your action, and it doesn't matter how you convince the predictor. If predictor just calculates your final action by physical simulation or whatnot, you don't need ... (read more)
"You didn't know, but the predictor knew what you'll do, and if you one-box, that is your property that predictor knew, and you'll have your reward as a result."
No. That makes sense only if you believe that causality can work backwards. It can't.
"If predictor can verify that you'll one-box (after you understand the rules of the game, yadda yadda), your property of one-boxing is communicated, and it's all it takes."
Your property of one-boxing can't be communicated backwards in time.
We could get bogged down in discussions of free will; ... (read more)
Compare: communicating the property of the timer that it will ring one hour in the future (that is, the timer works according to certain principles that result in it ringing in the future) vs. communicating from the future the fact that the timer rang. If you can run a precise physical simulation of a coin, you can predict how it'll land. Usually, you can't do that. Not every difficult-seeming prediction requires things like simulation of physical laws; abstractions can be very powerful as well.
Vladimir, I don't mean to diss you; but I am running out of weekend, and think it's better for me to not reply than to reply carelessly. I don't think I can do much more than repeat myself anyway.
One-boxing because of a lack of precommitment is a mistake. Backwards causality is irrelevant. Prediction based off psychological or physical simulation is sufficient.
Gaming a superintelligence with dice achieves little. You're here to make money, not prove him wrong. Expect him to either give you a probabilistic payoff or count a probabilistic decision as two-boxing. Giving pedantic answers requires a more formal description; it doesn't change anything.
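For concreteness, the "you're here to make money" arithmetic can be sketched with the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the visible one) and a predictor of accuracy p:

```python
def ev_one_box(p):
    # Predictor is right with probability p, so the opaque box
    # contains $1,000,000 exactly when you were predicted to one-box.
    return p * 1_000_000

def ev_two_box(p):
    # Two-boxers always get the visible $1,000; the opaque box is
    # full only when the predictor erred (probability 1 - p).
    return 1_000 + (1 - p) * 1_000_000

# Break-even accuracy: p * 1e6 = 1000 + (1 - p) * 1e6, i.e. p = 0.5005.
# Any predictor noticeably better than chance favors one-boxing.
```

On this accounting, even a modestly reliable predictor makes one-boxing the money-making choice, which is the point of treating probabilistic decisions as two-boxing.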
If I'm ever stuck in a prison with a rational, competitive fellow prisoner, it'd be really damn handy to be omniscient and have my buddy know it.
I may be wrong about Newcomb's paradox.
It's perplexing: This seems like a logic problem, and I expect to make progress on logic problems using logic. I would expect reading an explanation to be more helpful than having my subconscious mull over a logic problem. But instead, the first time I read it, I couldn't understand it properly because I was not framing the problem p... (read more)
I'm glad that helped.
I don't think it did help, though. I think I failed to comprehend it. I didn't file it away and think about it; I completely missed the point. Later, my subconscious somehow changed gears so that I was able to go back and comprehend it. But communication failed.
Buddhists say that great truths can't be communicated; they have to be experienced, only after which you can understand the communication. This was something like that. Discouraging.
From my experience, the most productive way to solve a problem on which I'm stuck (that is, hours of looking at it produce no new insight or promising directions of future investigation), is to keep it in the background for a long time, while avoiding forgetting it by recalling what it's about and visualizing its different aspects and related conjectures from time to time. And sure enough, in a few days or weeks, triggered by some essentially unrelated cue, a little insight comes, that allows to develop a new line of thought. When there are several such problems in the background, it's more or less efficient.
Inferential distance can make communication a problem worthy of this kind of reflectively intractable insight.
Phil - Changing your mind on previous public commitments is hard work. Respect!
It's a fascinating problem. I'm hoping Eliezer gets a chance to write that thesis of his. It's even more interesting once you see people applying Newcomblike reasoning behaviorally. A whole lot more of human behavior started making sense after I grasped the Newcomb problem.
Phil, I think that's how logic (or math) normally works. You make progress on logic problems by using logic, but understanding another's solution usually feels completely different to me, completely binary.
Also, it's hard to say that your unconscious wasn't working on it. In particular, I don't know if communicating logic to me is as binary as it feels, whether I go through a search of complete dead ends, or whether intermediate progress is made but not reported.
Going back to this post, a lot of things that puzzled us then are way more obvious now. But one angle remained unexplored for some reason. Here it is: if people catch on that you got a PhD just to persuade them, your PhD won't help you persuade them. As Robin said, people often don't have "true rejections" on the object level because they don't understand the object level. Instead they feel (correctly) that controversial scientific arguments should not be sold directly to the public, and apply multiple heuristics on the meta level. And the positi... (read more)
Perhaps it should, but the problem is that answering this question is one of the big problems in salesmanship: working out the customer's true obstacle to wanting to buy from you. Salesmen would love to be able to get a true answer to this question - and some even ask it directly - but people tend to receive this as manipulation: finding out their inner thoughts for purposes of getting their money. Thi... (read more)
Here's something I'd love to put into an entire article, but can't because my karma's bad (see my other comment on this thread):
Many people make the false assumption that the scientific method starts with the hypothesis. They think: first hypothesize, then observe, then make a theory from the collection of hypotheses.
The reality is quite the opposite. The first point on the scientific method is the observation. Any hypotheses before observation will only diminish the pool of possible observations. Second is building a theory. Along the process, many t... (read more)
I'd say: you don't have a PhD, therefore you're not qualified to judge whether or not Yudkowsky should have a PhD.
When I was in Sales, we called this "finding their true objection."
Basically, if someone says "Well, I don't want it unless it has X!" You say "What if I could provide you with X?"
So if someone says "Come back when you have a PhD!" You say "What if I could provide you with PhDs who believe the same idea?" If they then say "There are tons of PhDs who believe crazy things!" then you say "Then what else would I need to convince you?"
Usually, between them dismissing their own criteria and the amount of ideas they can bring forward, you can bring it down to about three things. I've seen 5, but that was a hard case. Those aren't hard and fast rules: the rule is make sure you get them ALL, and make it specific, something like:
"So, if I can get you a published book by a PhD, respected in a field relevant to X, AND I can provide you with a for-profit organization that is working to accomplish goals relevant to X, AND I can make a flower appear out of my ear (or whatever)" THEN you will admit you were wrong and change your view?
And if you're REALLY invested, you should have been taking notes, and get them to 'in... (read more)
You spend a lot of your time worrying about how to get an AI to operate within the interests of lesser beings. You also seem to spend a certain amount of time laying out Schelling fences around "dark side tactics". It seems to me that these are closely related processes.
As you have said, "people are crazy, the world is mad". We are not operating with a po... (read more)
As the years go on, I'm glad to review this and appreciate that you understand this. You definitely have a group of people who love you more than like you, and it is somewhat disheartening to see how vehemently these people insist on their rejection without even giving it some modicum of a chance.
I only get more motivation to put in my extraordinary effort and see in what ways I can help.
There are some views of Yudkowsky I don't necessarily agree with, and none of them have anything to do with him having or not having a PhD.
Are you sure this type of rejection (or excuse of a rejection) is common and significant?
I think there's a slight misconception about Aumann's agreement theorem here: "Common knowledge", as Aumann defines it (and is leveraged in the proof), doesn't just mean exchanging beliefs once: common knowledge means 1) knowing how they update after the first pass, then 2) knowing how they updated after knowing about the first pass, 3) knowing how they updated after knowing about updating about knowing about the first pass, and so on.
It's only at the end of a potentially infinite chain of exchanging beliefs, that two rational agents are guaranteed to... (read more)
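That iterated chain of announcements can be made concrete with a toy partition model (a sketch; the state space, partitions, and event here are invented for illustration, not taken from Aumann's paper): two agents with a shared uniform prior announce their posteriors for an event back and forth, each refining their information by what the other's announcement reveals, until the posteriors match.

```python
from fractions import Fraction

EVENT = {1, 4}                 # event whose probability is debated
part_a = [{1, 2}, {3, 4}]      # agent A's information partition
part_b = [{1, 2, 3}, {4}]      # agent B's information partition
TRUE_STATE = 1

def cell(partition, state):
    # The cell of the partition containing the given state.
    return next(c for c in partition if state in c)

def posterior(partition, state):
    # P(EVENT | agent's cell), under a uniform prior over states.
    c = cell(partition, state)
    return Fraction(len(c & EVENT), len(c))

def refine(partition, announcer):
    # Split each cell by the posterior the announcer would state in
    # each state: hearing the announcement rules out the states where
    # it would have come out differently.
    refined = []
    for c in partition:
        groups = {}
        for s in c:
            groups.setdefault(posterior(announcer, s), set()).add(s)
        refined.extend(groups.values())
    return refined

history = []
for _ in range(5):
    p_a = posterior(part_a, TRUE_STATE)
    p_b = posterior(part_b, TRUE_STATE)
    history.append((p_a, p_b))
    if p_a == p_b:
        break                            # posteriors are now common knowledge
    part_a = refine(part_a, part_b)      # A hears B's announcement
    part_b = refine(part_b, part_a)      # B hears A's updated announcement
```

In this example A opens at 1/2 and B at 1/3; after one round of announcements B's information is refined enough that both settle at 1/2, without either agent ever disclosing their raw evidence, which is exactly the "exchanging posteriors, not arguments" flavor of the theorem.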