Introduction

John Wentworth is on fire writing about the importance of gears-level models and the challenge that novices face in evaluating the claims of experts. Let's call the latter "consulting."

It's an absolutely basic problem of everyday life. How do you know the salesman, or the plumber, or the mechanic, is giving you a fair deal? When a friend invites you on a technical climbing trip, and tells you not to worry and that they'll plan everything out safely, how do you know whether to trust them or decline? Should you get a second opinion from another doctor, and if so, why not get a third, and even a fourth?

It's also an issue in politics. Julia Galef and Matthew Yglesias just put out a scintillating Rationally Speaking episode about their mutual mistake in initially supporting the Iraq War. Matt thought at the time that he should trust the experts in Washington, who had access to classified information and top experts in the field, over the opinions of the anti-war pundits.

This issue feeds back into itself in the form of credentialism. After all, there's little to stop people from speaking beyond their specific domain of expertise. As Julia and Matt discuss in this same episode, a textile manufacturer probably knew more about whether or not we could rapidly get cloth masks out to the population than Dr. Fauci. If the initial decision to tell people that masks were ineffective was motivated by a desire to preserve PPE for healthcare providers, did they think to ask textile manufacturers whether this was a serious concern? If not, why not?

In general, how do issues come to be "owned" by a certain profession or specialty? How did "bioethicist" become something you could be an "expert" in? Do I have to defer to Peter Singer on moral issues, or can I think for myself?

And this is a meta-issue as well. After all, who's qualified to have opinions on how specialty topics should be divided up? Who can claim to be an expert in distinguishing issues where we should defer to experts, and where we can trust our own judgment? On what basis do they claim that expertise?

Now, it might be true that most or all of these questions can be answered by learning how to think for ourselves on every issue, building our own complete set of gears-level models.

But let's assume for a moment that there's a reason we've had roles and hierarchies of experts all around the world, since time immemorial. If we need to consult with experts, can we create gears-level models of when to do it, who to consult with, how to have a useful interaction with a consultant, how to verify their advice, and how to implement or broadcast it when it seems widely useful?

Why Expert Consultation Is Necessary

Rationality often revolves around how to learn from and evaluate public information or the claims of the people in our personal lives. We talk a lot about scientific papers, news articles, claims on social media, personal debates, and our own thoughts and feelings.

However, we know that a lot of the most important information in the world is inaccessible, whether to the public or just to us. It might be classified, a carefully kept secret, somebody else's private thoughts, or written in a language we don't speak. It might be meaningful only in the context of a particular person's skillset. It might be a point of pride to keep it hidden, like the fact that one woman in my choir wears a wig. Once, I hiked all the way through Zion National Park and kept it a secret that I'd forgotten my sleeping bag and was freezing cold at night, because I didn't want other people to worry about me.

The people who possess that knowledge might have so much demand for their attention that they won't freely disseminate it, or won't take the time to ensure that others adequately understand it. Transmitting and triangulating information - making sure the right information gets to the right person, and that they absorb the right message - can be a very expensive and sometimes controversial task. When done badly, it can reflect negatively not only on the people directly involved, but their associated communities as well.

Knowledge can be a bargaining chip in a negotiation. And of course, many sectors, including science, business, and even philosophy, are focused on trying either to produce new information, or to discover information that currently exists only as unobserved physical phenomena.

Hiding information can also be a way to improve signaling. If you have a job you'd like my help on, but are worried I'll feel pressured if you ask, you might not tell me about it unless I independently realize it and offer to help you out. If Alice is romantically interested in Ryan, she might keep that hidden until she's had a chance to observe him for a while so that he won't modify his behavior to impress her or push her away.

Some topics are so complex or fluid that we're almost certain to go astray working on our own. The relevant information might be constantly shifting, and to build a gears-level model of it is like trying to map the ocean waves. Other topics actively resist our attempts at inquiry, and naïve attempts to measure them will only generate illusions. In these areas, it may be possible to pursue the truth, but only by relying on others to help us get our footing and learn how to deal with it productively. Their ability to do so might be based on rare insights and strokes of genius that have been carefully preserved and transmitted for generations, and provide the ground for further productive work.

Expert Consultation As A Rationality Bottleneck

John Wentworth writes about how above a certain income, knowledge, not wealth, tends to become the bottleneck. You can't purchase the expertise you need, so you have to acquire it yourself.

It may also be that above a certain level of rationality, a social network, not one's own skills and knowledge, becomes the bottleneck. I'm in the preliminary stages of building a career in tissue engineering research, and I fully expect that I won't be able to focus my scholarship or research very well without mentors and colleagues, technicians, advisors, and contacts.

Indeed, my most high-yield activities of the last two years have almost all been about social networking. By contacting PIs at grad schools I was considering, I was able to find specific labs I wanted to work in, and to narrow down the types of papers and technical knowledge I need to acquire. I changed my intended MS degree from bioinformatics to bioengineering based on their advice, and was offered a job and funding should I be accepted.

Prior to that, social networking events hosted for post-bacc career-changing biomedical students (most of whom were pre-med) were a source of long-term friendships with smart, scientifically literate, trustworthy people, a huge asset to my life. I've also made friends by posting about myself online, which led to fruitful research collaborations.

When I read forums about academic life on Reddit, the posts are almost never about how hard it is to understand the intellectual material. They're virtually always about relationships with labmates or a PI, how to navigate bureaucracies, and similar people issues.

It seems possible and potentially high-value to build gears-level models for how to consult effectively. There are a range of specific problems, tradeoffs, structural issues, and domains to consider. I expect that evaluating a colleague's advice on where to work entails a very different set of considerations from choosing a textbook.

The challenge is that, while a written document gives everybody the ability to look at and evaluate the same thing, we have to describe our problems with evaluating expertise anecdotally. It's very black-box. The specific details may be more important than the general principles.

I imagine that a rationalist who'd devoted themselves more to the study of networking and consultation than to the study of Bayesian statistics would be having a lot of zoom calls and email exchanges with strangers. They'd be going to a lot of social events (post-COVID). They'd approach novel problems not by reading books and articles, but by figuring out who would be the best people to talk to in order to find out more. They'd assume that even the most technical issues have a huge human factor in their execution, and would be at least as interested in finding out those details as in learning about the engineering problem.

Discussion

My guess is that you can and should (and already do, to some extent) build gears-level models about how to consult on a variety of topics. Learning how to do this effectively might also improve your ability to form gears-level models in a virtuous cycle. The fluid, unwritten nature of consultation means that it might get somewhat neglected by rationalists due to the streetlight effect. Its superficial difference from gears-level modeling might also set us up to see it standing in opposition to modeling, rather than as a necessary complement to it.

Our culture is full of common-sense knowledge about expert consultation, and evaluating expertise or ability. Get three bids. Ask for a second opinion. It's also known to be a particularly challenging and high-stakes problem, a key issue in how we structure our society. It's one of the drivers of credentialism, the other being rent-seeking behavior.

It's also an area that people seem to systematically neglect, for weird psychological/social/cultural reasons. Consulting can seem intimidating, like a violation of a hierarchy or status code. People are afraid they'll embarrass themselves, or that the person they're asking will somehow be able to exploit them. They get socially anxious.

Yet isn't this an obvious form of low-hanging fruit? If you can't get the ear of the President, can you get the ear of their staffers? If not, of the friend of a staffer? Do you at least know a guy who knows people in Washington, D.C.?

People sometimes play a game on Wikipedia where they choose two random pages, and see if they can figure out how to navigate from one page to the other through the in-page links. Why not play an analogous game with people? Set your sights on an inaccessible person who you'd like to talk to, and see if you can figure out a strategy to network your way to a conversation with them.
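The Wikipedia game is literally a shortest-path search over a link graph, and the people version can be framed the same way: each contact is a node, each plausible introduction an edge, and you want the shortest chain of introductions to your target. As a playful illustration only (the names and graph here are invented), a breadth-first search makes the idea concrete:

```python
from collections import deque

def shortest_intro_path(contacts, start, target):
    """Breadth-first search over a contact graph: find the shortest
    chain of introductions leading from `start` to `target`."""
    queue = deque([[start]])  # each queue entry is a path so far
    seen = {start}
    while queue:
        path = queue.popleft()
        person = path[-1]
        if person == target:
            return path
        for acquaintance in contacts.get(person, []):
            if acquaintance not in seen:
                seen.add(acquaintance)
                queue.append(path + [acquaintance])
    return None  # no chain of introductions exists

# Toy example: can "you" reach "senator" through intermediaries?
graph = {
    "you": ["alice", "bob"],
    "alice": ["staffer"],
    "bob": ["carol"],
    "staffer": ["senator"],
    "carol": [],
}
print(shortest_intro_path(graph, "you", "senator"))
# → ['you', 'alice', 'staffer', 'senator']
```

Of course, real networking edges are fuzzy and expensive to traverse, but the graph-search framing is a useful way to notice that "I can't reach the President" usually just means "I haven't looked for intermediate nodes yet."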

Where to start

It seems likely that when you're consulting, it's good to have clear objectives, and some relevant questions in mind. You wouldn't consult with a plumber "just to hear what he had to say." You have a specific plumbing problem in mind, and then you call the guy. Likewise, don't email an infectious disease expert just because you're "interested in COVID-19." Have an objective in mind, and some questions to ask.

The most valuable and logical use of expert consulting is to find out things you can't get from a written source. It doesn't make sense to email a professor to ask them for data that's publicly available in their latest publication (though you could request the paper if it's not open access). Instead, the expert can be a source for hidden information.

So gaining a sense of what sort of hidden information might be interesting to find out about would be a great investment. "Is there a less expensive alternative?" "There are lots of accounts of the fall of Rome, so why don't historians seem to attempt to falsify each others' hypotheses on this subject all that often?" "Did infectious disease experts consult with psychologists when they were trying to set policy in a way that influenced public response, and who exactly would be the relevant domain expert for doing that, anyway?" "What sort of books does your book club read, and how in-depth is the discussion?"

I imagine that people might have all kinds of deficiencies in this skill set. Areas to grow might include:

  • Articulating their own goals, skills, and background
  • Knowing what questions to ask, what information to seek
  • Understanding the risks and rewards involved
  • Thinking through who to contact, and how
  • Holding a good one-off conversation with a stranger
  • Converting relationships with strangers into repeated contacts
  • Figuring out how to provide value to the other person
  • Updating strategies for who to talk to next based on the previous conversation

Comments

It may also be that above a certain level of rationality, a social network, not one's own skills and knowledge, becomes the bottleneck.

I like this as a summary of the hypothesis. I'm not convinced yet, but it's been distinguished from entropy now and I'll be keeping an eye out for evidence for/against.

I think that the case of Aubrey de Grey, the leader of SENS, is a good case study.

He seems to think that his high-level anti-aging research strategy is novel and tractable. All he needs is the funding to hire enough researchers and equipment to implement it, and the knowledge will flow.

To develop that strategy in the first place, he needed to be plugged into a network of other scientists studying various aspects of aging, to gain both knowledge and credibility.

His SENS foundation and book, Ending Aging, are both aimed in part at broadcasting his message and expanding his network. He's not trying to increase his knowledge (of aging or of making money) in order to put together the cure or the cash for himself. Instead, he's trying to expand his network, to convince government or private funders to support his vision.

I think that to make progress on evaluating these hypotheses (effectiveness is bottlenecked by network vs. by knowledge), we need to figure out how to distinguish them clearly.

For example, parts of the psychological research community seem bottlenecked by their collective lack of knowledge about statistics. But if they committed to collaborating more closely with some statisticians, that would probably help. Does that represent a "knowledge bottleneck" or a "network bottleneck?"

Likewise, I currently have only vague guesses about the specific skills/knowledge that would make me an effective tissue engineer. That'll become much more clear once I'm working in a lab next year. So is my problem that I'm bottlenecked by lack of knowledge about the specific needs of the lab I'll be working in, or is it that I'm not plugged into the social network at that lab?

I think that the distinction might rest more in written vs. unwritten knowledge.

Aubrey de Grey, the psychologists, and myself, are all bottlenecked by our lack of unwritten knowledge (how to meet an anti-aging billionaire, how to initiate a collaboration with a statistician, which lab I'll end up working in and what their needs are). Unwritten knowledge tends to be stored in social networks.

It's just the fact that people often think of "knowledge" as book knowledge that creates this confusion. So perhaps I should restate this hypothesis:

It may also be that above a certain level of rationality, access to unwritten knowledge, not one's ability to learn and practice publicly-available skills and knowledge, becomes the bottleneck.