In discussions of AI, nanotechnology, brain-computer interfaces, and genetic engineering, I've noticed a common theme of disagreement over the right bottleneck to focus on. The general pattern is that one person or group argues that we know enough about a topic's foundations that it's time to start focusing on achieving near-term milestones, often engineering ones. The other group counters that taking such a milestone-focused, near-term approach is futile because we lack the fundamental insights needed to achieve the long-term goals we really care about. I'm interested in heuristics we can use or questions we can ask to try to resolve these disagreements. For example, in the recent MIRI update, Nate Soares talks about how we're not prepared to build an aligned AI because we can't talk about the topic without confusing ourselves. While this post focuses on capability, not safety, I think "can we talk about the topic without sounding confused?" is a useful heuristic for gauging how ready we are to build, independent of safety questions.

What follows are a few links to/descriptions of concrete examples of this pattern:

  • Disagreements in the ML/AI world over how far towards AGI deep learning can get us. See, for example, Gary Marcus's Deep Learning: A Critical Appraisal. While the disagreement isn't only about chasing near-term goals vs. paradigm shifts, it seems like an important piece.
  • Arguments between "the Drexler camp" and mainstream chemists over how close we are to building assembler-based nanotech. Representative quote from here: "The Center for Responsible Nanotechnology writes 'A fabricator within a decade is plausible – maybe even sooner'. I think this timeline would be highly implausible even if all the underlying science was under control, and all that remained was the development of the technology. But the necessary science is very far from being understood."
  • Ed Boyden's point in an Edge interview (quote below), which mirrors the Rocket Alignment problem in a number of ways, but contrasts with the optimism of VCs/entrepreneurs such as Elon Musk and Bryan Johnson about the potential for radically transformative brain-computer interfaces to go to market in the next 1-2 decades.

People forget. When they landed on the moon, they already had several hundred years of calculus so they have the math; physics, so they know Newton’s Laws; aerodynamics, you know how to fly; rocketry, people were launching rockets for many decades before the moon landing. When Kennedy gave the moon landing speech, he wasn’t saying, let’s do this impossible task; he was saying, look, we can do it. We’ve launched rockets; if we don’t do this, somebody else will get there first.

I anticipate at least one answer to this question will look something like "look at the science and see if you understand the phenomena you wish to engineer on top of well enough," but I think this answer doesn't fully solve the problem. For example, in the case of nanotech, Drexler's argument centers on the point that successful engineering requires finding one path to success, not necessarily understanding the entire space of possible phenomena.

EDIT (01/02/2019): I removed references to safety/alignment after ChristianKl noted that conflating the two makes the question more confusing and John_Maxwell_IV argued that I was misrepresenting his (and likely others') views on alignment. The post now focuses solely on the question of identifying bottlenecks to progress.


4 Answers

ryan_b

50

Since these are all large subjects containing multiple domains of expertise, I am inclined to adopt the following rule: anything someone nominates as a bottleneck should be treated as a bottleneck until we have a convincing explanation for why it is not. I expect that once we have a good enough understanding of the relevant fields, convincing explanations should be able to resolve whole groups of prospective bottlenecks.

There are also places where I would expect bottlenecks to appear even if they have not been pointed out yet. These two leap to mind:

1. New intersections between two or more fields.

2. Everything at the systems level of analysis.

I feel like fast progress can be made on both types. While it is common for different fields to have different preferred approaches to a problem, it feels much rarer for there not to be any compatible approaches to a problem in both fields. The challenge would lie in identifying what those approaches are, which mostly just requires a sufficiently broad survey of each field. The systems level of analysis is always a bottleneck in engineering problems; the important thing is to avoid the scenario where it has been neglected.

It feels easy to imagine a scenario where the compatible approach from one of the fields is under-developed, so we would have to go back and develop the missing tools before we can really integrate the fields. It is also common even in well-understood areas for a systems level analysis to identify a critical gap. This doesn't seem any different from the usual process of problem solving; it's just that each new iteration gets added to the bottleneck list.

ChristianKl

50

Thomas Kuhn argues in his book that scientific fields that try to achieve specific goals make worse progress than scientific fields whose researchers attempt to solve problems within the field that they themselves find interesting.

Physics progressed because physicists wanted to understand the natural laws of the universe, not because they wanted to make useful stuff.

On the other hand, you have a subject like nutrition science, which is focused on producing knowledge with immediate practical applications, and yet the field makes little practical progress.

ChristianKl

40

Asking what's the bottleneck to do X and asking what needs to happen for X to be done safely are two different questions.

For practical purposes it's important to know both answers, but for understanding, mixing the questions together clouds the issue.

The question of whether we can build more effective BCIs is mostly a question about technical capability.

On the other hand, the concern that Nate raises over AGI is a safety concern. Nate doesn't doubt that we can build an AGI but considers it dangerous to do so.

[-][anonymous]20

FYI: I've updated the post to focus solely on the "what's the bottleneck to do X" question and not on safety, as I think the former question is less discussed on LW and what I wanted answers to focus on.

John_Maxwell

30

FWIW, I can't speak for Paul Christiano, but insofar as you've attempted to summarize what I think here, I don't endorse the summary.

Where does the post mention Paul Christiano? I only see a link to a discussion, without any commentary.

Edit: Nvm, I figured it out. I assume "The general pattern is that one person or group argues that we know enough about a topic's foundation that it's time to start to focus on achieving near-term milestones, often engineering ones." is the specific line that you think doesn't accurately capture your views.

[-][anonymous]40

Can you be more specific? If you help me understand how/if I'm misrepresenting your view, I'd be happy to change it. My sense is that Paul's view is more like, "through working towards prosaic alignment, we'll get a better understanding of whether there are insurmountable obstacles to aligning scaled-up (and likely better) models." I can rephrase to something like that, or something more nuanced. I'm just wary of adding too much alignment-specific discussion, as I don't want the debate to focus too heavily on the object-level alignment question.

It's also worth noting that there are other researchers who hold similar views, so I'm not just talking about Paul's.

4[anonymous]
FYI: I've updated the post to not talk about alignment at all, since I think focusing only on bottlenecks to progress in terms of capabilities makes the post clearer. Thanks to ChristianKl for pointing this out. John_Maxwell_IV, would love feedback on how you feel about the edited version.
2John_Maxwell
I think your original phrasing made it sound kinda like I thought that we should go full steam ahead on experimental/applied research. I agree with MIRI that people should be doing more philosophical/theoretical work related to FAI, at least on the margin. The position I was taking in the thread you linked was about the difficulty of such research, not its value. With regard to the question itself, Christian's point is a good one. If you're solely concerned with building capability, alternating between theory and experimentation, or even doing them in parallel, seems optimal. If you care about safety as well, it's probably better to cross the finish line during a "theory" cycle than an "experimentation" cycle.
3 comments


[-][anonymous]40

You should promote this to a full answer rather than a comment! It more than qualifies.

Regarding 1, I suspect a lot of recent progress in neuroscience has come from applying computational and physics-style approaches to existing problems. See, for example, the success Ed Boyden has had in his lab with applying physics thinking to building better neuroscience tools–optogenetics, expansion microscopy, and most recently implosion fabrication.

I think nanotechnology is a prime example of 2. AIUI, a lot of the component technologies for at least trying to build nano-assemblers exist but we lack the technology/institutions/incentives/knowledge to engineer them into coherent products and tools.

Copied to full answer!

I agree regarding neuroscience. I went to a presentation (by whom, I have suddenly forgotten, and I seem to have lost my notes) describing an advanced type of fMRI that allowed more detailed inspection than previously possible, and the big advances mostly consisted of "optimize the C++" and "rearrange the UI with practitioners in mind." I found it tremendously impressive; they were using it to help map epilepsy seizures in much more detail.

I am strongly tempted to say that 2 should be considered the highest priority in any kind of advanced engineering project, and I am further tempted to say it would sometimes be worth considering even before having project goals. There has been some new work in systems engineering recently that emphasizes the meta level, focusing on architecture-space before even fixing the design constraints; I wonder if the same trick could be pulled with capabilities: systematizing the constraints at the same time as the design.