(Talk given at an event on Sunday 28th of June. johnswentworth is responsible for the talk, Jacob Lagerros and David Lambert edited the transcript. 

If you're a curated author and interested in giving a 5-min talk, which will then be transcribed and edited, sign up here.) 

 

This talk relies heavily on visual explanations. The transcript is intended to be readable on its own, but the original video is available here for those readers who prefer that: https://youtu.be/gZo5Yc5dT90

 

johnswentworth: So I am going to talk about some funky ways of solving mazes. 

Think about how to solve the following maze. 

One thing we could imagine doing as a preprocessing step is finding something like this, a barrier that cuts the maze in two.

To solve it, we know we have to go through (A), so we split the whole problem into two. 

We say, "Okay, I'm first going to find a way to (A) and then I'm going to find a way to (B)." 

Now, this is useful for a few reasons. First, you're breaking it into sub-problems. That's always good.

Second, this barrier applies to lots of different problems. If I'm trying to get from this other (A) to this other (B), we can use this exact same barrier.

Or if I'm trying to get from yet another (A) to another (B), the same barrier still works. 

We can reuse this on a lot of different problems. So it's useful in that way.

But finding this barrier is difficult. It's going to take a fair bit of preprocessing work, and will probably be as hard as solving the whole maze in the first place. The question I want to think about here is “how can we find something like this?” 
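To make the decomposition concrete, here is a minimal sketch (not from the talk; the grid encoding and function names are my own). Once we know that every path must pass through the one opening in a barrier, we can solve two smaller searches instead of one big one:

```python
from collections import deque

def bfs_path(maze, start, goal):
    """Shortest path through open cells of a grid maze.
    maze[r][c] == 0 means open, 1 means wall; cells are (row, col)."""
    rows, cols = len(maze), len(maze[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and maze[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

def solve_via_barrier(maze, start, goal, gap):
    """If every start-to-goal path must pass through `gap` (the single
    opening in a barrier), split the problem into two subproblems."""
    first = bfs_path(maze, start, gap)
    second = bfs_path(maze, gap, goal)
    if first is None or second is None:
        return None
    return first + second[1:]  # drop the duplicated gap cell
```

Note that this sketch assumes the barrier and its opening are already known; as the talk points out, finding them is the hard part.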

Keep that in the back of your head because this is about to get weird.

For the first step, let's go back to thinking about the normal, old maze problem. We want to get from (A) to (B).

I'm just going to ignore all the constraints on this problem and draw some paths that go through the maze, sort of. There's a perfectly nice path that maybe violates a few constraints here and there, but it's a path.

There are a couple of paths through the maze. But obviously they're breaking the rules of mazes in the areas where we are violating constraints. As you can see, we're completely ignoring a few walls.

That is thinking about finding paths through the maze. Let's switch back to the other mode: thinking about finding constraints and barriers. 

When I'm thinking about finding something like that barrier, what I am trying to find is essentially a path of walls from one side of the maze to the other (see the red line in the image above). I want to find a path following the walls themselves as much as possible. Obviously, in some places, I'm going to have to jump over some spaces.

But mostly, I want to be following walls. Now, the key thing to notice when trying to find a path following walls is that the paths I drew before are now barriers. If I'm trying to get from (A) to (B) following just the walls, I cannot get through (C) because there's no wall to get through there. I cannot get through (D) either, for the same reason.

So I'm going to have to aim for some of these openings where we ignored the constraints. When I'm doing this path search, I have to aim for one of these three openings.

So I could go like this: ignore the opening (A), and go through (B). Then again, with the second path, I have to go through either (C) or (D) or one of these two spots way out on the edge, (E) or (F). I have to pick one of those to go through.

So I can use that to guide my search and say, "Okay, let's go toward this one. Nice. Got a nice, easy path now."

And again, we've cheated a little bit. We jumped over the space at (A). But now, I can get rid of these original paths that I was using as a guide.

I was searching in wall space. Now, I'm going to go back to searching in path space, and I could say, "All right. Now, when I'm looking for paths from up here to down there, I know they have to go through (A)." 

Makes sense?

So there's duality. We can search in the space of paths or we can search in the space of constraints, which is usually the space of walls. But when we're searching in the space of walls, the paths are our constraints. We can use the holes in those paths to guide our search for a wall path. And conversely, when we're searching in our original space, we can use holes in the constraints to guide our search. 
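For grid mazes, this duality can be made precise (my own illustration, not something from the talk): treat open cells as 4-connected and wall cells as 8-connected. Then a left-to-right path through open cells and a top-to-bottom path through walls block each other, so the two searches are genuinely duals:

```python
from collections import deque

def crossing_exists(grid, value, neighbors, sources, targets):
    """Generic flood fill: is there a path of cells equal to `value`,
    using the given neighbor offsets, from any source to any target?"""
    rows, cols = len(grid), len(grid[0])
    seen = {s for s in sources if grid[s[0]][s[1]] == value}
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if (r, c) in targets:
            return True
        for dr, dc in neighbors:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == value and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def open_path_left_right(grid):
    """Search in path space: open cells (0), 4-connected."""
    rows, cols = len(grid), len(grid[0])
    four = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    sources = [(r, 0) for r in range(rows)]
    targets = {(r, cols - 1) for r in range(rows)}
    return crossing_exists(grid, 0, four, sources, targets)

def wall_path_top_bottom(grid):
    """Search in wall space: wall cells (1), 8-connected,
    since walls can 'jump' diagonally."""
    rows, cols = len(grid), len(grid[0])
    eight = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    sources = [(0, c) for c in range(cols)]
    targets = {(rows - 1, c) for c in range(cols)}
    return crossing_exists(grid, 1, eight, sources, targets)
```

The useful property is that the same flood-fill routine runs in both spaces; only the cell value and the connectivity change, which is the figure-ground inversion in code.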

Now, the thing this is all supposed to be a metaphor for:

You can think of these constraints as counterexamples in math, or you can think of them as constraints in design. And the paths, you could think of as something like proofs or designs.

Ben Pace: Thank you very much. That was an exciting talk. Now for the first question, orthonormal.

orthonormal: First I'm going to make a point in terms of computational complexity, and then I'm going to ask about the analogy. Finding a path through an N×N maze is, I think, in the worst case, always O(N²). For any algorithm you have, there's going to be some maze which screws over your algorithm.

So doing something like this, I think, can't be any more efficient in the worst case. That being said, if theoretical efficiency isn't the benefit of doing this in the discovering-ideas, discovering-proofs sense, then what do you see as being the advantage?

johnswentworth: All right, so two big things here. First, obviously real world mazes are not designed to be maximally difficult. But the bigger thing is that in the real world, most of our problems are not two-dimensional. And if we're working in a high-dimensional space, think about how the maze in question is actually represented most of the time.

If you have 2^N states, you just can't explicitly represent all the walls. We can't draw them all out like this. We just don't have the space for that. So how do we represent it? Usually, in practice, the way we represent real-world, high-dimensional problems is with low-dimensional constraints on them.

So if I'm building a bridge, I know there's constraints like two things needing to connect and not wiggling around. But that's a low-dimensional constraint, which only has to do with this bridge bit and the thing next to it.  I could also have a cost constraint, or a budget, on whatever I'm doing.

That's relatively low-dimensional. Usually, what will happen is when you're looking for these constraints, you'll find that the constraints themselves are actually much lower-dimensional, so they'll be easier to work with. Does that make sense?

orthonormal: Yes, it does.

johnswentworth: Other questions. Daniel?

Daniel Kokotajlo: In some of your other blog posts, you talk about using constraints to understand the economy or the way the world works in general. Could this be an interpretation of that as well? 

johnswentworth: Good question. This point, in particular, does not work well with a 2D visualisation because it's inherently about the system being high-dimensional. If we think about it in physics, how do we represent systems in the world in physics? Usually, we write down some governing equations.

Usually, those governing equations are actually a very large system of very low-dimensional equations. So imagine I have some field: there's a bunch of points, and each of these points has a pressure, a temperature, and some other stuff associated with it.

This is a very high-dimensional system, but usually our equations only relate this point to the points nearest to it. Those are going to be a whole bunch of very low-dimensional equations relating to just a few of these things. Similarly, in economics, we usually only have local interactions between different agents or we have local interactions based on which goods could be turned into which other goods, those sorts of things.

We're inherently representing these systems as a bunch of low-dimensional pieces. When we're writing down governing equations for a system, that's what we're doing: we're finding the constraints.
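As a toy version of this (my own sketch, not from the talk): the steady-state heat equation on a 1D rod pins down n unknowns using n local constraints, each relating a point only to its two nearest neighbors:

```python
def steady_state_temperature(left, right, n, iterations=20000):
    """Steady-state heat on a 1D rod with fixed endpoint temperatures.

    The state is high-dimensional (n unknowns), but each interior
    point satisfies a *local* governing equation,
        T[i] = (T[i-1] + T[i+1]) / 2,
    which touches only its two neighbors. Solved here by repeated
    Gauss-Seidel sweeps over those local constraints."""
    temps = [0.0] * n
    temps[0], temps[-1] = left, right
    for _ in range(iterations):
        for i in range(1, n - 1):
            temps[i] = 0.5 * (temps[i - 1] + temps[i + 1])
    return temps
```

The exact solution is just linear interpolation between the endpoints; the point of the sketch is that we never write down anything n-dimensional, only a pile of two-neighbor constraints.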

Ben Pace: Well, thanks. orthonormal, follow-up?

orthonormal: Following up, it reminds me that, in one area, this relates well to my own experience when trying to prove things. You want to alternate between trying to prove it and trying to disprove it, which is analogous to trying to find a path through the maze and trying to find a path through the walls.

In what other domains does this figure-ground inversion work? And in what domains does it not work? 

johnswentworth: Good question. I think we implicitly use this a lot more than it seems. The main reason why it might not feel like we're doing something like this is because constraints on real problems are usually low-dimensional.

So as we're working on the problem, we just run into this low-dimensional thing and we're like, "Okay, here's this equation that defines how the system works," and it becomes a background fact.

Whereas in math, we're dealing with systems that look more like a maze. We don't just have a high-dimensional space with relatively low-dimensional planes chopping it up; we have something that looks more like a maze.

If I'm trying to brainstorm situations where it feels more like a proof-and-counterproof type of thing, a lot of them would be in math, or things with agents inherently doing some sort of high-dimensional thinking or something similar. Real-world domains are usually domains where we're trying to understand things inherently represented as a bunch of low-dimensional systems. 

Ben Pace: On another point, my alternative title for your talk was “babble babble and prune babble”, which is obviously a terrible idea. 

johnswentworth: So the idea there is you can babble when searching for paths, and you can babble when searching for constraints.

Ben Pace: Final question: is there a concrete example in your life where this clearly fits in how you solved a problem?

johnswentworth: Oh, all the time. At some point, I'm probably going to write up a post specifically on that topic. But one of the best examples I've thought of recently is putting a satellite into orbit without using a rocket. I used that as an exercise in this specifically.

I came up with a relatively neat way of doing it that I haven't seen anywhere else before. So that was pretty fun.
