lbThingrb

Topological Fixed Point Exercises

Generalized to *n* dimensions in my reply to Adele Lopez's solution to #9 (without any unnecessary calculus :)

Topological Fixed Point Exercises

Thanks! I find this approach more intuitive than the proof of Sperner's lemma that I found in Wikipedia. Along with nshepperd's comment, it also inspired me to work out an interesting extension that requires only minor modifications to your proof:

*d*-spheres are orientable manifolds, hence so is a decomposition of a *d*-sphere into a complex *K* of *d*-simplices. So we may arbitrarily choose one of the two possible orientations for *K* (e.g. by choosing a particular simplex *P* in *K*, ordering its vertices from 1 to *d* + 1, and declaring it to be the prototypical positively oriented simplex; when *d* = 2, *P* could be a triangle with the vertices going around counterclockwise when you count from 1 to 3; when *d* = 3, *P* could be a tetrahedron where, if you position your right hand in its center and point your thumb at the 1-vertex, your fingers curl around in the same direction in which you count the remaining vertices from 2 to 4). Then any ordering of the vertices of any *d*-simplex in *K* may be said to have positive or negative orientation (chirality). (E.g. it would be positive when there's an orientation-preserving map (e.g. a rotation) sending each of its vertices to the correspondingly numbered vertices of *P*.)
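To make orientation (chirality) of a vertex ordering concrete, here's a small sketch of my own (not part of the original argument): identify the orientation of an ordering with the sign of the permutation relating it to the prototype simplex's ordering.

```python
def orientation(order, reference):
    """Sign of the permutation taking the prototype ordering `reference`
    to `order`: +1 for the same chirality as the prototype simplex,
    -1 for its mirror image."""
    perm = [reference.index(v) for v in order]
    # Count inversions; each inversion contributes one transposition of parity.
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

assert orientation((1, 2, 3), (1, 2, 3)) == 1   # the prototype itself
assert orientation((2, 1, 3), (1, 2, 3)) == -1  # one swap reverses orientation
assert orientation((2, 3, 1), (1, 2, 3)) == 1   # a 3-cycle = two swaps
```

In particular, swapping any two vertices flips the sign, which is the fact the inductive step relies on.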

So here's my extension of parent comment's theorem: Any coloring of the vertices of *K* with the colors {1, ..., *d* + 1} will contain an equal number of positively and negatively oriented chromatic *d*-simplices—that is, the reason the number of chromatic *d*-simplices in *K* must be even is that each one can be paired off with one of the opposite (mirror) orientation. (Does this theorem have a name? If so I couldn't find it.)

Following parent comment, the proof is by induction on the dimension *d*. When *d* = 1, *K* is just a cycle graph with vertices colored 1 or 2. As we go around clockwise (or counterclockwise), we must traverse an equal number of 1→2 edges and 2→1 edges (i.e. oppositely oriented 1-simplices), by the time we return to our starting point.
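A quick machine check of this base case (my own sketch, not from the proof): for any {1, 2}-coloring of a cycle, the signed count of 1→2 edges minus 2→1 edges comes out to zero.

```python
import itertools

def signed_chromatic_edges(colors):
    """Walk once around the cycle; +1 for each 1->2 edge, -1 for each
    2->1 edge (the two opposite orientations of a chromatic 1-simplex)."""
    n = len(colors)
    total = 0
    for k in range(n):
        a, b = colors[k], colors[(k + 1) % n]
        if (a, b) == (1, 2):
            total += 1
        elif (a, b) == (2, 1):
            total -= 1
    return total

# Exhaustively check all 2-colorings of cycles up to length 8.
for n in range(3, 9):
    for colors in itertools.product([1, 2], repeat=n):
        assert signed_chromatic_edges(colors) == 0
```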

Now let *d* > 1, and assume the theorem is true in the (*d*-1)-dimensional case. As in parent comment, we may choose any vertex *v* of *K*, and then the faces opposite to *v* in each *d*-simplex in *K* that contains *v* will, together, form a (*d*-1)-dimensional subcomplex *K′* of *K* that is homeomorphic (topologically equivalent) to a (*d*-1)-sphere.

Suppose *v* has color *i*. We will show that changing *v*'s color to *j* ≠ *i* will add or remove the same number of positively oriented chromatic *d*-simplices as negatively oriented ones: Forget, for the moment, the distinction between colors *i* and *j*—say any *i* or *j*-colored vertex of *K′* has color "*i*-or-*j*." Then *K′* is now *d*-colored, so, by inductive hypothesis, the chromatic (*d*-1)-simplices of *K′* must occur in pairs of opposite orientation (if any exist—if none exist, *v* can't be part of any chromatic *d*-simplex regardless of its color). Consider such a pair, call them *F*₁ and *F*₂.

Now cease pretending that *i* and *j* are a single color. Since *F*₁ was chromatic in *K′*, it must have had an *i*-or-*j*–colored vertex. Suppose, WOLOG, that that vertex was actually *j*-colored. Then, together with *i*-colored *v*, *F*₁ spans a chromatic *d*-simplex of *K*, call it *S*₁, which we may assume WOLOG to be positively oriented. Once we change the color of *v* from *i* to *j*, however, *S*₁ will have two *j*-colored vertices, and so will no longer be chromatic. To see that balance is maintained, consider what happens with *F*₂:

If *F*₂'s *i*-or-*j*–colored vertex was, like *F*₁'s, actually *j*-colored, then the *d*-simplex spanned by *F*₂ and *v*, call it *S*₂, was chromatic and negatively oriented (because *F*₂ had opposite orientation to *F*₁ in *K′*), and thus *S*₂ also ceased to be chromatic when we changed *v*'s color from *i* to *j*, balancing *S*₁'s loss of chromatic status. Otherwise, *F*₂'s *i*-or-*j*–colored vertex must have been *i*-colored, in which case *S*₂ wasn't chromatic when *v* was also *i*-colored, but changing *v*'s color to *j* turned *S*₂ into a new chromatic *d*-simplex. But what is *S*₂'s orientation? Well, it was negative under the assumption that *S*₂'s *i*-or-*j*–colored vertex was *j*-colored and *v* was *i*-colored, and swapping the labels of a pair of vertices in an oriented simplex reverses its orientation, so, under the present assumption, *S*₂'s orientation must be positive! Thus the loss of *S*₁ as a positively oriented chromatic *d*-simplex is balanced by the addition of *S*₂ as a new positively oriented chromatic *d*-simplex.

If all of *K*'s vertices are the same color, it has the same number (zero) of positively and negatively oriented chromatic *d*-simplices, and since this parity is preserved when we change the colors of the vertices of *K* one at a time, it remains no matter how we (*d*+1)-color *K*. ☐
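For what it's worth, the *d* = 2 case of the theorem can be verified by brute force on the smallest interesting example, the octahedral triangulation of the 2-sphere (again a sketch of mine, not part of the proof):

```python
import itertools

# The octahedron triangulates the 2-sphere; all 8 faces are listed with a
# consistent orientation (counterclockwise as seen from outside).
FACES = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
         (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]

def perm_sign(p):
    """Sign of a permutation of (1, 2, 3)."""
    return 1 if p in [(1, 2, 3), (2, 3, 1), (3, 1, 2)] else -1

def signed_chromatic_count(coloring):
    """Sum of orientation signs over the chromatic (trichromatic) triangles."""
    total = 0
    for a, b, c in FACES:
        cols = (coloring[a], coloring[b], coloring[c])
        if set(cols) == {1, 2, 3}:
            total += perm_sign(cols)
    return total

# Exhaustively check every 3-coloring of the 6 vertices: the signed count is
# always zero, i.e. positively and negatively oriented chromatic triangles
# pair off exactly.
for coloring in itertools.product([1, 2, 3], repeat=6):
    assert signed_chromatic_count(coloring) == 0
```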

We can relate this theorem back to Sperner's lemma using the same trick as parent comment: Suppose we are given a triangulation *K* of a regular *d*-simplex *S* into smaller *d*-simplices, and a (*d*+1)-coloring of *K*'s vertices that assigns a unique color to each vertex *v* of *S*, and doesn't use that color for any of *K*'s vertices lying on the face of *S* opposite to *v*. We form a larger simplicial complex *L* containing *K* by adding *d* + 1 new vertices as follows: For *i* = 1, ..., *d* + 1, place a new *i*-colored vertex exterior to *S*, some distance from the center of *S* along the ray that goes through the *i*-colored vertex of *S*. Connect this new vertex to each vertex of *K* lying in the face of *S* opposite from the (*i*+1)-colored (or 1-colored, when *i* = *d* + 1) vertex of *S*. (Note that none of the *d*-simplices thereby created will be chromatic, because they won't have an (*i*+1)-colored vertex.) Then connect all of the new vertices to each other.

Having thus defined *L*, we can embed it in a *d*-sphere, of which it will constitute a triangulation, because the new vertices will form a *d*-simplex *T* whose "interior" is the complement of *L* in the sphere. Thus we can apply our above theorem to conclude that *L* has equally many positively and negatively oriented chromatic *d*-simplices. By construction, none of *L*'s new vertices are included in any chromatic *d*-simplex other than *T*, so *K* must contain an equal number (possibly zero) of positively and negatively oriented chromatic *d*-simplices, plus one more, of opposite orientation to *T*. And what is the orientation of *T*? I claim that it is opposite to that of *S*: Consider *T* by itself, embedded in the sphere. *T*'s boundary and exterior (the interior of *L*) then constitute another chromatic *d*-simplex, call it *U*, which is essentially just a magnification of *S*, with correspondingly colored vertices, and so shares *S*'s orientation. Applying our theorem again, we see that *T* and *U* must have opposite orientations*, so we conclude that *K* must contain exactly one more chromatic *d*-simplex of the same orientation as *S* than of the opposite orientation. (As proved in nshepperd's comment for the case *d* = 2.)

*The observation that, on the surface of a sphere, the interior and exterior of a trichromatic triangle have opposite orientations is what sent me down this rabbit hole in the first place. :)
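The 1-dimensional case of this Sperner-style conclusion is easy to check exhaustively (an illustrative sketch of mine): on a path whose endpoints are colored 1 and 2, every coloring of the interior vertices yields exactly one more 1→2 edge than 2→1 edges.

```python
import itertools

def signed_count(colors):
    """+1 per 1->2 edge, -1 per 2->1 edge, walking the path left to right."""
    total = 0
    for a, b in zip(colors, colors[1:]):
        if (a, b) == (1, 2):
            total += 1
        elif (a, b) == (2, 1):
            total -= 1
    return total

# Endpoints fixed at colors 1 and 2; every interior coloring gives exactly +1.
for interior in itertools.product([1, 2], repeat=6):
    assert signed_count((1,) + interior + (2,)) == 1
```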

Embedded World-Models

Thanks, this is a very clear framework for understanding your objection. Here's the first counterargument that comes to mind: Minimax search is a theoretically optimal algorithm for playing chess, but is too computationally costly to implement in practice. One could therefore argue that all that matters is computationally feasible heuristics, and modeling an ideal chess player as executing a minimax search adds nothing to our knowledge of chess. OTOH, doing a minimax search of the game tree for some bounded number of moves, then applying a simple board-evaluation heuristic at the leaf nodes, is a pretty decent algorithm in practice.
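To illustrate the bounded-depth idea (on a made-up toy game tree, not real chess code), depth-limited minimax with a board-evaluation heuristic at the frontier looks roughly like this:

```python
# Toy game tree (hypothetical, purely for illustration): inner nodes map to
# their children; SCORES is the "board evaluation" heuristic for any state.
TREE = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
SCORES = {'A': 0, 'B': 4, 'C': 1, 'D': 3, 'E': 5, 'F': 2, 'G': 9}

def minimax(state, depth, maximizing):
    """Depth-limited minimax: search `depth` plies deep, then fall back on
    the evaluation heuristic at the leaves of the truncated tree."""
    children = TREE.get(state, [])
    if depth == 0 or not children:
        return SCORES[state]
    values = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(values) if maximizing else min(values)

full = minimax('A', 2, True)     # full search: max(min(3, 5), min(2, 9)) = 3
shallow = minimax('A', 1, True)  # truncated: heuristic at B and C -> max(4, 1) = 4
```

The truncated search can disagree with the full one (here 4 vs. 3), which is exactly the trade-off between the ideal algorithm and the feasible heuristic.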

Furthermore, it seems like there's a pattern where, the more general the algorithmic problem you want to solve is, the more your solution is compelled to resemble some sort of brute-force search. There are all kinds of narrow abilities we'd like an AGI to have that depend on the detailed structure of the physical world, but it's not obvious that any such structure, beyond hypotheses about what is feasibly computable, could be usefully exploited to solve the kinds of problem laid out in this sequence. So it may well be that the best approach turns out to involve some sort of bounded search over simpler strategies, plus lots and lots of compute.

Realism about rationality

I didn't mean to suggest that the possibility of hypercomputers should be taken seriously as a physical hypothesis, or at least, any more seriously than time machines, perpetual motion machines, faster-than-light travel, etc. And I think it's similarly irrelevant to the study of intelligence, machine or human. But in my thought experiment, the way I imagined it working was that, whenever the device's universal-Turing-machine emulator halted, you could then examine its internal state as thoroughly as you liked, to make sure everything was consistent with the hypothesis that it worked as specified (and the non-halting case could be ascertained by the presence of pixie dust 🙂). But since its memory contents upon halting could be arbitrarily large, in practice you wouldn't be able to examine it fully even for individual computations of sufficient complexity. Still, if you did enough consistency checks on enough different kinds of computations, and the cleverest scientists couldn't come up with a test that the machine didn't pass, I think believing that the machine was a true halting-problem oracle would be empirically justified.

It's true that a black-box oracle could output a nonstandard "counterfeit" halting function which claimed that some actually non-halting TMs do halt, but only for TMs that can't be proved not to halt within ZFC or any other plausible axiomatic foundation humans ever come up with, in which case we would never know that it was lying to us. It would be trickier for the device I described to pull off such a deception, because it would have to actually halt and show us its output in such cases. For example, if it claimed that some actually non-halting TM *M* halted, we could feed it a program that emulated *M* and output the number of steps *M* took to halt. That program would also have to halt, and output some specific number *n*. In principle, we could then try emulating *M* for *n* steps on a regular computer, observe that *M* hadn't reached a halting state, and conclude that the device was lying to us. If *n* were large enough, that wouldn't be feasible, but it's a decisive test that a normal computer could execute in principle. I suppose my magical device could instead do something like leave an infinite output string in memory, that a normal computer would never know was infinite, because it could only ever examine finitely much of it. But finite resource bounds already prevent us from completely ruling out far-fetched hypotheses about even normal computers. We'll never be able to test, e.g., an arbitrary-precision integer comparison function on *all* inputs that could feasibly be written down. Can we be *sure* it always returns a Boolean value, and never returns the Warner Brothers dancing frog?
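The decisive test described above can be sketched in a few lines (toy code of mine; the "machine" here is just a countdown, standing in for a step-by-step emulation of *M*):

```python
# A toy "machine" M (hypothetical stand-in for a Turing machine): it counts
# down from 7 and halts at 0, so it really halts in exactly 7 steps.
step = lambda s: s - 1
is_halted = lambda s: s == 0

def check_oracle_claim(claimed_steps):
    """Replay M for the oracle's claimed number of steps on an ordinary
    computer; return True iff M has really halted by then."""
    state = 7
    for _ in range(claimed_steps):
        if is_halted(state):
            break
        state = step(state)
    return is_halted(state)

# An honest claim checks out; a counterfeit one is caught in the replay.
assert check_oracle_claim(7) is True
assert check_oracle_claim(3) is False
```

As the comment notes, this check is only feasible when the claimed *n* is small enough to replay, which is exactly where the in-principle/in-practice gap lies.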

Actually, hypothesizing that my device "computed" a nonstandard version of the halting function would already be sort of self-defeating from a standpoint of skepticism about hypercomputation, because all nonstandard models of Peano arithmetic are known to be uncomputable (Tennenbaum's theorem). A better skeptical hypothesis would be that the device passed off some actually halting TMs as non-halting, but only in cases where the shortest proof that any of those TMs would have halted eventually was too long for humans to have discovered yet. I don't know enough about Solomonoff induction to say whether it would unduly privilege such hypotheses over the hypothesis that the device was a true hypercomputer (if it could even entertain such a hypothesis). Intuitively, though, it seems to me that, if you went long enough without finding proof that the device wasn't a true hypercomputer, continuing to insist that such proof would be found at some future time would start to sound like a God-of-the-gaps argument. I think this reasoning is valid even in a hypothetical universe in which human brains couldn't do anything Turing machines can't do, but other physical systems could. I admit that's a nontrivial, contestable conclusion. I'm just going on intuition here.

Realism about rationality

This can’t be right ... Turing machines are assumed to be able to operate for unbounded time, using unbounded memory, without breaking down or making errors. Even finite automata can have any number of states and operate on inputs of unbounded size. By your logic, human minds shouldn’t be modeling physical systems using such automata, since they exceed the capabilities of our brains.

It’s not that hard to imagine hypothetical experimental evidence that would make it reasonable to believe that hypercomputers could exist. For example, suppose someone demonstrated a physical system that somehow emulated a universal Turing machine with infinite tape, using only finite matter and energy, and that this system could somehow run the emulation at an accelerating rate, such that it computed *n* steps in 1 − 2^(−*n*) seconds. (Let’s just say that it resets to its initial state in a poof of pixie dust if the TM doesn’t halt after one second.)

You could try to reproduce this experiment and test it on various programs whose long-term behavior is predictable, but you could only test it on a finite (to say nothing of computable) set of such inputs. Still, if no one could come up with a test that stumped it, it would be reasonable to conclude that it worked as advertised. (Of course, some alternative explanation would be more plausible at first, given that the device as described would contradict well established physical principles, but eventually the weight of evidence would compel one to rewrite physics instead.)

One could hypothesize that the device only behaved as advertised on inputs for which human brains have the resources to verify the correctness of its answers, but did something else on other inputs, but you could just as well say that about a normal computer. There’d be no reason to believe such an alternative model, unless it was somehow more parsimonious. I don’t know any reason to think that theories that don’t posit uncomputable behavior can always be found which are at least as simple as a given theory that does.

Having said all that, I’m not sure any of it supports either side of the argument over whether there’s an ideal mathematical model of general intelligence, or whether there’s some sense in which intelligence is more fundamental than physics. I will say that I don’t think the Church-Turing thesis is some sort of metaphysical necessity baked into the concept of rationality. I’d characterize it as an empirical claim about (1) human intuition about what constitutes an algorithm, and (2) contingent limitations imposed on machines by the laws of physics.

Who Wants The Job?

> Apologies for the stark terms if it felt judgmental or degrading!

No worries! I mostly just wrote that comment for the lulz. And the rest was mostly so people wouldn't think *I* was using humor to obliquely endorse social Darwinism.

Who Wants The Job?

I've never heard of anyone doing this directly. Has anyone else? If not, there's probably a reason. I suppose occupational certification programs serve a similar filtering function. Anyway, your suggestion might be more palatable if it were in the form of a deposit refundable if and only if the applicant shows up/answers the phone for scheduled interviews. You would also need a trusted intermediary to hold the deposits, or else we would see a flood of fake job-interview-deposit scams. And even if you had such a trusted intermediary, I suspect that, in a world in which job-interview deposits were the norm, scammers would find all sorts of creative ways to impersonate that intermediary convincingly enough to fool a lot of desperate, marginally employable marks.

Also, the deposit would have to be quite small for low-wage entry-level jobs whose applicant pool would include a lot of people who can't reliably scrounge up 20 bucks, and even then, some would be hindered by limited/expensive access to basic financial services like checking accounts and electronic payments. Maybe those are mostly people you're trying to filter out? Then again, who is the ideal applicant, from the minimum-wage employer's perspective? Someone reliable and competent, of course, but also someone who *really needs* the money, and so will be highly motivated. So, someone who wouldn't normally be desperately poor, but happens to be at the moment. Maybe access to those applicants is worth enough to some employers that they're willing to pay the price of similar-looking applicants flaking on their interviews and such.

Who Wants The Job?

> It's really unclear, as a society, how to get them into a position where they can provide as much value as the resources they take.

Harvest their reusable organs, and melt down what remains to make a nutrient paste suitable for livestock and the working poor?

(Kidding! But that sort of thing is always where my mind wanders when people put the question in such stark terms, perhaps because I myself am a chronically unemployed "taker" (mental illness). Anyway, one of the long-term goals of AI and automation research, as I understand it, is to turn everyone into takers ("full unemployment"). Meanwhile, one of the long-term goals of transhumanism is to be able to cure all of the various disorders and disabilities that render a large fraction of currently unemployed people unemployable. Until at least one of those goals is achieved, we will continue to have an unemployable underclass. I guess progress towards the transhumanist goal, or better public policy, could make that underclass smaller, but right now more people seem to be worrying about progress on the AI/automation front making it larger.)

Who Wants The Job?

I'm curious if anyone has recollections of what it was like trying to hire for similar positions in recent years, when the unemployment rate was much higher. That is to say, how much of this is base-rate human flakiness, and how much is attributable to the tight labor market having already hoovered up almost all the well-functioning adults?

This Feb. 20th Twitter thread from Trevor Bedford argues against the lab-escape scenario. Do read the whole thing, but I'd say that the key points not addressed in parent comment are:

But apparently SARS-CoV-2 isn't that. (See pic.)

One scenario I recall seeing somewhere that would reconcile lab-escape with data points 3 & 4 above is that some low-level WIV employee or contractor might have sold some purloined lab animals to the wet market. No idea how plausible that is.