Ex 8: (in Python, using a reversal function)
def f(s):
    return s[::-1]
dlmt = '"""'
code = """def f(s):
    return s[::-1]
dlmt = '{}'
code = {}{}{}
code = code.format(dlmt, dlmt, code, dlmt)
print(f(code))"""
code = code.format(dlmt, dlmt, code, dlmt)
print(f(code))
Ex 4
Given a computable function f: S → Comp(S, {T, F}), define a function g: S → {T, F} by the rule g(s) = ¬f′(s, s). Then g is computable but not in the image of f, because for any s with f(s) = g, we have that f(s)(s) = g(s) and g(s) = ¬f(s)(s).
Ex 5:
We show the contrapositive: given a computable function halt, we construct a surjective computable function f from S to Comp(S, {T, F}) as follows: enumerate all Turing machines, such that each corresponds to a string. Given a string s, if s does not decode to a Turing machine, set f(s) to be constantly F. If it does, let M_s denote that Turing machine. Let f(s) with input x first run halt(s, x); if halt returns F, put out F; otherwise M_s will halt on input x, so run M_s on x and put out the result.
Given a computable function g ∈ Comp(S, {T, F}), there is a string s such that M_s implements g (if the Church–Turing thesis is true). Then f(s) = g, so that f is surjective.
Ex 6:
Let γ be a parametrization of the circle given by γ(t) = (cos 2πt, sin 2πt). Given x ∈ S¹ and t ∈ ℝ, write x + t to denote the point γ(s + t), where γ(s) = x. First, note that, regardless of the topology on Cont(X, S¹), it holds true that if f: X → S¹ is continuous, then so is f + t for any t, because given a basis element U of the circle, we have (f + t)⁻¹(U) = f⁻¹(U − t), which is open because f is continuous.
Let f be a continuous function from X to Cont(X, S¹). Then f′: X × X → S¹ is continuous, and so is the diagonal function d(x) = f′(x, x). Fix any t with γ(t) ≠ γ(0), then g given by g(x) = d(x) + t is also continuous, but given any x, one has g(x) = f(x)(x) + t ≠ f(x)(x) and thus g ≠ f(x). It follows that f is not surjective.
Ex 7:
I did it in Java. There are probably easier ways to do this, especially in other languages, but it still works. It was incredibly fun to do. My basic idea was to have a loop iterate 2 times, the first time printing the program, the second time printing the printing command. Escaping the " characters is the biggest problem; the main idea here was to have a string q that equals " in the first iteration and " + q + " in the second. That second string (as part of the code in an expression where a string is printed) will print itself in the console output. Code:
package maths;public class Quine{public static void main(String[]args){for(int i=0;i<2;i++){String o=i==1?""+(char)34:"";String q=""+(char)34;q=i==1?q+"+q+"+q:q;String e=i==1?o+"+e);}}}":"System.out.print(o+";System.out.print(o+"package maths;public class Quine{public static void main(String[]args){for(int i=0;i<2;i++){String o=i==1?"+q+""+q+"+(char)34:"+q+""+q+";String q="+q+""+q+"+(char)34;q=i==1?q+"+q+"+q+"+q+"+q:q;String e=i==1?o+"+q+"+e);}}}"+q+":"+q+"System.out.print(o+"+q+";"+e);}}}
Thoughts on #10:
I am confused about this exercise. The standard/modern proof of Gödel's second incompleteness theorem uses the Hilbert–Bernays–Löb derivability conditions, which are stated as (a), (b), (c) in exercise #11. If the exercises are meant to be solved in sequence, this seems to imply that #10 is solvable without using the derivability conditions. I tried doing this for a while without getting anywhere.
Maybe another way to state my confusion is that I'm pretty sure that up to exercise #10, nothing that distinguishes Peano arithmetic from Robinson arithmetic has been introduced (it is only with the introduction of the derivability conditions in #11 that this difference becomes apparent). It looks like there is a version of the second incompleteness theorem for Robinson arithmetic, but the paper says "The proof is by the construction of a nonstandard model in which this formula [i.e. formula expressing consistency] is false", so I'm guessing this proof won't work for Peano arithmetic.
For #6, I have what looks to me like a counterexample. Possibly I am using the wrong definition of continuous function? I am taking this one as my source.
Take as the topological space X the real line under the Euclidean topology. Let f(x) be the point in S¹ at x radians rotation. This is surjective; all points in S¹ are mapped to infinitely many times. It is also continuous: for every x and neighborhood V of f(x) there is a neighborhood U of x such that f(U) ⊆ V.
The partial functions f(x, ·) from X to S¹ are also continuous by the same argument.
My solution for #12:
Suppose for the sake of contradiction that such a formula φ exists. By the diagonal lemma applied to ¬φ, there is some sentence ψ such that, provably, ¬φ(┌ψ┐) ↔ ψ. By the soundness of our theory, in fact ¬φ(┌ψ┐) ↔ ψ holds. But by the property for φ we also have φ(┌ψ┐) ↔ ψ, which means φ(┌ψ┐) ↔ ¬φ(┌ψ┐), a contradiction.
This seems to be the "semantic" version of the theorem, where the property for φ is stated outside the system. There is also a "syntactic" version where the property for φ is stated within the system.
#1
Let f be such a surjection. Construct a member of P(S) outside f's image by differing from each f(x) in whether it contains x.
#2
A nonempty set has functions without a fixed point iff it has at least two elements. It suffices to show that there is no surjection from S to S -> 2, which is analogous to #1.
#3
T has only one element. Use that one.
#7 Haskell
source = "main = putStrLn (\"source = \" ++ show source ++ \"\\n\" ++ source)"
main = putStrLn ("source = " ++ show source ++ "\n" ++ source)
Is #8 supposed to read "Write a program that takes a function f taking a string as input as input, and produces its output by applying f to its source code. For example, if f reverses the given string, then the program should output its source code backwards."?
If so, here.
source = "onme = putStrLn $ f $ \"source = \" ++ show source ++ \"\\n\" ++ source"
onme f = putStrLn $ f $ "source = " ++ show source ++ "\n" ++ source
Ex 1
Exercise 1: Let f: S → P(S) and let X = {s ∈ S : s ∉ f(s)}. Suppose that X ∈ im(f), then let y be an element such that f(y) = X. Then by definition, y ∈ X ⟹ y ∉ f(y) = X and y ∉ X ⟹ y ∈ f(y) = X. So y ∈ X ⟺ y ∉ X, a contradiction. Hence X ∉ im(f), so that f is not surjective.
Ex 2
Exercise 2: Since T is nonempty, it contains at least one element t. Let f: T → T be a function without a fixed point, then f(t) ≠ t, so that t and f(t) are two different elements in T (this is the only thing we shall use the function f for).
Let S be nonempty and let g: S → (S → T). Suppose by contradiction that g is surjective. Define a map F: (S → T) → P(S) by the rule F(φ) = {s ∈ S : φ(s) = t}. Given any subset A ⊆ S, let φ_A: S → T be given by φ_A(s) = t if s ∈ A and φ_A(s) = f(t) otherwise. Since g is surjective, we find an s₀ such that g(s₀) = φ_A; then F(g(s₀)) = A. This proves that F ∘ g: S → P(S) is surjective, which contradicts the result from Exercise 1.
Ex 3
Exercise 3: By (2) we know that the function h: S → T given by h(s) = f(g(s)(s)) has a preimage under g, and so h = g(s′) for some s′. That means that for x = g(s′)(s′) we have x = h(s′) = f(g(s′)(s′)) = f(x), so x = g(s′)(s′) is the desired fixed point.
Attempted solution and some thoughts on #9:
Define a formula θ ∈ S1 taking one free variable x to be θ(x) = ϕ(┌f(x, x)┐).
Now define ψ to be f(θ, θ). By the definition of f we have f(θ, θ) = θ(┌θ┐).
We have ψ = f(θ, θ) = θ(┌θ┐) = ϕ(┌f(θ, θ)┐) = ϕ(┌ψ┐).
The first step follows by the definition of ψ, the second by the definition of f, the third by the definition of θ, and the fourth by the definition of ψ again. Since ψ ∈ S0 by the type signature of f, this completes the proof.
Things I'm not sure about:
It's a little unclear to me what the notation Syn(A, B) means. In particular, I've assumed that ϕ takes as inputs Gödel numbers of formulas rather than the formulas themselves. If ϕ takes as inputs the formulas themselves, then I don't think we can assume that the needed diagonalizing formula exists without doing more arithmetization work (i.e. it would need to know how to convert from the Gödel number of a formula to the formula itself).
If the biconditional "ϕ(┌ψ┐) ↔ ψ" is a connective in the logic itself, then I think the same proof works but we would need to assume more about f than is given in the problem statement, namely that the theory we have can prove the substitution property of f.
The assumption about the quantifier complexity of S0 and S1 was barely used. It was just given to us in the type signature for f, and the same proof would have worked without this assumption, so I am confused about why the problem includes this assumption.
Here's the proof from the paper again:
So we assume the proof is correct!
Q7 (Python):
Y = lambda s: eval(s)(s)
Y('lambda s: print(f"Y = lambda s: eval(s)(s)\\nY({s!r})")')
Q8 (Python):
Not sure about the interpretation of this one. Here's a way to have it work for any fixed (python function) f:
f = 'lambda s: "\\n".join(s.splitlines()[::-1])'
go = 'lambda s: print(eval(f)(eval(s)(s)))'
eval(go)('lambda src: f"f = {f!r}\\ngo = {go!r}\\neval(go)({src!r})"')
I am confused by the introductory statement for #9. Is this an accurate rephrasing?
By representing syntax using arithmetic, it is possible to define a function f as follows:
Define f ∈ Syn(S1 × S1, S0) with image in S0, such that:
f(φ, ψ) substitutes the Gödel number of ψ into φ (creating φ(┌ψ┐)), where f is itself given by substituting the Gödel numbers of its arguments into some fixed formula to get a result in S0.
I’m confused about Q9.
Given the way Syn is defined, it's unclear to me how we ensure type correctness of f. In what sense is S1 × S1 a set of sentences (rather than a set of pairs of sentences)? What does an element of that set look like?
Yeah, it is just functions that take in two sentences and put both their Gödel numbers into a fixed formula (with 2 inputs).
Ex 1.
Suppose there is a surjection f : S -> P(S). Consider the set X = {x ∈ S : x ∉ f(x)}. Since f is a surjection, X = f(y) for some y in S. Does y lie in X? If y ∈ X, then y ∈ f(y), so by definition of X, y ∉ X. If y ∉ X, then y ∉ f(y), so y must belong to X. Contradiction.
Ex 2.
Since there is a function f: T → T without fixed points, T must have at least two elements. Hence, there is a surjection p: T → {0, 1}, which induces a surjection (S → T) → P(S) (a function g goes to {x ∈ S : p(g(x)) = 1}). So, if there were a surjection S → (S → T), there would also be a surjection S → P(S), which cannot be by the previous exercise.
Ex 4.
Suppose f: S → Comp(S, {T, F}) is a computable surjective function. Consider the function g defined by g(x) = ¬f′(x, x). The function g is computable, therefore there should exist an s such that f(s) = g.
Then f(s)(s) = g(s) = ¬f′(s, s) = ¬f(s)(s). Contradiction.
Ex 5.
Suppose halt(x,y) is a computable function. Consider the function f: S → {T, F} defined as follows: f(x) = T if halt(x, x) = F; otherwise run the machine coded by x on input x and output the negation of its result.
Suppose s' is a Turing code of f. Since f halts everywhere, halt(s', s') = T. But then f(s') = ¬f(s'). Contradiction.
Ex 6.
Suppose that f: X → Cont(X, S¹) is a continuous surjection. Consider the function g(x) = −f′(x, x) (here −f′(x, x) is a point diametrically opposed to f′(x, x)). g is continuous and f is surjective, hence g = f(y) for some y, but then g(y) = f′(y, y) = −f′(y, y). Contradiction.
Ex 7. A quine in python3:
code = """code = {}{}{}{}{}{}{}
print(code.format('"','"','"',code,'"','"','"'))"""
print(code.format('"','"','"',code,'"','"','"'))
Ex 8. In python:
import inspect
def f(string):
    return string[::-1]
def applytoself(f):
    source = inspect.getsource(f)
    return f(source)
applytoself(f)
'\n]1-::[gnirts nruter \n:)gnirts(f fed'
Ex 9.
The formula for is
Ex 11.
Suppose φ(x) is the formula Bew(x) → ψ. By the diagonal lemma, there exists a formula A such that ⊢ A ↔ (Bew(┌A┐) → ψ).
Therefore, ⊢ Bew(┌A → (Bew(┌A┐) → ψ)┐) by property (a).
By property c, ⊢ Bew(┌A┐) → Bew(┌Bew(┌A┐) → ψ┐).
Again by property c, ⊢ Bew(┌Bew(┌A┐) → ψ┐) → (Bew(┌Bew(┌A┐)┐) → Bew(┌ψ┐)).
Combining the previous two implications, ⊢ Bew(┌A┐) → (Bew(┌Bew(┌A┐)┐) → Bew(┌ψ┐)).
Since ⊢ Bew(┌A┐) → Bew(┌Bew(┌A┐)┐) by property (b), we have ⊢ Bew(┌A┐) → Bew(┌ψ┐).
Combining this with ⊢ Bew(┌ψ┐) → ψ, we get ⊢ Bew(┌A┐) → ψ.
From this we get ⊢ A, therefore ⊢ Bew(┌A┐) by property (a), and ⊢ ψ. QED.
Don't know if this is still relevant, but on Ex9
you definitely can't define ψ this way. Your definition includes the Gödel numeral for ψ, which makes the definition depend on itself.
Self-referential definitions can be constructed with the diagonal lemma. Given that the point of the exercise is to show something similar, you're right that this solution is probably a bit suspect.
I might be wrong, but I believe this is not correct. The diagonal lemma lets you construct a sentence that is logically equivalent to something including its own Gödel numeral. This is different from having its own Gödel numeral be part of the syntactic definition.
In particular, the former isn't recursive. It defines one sentence and then, once that sentence is defined, proves something about a second sentence which includes the Gödel numeral of the first. But what seed attempted (unless I misunderstood it) was to use the Gödel numeral ┌ψ┐ in the syntactic definition for ψ, which doesn't make sense because ┌ψ┐ is not defined until ψ is.
Minor correction for #7: You probably want to say "nonempty quine" or "nontrivial quine". The trivial quine works in many languages.
My nontrivial answer for Q7, in Python:
with open("foo.py", "r") as foo:
    print foo.read()
And for Q8:
def f(string):
    return ''.join([chr(ord(c)+1) for c in string])
with open("foo.py", "r") as foo:
    print f(foo.read())
Ex8
This was reasonably straightforward given the quine.
def apply(f):
 l = chr(123) # opening curly bracket
 r = chr(125) # closing curly bracket
 q = chr(39) # single quotation mark
 n = chr(10) # linebreak
 z = [n+" ", l+f"z[i]"+r+q+n+" + f"+q]
 x = [n+" ", l+f"x[i]"+r]
 e = [q, l+"e[i]"+r+q+")"+n+" print(f(sourcecode))"]
 sourcecode = ""
 for i in range(0,2):
  sourcecode += (f'def apply(f):{z[i]}'
 + f'l = chr(123) # opening curly bracket{z[i]}'
 + f'r = chr(125) # closing curly bracket{z[i]}'
 + f'q = chr(39) # single quotation mark{z[i]}'
 + f'n = chr(10) # linebreak{z[i]}'
 + f'z = [n+" ", l+f"z[i]"+r+q+n+" + f"+q]{z[i]}'
 + f'x = [n+" ", l+f"x[i]"+r]{z[i]}'
 + f'e = [q, l+"e[i]"+r+q+")"+n+" print(f(sourcecode))"]{z[i]}'
 + f'sourcecode = ""{z[i]}'
 + f'for i in range(0,2):{x[i]}sourcecode += (f{e[i]}')
 print(f(sourcecode))
Last time, I got to Ex7. This time, I decided to do them all again before continuing.
Comment on Ex1-6
It gets easy if you just write down what property you want to have in first-order logic.
For example, for Ex1 you want a set U ⊆ S that does the following: ∀y ∈ S: U ≠ f(y).
now if we consider a set as a function that takes an element and returns true or false, this becomes ∀y: ∃x: U(x) ≠ f(y)(x).
How do you get such an x? You can just choose x = y, then we need ∀y: U(y) ≠ f(y)(y),
and this is done by defining U(y) = ¬f(y)(y), i.e., U = {y ∈ S : y ∉ f(y)}, which is precisely the solution. This almost immediately answers Ex 1,2,4 and it mostly answers Ex6.
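As a toy illustration of the diagonal set, here is the construction computed explicitly on made-up finite data (the map f below is an arbitrary example of an attempted surjection S -> P(S)):

```python
# Cantor's diagonal set for an attempted surjection f: S -> P(S),
# computed on a small example. U differs from each f(y) at y itself.
S = {0, 1, 2}
f = {0: {0, 1}, 1: set(), 2: {1, 2}}  # example data, any choice works
U = {y for y in S if y not in f[y]}   # U(y) = not f(y)(y)
assert all(U != f[y] for y in S)      # U is outside the image of f
print(U)  # {1}
```

Whatever finite f you pick, U disagrees with f(y) on the element y, so it can never be hit.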
Another quine for Ex7, this time in python:
l = chr(123) # opening curly bracket
r = chr(125) # closing curly bracket
q = chr(39) # single quotation mark
t = chr(9) # tab
n = chr(10) # linebreak
z = [n, l+f"z[i]"+r+q+n+t+"+ f"+q]
x = [n+t, l+f"x[i]"+r]
e = [q, l+"e[i]"+r+q+", end="+q+q+")"]
for i in range(0,2):
	print(f'l = chr(123) # opening curly bracket{z[i]}'
	+ f'r = chr(125) # closing curly bracket{z[i]}'
	+ f'q = chr(39) # single quotation mark{z[i]}'
	+ f't = chr(9) # tab{z[i]}'
	+ f'n = chr(10) # linebreak{z[i]}'
	+ f'z = [n, l+f"z[i]"+r+q+n+t+"+ f"+q]{z[i]}'
	+ f'x = [n+t, l+f"x[i]"+r]{z[i]}'
	+ f'e = [q, l+"e[i]"+r+q+", end="+q+q+")"]{z[i]}'
	+ f'for i in range(0,2):{x[i]}print(f{e[i]}', end='')
1: Got spoiled on this, so just "recalling"
Glossing over bijecting 2^N with R (use binary expansions, and then handle the countable 'collisions' like .111... = 1.000... separately), let's prove S can't surject into P(S). We have a function f: S -> P(S), and if we look at a value f(s) we'll get a subset of S. Let's build a set U that, for every s, asks if s in f(s) and then does the opposite. That is, U = {s : s not in f(s)}. Now, if we have a surjection then there's some u s.t. f(u) = U. Is u in U? Well, if it is, then it isn't, but if it isn't, then it is.
This made me notice that it's like Russell's paradox! So I guess I am glad I 'recalled' it.
If you haven't seen that one before: Imagine that you could make a set of all sets. Prove a contradiction. (whenever you have a set, e.g. the integers, you can do things like make {x^2 : x in the integers, x is even} which means "the set of all values of x^2, where x ranges over integers that satisfy (x is even)" aka the even squares).
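The set-builder notation above reads almost verbatim as a Python set comprehension; restricting to a finite slice of the integers for illustration:

```python
# "{x^2 : x in the integers, x is even}" over a finite range of integers.
even_squares = {x**2 for x in range(-6, 7) if x % 2 == 0}
print(sorted(even_squares))  # [0, 4, 16, 36]
```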
If we have a set U of all sets, then we'd have an f: U -> P(U) because... P(U) = U! So just by saying f = identity, we get a contradiction from Cantor! More explicitly, walking through the proof again, we make S = {x : x in U, x not in x}. Now, is S in S? (aka is S in f(S)?) Well, if it is, then it isn't, but if it isn't, then it is. Womp womp.
2&3:
Well, in the previous exercise, our 'unfixy' map (call it f: T -> T) is the one that sends 0 to 1 and 1 to 0. So now we can mimic the previous proof: let g: S -> (S -> T). For every s, we look at g(s), and then look at g(s)(s). We then apply the 'shift' map f(g(s)(s)) to get an element of T depending on s, so we have a function h(s) = f(g(s)(s)). If g is surjective then there's some s' (I really want to write s^\*, but prime is quicker) such that g(s') = h. Now, what's g(s')(s')? Well, it's h(s') = f(g(s')(s')). But then we have a fixed point!
In general, any s' that gets sent to our function h provides us our fixed point g(s')(s')
Bonus Problem: Construct a fixed point combinator in the lambda calculus. That is, define a pure function with no free variables called fix such that fix f = f (fix f)
Well, in the previous problem our fixed point was g(s')(s'), where s' is such that g(s') = h, and h(s) = f(g(s)(s)).
Now we'll motivate what we'll do next by doing some intuitive but not really correct things, as that's how I got to the actually correct thing.
So our fix should be something kinda like (using pseudo python notation so I don't have to break out the latex)
lambda f: g(s')(s')
I can't quite remember how I actually thought up the next step, but I was thinking something like "Well, s' is like, the code that gets sent to the function h". So intuitively g(s')(s') should be something like (lambda s: f(g(s)(s))) (code for previous)
The thing to realize now is that the "code" for a lambda calculus expression... is just that expression. In other words, why don't we just pick g to be the identity (lambda x: x)? That is, if we say that L is the set of lambda functions and L(A->B) is the set of lambda functions from A to B, then L(L -> T) is a subset of L because L and T are both subsets of L. Looks kinda weird, but I guess that's just what happens when you don't have types?
It's surjective because any function L -> T can be passed in to the identity to just get itself.
So we now have h(s) = f(s(s)), s' = h, and we want s'(s').
Writing it out as a function (and using the usual convention that concatenation is application),
fix = lambda f: (lambda s: f(s s)) (lambda s: f(s s))
We can check it: fix f = (lambda s: f(s s)) (lambda s: f(s s)) = f( (lambda s: f(s s)) (lambda s: f(s s)) ) = f(fix f)
If you don't know, this is the Y combinator. And now I finally know where the formula comes from!
Okay, now use that to implement the factorial function recursively. I already know the answer to this unfortunately, but if you don't then you should see if you can! You're allowed to take for granted that there's lambda calculus functions for basic arithmetic and if-then-else.
let F = lambda f, n: if n == 0 then 1 else n*f(n-1)
Now, fact = fix F. Why? Well, because if fact(n-1) = (n-1)! then F(fact)(n) = n*(n-1)! = n!. But F(fact)(n) = fact(n) so this means that fact(n-1) = (n-1)! implies fact(n) = n!, and then we can prove fact works on all integers by induction.
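Python evaluates arguments eagerly, so the fix above would recurse forever if transcribed literally; the eta-expanded variant (usually called the Z combinator) does run. A sketch:

```python
# Z combinator: the eta-expanded Y, needed in a strict language like Python,
# where fix f = f(fix f) evaluated directly would never terminate.
fix = lambda f: (lambda s: f(lambda n: s(s)(n)))(lambda s: f(lambda n: s(s)(n)))

# F from the text: one unrolling of factorial, with the recursion abstracted out.
F = lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1)

factorial = fix(F)
print(factorial(5))  # 120
```

The `lambda n: s(s)(n)` wrapper delays the self-application until the recursive call actually needs it, which is exactly the strict/lazy distinction between Z and Y.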
4:
For short I'll define Bool := {T,F}.
If we had a surjective computable function f: S -> Comp(S, Bool), then this means f': S^2 -> Bool is in Comp(S^2, Bool).
Let's try to adapt the previous exercise. For any s, f(s) is a program from strings to true/false. Let's ask whether it says itself is true: f(s)(s). Let's then invert it to make h(s) = Not(f(s)(s)). Now, f(s)(s) = f'(s,s) so this is h(s) = Not( f'(s,s) ) which is computable since f' is.
Now since f is surjective, there's some s' s.t. f(s') = h, but then f(s')(s') = h(s') = Not(f'(s',s')) = Not(f(s')(s')) but Not has no fixed points.
5:
Imagine we had a computable halt function. We would like to build a surjective computable function f: S -> Comp(S, Bool).
For all r, we can uncurry halt to get halt_r(s) = halt(r,s) which is thus computable for all r. Therefore we have a function S -> Comp(S, Bool).
Is it surjective? Well, what we want to know is if, given a program r assigning truth values to strings (that is an element of Comp(S, Bool)), can we find a program s such that s halts on exactly the inputs that r says it should halt on?
Well, when you say it like that, it's really easy! We can make a program that runs r on the input, returns if it's true, else loops forever. In other words, given any computable set, we can make a program whose set of halting inputs is that set.
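That construction can be sketched in Python (the names are mine, and r stands in for an arbitrary total decider):

```python
def halts_exactly_on(r):
    # Given a total decider r: str -> bool, build a program that halts
    # exactly on the inputs s with r(s) == True (illustrative sketch).
    def program(s):
        if r(s):
            return True
        while True:  # r said False: loop forever
            pass
    return program
```

So every computable set of strings is the halting set of some program, which is what the surjectivity argument needs.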
Therefore, the uncurried halt has to be uncomputable.
Bonus: Rice's theorem (surprisingly easy, though it took me a while):
Suppose P is a property/set of programs that's nontrivial (there's a program without the property and a program with it) and extensional (it only depends on what the partial computable function that the code evaluates to does). Then deciding whether a program is in P is uncomputable.
Let Y/N be two programs that are/aren't in P. Now let's ask whether the program L = "loop forever" is in P. And say that "foo{s}bar" for a string s is the string with s appearing between "foo" and "bar" (instead of literal character s), like a format string.
If it is:
Given any program q and input i, ask whether
r = "lambda x: dummy = {q}({i}); return {N}(x)"
is in P
If q(i) halts then eval(r) = eval(N), so r isn't in P. Else eval(r) = eval(L) so r must be in P.
So you can make a computable function that makes the string r and feeds it to your P-detector.
Likewise if L doesn't have the property, you can use Y.
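The format-string construction above can be sketched like this (make_r and its arguments are my names; the P-detector that r would be fed to is, of course, exactly what's shown not to exist):

```python
def make_r(q_code: str, i: str, N_code: str) -> str:
    # Build the program r from the text: "run q on i (possibly forever),
    # then behave like N on the actual input x".
    return f"lambda x: [({q_code})({i!r}), ({N_code})(x)][1]"

# If q halts on i, eval(make_r(...)) computes the same function as N;
# otherwise it loops forever, just like L.
r = make_r("lambda s: len(s)", "hello", "lambda x: False")
print(eval(r)("anything"))  # q halts here, so this behaves like N: False
```

The map from (q, i) to the string r is plainly computable, which is what makes the reduction go through.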
6:
Suppose we have X, f such that f: X -> C(X, S) and f a continuous surjection. Then we can look at h(x) = g(f(x)(x)) for some continuous g: S -> S we'll try to define later. Then h is continuous, so there's some x' such that f(x') = h, but f(x')(x') = h(x') = g(f(x')(x')). This tells us we want a g without fixed points. We can just rotate the circle a little to get our desired g.
Note that, for example, this proof also works if instead of the circle we had the torus, or the cylinder, or (I think) the klein bottle. Likewise, it'll work for the sphere. But it won't work for the disk, by the Brouwer fixed point theorem.
7: TODO: Make a quine
8: TODO: Make a program that returns f(its code).
Bonus TODO: Make a program that returns f(its code, input) [kleene recursion theorem]
Bonus TODO: Make a program that takes in a transformation of programs and spits out a fixed point under extensional equivalence [roger's theorem]
9: This suddenly became obvious a few days later - not sure what was holding me up. Also, what does "Syn" stand for?
We have f: S_1 -> (S_1 -> S_0), given by sending a P in S_1 to the function that sends any Q in S_1 to P([Q]) (where this means we substitute the godel number of Q into P). It's surjective (onto Syn(S_1, S_0)), because an element of the codomain is given by substituting Q into some fixed formula, which we can take as our P, and then f(P) is what it should be.
Now, what should our map S_0 -> S_0 be? Well, we want to find a fixed point of some P in S_1... so... why don't we use P? All we gotta do is say that P sends a formula Q in S_0 to P([Q])!
So, now we'd like h(Q) = P([ f(Q)([Q]) ]) as usual. What we really need, though, is that we have a substitution function f' that takes in two godel numbers n,m and returns the godel number of the formula with m plugged into the formula corresponding to n. We can do this by (handwaves) doing arithmetic. So then what we really have is h(Q) = P(f'([Q], [Q])).
Then h is in Syn(S_1, S_0) because we are just substituting Q into this formula. So there's some R such that f(R)([R]) = P(f'([R], [R])) so if we have psi = R([R]) then we have psi = P([psi])
I'm not exactly sure why I have equality when I should only have logical equivalence. Looking at the wikipedia page, it looks like instead of having a formula f', what we really have is a formula d(Q,y) <-> y = f'(Q,Q), and then instead of my h(Q) we have h(Q) = \exists y : d(Q,y) and P(y). The reason you need to do this is that you can basically code up a turing machine that does the substitution, but in order to represent it in logic you need to say something like "this program has output equal to y".
10: "Bew"? I'll just say Prove. I'll also elide the quoting in the prove operator. You should imagine I put it there. Define F to be your favorite contradictory statement, like ((1=1) and (1!=1)). I'll also use |- ('turnstile') for "PA proves...".
Anyways, when I first wrote this I kept getting 'proofs' that PA proves itself inconsistent. The error was that we do not have |- P -> Prove(P) (that is, PA doesn't prove it's complete). We merely have (|- P) -> (|- Prove(P)) and |- Prove(P) -> Prove(Prove(P)) .
By the previous exercise, -Prove must have some fixed point. Call it P. Then |- -Prove(P) <-> P
Now, let's (in our metatheory) suppose |- P. We have:
|- P -> -Prove(P)
|- Prove(P) (from metatheorem mentioned in next exercise)
|- P -> F
which would make PA inconsistent.
If instead |- -P, then we'd have:
|- Prove(P)
|- Prove(-P) aka Prove(P -> F)
|- Prove(F) (since |- Prove(A -> B) -> (Prove(A) -> Prove(B)) )
If PA proves itself consistent, then this would mean it's inconsistent.
Therefore if PA is consistent and PA proves itself consistent, then it cannot prove nor disprove P.
Now, we will show that if PA is consistent and proves itself consistent then it must prove P. Don't ask me how I came up with this: I just filled out a napkin with failed approaches until this proof mysteriously appeared.
|- -Prove(P) <-> P
|- Prove(-P <-> Prove(P))
|- Prove(-P) <-> Prove(Prove(P))
|- Prove(P) -> Prove(Prove(P))
|- Prove(P) -> Prove(-P)
|- Prove(-P) -> (Prove(P) -> Prove(F))
|- Prove(P) -> (Prove(P) -> Prove(F))
|- Prove(P) -> Prove(F) [this will be used in the next exercise - note we haven't yet used -Prove(F)]
Now, since we assumed |- -Prove(F), we have
|- -Prove(P)
but then
|- P
which is where the previous part comes in to show that then PA is inconsistent.
If we believe (aka the metatheory proves) PA is consistent, then we believe PA can't prove itself consistent. Secondly, since we think PA can't prove a contradiction, we also think that P is true: by the first part, PA proving P would make it inconsistent, so PA doesn't prove P, which is exactly what P asserts.
I think the fundamental idea is to get Prove(P) 'through some other means', and specifically here we apply Prove to (thing that takes in Prove(P)) to get an expression that itself has Prove(P), and so we can go back up and substitute it into the machine.
That is, think of the constructive version of the fixpoint theorem: there's a function that takes in Prove(P) and spits out something with P in it. By applying Prove to this function, we get something that itself has a "essentially new" Prove(P) in there, which allows us to apply the function to get something useful.
11:
Suppose we have |- Prove(P) -> P. By the diagonal lemma we know there's some Q such that
|- Q <-> (Prove(Q) -> Prove(P))
|- Q <-> (Prove(Q) -> P) ["Q is true iff a proof of it implies P"]
|- Prove(Q) -> (Prove(Prove(Q)) -> Prove(P)) ["I can prove Q iff I can use a proof that I can to prove P"]
|- Prove(Q) -> Prove(P)
|- Q
|- Prove(Q)
|- Prove(P)
|- P
12: This was much much easier than I assumed it would be.
Say F is our phi, so that for all P, |- F(P) <-> P holds. Then by the diagonal lemma, there's a P s.t.
|- -F(P) <-> P
But then |- P <-> F(P) gives us a contradiction.
13: TODO
Here are my solutions to the first six problems:
1. Recall Cantor’s diagonal argument for the uncountability of the real numbers. Apply the same technique to convince yourself that for any set S, the cardinality of S is less than the cardinality of the power set P(S) (i.e. there is no surjection from S to P(S)).
Solution:
Suppose that f: S → P(S) is surjective. Then take X = {x ∈ S : x ∉ f(x)}. Since f is surjective, there exists y such that f(y) = X. But then y ∈ X ⟺ y ∉ X, a contradiction.
2. Suppose that a nonempty set T has a function f from T to T which lacks fixed points (i.e. f(x) ≠ x for all x ∈ T). Convince yourself that there is no surjection from S to S → T, for any nonempty S. (We will write the set of functions from A to B either as A → B or B^A; these are the same.)
Solution:
Suppose that g: S → (S → T) were such a surjection. Then we may define a function h: S → T by h(s) = f(g(s)(s)). Now, since g is surjective, there exists a y such that g(y) = h. Passing y as an argument to both sides, we have g(y)(y) = h(y), so g(y)(y) = f(g(y)(y)), contradicting the fact that f has no fixed points.
3. For nonempty S and T, suppose you are given a surjective function g: S → T^S from the set S to the set of functions from S to T, and let f be a function from T to itself. The previous result implies that there exists an x in T such that f(x) = x. Can you use your proof to describe x in terms of f and g?
Solution:
Define h: S → T by h(s) = f(g(s)(s)). Since g is surjective, there exists y such that g(y) = h. Then take x = g(y)(y). From my proof above, x is a fixed point of f.
4. Given sets A and B, let Comp(A, B) denote the space of total computable functions from A to B. We say that a function f from C to Comp(A, B) is computable if and only if the corresponding function f′: C × A → B (given by f′(c, a) = f(c)(a)) is computable. Show that there is no surjective computable function from the set S of all strings to Comp(S, {T, F}).
Solution:
Suppose for the sake of contradiction that f: S → Comp(S, {T, F}) is a surjective computable function. Then the following is computable: g(s) = ¬f′(s, s). Since f is surjective, there exists y such that f(y) = g, but then f(y)(y) = g(y) = ¬f(y)(y), a contradiction.
5. Show that the previous result implies that there is no computable function halt(x, y) from S × S → {T, F} which outputs T if and only if the first input is a code for a Turing machine that halts when given the second input.
Suppose for the sake of contradiction that there exists such a computable function halt. Then define a function f: S → Comp(S, {T, F}) as follows: for any string s, if s is not a code for a Turing machine, let f(s) be the function sending all strings to F. If s is a code for a Turing machine, let f(s) be the function x ↦ halt(s, x). I claim this is a surjective computable function from S to Comp(S, {T, F}): to show it is surjective, for any computable function g, there is a Turing machine which halts precisely when that function returns T. Letting s be the code for that Turing machine, we then have f(s) = g. It's also computable since we assume halt is computable. This contradicts the previous result, so we must have been wrong in our initial assumption.
6. Given topological spaces A and B, let Cont(A, B) be the space with the set of continuous functions from A to B as its underlying set, and with topology such that f: C → Cont(A, B) is continuous if and only if the corresponding function f′: C × A → B (given by f′(c, a) = f(c)(a)) is continuous, assuming such a space exists. Convince yourself that there is no space X which continuously surjects onto Cont(X, S), where S is the circle.
Suppose for the sake of contradiction that for some space X, f: X → Cont(X, S) is a surjective continuous function. Let a: S → S denote the continuous function giving the point on the opposite side of the circle. Then the following is continuous: g(x) = a(f′(x, x)). Since f is surjective, there exists y such that f(y) = g, but then f(y)(y) = g(y) = a(f(y)(y)), a contradiction.
Solution to 8 implemented in python using zero self-reference, where you can replace f with code for any arbitrary function on string x (escaping characters as necessary):
f="x+'\\n'+x"
def ff(x):
	return eval(f)
(lambda s : print(ff('f='+chr(34)+f+chr(34)+chr(10)+'def ff(x):'+chr(10)+chr(9)+'return eval(f)'+chr(10)+s+'('+chr(34)+s+chr(34)+')')))("(lambda s : print(ff('f='+chr(34)+f+chr(34)+chr(10)+'def ff(x):'+chr(10)+chr(9)+'return eval(f)'+chr(10)+s+'('+chr(34)+s+chr(34)+')')))")
edit: fixed spoiler tags
This is the second of three sets of fixed point exercises. The first post in this sequence is here, giving context.
Recall Cantor’s diagonal argument for the uncountability of the real numbers. Apply the same technique to convince yourself that for any set S, the cardinality of S is less than the cardinality of the power set P(S) (i.e. there is no surjection from S to P(S)).
Suppose that a nonempty set T has a function f from T to T which lacks fixed points (i.e. f(x)≠x for all x∈T). Convince yourself that there is no surjection from S to S→T, for any nonempty S. (We will write the set of functions from A to B either as A→B or B^A; these are the same.)
For nonempty S and T, suppose you are given g:S→T^S, a surjective function from the set S to the set of functions from S to T, and let f be a function from T to itself. The previous result implies that there exists an x in T such that f(x)=x. Can you use your proof to describe x in terms of f and g?
Given sets A and B, let Comp(A,B) denote the space of total computable functions from A to B. We say that a function from C to Comp(A,B) is computable if and only if the corresponding function f′:C×A→B (given by f′(c,a)=f(c)(a)) is computable. Show that there is no surjective computable function from the set S of all strings to Comp(S,{T,F}).
Show that the previous result implies that there is no computable function halt(x,y) from S×S→{T,F} which outputs T if and only if the first input is a code for a Turing machine that halts when given the second input.
Given topological spaces A and B, let Cont(A,B) be the space with the set of continuous functions from A to B as its underlying set, and with topology such that f:C→Cont(A,B) is continuous if and only if the corresponding function f′:C×A→B (given by f′(c,a)=f(c)(a)) is continuous, assuming such a space exists. Convince yourself that there is no space X which continuously surjects onto Cont(X,S), where S is the circle.
In your preferred programming language, write a quine, that is, a program whose output is a string equal to its own source code.
Write a program that defines a function f taking a string as input, and produces its output by applying f to its source code. For example, if f reverses the given string, then the program should output its source code backwards.
Given two sets A and B of sentences, let Syn(A,B) be the set of all functions from A to B defined by substituting the Gödel number of a sentence in A into a fixed formula. Let S0 be the set of all sentences in the language of arithmetic with one unbounded universal quantifier and arbitrarily many bounded quantifiers, and let S1 be the set of all formulas with one free variable of that same quantifier complexity. By representing syntax using arithmetic, it is possible to give a function f∈Syn(S1×S1,S0) that substitutes its second argument into its first argument. Pick some coding of formulas as natural numbers, where we denote the number coding for a formula φ as ┌φ┐. Using this, show that for any formula ϕ∈S1, there is a formula ψ∈S0 such that ϕ(┌ψ┐)↔ψ.
(Gödel's second incompleteness theorem) In the set S1, there is a formula ¬Bew such that ¬Bew(┌ψ┐) holds iff the sentence ψ is not provable in Peano arithmetic. Using this, show that Peano arithmetic cannot prove its own consistency.
(Löb's theorem) More generally, the diagonal lemma states that for any formula ϕ with a single free variable, there is a formula ψ such that, provably, ϕ(┌ψ┐)↔ψ. Now, suppose that Peano arithmetic proves that Bew(┌ψ┐)→ψ for some formula ψ. Show that Peano arithmetic also proves ψ itself. Some facts that you may need are that (a) when a sentence ψ is provable, the sentence Bew(┌ψ┐) is itself provable, (b) Peano arithmetic proves this fact, that is, Peano arithmetic proves Bew(┌ψ┐)→Bew(┌Bew(┌ψ┐)┐), for any sentence ψ and (c) Peano arithmetic proves the fact that if χ and χ→ζ are provable, then ζ is provable.
(Tarski's theorem) Show that there does not exist a formula ϕ with one free variable such that for each sentence ψ, the statement ϕ(┌ψ┐)↔ψ holds.
Looking back at all these exercises, think about the relationship between them.
Please use the spoilers feature - the symbol '>' followed by '!' followed by a space - in your comments to hide all solutions, partial solutions, and other discussions of the math. The comments will be moderated strictly to hide spoilers!
I recommend putting all the object level points in spoilers and including metadata outside of the spoilers, like so: "I think I've solved problem #5, here's my solution <spoilers>" or "I'd like help with problem #3, here's what I understand <spoilers>" so that people can choose what to read.