You can just DM me :)
I'm not going through the lectures myself (at least not in a systematic way), but I do spend a lot of time thinking about physics concepts and trying to imagine them in more geometric, conceptual ways. I'm interested in making visualizations based on my insights, but I haven't had time to make them yet. I'd love to talk about ideas on how to do this, though!
How bad is the ending supposed to be? Are only the people who fight the system killed, while everyone else is free to live in the way the AI expects them to (which might be something like continuing to consume goods and providing AI-mediated feedback on the quality of those goods)? Or is it more like once humans are disempowered, no machine has any incentive to keep them around anymore, so humans are not-so-gradually replaced with machines?
The main point of intervention in this scenario that stood out to me would be making sure that (during the paragraph beginning with "For many people this is a very scary situation.") we at least attempt to use AI negotiators to broker an international agreement to halt development of this technology until we understand it better, with AI-designed systems for enforcement/surveillance. Is there anything in particular that makes this infeasible?
It doesn't seem generally true that communication requires delicate maintenance. Liars have existed for thousands of years, and languages have diverged and evolved, and yet we are still able to communicate straightforwardly the vast majority of the time! Like you said, lying loses its effectiveness the more it is used, so there's a counter-pressure that automatically prevents it from taking over.
Perhaps this analogy will help us talk about things more clearly. We can think of a communication-sphere as being a region with a temperature. Just as a region must be at equilibrium in order to have a temperature at all, so a communication-sphere must have enough interaction that there's a shared sense of meaning and understanding. The higher the temperature, the more uncertainty there is over what is meant in an average interaction. Normal society operates around room temperature, which is quite far from absolute zero. But machinery and computers and life are all able to function there even so! On Less Wrong, the temperature is around that of liquid nitrogen: much colder, but still not particularly close to absolute zero. People are a lot more careful with their reasoning and meanings here, but various ambiguities of language are still present, along with some external entropy introduced by deceptive processes. It takes some effort to maintain this temperature, but not so much that it can't exist as a public website. It seems to me like you are advocating that we (though who exactly is unclear) try to bring things down to liquid helium temperatures, maybe because unusual levels of cooperation, like superfluidity, become possible there. And it is only around there that the temperature becomes fragile and requires delicate maintenance.
Ah, that makes sense, thanks! I'd still say "sentences in a logic" is a specific type though.
Things that you can cast as a finite set. You can stretch this a bit by using limits to cover things that can be cast as compact metric spaces (and probably somewhat more than that), but this requires care, and grounding in the finite-set case, in order to be unambiguously meaningful.
Do you have the code for the failed attempts?
This doesn't make sense to me. It seems that if you're being strict about types, then "plain old probabilities" also require the correct type signature, and by using Shannon entropy you are still making an implicit assumption about the type signature.
Shannon entropy is straightforwardly a special case of von Neumann entropy, so it applies to at least as many kinds of universes.
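To make the "special case" claim concrete, here's a minimal sketch (my own illustration, not from the original discussion): a classical probability distribution embeds as a diagonal density matrix, whose eigenvalues are just the probabilities, so the von Neumann entropy of that matrix reduces to the Shannon entropy of the distribution.

```python
import math

def shannon_entropy(p):
    # H(p) = -sum_i p_i * log2(p_i), skipping zero-probability outcomes
    return -sum(x * math.log2(x) for x in p if x > 0)

def von_neumann_entropy_diagonal(rho):
    # S(rho) = -Tr(rho log2 rho). For a *diagonal* density matrix
    # (the classical embedding), the eigenvalues are simply the
    # diagonal entries, so no general eigensolver is needed here.
    eigenvalues = [rho[i][i] for i in range(len(rho))]
    return -sum(lam * math.log2(lam) for lam in eigenvalues if lam > 0)

p = [0.5, 0.3, 0.2]
# Embed p as the diagonal density matrix diag(0.5, 0.3, 0.2)
rho = [[p[i] if i == j else 0.0 for j in range(3)] for i in range(3)]

print(shannon_entropy(p))                 # ≈ 1.485 bits
print(von_neumann_entropy_diagonal(rho))  # same value
```

The two numbers agree exactly; off-diagonal terms (coherences) are where von Neumann entropy genuinely goes beyond the classical case.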
I still feel a bit confused about the "fundamentalness", but in trying to formulate a response, I was convinced by Jaynes that von Neumann entropy has an adequate interpretation in terms of Shannon entropy.
Something that bothers me about the Shannon entropy is that we know that it's not the most fundamental type of entropy there is, since the von Neumann entropy is more fundamental.
A question I don't have a great answer for: How could Shannon have noticed (a priori) that it was even possible that there was a more fundamental notion of entropy?