Polytopos

Comments

Less Wrong Rationality and Mainstream Philosophy

I find reading this post and the ensuing discussion quite interesting because I studied academic philosophy (both analytic and continental) for about 12 years at university. Then I changed course, moved into programming and math, and developed a strong interest in AI safety.

I find this debate a bit strange. Academic philosophy has its problems, but it is also a massive treasure trove of interesting ideas and rigorous arguments. I can understand not wanting to get bogged down in the endless minutiae of academic philosophizing in order to say anything interesting about AI. On the other hand, I don't agree that we should re-invent the wheel completely and only then look to the literature for the "philosophical nearest neighbor". Imagine suggesting we do that with math: "Who cares what all these mathematicians have written; just invent your own mathematical concepts from scratch and then find the nearest neighbor in the mathematical literature." You could do that, but you would waste a huge amount of time and energy re-discovering things that are already well understood in the appropriate field of study. I routinely find myself reading pseudo-philosophical debates among science/engineering types and thinking: I wish they had read philosopher X on this topic; their thinking would be clearer for it.

It seems that here on LW many people have a definition of "rationalist" that amounts to endorsing a specific set of philosophical positions or meta-theories (e.g., naturalism, Bayesianism, logical empiricism, reductionism). In contrast, I think the study of philosophy shows another way of understanding what it is to be a rational inquirer: a sensitivity to reason and argument, a willingness to question one's cherished assumptions, and a generosity toward one's intellectual interlocutors. In other words, being rational means following a set of tacit norms for inquiry and dialogue rather than holding a specific set of beliefs or theories.

Reason in this sense does not involve a commitment to any specific meta-theory. Plato's theory of the forms, however implausible it seems to us today, is just as much an expression of rationalism in this philosophical sense: it was a good-faith effort to make sense of reality according to the best arguments and evidence of his day. For me, the greatest value of studying philosophy is that it teaches rational inquiry as a way of life. It shows us that all these different weird theories can be compatible with a shared commitment to reason as the path to truth.

Unfortunately, this shared commitment does break down in places in the 19th and 20th centuries. With certain continental "philosophers" such as Nietzsche, Derrida, and Foucault, the writing undermines the commitment to rational inquiry itself and ends up being a lot of posturing and rhetoric. Even on the continental side, however, there are philosophers committed to rational inquiry (my favourite being Merleau-Ponty, who pioneered ideas of grounded intelligence that have inspired certain approaches in RL research today).

It is also worth noting that Nick Bostrom, who helped found the field of AI safety, is a straight-up Oxford-trained analytic philosopher. During my Master's program, I attended a talk he gave on utilitarianism at Oxford back in 2007, before he was well known for his AI-related work.

Another philosopher who I think should get more attention in the AI-alignment discussion is Harry Frankfurt. He wrote brilliantly on the value-alignment problem for humans (i.e., how we ourselves align conflicting desires, values, interests, etc.).

How can labour productivity growth be an indicator of automation?

Ah, thanks for clarifying. So the key issue is really the adjustment for inflation/deflation. You are saying that even if previously expensive goods become very cheap due to automation, they will still be valued the same in "real dollars" for the productivity calculation.
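To check my understanding, here is a toy calculation (my own sketch, with invented numbers): real output is nominal output divided by a price index, so a price crash driven by automation shows up in the index rather than erasing the measured output.

# Toy illustration (invented numbers) of inflation-adjusted labour productivity.
def labour_productivity(nominal_output, price_index, hours_worked, base_index=100.0):
    """Real output per hour, valued in base-year dollars."""
    real_output = nominal_output / (price_index / base_index)
    return real_output / hours_worked

# Year 1: $10,000 of goods sold, price index 100, 500 hours of labour.
print(labour_productivity(10_000, 100, 500))  # 20.0 base-year dollars/hour

# Year 2: automation halves prices (index 50); $7,500 of nominal sales
# now takes only 250 hours of labour.
print(labour_productivity(7_500, 50, 250))    # 60.0 base-year dollars/hour

On these made-up numbers, measured productivity triples even though nominal revenue fell, because the now-cheap goods are still counted at base-year prices.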

Does this mean that a lot rides on how economists determine comparable baskets of goods at different times and also on how far back they look for a historical reference frame?

How can labour productivity growth be an indicator of automation?

Thanks for your comment, Phil. That's helpful; I hadn't considered the question of where labour shifts after less of it is needed to produce an existing good.

I understand you as saying that as productivity increases in a field and market demand becomes saturated, workers move elsewhere. This shift of labour to new sectors could (and historically did) lead to more overall productivity, but I think this trend may not continue with the current waves of automation. It seems possible that the areas of the economy workers now move into are those less affected by the productivity-enhancing effects of technology. I think this is what actually happened with the economic shift from manufacturing to service industries. Manufacturing can benefit a lot from automation technology, whereas service jobs (especially in fields where the human element of the service is what makes it valued) are not as capable of becoming more productive. A massage therapist, for example, is not going to get much more productive no matter how much technology we have. So what I imagine is that as automation makes it take less and less labour to produce physical and digital stuff, most of the jobs that remain will be in human-centered fields, which are inherently harder to make more productive through technology.

Thus, it still seems possible that automation could cause measured worker productivity to go down (the opposite of what Krugman was assuming). This is counter-intuitive because there is clearly a common-sense way in which the automated economy is much, much more productive: more and more goods become plentiful and virtually free. But these cheap, plentiful goods do not have much market value, despite their value to us as human beings, so they don't contribute to labour productivity as measured by economists.
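Here is a toy two-sector sketch of that scenario (all numbers invented by me): if automation pushes the measured market value of goods toward zero while the displaced labour moves into services whose output per hour barely changes, economy-wide output per hour can fall.

# Toy two-sector sketch (invented numbers) of automation lowering
# *measured* aggregate productivity via reallocation into services.
def aggregate_productivity(sectors):
    """Total measured output divided by total hours across all sectors."""
    total_output = sum(output for output, hours in sectors)
    total_hours = sum(hours for output, hours in sectors)
    return total_output / total_hours

# Before automation: (measured output in dollars, hours worked)
before = [(4_000, 80),   # manufacturing: $50/hour
          (400, 20)]     # services:      $20/hour

# After automation: goods are abundant but nearly free, so their measured
# value collapses to $100 produced in 5 hours; the displaced 75 hours move
# into services, which stay at roughly $20/hour.
after = [(100, 5),
         (1_900, 95)]

print(aggregate_productivity(before))  # 44.0 dollars/hour
print(aggregate_productivity(after))   # 20.0 dollars/hour

The sketch bakes in exactly the assumption in question, namely that the market value of the automated goods collapses, but it shows how the arithmetic of the measurement can move opposite to our common-sense notion of abundance.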

The tech left behind

Digital knowledge-management tools envisioned in the 1950s and 60s, such as Douglas Engelbart's hyperdocument system, have not been fully implemented (to my knowledge) and certainly not widely embraced. The World Wide Web failed to implement key features of Engelbart's proposal, such as the ability to directly address arbitrary sub-documents, or the ability to live-embed a sub-document inside another document.

Similarly, both Engelbart and Ted Nelson emphasized the importance of hyperlinks being bidirectional, so that a link is browsable from both the source and the target document. In other words, you could look at any webpage and immediately see all the pages that link to it. However, Tim Berners-Lee chose to make web hyperlinks one-directional, from source to target, and we are still stuck with that limitation today. Google's PageRank algorithm gets around this by massively crawling the web and then tracing back-links through the network, but back-links could have been built into the web as a basic feature available to everybody.
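For what it's worth, here is a minimal sketch (my own, not Engelbart's or Nelson's actual design) of what a built-in back-link index amounts to: store each link once, index it in both directions, and "who links here?" becomes a cheap lookup instead of a web-scale crawl.

# Minimal sketch of a bidirectional link index (illustrative only).
from collections import defaultdict

forward = defaultdict(set)   # source page -> pages it links to
backward = defaultdict(set)  # target page -> pages that link to it

def add_link(source: str, target: str) -> None:
    """Record a link once, indexed in both directions."""
    forward[source].add(target)
    backward[target].add(source)

add_link("blog.example/post", "en.wikipedia.org/wiki/Hypertext")
add_link("news.example/story", "en.wikipedia.org/wiki/Hypertext")

# With the index maintained as links are created, back-links are a lookup:
print(backward["en.wikipedia.org/wiki/Hypertext"])
# {'blog.example/post', 'news.example/story'}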

https://www.dougengelbart.org/content/view/156/88/

Are Humans Fundamentally Good?

I second this book recommendation. I just finished reading it, and it is well written and well argued. Bregman explicitly contrasts Hobbes' pessimistic view of human nature with Rousseau's positive view, and argues that according to the most recent evidence Rousseau was correct.

His evolutionary argument is that social learning was the overwhelming fitness-enhancing ability driving human evolution. As a result, we evolved for friendliness and cooperation as a byproduct of selection for social learning.

Introduction to Introduction to Category Theory

I don't know enough math to understand your response. However, from the bits I can understand, it seems to leave open the epistemic issue of needing an account of demonstrative knowledge that is not dependent on Bayesian probability.

Introduction to Introduction to Category Theory

Interesting. This might be somewhat off topic, but I'm curious how such a Bayesian analysis of mathematical knowledge would handle the following fact: it is provable that a randomly selected real number is non-computable with probability 1, yet this is not equivalent to a proof that all real numbers are non-computable. The real numbers 1, 1.4, the square root of 2, pi, etc., are all computable, although the probability of such numbers occurring in a random sample from the domain is zero.
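For concreteness, the standard measure-theoretic argument behind that fact (my own summary, not from the thread) is that the computable reals form a countable set, and countable sets have Lebesgue measure zero:

% Each computable real is the output of some Turing machine, so the set C
% of computable reals in [0,1] is countable; countable sets are null.
C = \{x_1, x_2, x_3, \dots\}
\;\Longrightarrow\;
\lambda(C) \le \sum_{n=1}^{\infty} \lambda(\{x_n\}) = 0
\;\Longrightarrow\;
\Pr_{x \sim \mathrm{Uniform}[0,1]}\!\left[\, x \text{ is computable} \,\right] = 0

So a probability-1 statement here is about measure, not universal quantification: the measure-zero set of exceptions still contains every computable number we actually work with.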

A new kind of Hermeneutics

I was excited by the initial direction of the article, but somewhat disappointed with how it unfolded.

In terms of Leibniz's hope for a universal formal language, we may be closer than it seems. The recent book Modal Homotopy Type Theory (David Corfield, 2020) argues that much of the disappointment with formal languages among philosophers and linguists stems from the fact that, through the 20th century, most attempts to formalize natural language used first-order predicate logic or other logics that lacked dependent types. Yet dependent types are natural in both mathematical discourse and ordinary language.

Martin-Löf developed the theory of dependent types in the 1970s, and Homotopy Type Theory has since been developed on top of it to serve as a computation-friendly foundation for mathematics. Corfield argues that such type theories offer new hope for formalizing the semantic structure of natural language.
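To give a flavour of the idea (my own minimal examples, not drawn from Corfield's book), a dependent type is a type that varies with a value, which is natural in both mathematics and ordinary language:

% Vec(A, n), the type of length-n lists, depends on the value n:
n : \mathbb{N} \;\vdash\; \mathrm{Vec}(A, n)\ \mathsf{type}
% "Every nonempty list has a first element" is a dependent product:
\mathrm{head} : \prod_{n : \mathbb{N}} \big( \mathrm{Vec}(A, n+1) \to A \big)
% An ordinary-language analogue, "some day has a meeting scheduled",
% is a dependent sum over days:
\sum_{d : \mathrm{Day}} \mathrm{Meeting}(d)

First-order predicate logic has no direct way to express a type indexed by a value like this, which is part of Corfield's diagnosis of why earlier formalization attempts felt so unnatural.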

Of course, this hasn't been accomplished yet, but it's exciting to think that Leibniz's dream may be realized in our century.

Introduction to Introduction to Category Theory

I disagree with the idea that one doesn't have intuitions about generalization if one hasn't studied mathematics. One thing I find so interesting about CT is that it is so general it applies as much to everyday common-sense concepts as it does to mathematical ones. David Spivak's ontology logs (ologs) are a great illustration of this.

I do agree that there isn't a really good beginner's book that covers category theory in a general way. But there are some amazing YouTube lectures. I got started on CT with the series Category Theory for Beginners. The videos are quite long, but the lecturer does an amazing job explaining all the difficult concepts with lots of great visual diagrams. What is great about this series is that, despite the "beginners" in the title, he actually covers many more advanced topics, such as adjunctions, the Yoneda lemma, and topos theory, in a way that doesn't presuppose prior mathematical knowledge.

In terms of books, Conceptual Mathematics really helped me with the basics of sets and functions, although it doesn't get into the more abstract stuff very much. Finally, Category Theory for Programmers is quite accessible if you have any background in computer programming.

Introduction to Introduction to Category Theory

It seems odd to equate rationality with probabilistic reasoning. Philosophers have long distinguished between demonstrative (i.e., mathematical) reasoning and probabilistic (i.e., empirical) reasoning. To say that rationality is constituted only by the latter form of reasoning is very odd, especially considering that it is only through demonstrative knowledge that we can even formulate such things as Bayesian mathematics.

Category theory is a meta-theory of demonstrative knowledge. It helps us understand how concepts relate to each other in a rigorous way. This helps with the theory side of science rather than the observation side (although applied category theorists are working to build unified formalisms for experiments-as-events and theories).

I think it is accurate to say that, outside of computer science, applied category theory is a very young field (maybe 10-20 years old). It is not surprising that there haven't been major breakthroughs yet. Historically, fruitful applications of discoveries in pure math often take decades or even centuries to develop. The wave equation was discovered in the 1750s in a pure-math context, but it wasn't until the 1860s that Maxwell used it to develop a theory of electromagnetism. Of course, this is not in itself an argument that CT will produce applied breakthroughs. However, we can draw a kind of meta-historical generalization: mathematical theories that are central and profound to pure mathematicians often turn out to be useful in describing the world (Ian Stewart sketches this argument in his Concepts of Modern Mathematics, pp. 6-7).

CT is one of the key ideas in 20th-century algebra, topology, and logic, and it has enabled huge innovation in modern mathematics. What I find particularly interesting about CT is how it allows problems to be translated between universes of discourse. I think a lot of its promise in science may be in a similar vein. Imagine if scientists across different disciplines had a way to use the theoretical insights of other disciplines to attack their problems. We already see this when, say, economists borrow equations from physics, but CT could enable a more systematic sharing of theoretical apparatus across scientific domains.
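As a small illustration of what "translation between universes of discourse" means concretely (a sketch of my own in Python, using the familiar list functor as a stand-in for the general idea): a functor maps the objects and morphisms of one category into another while preserving identities and composition, and that preservation is what lets results transport systematically.

# The list functor translates the "universe" of plain values and functions
# into the universe of lists and list-transformations.
def fmap(f):
    """Lift a function on elements to a function on lists."""
    return lambda xs: [f(x) for x in xs]

double = lambda x: 2 * x
succ = lambda x: x + 1
xs = [1, 2, 3]

# Functor laws, checked on sample data (not a proof):
assert fmap(lambda x: x)(xs) == xs                  # identities preserved
assert fmap(lambda x: double(succ(x)))(xs) == \
       fmap(double)(fmap(succ)(xs))                 # composition preserved

print(fmap(double)(xs))  # [2, 4, 6]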
