
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Srinivasa Ramanujan was an Indian mathematician famous for solving math problems in sudden and inexplicable flashes of insight. From his Wikipedia page:

> 'Imagine that you are on a street with houses marked 1 through n. There is a house in between (x) such that the sum of the house numbers to the left of it equals the sum of the house numbers to its right. If n is between 50 and 500, what are n and x?' This is a bivariate problem with multiple solutions. Ramanujan thought about it and gave the answer with a twist: He gave a continued fraction. The unusual part was that it was the solution to the whole class of problems. Mahalanobis was astounded and asked how he did it. 'It is simple. The minute I heard the problem, I knew that the answer was a continued fraction. Which continued fraction, I asked myself. Then the answer came to my mind,' Ramanujan replied.[60][61]

and

> ... Ramanujan's first Indian biographers describe him as a rigorously orthodox Hindu. He credited his acumen to his family goddess, Namagiri Thayar (Goddess Mahalakshmi) of Namakkal. He looked to her for inspiration in his work[12]:36 and said he dreamed of blood drops that symbolised her consort, Narasimha. Afterward he would receive visions of scrolls of complex mathematical content unfolding before his eyes.[12]:281 He often said, "An equation for me has no meaning unless it represents a thought of God."[58]

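As an aside, the house-number puzzle in the first quote reduces to finding x with 1 + 2 + … + (x−1) = (x+1) + … + n, i.e. x² = n(n+1)/2, and within the stated range it has a unique solution. A minimal brute-force sketch (mine, not from the post):

```python
# Find (n, x) with 50 <= n <= 500 such that the house numbers left of x
# sum to the same total as the house numbers right of x:
#   x*(x-1)//2 == n*(n+1)//2 - x*(x+1)//2
solutions = [
    (n, x)
    for n in range(50, 501)
    for x in range(1, n + 1)
    if x * (x - 1) // 2 == n * (n + 1) // 2 - x * (x + 1) // 2
]
print(solutions)  # [(288, 204)]
```

Ramanujan's continued fraction encodes the whole family of such solutions ((1, 1), (8, 6), (49, 35), (288, 204), …), not just the one in range.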
His style of mathematical reasoning was completely novel to the mathematicians around him, and led to groundbreaking research:

> During his short life, Ramanujan independently compiled nearly 3,900 results (mostly identities and equations).[4] Many were completely novel; his original and highly unconventional results, such as the Ramanujan prime, the Ramanujan theta function, partition formulae and mock theta functions, have opened entire new areas of work and inspired a vast amount of further research.[5] Nearly all his claims have now been proven correct.[6] The Ramanujan Journal, a peer-reviewed scientific journal, was established to publish work in all areas of mathematics influenced by Ramanujan,[7] and his notebooks—containing summaries of his published and unpublished results—have been analyzed and studied for decades since his death as a source of new mathematical ideas. As late as 2011 and again in 2012, researchers continued to discover that mere comments in his writings about "simple properties" and "similar outputs" for certain findings were themselves profound and subtle number theory results that remained unsuspected until nearly a century after his death.[8][9] He became one of the youngest Fellows of the Royal Society and only the second Indian member, and the first Indian to be elected a Fellow of Trinity College, Cambridge. Of his original letters, Hardy stated that a single look was enough to show they could only have been written by a mathematician of the highest calibre, comparing Ramanujan to other mathematical geniuses such as Euler and Jacobi.

If HCH is ascription universal, then it should be able to epistemically dominate an AI theorem-prover that reasons similarly to how Ramanujan reasoned. But I don't currently have any intuitions as to why explicit verbal breakdowns of reasoning should be able to replicate the intuitions that generated Ramanujan's results (or any style of reasoning employed by any mathematician since Ramanujan, for that matter).

I do think explicit verbal breakdowns of reasoning are adequate for verifying the validity of Ramanujan's results. At the very least, mathematicians since Ramanujan have been able to verify the vast majority of his claims.

But, as far as I'm aware, there has not been a single mathematician with Ramanujan's style of reasoning since Ramanujan himself. This makes me skeptical that explicit verbal breakdowns of reasoning would be able to replicate the intuitions that generated Ramanujan's results, which I understand (perhaps erroneously) to be a necessary prerequisite for HCH to be ascription universal.


**Wei Dai:**

> If HCH is ascription universal, then it should be able to epistemically dominate an AI theorem-prover that reasons similarly to how Ramanujan reasoned.

My guess is that HCH has to reverse engineer the theorem prover, figure out how/why it works, and then reproduce the same kind of reasoning. This seems plausible to me but (if this is what Paul has in mind too) I'm not sure why Paul assumes HCH to be able to do this quickly:

> Moreover, I’ll assume that it’s universal with some small blow-up, i.e. that if we give HCH a budget kN, then it epistemically dominates any computation that could be simulated by HCH with budget N.

It seems to me that it could easily take a super-linear (or even super-polynomial) budget to reverse engineer how a computation works since it could require disentangling opaque and convoluted algorithms, and acquiring new theoretical understandings that would explain how/why the algorithms work. In the case of "an AI theorem-prover that reasons similarly to how Ramanujan reasoned" this might for example require gaining a good theoretical understanding of logical uncertainty and seeing how the AI theorem-prover uses an approximate solution to logical uncertainty to guide its proof search.

**Wei Dai:**

It occurs to me that if the overseer understands everything that the ML model (that it's training) is doing, and the training is via some local optimization algorithm like gradient descent, then the overseer is essentially manually programming the ML model by gradually nudging it from some initial (e.g., random) point in configuration space. If this is a good way to think about what's happening, we could generalize the idea by letting the overseer program the model in other ways (for example, by using standard programming methodologies adapted to the model class), which could be much more efficient than the "nudging" method. This suggests that maybe IDA has less to do with ML than it first appears; or that basing IDA on ML only makes sense if ML goes beyond local optimization at some point (so that the analogy with "nudging" breaks down); or that we have to figure out how to do IDA safely without the overseer understanding everything the ML model is doing (another way the analogy could break down).
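The "nudging" picture can be made concrete with a toy example (my own illustration, not from the comment): under gradient descent on a simple quadratic loss, each update is a small, locally interpretable step, so an overseer who understands every step is in effect steering the model toward a target configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2)          # random initial point in configuration space
target = np.array([3.0, -1.0])  # configuration the "programmer" wants to reach

lr = 0.1
for step in range(200):
    grad = 2 * (w - target)     # gradient of the quadratic loss |w - target|^2
    w = w - lr * grad           # one small, interpretable "nudge"

print(np.allclose(w, target, atol=1e-3))  # True
```

Each update shrinks the distance to the target by a constant factor, so the overseer's "program" is just a long sequence of tiny corrections; the alternative Wei Dai raises would replace this sequence with a more direct way of writing the target configuration down.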

It seems like for Filtered-HCH, the application in the post you linked to, you might be able to do a weaker version, where you label any computation that you can't understand in kN steps as problematic, accepting only things you think you can efficiently understand. (But I don't think Paul is arguing for this weaker version.)

I stumbled upon an interesting critical article about the mathematical savant twins reported by Sacks:

http://www.pepijnvanerp.nl/articles/oliver-sackss-twins-and-prime-numbers/