Pekka Puupaa
Pekka Puupaa has not written any posts yet.

My point is that this heuristic is not good. That obviously doesn't mean reversing the heuristic would give you good results (reverse stupidity is not intelligence, and so on). What one needs is a different set of heuristics.
If you extrapolate capability graphs in the most straightforward way, you get the result that AGI should arrive around 2027-2028. Scenario analyses (like the ones produced by Kokotajlo and Aschenbrenner) tend to converge on the same result.
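For concreteness, here is a minimal sketch of what I mean by extrapolating "in the most straightforward way": fit a line to some capability metric over time and solve for the year it crosses a target threshold. The scores below are invented placeholders, not real benchmark data; the point is the method, not these particular numbers.

```python
import numpy as np

# Hypothetical capability scores on a 0-1 scale, one per year (made up
# for illustration only -- substitute a real benchmark series to use this).
years = np.array([2020, 2021, 2022, 2023, 2024], dtype=float)
scores = np.array([0.15, 0.28, 0.40, 0.52, 0.63])

# Least-squares straight line: scores ~= slope * year + intercept.
slope, intercept = np.polyfit(years, scores, 1)

# Solve for the year the fitted line reaches the threshold (here 1.0,
# standing in for "human level" on this hypothetical scale).
threshold = 1.0
crossing_year = (threshold - intercept) / slope
print(f"Naive linear extrapolation crosses the threshold around {crossing_year:.0f}")
```

A straight line is of course the crudest choice; exponential or sigmoid fits give different dates, which is exactly why the scenario analyses are worth reading alongside the raw extrapolation.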
An effective cancer cure will likely require superintelligence, so I would be expecting one around 2029 assuming alignment gets solved.
We mostly solved egg frying and laundry folding last year with Aloha and Optimus, which were among the longest-standing problems in robotics. So human-level robots in 2024 would actually have been an okay prediction. Actual human-level performance probably requires human-level intelligence, so 2027.
It's been over 40 years of progress since the perceptron; how do you know we're in the last ~10% today?
What would this heuristic have said about the probability of AlphaFold 2 solving protein folding in 2020? What about all the other tasks that had been intractable for decades and became solvable in the past five years?
To me, 50% over the next 3 years is what sanity looks like.
Thank you, this has been a very interesting conversation so far.
I originally started writing a much longer reply explaining my position on the interpretation of QM in full, but realized that the explanation would grow so long that it would really need to be its own post. So instead, I'll just make a few shorter remarks. Sorry if these sound a bit snappy.
As soon as you assume that there exists an external universe, you can forget about your personal experience and just try to estimate the length of the program that runs the universe.
And if one assumes an external universe evolving according to classical laws, the Bohmian interpretation has the lowest Kolmogorov complexity (KC). If... (read 485 more words →)
I don't believe this is correct.
Which part do you disagree with? That every interpretation needs a way to connect measurements to conscious experiences, or that this requires extra machinery?
If the former: you need some way to connect the formalism to conscious experiences, since that is largely what an interpretation is for. It has to explain how the classical world of your conscious experience is connected to the mathematical formalism. This is true for any interpretation.
If you're saying that many worlds does not actually need any extra machinery, I guess the most reasonable way to interpret that in my framework is to say that the branching function is a part of the... (read more)
I am also not a physicist, so perhaps I've misunderstood. I'll outline my reasoning.
An interpretation of quantum mechanics does two things: (1) defines what parts of our theory, if any, are ontically "real" and (2) explains how our conscious observations of measurement results are related to the mathematical formalism of QM.
The Kolmogorov complexity of different interpretations cannot be defined completely objectively, as DeepSeek also notes. But broadly speaking, if one defines KC "sanely", it ought to correlate with a kind of "Occam's razor for conceptual entities", or more precisely, "Occam's razor over defined terms and equations".
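To make the "not completely objectively" point precise: Kolmogorov complexity depends on the choice of universal machine, but only up to an additive constant. This is the standard invariance theorem:

$$K_U(x) \le K_V(x) + c_{U,V}$$

where $U$ and $V$ are any two universal Turing machines and the constant $c_{U,V}$ depends on the pair of machines but not on $x$. So KC comparisons are machine-independent asymptotically, but for a finite object like "the definitions and equations of an interpretation" the constant leaves genuine room to dispute what a "sane" encoding is.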
I think Many Worlds is more conceptually complex than Copenhagen. But I view Copenhagen as a catchall term... (read more)
Can the author or somebody else explain what is wrong with DeepSeek's answer to the Kolmogorov complexity question? It seems to give more or less the same answer I'd give, and even correctly notes the major caveat in the last sentence of its output.
I suppose its answer is a bit handwavy ("observer role"?), and some of the minor details of its arguments are wrong or poorly phrased, but the conclusion seems correct. Am I misunderstanding something?
Very interesting, thanks! On a quick skim, I don't think I agree with the claim that LLMs have never done anything important. I know for a fact that they have written a lot of production code for a lot of companies, for example. And I personally have read AI texts funny or entertaining enough to reflect back on, and AI art beautiful enough to admire even a year later. (All of this is highly subjective, of course. I don't think you'd find the same examples impressive.) If you don't think any of that qualifies as important, then I think your definition of important... (read 365 more words →)