ekka

The effort to go from chimp to human was somewhat lower but was still enormous. It has been maybe 5 million years since the last common ancestor of chimps and humans, and taking a generation to be about 20 years, that's at least 250,000 generations of at least a couple of thousand individuals in a complex environment with lots of processes going on. I haven't done the math, but that seems like a massive amount of computation. Going from human to von Neumann still takes a huge search process. If we think of every individual human as an instance of evolution searching for more intelligence, there are almost 8 billion...
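The back-of-envelope figures above can be sketched out explicitly. This is just the comment's own arithmetic; the population size per generation is a hypothetical stand-in, not a real demographic estimate.

```python
# Rough estimate of evolution's "search budget" from the chimp-human
# split to modern humans, using the figures from the comment above.
years_since_split = 5_000_000      # approximate chimp-human divergence
years_per_generation = 20
generations = years_since_split // years_per_generation
print(generations)                 # 250000 generations

# Hypothetical effective population size per generation (illustrative only).
population_per_generation = 10_000
individual_lifetimes = generations * population_per_generation
print(individual_lifetimes)        # 2.5 billion individual lifetimes searched
```

Even with a deliberately small population figure, the search runs through billions of individual lifetimes, each evaluated in a rich environment rather than a cheap simulator.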
Evolution is massively parallelized and occurs in a very complex, interactive, and dynamic environment. Evolution is also patient: it can tolerate high costs such as mass extinction events, and it doesn't care about the outcome of the process. It's just something that happens and results in the filtering of the most fit genes. The amount of computation it would take to replicate such complex, interactive, and dynamic environments would be huge. Why should we be confident that it's possible to find an architecture for general intelligence far more efficiently than evolution did? And wouldn't it always be more practically expedient to create intelligence that does the exact things we want? Even if we could simulate the evolutionary process, why would we?
Who are the AI capabilities researchers who are trying to build AGI and think they will succeed within the next 30 years?
Yoshua Bengio did talk about System 2 Deep Learning at NeurIPS 2019
Humans are not aligned on values, beliefs, and moral intuitions. Plenty of humans would not kill all people alive if given the choice, but some would. I think the existence of doomsday cults that have tried to precipitate an Armageddon gives support to this claim.
Human beings are not aligned and will possibly never be aligned without changing what humans are. If it's possible to build an AI as capable as a human in all ways that matter, why would it be possible to align such an AI?
Great point! Though for what it's worth, I didn't mean to be dismissive of the prediction; my main point is that the future has not yet been determined. As you indicate, people can react to predictions of the future and end up on a different course.
I'm still forming my views and I don't think I'm well calibrated to state any probability with authority yet. My uncertainty still feels so high that I think my error bars would be too wide for my actual probability estimates to be useful. Some things I'm thinking about:
I was being hyperbolic but point taken.
What is the theory of change of the AI Safety field, and why do you think it has a high probability of working?