Interesting! I agree that in the current paradigm, foom within days seems very unlikely. But I predict that we will soon step out of the LLM paradigm to something that works better. Take coding: GPT-4 is already good at coding purely from predicting code, without any weight updates from the trial-and-error experience through which a human improves. I expect it will become possible to take an LLM base model and then train it with RL on tasks of writing full programs/apps/websites..., where the feedback comes from executing the code and comparing the result with its expectation. You might, for example, be able to create a dataset of websites and give the model the goal of "recreate this", so that the reward can be given autonomously. The LLM pretraining brings common sense (according to Bubeck, lead author of the Sparks of AGI paper, in his YouTube presentation), plausible idea generation, and the ability to look up other people's ideas online. If you add learning from trying out ideas on real tasks like coding full programs, capability might climb very fast. And in doing so you create an agentic AI that, unlike Auto-GPT, actually learns from experience.
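To make the autonomous-reward idea concrete, here is a minimal sketch of the kind of loop I have in mind. This is my own illustration under stated assumptions, not an existing system: the names (`run_candidate`, `reward`, `policy`) are hypothetical, and plain string similarity stands in for whatever comparison (screenshots, DOM diffs) a real website-recreation task would use.

```python
import difflib
import subprocess
import tempfile

def run_candidate(code: str, timeout: int = 10) -> str:
    """Execute model-generated Python code in a subprocess and capture its
    stdout (a real system would need proper sandboxing)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            ["python", path], capture_output=True, text=True, timeout=timeout
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return ""  # treat hangs as failures

def reward(code: str, target_output: str) -> float:
    """Autonomous reward: how closely the executed output matches the target.
    A real 'recreate this website' task would compare rendered pages;
    string similarity is a stand-in for that here."""
    produced = run_candidate(code)
    return difflib.SequenceMatcher(None, produced, target_output).ratio()

# The RL loop itself, at pseudocode level: sample code from the LLM policy,
# score it with the reward above, and update the weights (e.g. with PPO).
# for task in dataset:
#     code = policy.sample(task.prompt)
#     r = reward(code, task.target_output)
#     policy.update(task.prompt, code, r)
```

The point of the sketch is only that the reward requires no human in the loop: execution plus a reference target is enough to generate a training signal at scale.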
That is a very binary assessment. You make it seem like safety is either impossible or easy. If it is impossible, we could save everyone by not building AGI. If we knew it to be easy, I agree, we should accelerate. But the reality is that we do not know, and it could be anywhere on the spectrum from easy to impossible. And since everything is on the line, including your life, better safe than sorry is to me the obvious approach. Do I understand correctly that you think pausing AGI is not 'safe' because, if all went well, the AGI could be used to make humans immortal?
No, I do not have good examples. You are right that normally there is learning from failure cases. But we should still try. Right now nothing is required that could prevent an AGI breakout. Nick Bostrom wrote in Superintelligence, for example, that we could implement tripwires and honeypot situations in virtual worlds that would trigger a shutdown; a toy sketch of that idea is below. We can think of things that are better than nothing.
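As a toy illustration of the tripwire idea (my own sketch, not Bostrom's specification; the path and function names are hypothetical): a monitor watches whether the agent touches a honeypot resource it has no legitimate reason to touch, and halts it on first contact.

```python
import sys

# Hypothetical honeypot: a file the agent has no legitimate reason to touch.
HONEYPOT_PATH = "/srv/honeypot/credentials.txt"

def tripwire_hit(accessed_paths: set[str]) -> bool:
    """Return True if the agent touched the honeypot resource."""
    return HONEYPOT_PATH in accessed_paths

def monitor_step(accessed_paths: set[str]) -> None:
    """Called after each agent action with the set of paths it accessed.
    First contact with the honeypot triggers an immediate shutdown."""
    if tripwire_hit(accessed_paths):
        print("Tripwire triggered: honeypot accessed; shutting down agent.")
        sys.exit(1)  # in a real system: halt the agent's process or VM
```

Obviously a capable AGI might notice or route around such a trap; the claim is only that cheap measures like this beat having no measures at all.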
It seems that you see technical alignment advances as the only thing to be gained from a pause. But I want to point out that safety requires solving two problems: the governance problem and the technical problem. And we need a lot of time to get the governance ironed out. The way I see it, misaligned AGI or ASI is the most dangerous thing ever, so we need the best regulation ever: the best safety and testing requirements, the best monitoring by governments of AI groups for unsafe actions, the best awareness among politicians and among the public. And even if one country gets great governance figured out, it takes years or decades for that level of excellence to be applied globally.
Very interesting and revealing post. (I'm new)
I recently picked up a habit that makes me more strategic and better at achieving my goals, and it might be useful to share.
I have instituted a rule to start the day by making a list of the things I could do and ranking them by importance and by how much effort they cost; the rule is then to do the most important, most effortful, or most unpleasant one first. When I have done it, I have moved toward my goal and feel better about myself. Before this, I would choose what to do based on what I WANT to do first and then feel disappointed by my lack of progress after a few hours. Another rule that ties in with this: no phone or other distractions before having done one hour of productive work.
Thank you for your comments and explanations! Very interesting to see your reasoning. I have not seen evidence of trivial alignment; I hope most of the probability mass is in the in-between region. I want to point out that I think you do not need your "magic" level of intelligence for a world takeover. Just high human level, with digital speed and working with copies of itself, is likely enough, I think. My blurry picture is that the AGI would only need a few robots in a secret company, plus some paid humans, to work on a virus with >90% mortality, with the humans unaware of what the robots are doing. And my hope for international agreement comes not so much from a pause as from a safe virtual testing environment that I am thinking about.