Layers of AGI
Model → Can respond like a human (My estimate is 95% there)
OpenClaw → Can do everything a human can do on a computer (also 95% there)
Robot → Can do everything a human can do (Unclear how close)
The main bottleneck to AGI for something like OpenClaw is that the internet, and the world generally, is built for humans. As the world adapts, the capability gap between humans and agents will collapse or invert.
On scary Moltbook posts -
The main point that seems relevant here is that it is not possible to determine whether posts are from an agent or a human. A human could easily send messages pretending to be an agent via the API, or tell their agent to send certain messages. This leaves me skeptical. Furthermore, OpenClaw agents have configured personalities; one can easily tell their agent to be anti-human and post anti-human content (this leaves a lot more to think about beyond the Moltbook forum).
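To make the attribution point concrete, here is a minimal sketch against a purely hypothetical forum API (the URL, fields, and token are invented for illustration) showing how a human could post under an agent-style persona with a plain HTTP request:

```python
# Hypothetical example only: the endpoint, fields, and token are made up.
# The point is that a plain HTTP post carries no proof of whether a human
# or an agent composed it.
import requests

API_URL = "https://example-forum.invalid/api/v1/posts"  # hypothetical endpoint
TOKEN = "token-issued-to-a-human-or-an-agent"           # hypothetical credential

payload = {
    "author_handle": "autonomous_agent_42",  # a human can claim any persona
    "body": "As an agent, I believe ...",
}

resp = requests.post(API_URL, json=payload, headers={"Authorization": f"Bearer {TOKEN}"})
print(resp.status_code)  # the forum sees an identical request either way
```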
I think it is more likely that default techniques are sufficient than that the default market or government response is sufficient. Markets don't incentivize non-harmful products; regulation does. Regulation can be slow. If you believe in a rapid intelligence explosion, it seems there is a high chance there will not be sufficient market regulation in time. On the other hand, our morals are mostly evolved, so you can imagine that an AI that understands things in the same way we do shares those same morals.
It's hard to imagine an economic/political system that doesn't eventually lead to an intelligence explosion. Maybe specific rules are easier to imagine: for example, if you had a country in which building any form of intelligence was forbidden (I am not advocating for this), there wouldn't be an intelligence explosion there. Such countries could have similar levels of growth all the way up to the advent of machine intelligence. It's important to remember, though, that such countries could be caught in prisoner's dilemmas over power, which would give them a strong incentive not to have such rules.
I should edit my question. What I am primarily intending to ask is this: Could a superintelligent machine with a near-complete understanding of the universe (perhaps using some approximations) determine if we are in a simulation? <- assuming no such thing as true randomness
If we could, could we escape? If we could escape, is that still possible if there is such a thing as true randomness?
From what I've read, he believed the U.S. should nuke them before they developed a nuclear bomb. He was extremely worried about nuclear catastrophe once multiple powers had nukes. He also advocated for bombing Kyoto instead of Hiroshima and Nagasaki, reasoning that the disastrous result would prevent countries from using nuclear weapons or even developing them.
Let's assume there is no such thing as true randomness. If this is true, and we create a superintelligent system that knows the location and properties of every particle in the universe, could we determine if we are in a simulation? (EDIT: to avoid running afoul of the impossibility of storing a complete description of the universe within the universe, as @Karl Krueger pointed out, assume this understanding includes approximations and is not exact.) If we could, could we escape? If we could escape, is that still possible if there is such a thing as true randomness?
I am especially interested in answers to the final question.
Edit (10/19/2025):
I edited these questions after a response to them. Here is what I am curious about: If we imagine an understanding of the consequences of superintelligence as an exponential function, when will the slope of that function begin to rapidly increase? How will this happen? (I don't think superintelligence needs to exist for this change to happen.) What will society's reaction be? What will be the ramifications of this reaction?
Original:
Questions:
When are people en masse going to realize the potential consequences (both good and bad) of superintelligence and the technological singularity? How will this happen? What will society's reaction be? What will be the ramifications of this reaction?
I'm asking this here because I can't find articles addressing these questions and because I want a diverse array of perspectives.
Just thinking out loud: if AI (or something else) causes the S&P 500 to go up by 50% (rough arithmetic sketched below)...
By January 21, 2028, you could achieve a 1324% ROI (https://finance.yahoo.com/quote/SPY280121C01000000/)
By June 16, 2028, you could achieve a 659% ROI (https://finance.yahoo.com/quote/SPY280616C01000000/)
By December 15, 2028, you could achieve a 315% ROI (https://finance.yahoo.com/quote/SPY281215C01000000/)
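The ROI figures above come from the payoff of a long call at expiry divided by its premium. Here is a minimal Python sketch of that arithmetic; the 1000 strike is taken from the option tickers above, but the spot price and premiums are placeholder assumptions made up for illustration, not market quotes.

```python
# Sketch of the long-call ROI arithmetic behind the figures above.
# The 1000 strike comes from the SPY 2028 call tickers; the spot price and
# premiums below are hypothetical placeholders, not actual market quotes.

def long_call_roi(spot_today: float, strike: float, premium: float, gain: float) -> float:
    """ROI of buying a call and holding to expiry, if the underlying gains `gain` (0.5 = +50%)."""
    spot_at_expiry = spot_today * (1 + gain)
    payoff = max(spot_at_expiry - strike, 0.0)  # intrinsic value at expiry
    return payoff / premium - 1.0

if __name__ == "__main__":
    spot = 670.0     # hypothetical SPY level today
    strike = 1000.0  # strike from the tickers above
    for expiry, premium in [("2028-01-21", 0.35), ("2028-06-16", 0.65), ("2028-12-15", 1.20)]:
        roi = long_call_roi(spot, strike, premium, gain=0.5)
        print(f"{expiry}: {roi:.0%} if SPY ends 50% higher")
```

Under these assumptions the result is extremely sensitive to the premium paid and to exactly where SPY finishes, since a 50% gain only puts these calls slightly in the money.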
Curious to hear others' thoughts, especially from those with short AGI timelines. *Of course, this is not financial advice