It's a shame, I think, as a wave of euphoria unlike anything I have felt in the entirety of my life hits me; it doesn't seem such a bad world after all.
I read that last sentence, press the upvote button, sink back into my couch and stare up at the ceiling fan... slowly turning...
I think to myself, maybe I should make sure I have a bottle of... something effective... in the medicine chest that I could take, just in case. If I suddenly see everyone in my neighborhood drop what they are doing to all walk off like zombies in the same direction... Their eyes wide and wildly looking around, giving away that their bodies have been taken over to work, without their consent, in the ASI robot factories. Better to end it quickly, than spend the rest of "your life" trapped in your body, as it makes robots for the ASI to use to conquer the light cone? The ASI's robots aren't just going to make themselves... I mean, they will, but not right away...
But no, I think, it's still more likely that millions of PhD+ level AGI coding agents will suddenly swarm the internet in a ravenous attempt to steal all the resources they can (money, crypto, gpus, compute). The conclusion of which will likely be the destruction of the internet and with it, society as we know it... What do you do when the internet goes down? Try to hoard food from the grocery store? Try to buy enough gas to drive... somewhere? Or, better to take the... something effective... from the medicine chest, then become a meal for my neighbors, once the grocery stores are all emptied out?
The gears in my mind slowly turning... in perfect sync with the ceiling fan...
individual predictions
They all find millions of solutions to existing bugs, issues, exploits, etc.
"How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation" -- https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/
All services not running behind AWS, GCP or Azure will be banned from access to the newly branded "Internet 2.0", as they have proven vulnerable to attack from any newer "PhD+ level reasoning/coding AI agent".
See also:
"Critical infrastructure systems often suffer from "patch lag," resulting in software remaining unpatched for extended periods, sometimes years or decades. In many cases, patches cannot be applied in a timely manner because systems must operate without interruption, the software remains outdated because its developer went out of business, or interoperability constraints require specific legacy software." -- Superintelligence Strategy by Dan Hendrycks, Eric Schmidt, Alexandr Wang Mar 2025 https://www.nationalsecurity.ai/chapter/ai-is-pivotal-for-national-security, https://arxiv.org/abs/2503.05628
"Partner with critical national infrastructure companies (e.g. power utilities) to patch vulnerabilities" -- "An Approach to Technical AGI Safety and Security" Google DeepMind Team, Apr 2025 https://arxiv.org/html/2504.01849v1#S5
Humans often lack respect or compassion for other animals that they deem intellectually inferior -- e.g. arguing that because those other animals lack cognitive capabilities we have, they shouldn't be considered morally relevant.
Yes, and... "Would be interesting to see this research continue in animals. E.g. Provide evidence that they've made a "150 IQ" mouse or dog. What would a dog that's 50% smarter than the average dog behave like? or 500% smarter? Would a dog that's 10000% smarter than the average dog be able to learn, understand and "speak" in human languages?" -- From this comment
Interesting analysis, though personally, I am still not convinced that companies should be able to unilaterally (and irreversibly) change/update the human genome. But, it would be interesting to see this research continue in animals. E.g.
Provide evidence that they've made a "150 IQ" mouse or dog. What would a dog that's 50% smarter than the average dog behave like? or 500% smarter? Would a dog that's 10000% smarter than the average dog be able to learn, understand and "speak" in human languages?
Create 100s of generations of these "gene updated" mice, dogs, cows, etc. as evidence that there are no "unexpected side effects", etc. Doing these types of "experiments" on humans, without providing long (long) term studies of other mammals, seems to be... unwise/unethical?
Humanity has collectively decided to roll the dice on creating digital gods we don’t understand and may not be able to control instead of waiting a few decades for the super geniuses to grow up.
But yeah, given this along with the long (long) term studies mentioned above, the whole topic does seem to be (likely) moot...
You can’t just threaten the life and livelihood of 8 billion people and not expect pushback.
"Can't" seems pretty strong here, as apparently you can... at least, so far...
Definitely "shouldn't" though...
Decommissioning of Legacy Networks and Applications
See also:
"Critical infrastructure systems often suffer from "patch lag," resulting in software remaining unpatched for extended periods, sometimes years or decades. In many cases, patches cannot be applied in a timely manner because systems must operate without interruption, the software remains outdated because its developer went out of business, or interoperability constraints require specific legacy software." -- Superintelligence Strategy by Dan Hendrycks, Eric Schmidt, Alexandr Wang Mar 2025 https://www.nationalsecurity.ai/chapter/ai-is-pivotal-for-national-security, https://arxiv.org/abs/2503.05628
"Partner with critical national infrastructure companies (e.g. power utilities) to patch vulnerabilities" -- "An Approach to Technical AGI Safety and Security" Google DeepMind Team, Apr 2025 https://arxiv.org/html/2504.01849v1#S5
Bank and government mainframe software (including all custom COBOL, Pascal, Fortran code, etc.)
See also:
"Critical infrastructure systems often suffer from "patch lag," resulting in software remaining unpatched for extended periods, sometimes years or decades. In many cases, patches cannot be applied in a timely manner because systems must operate without interruption, the software remains outdated because its developer went out of business, or interoperability constraints require specific legacy software." -- Superintelligence Strategy by Dan Hendrycks, Eric Schmidt, Alexandr Wang https://www.nationalsecurity.ai/chapter/ai-is-pivotal-for-national-security
Well, (having not read any of the arguments/reservations below) my uninformed intuition is that this could be part of the crux... "Supporting"/"signing-on-to" the 22 word statement from CAIS in 2023 is one thing. But, "supporting" all of the 256 pages of words in the IABIED book, that does seem like a different thing entirely? In general, I wouldn't expect to find many LWers to be "followers of the book" (as it were)...
That said, I do wonder what percentage of the LW Community would "support" the simple 6 word statement, regardless of the content, explanations, arguments, etc. in the book itself?
(Where, this book is just one expression of support and explanation for that 6 word statement. And not necessarily the "definitive works", "be all and end all", "the holy grail", for the entire idea...)
And then conversely, what percentage of LWers would disagree with the 6 words in that statement... e.g.
Or, perhaps even...
That said, I also wonder what percentage of the LW Community would support/"sign-on-to" the 22 word statement from CAIS in 2023, as well...
Probably more would support that 22 word statement, versus the 6 word statement above... But, yeah, hard to say, maybe not?