Mr Beastly
Comments

Statement of Support for "If Anyone Builds It, Everyone Dies"
Mr Beastly · 17h

I think LW might benefit from having a similar kind of mutual-knowledge-building Statement on this occasion.


Well, my uninformed intuition (having not read any of the arguments/reservations below) is that this could be part of the crux...

"Supporting"/"signing-on-to" the 22 word statement from CAIS in 2023 is one thing.  But, "supporting" all of the 256 pages of words in the IABIED book, that does seem like a different thing entirely?  In general, I wouldn't expect to find many LWers to be "followers of the book" (as it were)...

That said, I do wonder what percentage of the LW Community would "support" the simple 6-word statement, regardless of the content, explanations, arguments, etc. in the book itself?

  • I agree with this statement "If Anyone Builds It, Everyone Dies".

(Where this book is just one expression of support for, and explanation of, that 6-word statement.  And not necessarily the "definitive work", "be-all and end-all", or "holy grail" for the entire idea...)
 

And then conversely, what percentage of LWers would disagree with the 6 words in that statement... e.g.

  • "If some specific person/group builds it, not everyone dies."

Or, perhaps even...

  • "If some specific person/group builds it, no one dies."



That said, I also wonder what percentage of the LW Community would support/"sign on to" the 22-word statement from CAIS in 2023 as well...

  • "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Probably more would support that 22-word statement than the 6-word statement above... But yeah, hard to say; maybe not?

The Company Man
Mr Beastly · 3d

 It's a shame, I think, as a wave of euphoria unlike anything I have felt in the entirety of my life hits me, it doesn't seem such a bad world after all.

 

I read that last sentence, press the upvote button, sink back into my couch and stare up at the ceiling fan... slowly turning...

I think to myself, maybe I should make sure I have a bottle of... something effective... in the medicine chest that I could take, just in case.  If I suddenly see everyone in my neighborhood drop what they're doing and all walk off like zombies in the same direction... their eyes wide and wildly looking around, giving away that their bodies have been taken over to work, without their consent, in the ASI robot factories.  Better to end it quickly than spend the rest of "your life" trapped in your body as it makes robots for the ASI to use to conquer the light cone?  The ASI's robots aren't just going to make themselves... I mean, they will, but not right away...

But no, I think, it's still more likely that millions of PhD+ level AGI coding agents will suddenly swarm the internet in a ravenous attempt to steal all the resources they can (money, crypto, GPUs, compute).  The conclusion of which will likely be the destruction of the internet and, with it, society as we know it... What do you do when the internet goes down? Try to hoard food from the grocery store? Try to buy enough gas to drive... somewhere? Or, better to take the... something effective... from the medicine chest than become a meal for my neighbors once the grocery stores are all emptied out?

The gears in my mind slowly turning... in perfect sync with the ceiling fan... 

Assessing Kurzweil predictions about 2019: the results
Mr Beastly · 11d

individidual predictions

"individidual predictions" -> "individual predictions"

An Alternate History of the Future, 2025-2040
Mr Beastly · 4mo

They all find millions of solutions to existing bugs, issues, exploits, etc.

 

"How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation" -- https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/

An Alternate History of the Future, 2025-2040
Mr Beastly · 5mo

All services not running behind AWS, GCP or Azure will be banned from access to the newly branded "Internet 2.0", as they are proven vulnerable to attack from any newer "PhD+ level reasoning/coding ai agent".

 

See also:

"Critical infrastructure systems often suffer from "patch lag," resulting in software remaining unpatched for extended periods, sometimes years or decades. In many cases, patches cannot be applied in a timely manner because systems must operate without interruption, the software remains outdated because its developer went out of business, or interoperability constraints require specific legacy software." -- Superintelligence Strategy by Dan Hendrycks, Eric Schmidt, Alexandr Wang Mar 2025 https://www.nationalsecurity.ai/chapter/ai-is-pivotal-for-nationaZ<l-security, https://arxiv.org/abs/2503.05628 

 

"Partner with critical national infrastructure companies (e.g. power utilities) to patch vulnerabilities" -- "An Approach to Technical AGI Safety and Security" Google DeepMind Team, Apr 2025 https://arxiv.org/html/2504.01849v1#S5 

How to Make Superbabies
Mr Beastly · 7mo

Humans often lack respect or compassion for other animals that they deem intellectually inferior -- e.g. arguing that because those other animals lack cognitive capabilities we have, they shouldn't be considered morally relevant.

Yes, and... "Would be interesting to see this research continue in animals.  E.g.  Provide evidence that they've made a "150 IQ" mouse or dog. What would a dog that's 50% smarter than the average dog behave like? or 500% smarter?  Would a dog that's 10000% smarter than the average dog be able to learn, understand and "speak" in human languages?" -- From this comment

How to Make Superbabies
Mr Beastly · 7mo

Interesting analysis, though personally, I am still not convinced that companies should be able to unilaterally (and irreversibly) change/update the human genome.  But, it would be interesting to see this research continue in animals.  E.g. 

Provide evidence that they've made a "150 IQ" mouse or dog. What would a dog that's 50% smarter than the average dog behave like? Or 500% smarter?  Would a dog that's 10000% smarter than the average dog be able to learn, understand, and "speak" in human languages?

Create 100s of generations of these "gene updated" mice, dogs, cows, etc. as evidence that there are no "unexpected side effects", etc.  Doing these types of "experiments" on humans without providing long (long) term studies of other mammals seems to be... unwise/unethical?

Humanity has collectively decided to roll the dice on creating digital gods we don’t understand and may not be able to control instead of waiting a few decades for the super geniuses to grow up.

But yeah, given this along with the long (long) term studies mentioned above, the whole topic does seem to be (likely) moot...

How to Make Superbabies
Mr Beastly · 7mo

You can’t just threaten the life and livelihood of 8 billion people and not expect pushback.

"Can't" seems pretty strong here, as apparently you can...  at least, so far...

Definitely "shouldn't" though...

An Alternate History of the Future, 2025-2040
Mr Beastly · 7mo

Decommission of Legacy Networks and Applications

 

See also:

"Critical infrastructure systems often suffer from "patch lag," resulting in software remaining unpatched for extended periods, sometimes years or decades. In many cases, patches cannot be applied in a timely manner because systems must operate without interruption, the software remains outdated because its developer went out of business, or interoperability constraints require specific legacy software." -- Superintelligence Strategy by Dan Hendrycks, Eric Schmidt, Alexandr Wang Mar 2025 https://www.nationalsecurity.ai/chapter/ai-is-pivotal-for-nationaZ<l-security, https://arxiv.org/abs/2503.05628 

 

"Partner with critical national infrastructure companies (e.g. power utilities) to patch vulnerabilities" -- "An Approach to Technical AGI Safety and Security" Google DeepMind Team, Apr 2025 https://arxiv.org/html/2504.01849v1#S5 

An Alternate History of the Future, 2025-2040
Mr Beastly · 7mo

Bank and government mainframe software (including all custom Cobol, Pascal, Fortran code, etc.)

 

See also:

"Critical infrastructure systems often suffer from "patch lag," resulting in software remaining unpatched for extended periods, sometimes years or decades. In many cases, patches cannot be applied in a timely manner because systems must operate without interruption, the software remains outdated because its developer went out of business, or interoperability constraints require specific legacy software." -- Superintelligence Strategy by Dan Hendrycks, Eric Schmidt, Alexandr Wang https://www.nationalsecurity.ai/chapter/ai-is-pivotal-for-national-security 

Posts

Superintelligence Strategy: A Pragmatic Path to… Doom? (6mo)
An Alternate History of the Future, 2025-2040 (7mo)
Mr Beastly's Shortform (2y)