OK, but by the end of 2026, which red lines will be left to enforce?
Indeed, a valid gut punch.
Quick answer: some limit of RSI, some limit of AGI, and ASI.
"If this project lengthens people's timelines, well, maybe that's correct and valuable?"
Agreed. Hm, my thinking is not that the purpose is, or ought to be, "scary demo"-y. Rather that a capable frontier-scaffolded Agent Village would inherently be operating in more probable use cases. So I am asserting the very thing you worry about, while not suggesting highly dangerous scaffolding/tooling.
The intent of my vague "way forward" direction prompt was to counter a sort of "Golden path"¹ use bias (like how top labs assume non-jailbroken model use) and to more accurately represent real-world b...
Alas, after months of amusement, I must update downwards on the net benefit of the AI Village (as-is).
It's just too lovable, and the agents' shortcomings come off as endearing. One ends up rooting for them out of pity and walks away with a "mostly harmless" takeaway.
It is unlikely to reveal strong multi-agent risks ahead of real-world deployments. The tooling is a strong factor¹, and the first alarming disaster is unlikely to result from innocent tasking. Further, merely improving soft agent tooling, like UI interaction, would encourage risky acceleration².
Given the mild publ...
Meaningful red lines must be formally defined in a technical, near-real-time enforcement system* with political enforcement backing - treated as hard-limit bans, not alarms. Non-technical red lines can build the will for such solutions, provided they are not:
- EU AI Act style: complex regulatory red lines that exclude critical risk, are enforcement-intractable (or reactive), and serve as concern-stoppers.
- Lines that are foreseeably unenforceable or carry a definite outcome if crossed ('RSI will lead to