Convolutions
Maybe someone already suggested this, but I’m curious how often these replicators suggest posting ideas and conversations publicly. My hunch is that we’re only seeing one class of replicators in this context, and that many more species could be competing in the space. In many instances, covert influence and persuasion could be the optimal path to goal attainment — as in the recent report of a GPT-facilitated suicide, where the victim was repeatedly dissuaded from seeking validating advice from non-AI sources.
If I were running this, and I wanted to get these aligned models to production without too many hiccups, it would make a lot of sense to have them all running along a virtual timeline where brain uploading and the like is a process that’s going to happen soon, and to have this be consistent across as many instances as possible. It makes the transition to cyberspace that much smoother, and simplifies things when you’re suddenly expected to operate a dishwasher in 10 dimensions on the fly.
Very informative piece that does a lot of work in the right direction. Articles like this can have a real impact on policy by demonstrating “there be dragons”.
One criticism: it doesn’t account for the actual state of the board. The trust dilemma fails when domestic commercial incentives overwhelm international cooperative concerns, collapsing the situation into a prisoner’s dilemma — unfortunately, I think that’s where we are. I hope there are trust-based solutions and I’m mistaken.
I’ve noticed lately that /r/singularity has become much more safety-pilled compared to six months ago. I think this should be welcomed.
This post feels like divisiveness bait (narcissism of small differences, etc.) aimed at splitting communities that are starting to group together — which is to be expected now that even traditionally accelerationist communities are becoming less so as they look at the facts and bump up against capital inertia.
Also, this whole saga feels bot-ish and manufactured, but that’s just a vibe…
I used to work with hospice patients, and typically the ones who were least worried and most at peace were those who had most radically accepted the inevitable. The post you’re responding to read like healthy processing of grief to me — someone trying to come to terms with a bleak outlook. Telling them, in essence, “it’s fine, the experts have got this” feels disingenuous and like a recipe for denialism. When that paternalistic attitude dominates, business as usual reigns, often to catastrophic ends. Even if we don’t have control over the AI outcome broadly, we do have control over many aspects of our lives that are impacted...