I was disappointed that the complex daemonisation required for any story like this to come to life was not addressed. The mechanical gap between on-demand inference functions (like virtually all AI products on the market today) and daemon processes with independent context/RAG self-management, self-fine-tuning/retraining, and self-modification of product code (true ML) is a difference in kind. The entire debate over how much smarter inference responses appear, or whether inference responses can be made to jailbreak or scheme, ignores the much larger pivot: supplying a theatre for these processes to execute repeatedly and indefinitely, both with and without external stimuli. There's an inherent chicken-and-egg challenge in that, for misaligned...
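To make the distinction concrete, here is a minimal sketch of the two shapes being contrasted: a stateless on-demand call versus a persistent daemon loop that curates its own context and ticks with or without external stimuli. All names here (`infer`, `daemon_loop`, the context policy) are hypothetical illustrations, not any real product's API; the self-retraining and code-modification steps are only marked as comments because that is precisely the machinery not yet built.

```python
import time
from collections import deque

def infer(prompt: str) -> str:
    """Stand-in for a model call; a real system would invoke an LLM here."""
    return f"response to: {prompt}"

# On-demand inference: one request in, one response out, no persistence.
def on_demand(prompt: str) -> str:
    return infer(prompt)

# Daemon process: a persistent loop that manages its own working memory
# and keeps executing whether or not external stimuli arrive.
def daemon_loop(stimuli: deque, max_ticks: int = 5) -> list[str]:
    context: list[str] = []  # self-managed context (a real system might use a RAG store)
    log: list[str] = []
    for _ in range(max_ticks):  # a real daemon would loop indefinitely (while True)
        # With no external stimulus, the loop generates its own.
        prompt = stimuli.popleft() if stimuli else "self-prompt: review context"
        out = infer(f"{' | '.join(context[-3:])} :: {prompt}")
        context.append(out)  # the loop curates what it carries forward
        log.append(out)
        # A "true ML" daemon would also schedule fine-tuning/retraining
        # and propose modifications to its own product code here.
        time.sleep(0)  # placeholder for the tick interval
    return log
```

The point of the sketch is that everything beyond `on_demand` is scaffolding someone must deliberately build and run; none of it falls out of a smarter inference function on its own.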