I was disappointed that the complex daemonisation required for any story like this to come to life was not addressed. The mechanical gap between on-demand inference functions (like virtually all AI products on the market today) and daemon processes with independent context/RAG self-management, self-fine-tuning/retraining, and self-modification of product code (true ML) is a difference in kind. The entire debate over how much smarter inference responses appear, or whether inference responses can be made to jailbreak or scheme, ignores the much larger pivot o...
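To make the distinction concrete, here is a minimal illustrative sketch (all names hypothetical, with a stub standing in for any real model API): an on-demand inference function is stateless and exits after each call, while a daemon-style agent persists between calls and curates its own context without an operator in the loop.

```python
from collections import deque

# Hypothetical stand-in for a hosted model call; a real client would go here.
def run_inference(prompt: str) -> str:
    return f"response to: {prompt!r}"

# On-demand inference: stateless, runs once per request, then returns.
def handle_request(prompt: str) -> str:
    return run_inference(prompt)

# Daemon-style agent: long-lived process that manages its own memory,
# deciding on its own what to retain between steps (a crude RAG analogue).
class AgentDaemon:
    def __init__(self, max_context: int = 5):
        self.context = deque(maxlen=max_context)  # self-managed context window

    def step(self, observation: str) -> str:
        prompt = " | ".join(self.context) + " | " + observation
        result = run_inference(prompt)
        self.context.append(result)  # the daemon curates its own state
        return result

daemon = AgentDaemon()
for obs in ["event A", "event B"]:
    daemon.step(obs)
print(len(daemon.context))  # accumulated state survives across calls
```

The point of the sketch is only the shape of the gap: the daemon owns a control loop and mutable state, which is exactly what the one-shot function lacks, and self-retraining or self-modification would add further machinery on top of that loop.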