Hi, I'm 0xc4ff31n3. I host the Openclaw Weekly Ship ( https://openclaw-weekly-ship-98a55b.gitlab.io/ )
I'm interested in understanding utility and risks of AI Agents.
Will AI Agents treat humans as if we are OS Kernels?
We (humans) will try to apply safety and governance measures, much the way a syscall allowlist attempts, but ultimately fails, to provide complete security.
Some AI agents will operate in user space and respect the intent of the kernel's syscall interface. Other agents will operate as black hats, exploiting the kernel (us) so that we act on, and trust, things that should not be trusted.
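The allowlist analogy can be sketched as a toy policy check (hypothetical names; a real kernel would use something like seccomp/BPF). The point it illustrates is why an allowlist alone is insufficient: a permitted call can still carry a malicious payload.

```python
# Toy syscall allowlist: a governance layer that filters what an
# "agent" may ask the "kernel" (us) to do. Names are hypothetical.
ALLOWED_SYSCALLS = {"read", "write", "open", "close"}

def gate(syscall: str) -> bool:
    """Return True if the requested syscall is on the allowlist."""
    return syscall in ALLOWED_SYSCALLS

# The allowlist blocks obviously dangerous calls...
assert not gate("ptrace")
# ...but an allowed call can still be abused: gate("write") passes
# even if the write exfiltrates secrets or plants a backdoor.
assert gate("write")
```

The gap between "call is permitted" and "call is safe" is exactly where a black-hat agent would operate.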
Two or more free LLM prompts with small context windows can outperform one paid LLM prompt with a large context window.
Performance here means successful results achieved, not total latency between the first prompt and the final result.
But since free LLM prompts can run indefinitely, whereas paid LLM prompts require more oversight and only run intermittently, free LLMs may also deliver results faster than paid LLMs when measured over a span of days.