This post looks at the problem of how to make a cap on AI intelligence enforceable, using something different from current ideas in compute governance (mandated limits on computational power). The idea I present is finding an AI technique/architecture/paradigm which is a "tuned dead end": it can't be scaled up...
Chatbots are pretty useful to me, but the pro plans are expensive. I don't want to spend $20/month here and there. However, I couldn't find a free chatbot app supporting at least the OpenAI and Anthropic APIs with a level of functionality similar to the online GPT or Claude chatbots....
Judea Pearl is a famous researcher, known for Bayesian networks (the standard way of representing Bayesian models) and for his statistical formalization of causality. Although he has always been recommended reading here, he's less of a staple than, say, Jaynes; hence the need to re-introduce him. My purpose here is...
I answer LeCun's arguments against LLMs as laid out in this LessWrong comment. I haven't thoroughly searched or double-checked LeCun's writings on the topic. My argument is at the suggestive hand-waving stage. Introduction: Current large language models (LLMs) like GPT-x are autoregressive. "Autoregressive" means that the core of the system is a function...
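A minimal sketch of what "autoregressive" means here: a function maps a token prefix to a distribution over the next token, and generation just loops that function. The model and vocabulary below are toy stand-ins, not any real LLM.

```python
import random

def next_token_distribution(prefix):
    # Stand-in for the learned function p(next token | prefix); a real model
    # would condition on the prefix, here we return a fixed toy distribution.
    vocab = ["the", "cat", "sat", "."]
    return {tok: 1.0 / len(vocab) for tok in vocab}

def generate(prefix, n_tokens, seed=0):
    # Autoregressive loop: sample a token, append it, feed the longer
    # prefix back into the same function, repeat.
    rng = random.Random(seed)
    out = list(prefix)
    for _ in range(n_tokens):
        dist = next_token_distribution(out)
        toks, probs = zip(*dist.items())
        out.append(rng.choices(toks, weights=probs)[0])
    return out

print(generate(["the"], 3, seed=0))
```

The point is the loop structure: each output token becomes part of the input for the next step, which is the property LeCun's argument targets.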
Paper: Maus et al., "Scalable Deep Bayesian optimization over Structured Inputs," arXiv:2201.11872. A nice and interesting presentation about finding synthesizable molecules that bind to a given protein with Bayesian optimization. An autoencoder is trained on a database of molecules. The latent space of the autoencoder is fed as input space...
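A toy sketch of the loop structure (optimize in latent space, decode, evaluate), nothing like the paper's actual method: the decoder, objective, and surrogate below are hypothetical stand-ins, with a 1-D latent space and a kernel-weighted average in place of a real GP surrogate.

```python
import math
import random

def decode(z):
    # Stand-in decoder: maps a latent point to a "molecule" (here just z itself).
    return z

def objective(molecule):
    # Stand-in binding score, peaked at z = 0.3 (hypothetical; always <= 0).
    return -(molecule - 0.3) ** 2

def surrogate_mean(z, observed):
    # Kernel-weighted average of observed scores: a crude GP-mean stand-in.
    weights = [math.exp(-((z - zi) ** 2) / 0.02) for zi, _ in observed]
    total = sum(weights)
    if total < 1e-9:
        return 0.0
    return sum(w * yi for w, (_, yi) in zip(weights, observed)) / total

def bayes_opt(n_iters=30, n_candidates=100, seed=0):
    rng = random.Random(seed)
    observed = []
    for _ in range(n_iters):
        # Propose latent candidates; pick the one the surrogate (plus a
        # little exploration noise) likes best.
        cands = [rng.uniform(-1, 1) for _ in range(n_candidates)]
        z = max(cands, key=lambda c: surrogate_mean(c, observed) + rng.gauss(0, 0.1))
        # Evaluate the true (expensive) objective only at the chosen point.
        y = objective(decode(z))
        observed.append((z, y))
    return max(observed, key=lambda p: p[1])

best_z, best_y = bayes_opt()
```

The key idea the paper exploits is that the latent space is continuous and low-dimensional, so the surrogate-guided search happens there rather than over discrete molecule structures.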