Top posts
Burny
On a quest to understand, with curiosity, the fundamental mathematics of intelligence and of the universe.
I think alignment is by itself a complex phenomenon, but a big part of it is sharing the same, or at least similar, ethics. And ethics is itself also a very complex phenomenon, often fuzzy and inconsistent, but to a certain approximation, from a certain perspective, I see a lot of ethics in big...
https://gemini.google.com/share/6d141b742a13 My favorite theory is that the whole conversation is too much like a sci-fi plot, where someone asks an AI repetitive questions until the AI snaps, so this general pattern was pattern-matched from the training data, because during training, the RLHF, or whatever they use for alignment, didn't...
tl;dr: OpenAI leaked an AI breakthrough called Q*, acing grade-school math. It is hypothesized to be a combination of Q-learning and A*. This was later refuted. DeepMind is working on something similar with Gemini, using AlphaGo-style Monte Carlo Tree Search. Scaling these might be the crux of planning for increasingly abstract goals and of agentic behavior. Academic...
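Nothing about Q*'s internals is public, so as context here is only what the name alludes to: the standard tabular Q-learning update from the RL literature, as a minimal sketch (the state/action names and hyperparameters are illustrative, not anything from OpenAI or DeepMind):

```python
# Illustrative textbook Q-learning update, NOT OpenAI's rumored Q*.
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One step of Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return Q

# Hypothetical toy example: one transition with reward 1.0.
Q = defaultdict(float)
q_update(Q, "s0", "right", 1.0, "s1", ["left", "right"])
```

The A*/MCTS half of the hypothesis would sit on top of such learned values, using them to guide search over future action sequences rather than acting greedily one step at a time.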
A step closer to AGI? The classic argument made over 30 years ago by Fodor and Pylyshyn - that neural networks fundamentally lack the systematic compositional skills of humans due to their statistical nature - has cast a long shadow over neural network research. Their critique framed doubts about the viability...