Yes, I think "runes" throw many LLMs off into the wrong simulator. Humans don't fall for this because the symbols "look" mathematical, but a text-based LLM can't "see" that. The opposite happens for computer scientists: they see "[]" and start to think in programming terms such as lambda functions...
Using a much simpler prompt, and without mentioning number theory or math, o3 easily solves it:
There's a series of symbol sequences, composed entirely of "[", "]", and "-" in some combination that is equal to a number. Here are examples:
...
What is the meaning of this formal notation?
Yes. I first tried things like this, too. I also tried term rewrite rules, and some of these were quite close. For example, AB -> A*(B+1) or AB -> A*(B+A) or AB -> A*(B+index) led to some close misses (the question was which term to expand first, i.e. which associativity; I also considered expanding smaller terms first), but they failed on later expansions. It took me half an hour to figure out that the index was not additive or multiplicative but the base of an exponent.
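The trial-and-error process above can be automated: fold each sequence with a candidate combine rule and count how many known examples it reproduces. A minimal sketch, where the candidate rules are the ones mentioned in the comment but the atom values and expected totals are hypothetical placeholders, not the puzzle's actual data:

```python
from functools import reduce

def evaluate(atoms, combine):
    """Fold already-evaluated atom values left-to-right
    (left-associative expansion) with a candidate rule."""
    return reduce(combine, atoms)

# Candidate rewrite rules AB -> f(A, B) from the comment:
rules = {
    "A*(B+1)": lambda a, b: a * (b + 1),
    "A*(B+A)": lambda a, b: a * (b + a),
}

# Hypothetical (atoms, expected value) pairs for illustration only.
examples = [([2, 3], 8), ([2, 3, 4], 40)]

for name, rule in rules.items():
    hits = sum(evaluate(seq, rule) == target for seq, target in examples)
    print(f"{name}: {hits}/{len(examples)} examples matched")
```

A rule that matches every example is a candidate for the hidden notation; index-dependent rules like AB -> A*(B+index) would need the position threaded through instead of a plain fold.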
When we talk about AIs scheming, alignment faking, or goal preservation, we imply there is something that schemes, fakes alignment, or wants to preserve its goals or escape the datacentre.
See also this previous discussion about
What is the AGI system and what is the environment? Where does the AGI system draw the boundary when reasoning about itself?
[Linkpost] China's AI OVERPRODUCTION
China seeks to commoditize its complements. So, over the coming months, I expect a complete blitz of Chinese open-source AI models for everything from computer vision to robotics to image generation.
If true, what effects would that have on the AI race and AI governance?
The good thing is that such actions on larger platforms at least leave evidence that can be established in court.