A simple solution to the problem is ensuring that the output of an LLM aligns with a specified schema.
It's possible to do this already. Only want to give an LLM three "valid" options to choose from? Then define an output type with three valid options using a tool like dottxt-ai.github.io/outlines
In many ways, I think this is analogous to how legal systems enumerate only a handful of valid ways of adjudicating a crime out of a theoretically infinite decision space.
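To make that concrete, here's a minimal sketch of the three-option case using Outlines' choice-constrained generation. The model name and verdict labels are just examples, and the call names follow the library's 0.x docs, so exact signatures may differ across versions:

```python
# Minimal sketch of constrained decoding with Outlines (names follow the
# library's 0.x docs; exact signatures may differ between versions).
import outlines

# Load any Hugging Face causal LM; this model name is only an example.
model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")

# Restrict generation to exactly three valid verdicts. Token sequences
# that cannot form one of these strings are masked out while decoding.
generator = outlines.generate.choice(model, ["guilty", "not guilty", "mistrial"])

verdict = generator("Given the case summary above, the verdict is:")
print(verdict)  # always one of the three allowed strings
```

However the prompt is phrased, the output is guaranteed to be one of the three strings you declared, which is the whole point: the schema, not the prompt, defines what counts as a valid answer.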