LLM Keys - A Proposed Solution to Prompt Injection Attacks