Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations