Idea: Safe Fallback Regulations for Widely Deployed AI Systems