The OWASP Top 10 [1] is probably the most well-known and widely recognised reference standard for the most critical web application security risks. The organisation has now started working on a similar list for Large Language Model (LLM) applications.

I'm posting about it here since I think it would be beneficial for safety alignment researchers to be involved for two reasons:

  1. To provide AI safety and alignment expertise to the security community and standardisation process.
  2. To learn from the cybersecurity community, which has long experience both in developing these kinds of standards and in cultivating a security mindset and cataloguing vulnerabilities.

I don't know how many people within the AI safety and alignment community are aware of this initiative, but I did not find any reference to it on the EA Forum or here on the Alignment Forum, so I thought I might as well post about it.

More information available here:

- https://owasp.org/www-project-top-10-for-large-language-model-applications/

- https://github.com/OWASP/www-project-top-10-for-large-language-model-applications

Cross post: https://forum.effectivealtruism.org/posts/mdg8gL59LiiZmaGCw/new-reference-standard-on-llm-application-security-started 