By 2027, evaluations are showing that frontier models, including open-source ones, could meaningfully help a bad actor engineer a pandemic. AI safety organisations and several government agencies mount a messy but moderately effective effort to put misuse mitigations in place, particularly for API-accessible models. In the absence of a major incident, however, governments don't treat the problem as a priority, and open-source models seem hard to contain. Bioterrorism also continues to be blocked by the wet-lab skills barrier and by continued good luck: no sufficiently motivated bioterrorist has yet emerged.
And also later:
"However, by 2030 there are open-source dark web models that will do whatever you want including designing candidate pandemic... (read more)