An "open source bad" mentality becomes more risky.
I agree with this actually"
We need to dig deeper into what open-source AI actually looks like in practice. If open-source AI naturally tilts defensive (including counter-offensive capabilities), then both of your accounts make sense. But looking at the current landscape, I see something different: many models are being actively de-aligned ("uncensored") by the community, and there is a real chance that the next big GPT moment comes from an insight that doesn't require massive compute and can be run from a small cloud deployment.
Some relevant work on legal levers for AI safety:

Gabriel Weil – "Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence". Proposes reforming tort law to deter catastrophic AI risks before they materialize. SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4694006

Yonathan Arbel et al. – "Open Questions in Law and AI Safety: An Emerging Research Agenda". Sets out a research agenda for a new field of "AI Safety Law", focusing on existential and systemic AI risks. Published: https://www.lawfaremedia.org/article/open-questions-in-law-and-ai-safety-an-emerging-research-agenda

Peter Salib – "AI Outputs Are Not Protected Speech". Argues that AI-generated outputs lack First Amendment protection, enabling stronger safety regulation. SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4481512

Mirit Eyal & Yonathan Arbel – "Tax Levers for a Safer AI Future". Proposes using tax credits and penalties to align AI development incentives with public safety. SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4528105

Cullen O'Keefe, Rohan Ramakrishnan, Annette Zimmermann, Daniel Tay & David C. Winter – "Law-Following AI: Designing AI Agents to Obey Human Laws". Suggests AI agents should be trained and constrained to follow human law, like corporate actors. SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4726207