TL;DR: OpenSSL is among the most scrutinized and audited cryptographic libraries on the planet, underpinning encryption for most of the internet. The project just announced 12 new zero-day vulnerabilities (previously unknown to the maintainers at the time of disclosure). We at AISLE discovered all 12 using our AI system. This is a historically unusual count and the first real-world demonstration of AI-based cybersecurity at this scale. Meanwhile, curl just cancelled its bug bounty program due to a flood of AI-generated spam, even as we reported 5 genuine CVEs to them. AI is simultaneously collapsing the median ("slop") and raising the ceiling (real zero-days in critical infrastructure).
Background
We at AISLE have been building an automated AI system for deep cybersecurity discovery and remediation, sometimes operating in bug bounties under the pseudonym Giant Anteater. Our goal was to turn what used to be an elite, artisanal hacker craft into a repeatable industrial process. We do this to secure the software infrastructure of human civilization before strong AI systems become ubiquitous. Prosaically, we want to make sure we don't get hacked into oblivion the moment they come online.
No reliable cybersecurity benchmark reaching the desired performance level exists yet. We therefore decided to test the performance of our AI system against live targets. The clear benefit of this is that for a new, zero-day security vulnerability to be accepted as meriting a CVE (a unique vulnerability identifier), it has to pass extremely stringent judgement by the long-term maintainers and security team of the project, who face many incentives to reject it. Beyond just finding bugs, an issue must fit within the project's security posture, i.e. what they consider important enough to warrant a CVE. OpenSSL is famously conservative here: many reported issues are fixed quietly or rejected entirely. Our "benchmark" was therefore completely external to us, and in some cases intellectually adversarial.
We chose to focus on some of the most well-audited, secure, and heavily tested pillars of the world's software ecosystem. Among them, OpenSSL stands out. Industry estimates suggest that at least 2/3 of the world's internet traffic is encrypted using OpenSSL, and a single zero-day vulnerability in it can define a security researcher's career. It is a very hard target to find real, valuable security issues in.
Fall 2025: Our first OpenSSL results
In late summer 2025, six months after starting our research, we tested our AI system against OpenSSL and found a number of real, previously unknown security issues. In the Fall 2025 OpenSSL security release, 4 CVEs in total were announced, all with CVE-2025-* identifiers, of which 3 were found, responsibly disclosed, and in some cases even fixed by us (or, more precisely, by our AI system). You can read more in our original blog post.
Specifically, these were two moderate severity issues:
CVE-2025-9230: Out-of-bounds read/write in the RFC 3211 KEK unwrap operation for CMS password-based encryption, potentially leading to memory corruption or code execution. This bug had been present since 2009, undetected for over 15 years.
CVE-2025-9231: Timing side-channel in SM2 elliptic-curve signatures on 64-bit ARM, where variations in execution time during modular arithmetic could in principle allow private key recovery through careful remote observation. This is a subtle, logic-level vulnerability where the correctness of the code obscured a timing leak that only emerged under specific hardware conditions.
We also found a single low severity CVE:
CVE-2025-9232: Out-of-bounds read in HTTP client no_proxy handling when parsing IPv6 hosts, triggering a controlled crash.
Independently, the Frontier of the Year 2025 forecasting project by Gavin Leech, Lauren Gilbert, and Ulkar Aghayeva identified AI-driven vulnerability discovery in critical infrastructure as one of the top AI breakthroughs of 2025, assigning it a 0.9 probability of generalizing and placing it at #3 overall by expected impact. The prediction resolved as:
Google's Big Sleep agent and the startup AISLE found dozens of critical vulnerabilities in some of the main infrastructure of the internet: Linux, cURL, OpenSSL, and SQLite. [Frontier of the Year 2025]
For context on our approach: our system handles the full loop: scanning, analysis, triage, exploit construction (if needed and possible), patch generation, and patch verification. Humans choose targets and act as high-level pilots overseeing and improving the system, but don't perform the vulnerability discovery. On high-profile targets, we additionally review the resulting fixes and disclosures manually to ensure quality, although this only rarely changes anything.
January 2026: 12 out of 12 new vulnerabilities
Just today, January 27, 2026, OpenSSL announced a new security patch release, publishing 12 new zero-day vulnerabilities, including a very rare high-severity one. Of the 12 announced, we at AISLE discovered every single one of them using our AI system. (One vulnerability, CVE-2025-11187, was also co-reported by security researcher Hamza from Metadust 33 days after our initial disclosure. Congratulations on representing humanity in this virtuous race! 🎉)
Out of the 12 new CVEs, 10 were assigned CVE-2025-* identifiers and 2 already belong to 2026 as CVE-2026-*. Adding these to the 3 of 4 CVEs from the Fall 2025 release, AISLE, and by extension AI in general, is responsible for discovering 13 of the 14 zero-day vulnerabilities assigned CVE-2025-* identifiers in OpenSSL. Both the count and the relative proportion have been increasing over time and are historically very atypical.
The 12 vulnerabilities span a significant breadth of OpenSSL's codebase. Here they are sorted by severity:
HIGH severity (1):
CVE-2025-15467: Stack buffer overflow in CMS AuthEnvelopedData parsing. The overflow occurs prior to any cryptographic verification, meaning no valid key material is required to trigger it, making it potentially remotely exploitable against any application parsing untrusted CMS content. (For context: HIGH-severity-or-above CVEs in OpenSSL have historically averaged less than one per year.)
MODERATE severity (1):
CVE-2025-11187: Stack buffer overflow and NULL pointer dereference in PBMAC1 parameter validation during PKCS#12 MAC verification. (Co-reported by Hamza from Metadust 33 days after our disclosure.)
LOW severity (10):
CVE-2025-15468, CVE-2025-15469, CVE-2025-66199, CVE-2025-68160, CVE-2025-69418, CVE-2025-69419, CVE-2025-69420, CVE-2025-69421, CVE-2026-22795, and CVE-2026-22796, listed primarily for completeness' sake.
These span QUIC, PKCS#12, PKCS#7, CMS, TLS 1.3, and BIO subsystems, and include heap overflows, type confusions, NULL dereferences, and a cryptographic bug where OCB mode leaves trailing bytes unencrypted and unauthenticated. Three of these bugs date back as far as 1998-2000, having lurked undetected for 25-27 years. One of them (CVE-2026-22796) predates OpenSSL itself, inherited from SSLeay, Eric Young's original SSL implementation from the 1990s. Yet it remained undetected despite heavy human and machine scrutiny for over a quarter century.
Even a “low” severity CVE is a higher bar than might be obvious. The vast majority of reported issues don't qualify as security vulnerabilities at all. Of those that do, most are bugs that get fixed without CVEs as standard PRs. To receive a CVE from OpenSSL, an issue must pass their conservative security posture and be deemed important enough to track formally. “Low” severity in OpenSSL still means a real, externally validated security vulnerability in well-audited critical infrastructure.
In 5 cases, AISLE's AI system directly proposed the patches that were accepted into the official release (after a human review from both AISLE and OpenSSL).
Matt Caswell, Executive Director of the OpenSSL Foundation, said this about the findings:
"Keeping widely deployed cryptography secure requires tight coordination between maintainers and researchers. We appreciate Aisle's responsible disclosures and the quality of their engagement across these issues."
Tomas Mraz, the CTO of OpenSSL, said about the newest security release the following:
"One of the most important sources of the security of the OpenSSL Library and open source projects overall is independent research. This release is fixing 12 security issues, all disclosed to us by Aisle. We appreciate the high quality of the reports and their constructive collaboration with us throughout the remediation."
The assigned CVEs still don't represent the full picture here. Some of the most valuable security work happens when vulnerabilities are caught before they ever ship, which is our ultimate goal. Throughout 2025, AISLE's system identified several issues in OpenSSL's development branches and pull requests that were fixed before reaching any release:
Double-free in OCSP implementation (PR #28300): Caught and fixed before the vulnerable code ever appeared in a release.
Use-after-free and double-free in RSA OAEP label handling (PR #29707): Improper duplication of the OAEP label member could lead to UAF and double-free when the duplicate is freed.
Crash in BIO_sendmmsg/recvmmsg with legacy callbacks (PR #29395): Missing parameter passed to the return callback would crash applications using legacy BIO callbacks with the new mmsg functions.
Private key file permissions not set in openssl req (PR #29397): The openssl req command was not always setting proper permissions on private key output files.
This is the outcome we're ultimately working towards: vulnerabilities prevented proactively, not merely patched retroactively after deployment. The concentration of findings from a single research team, spanning this breadth of subsystems and vulnerability types, is historically unusual for OpenSSL and is in my view in large part due to our heavy use of AI.
Broader impact: curl
OpenSSL is not the only critical infrastructure project we've been testing our system against. curl, the near-ubiquitous data transfer tool, tells a very similar story.
In July 2025, Daniel Stenberg (curl's creator and main maintainer) wrote "Death by a thousand slops", a frustrated account of AI-generated garbage flooding the curl bug bounty program. According to him, about 20% of submissions were AI slop, and only 5% of all 2025 submissions turned out to be genuine vulnerabilities. The costs imposed on the small security team were unsustainable in the long term.
Just yesterday, January 26, 2026, Stenberg announced "The end of the curl bug-bounty". The program that had run since 2019 and paid out over $90,000 for 81 genuine vulnerabilities was essentially killed by the flood of low-quality AI submissions.
While the story above was unfolding, we at AISLE (operating as "Giant Anteater" on HackerOne and later in personal correspondence with Daniel) reported findings that turned into 5 genuine CVEs in curl:
In the curl 8.18.0 released January 8, 2026, we were in fact responsible for 3 of the 6 CVEs disclosed and fixed. After initial HackerOne reports, we moved to direct private communication with the curl security team, reporting over 30 additional issues, the majority of which were valid, true positive security issues (24 curl PRs now include some variant of “Reported-by: Stanislav Fort” as a result).
In October 2025, Daniel Stenberg wrote "A new breed of analyzers", acknowledging that some AI-driven security research was producing genuinely valuable results. He explicitly mentioned AI-driven discovery:
As we started to plow through the huge list of issues from Joshua, we received yet another security report against curl. This time by Stanislav Fort from Aisle (using their own AI powered tooling and pipeline for code analysis). Getting security reports is not uncommon for us, we tend to get 2-3 every week, but on September 23 we got another one we could confirm was a real vulnerability. Again, an AI powered analysis tool had been used.
In his curl 2025 year in review, under "AI improvements," Daniel Stenberg even wrote directly:
A new breed of AI-powered high quality code analyzers, primarily ZeroPath and Aisle Research, started pouring in bug reports to us with potential defects. We have fixed several hundred bugs as a direct result of those reports – so far.
This is a clear example of a common pattern: the top of a distribution bifurcating from its median. Mass adoption collapsed the median quality ("slop" killed the bug bounty, a very viral story for people who assume a priori that AI is bad at things), but simultaneously raised the ceiling (we found many real vulnerabilities that the curl team valued enough to patch, assign CVEs to, and pay bounties for).
The era of AI cybersecurity is here for good
The evidence is in my view no longer anecdotal. Across two of the most critical, well-audited, and security-conscious codebases on the planet, we see a very clear signal.
OpenSSL
15 CVEs discovered by AISLE's AI system across late 2025 and early 2026 (13 of 14 total CVE-2025-* plus 2 CVE-2026-*)
12 of 12 CVEs in the single most recent release
4 additional vulnerabilities caught before they shipped
Patches contributed and accepted into official releases
curl
5 CVEs discovered and patched using AISLE's AI
3 of 6 CVEs in the curl 8.18.0 release
"Several hundred bugs" fixed as a result of reports from us and other AI-based tools, per the maintainer
These are external validations from projects with every incentive to be skeptical. OpenSSL and curl maintainers don't hand out CVEs as participation trophies. They have conservative security postures, limited time, and (especially in curl's case) deep frustration with low-quality AI submissions. When they accept a vulnerability, patch it, assign a CVE, and publicly credit the reporter, that's as close to ground truth as security research gets. That’s why we chose this to be our ultimate evaluation.
Future outlook
We don't yet know the true underlying number of vulnerabilities in OpenSSL, so we can't say what dent we're making in its overall security. We also don't yet know whether offense or defense benefits more from these capabilities. Time will tell. If we keep tracking CVE counts, severities, and real-world impact, we'll see whether this translates into meaningfully fewer exploitable bugs in production in the years to come (I believe it will).
Here's what we do know: AI can now find real security vulnerabilities in the most hardened, well-audited codebases on the planet. The capabilities exist, they work, and they're improving rapidly.
I personally believe this advantages defense. If this pattern continues and we keep finding and fixing vulnerabilities faster than they can be exploited, particularly in foundational libraries like OpenSSL that the rest of the ecosystem inherits from, we get compounding security returns. The hard part was always the discovery; remediation scales more easily once you know what to fix (at least in key projects that get updated often).
We're not there yet, but the trajectory is clear. The time of AI-driven vulnerability discovery is here, and the evidence suggests that it can be pointed at making critical infrastructure genuinely more secure. I am therefore hopeful and positive about the future of cybersecurity in the strong AI era.
This is a partial follow-up to AISLE discovered three new OpenSSL vulnerabilities from October 2025.