LESSWRONG

AI Timelines | Computer Security & Cryptography | AI
AISLE discovered three new OpenSSL vulnerabilities

by Jan_Kulveit
30th Oct 2025

This is a linkpost for https://aisle.com/blog/aisle-discovers-three-of-the-four-openssl-vulnerabilities-of-2025


4 comments
peterbarnett

I would love for someone to tell me how big a deal these vulnerabilities are, and how hard people had previously been trying to catch them. The blog post says that two were severity "Moderate", and one was "Low", but I don't really know how to interpret this. 

lc

Two of the bugs AISLE highlighted are memory corruption primitives. They could be used in certain situations to crash a program that was running OpenSSL (like a web server), which is a denial of service risk. Because of modern compiler safety techniques, they can't on their own be used to access data or run code, but they're still concerning because it sometimes turns out to be possible to chain primitives like these into more dangerous exploits. 
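
To make the distinction concrete, here is a toy Python sketch (my own illustration, not OpenSSL or AISLE code) of why a memory-corruption bug that "only" crashes is a denial-of-service risk rather than a data or code-execution risk: the faulting worker process dies, and a supervisor observes lost availability but nothing else.

```python
import subprocess
import sys

# Hypothetical work snippets; ctypes.string_at(0) dereferences address 0,
# which raises SIGSEGV and kills the interpreter, standing in for a
# memory-corruption crash inside a library like OpenSSL.
CRASHING_WORK = "import ctypes; ctypes.string_at(0)"
NORMAL_WORK = "print('handled request')"

def run_worker(code: str) -> int:
    """Run `code` in a child interpreter, as a web server might isolate a
    request handler, and return the child's exit status."""
    return subprocess.run([sys.executable, "-c", code]).returncode

# The crashing worker exits with a non-zero (signal) status: that one
# request is lost, the supervisor survives. Availability is harmed, but no
# data is read and no attacker code runs.
```

On POSIX systems the crashing worker reports a negative return code (the signal number), while the normal worker returns 0; chaining such a primitive into anything stronger requires additional bugs, which is the concern lc raises above.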

The third bug is a "timing side-channel bug" in a particular opt-in certificate algorithm that OpenSSL provides, when used on ARM architectures. It's a pretty niche circumstance, but it does look legitimate to me. The only way to know whether it's exploitable would be to try to build a proof of concept (PoC).
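
The actual OpenSSL issue is ARM- and algorithm-specific, but the general class is easy to sketch. In this hedged Python illustration (names are mine, not OpenSSL's), a variable-time byte comparison exits at the first mismatch, so its running time leaks how long the matching prefix is, while the constant-time version inspects every byte:

```python
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    # Variable-time: returns at the first mismatching byte, so running time
    # grows with the length of the matching prefix. An attacker who can
    # measure it precisely can guess a secret one byte at a time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_compare(a: bytes, b: bytes) -> bool:
    # Constant-time: hmac.compare_digest inspects every byte regardless of
    # where the first mismatch occurs, so timing reveals nothing useful.
    return hmac.compare_digest(a, b)
```

Both functions return identical results on every input; only their timing profiles differ, which is exactly why this bug class is invisible to output-based testing and why a PoC is needed to show real-world exploitability.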

OpenSSL is a very hardened target, and lots of security researchers look at it. Any security-relevant bug found in OpenSSL is pretty impressive.

lc

I don't know if OpenSSL actually goes through the process of minting CVEs for a lot of the security problems they patch, so this may sound more impressive than it is. My company has reported several similar memory corruption primitives to OpenSSL in the last month, found by our scanner, and I'm not sure we ever got any CVEs for them.

Because AI security startups are trying to attract media attention, they have a habit of crediting findings to an AI when they actually involved a bunch of human effort - especially when their tools are not publicly available. You should be healthily skeptical of anything startups report about themselves. For a practitioner's perspective on the state of security scanning, a blog post from last month provides a good independent overview: https://joshua.hu/llm-engineer-review-sast-security-ai-tools-pentesters[1]

  1. Full disclosure: we've since hired this guy, but we only reached out to him after he posted this blog.

Mikhail Samin

(I am not a security professional.)

  • All three seem low in real-world severity; two are bugs in places where I think people wouldn't have been looking as much; one is a controlled crash with no impact beyond potential DoS.
  • See this comment.
  • The timing side-channel bug is impressive to see discovered with AI. You need to notice that operations take different amounts of time, and then figure out that it's bad in this specific case.

"AISLE's system flagged the anomaly through deep analysis of memory access patterns and control flow" (quoting AISLE's post)

  • Unsure how much of this is due to scaffolding around LLMs vs. due to more traditional systems.

The company post is linked; it seems like an update on where we are with automated cybersecurity.

So far in 2025, only four security vulnerabilities received CVE identifiers in OpenSSL, the cryptographic library that secures the majority of internet traffic. AISLE's autonomous system discovered three of them. (CVE-2025-9230, CVE-2025-9231, and CVE-2025-9232)

Some quick thoughts:

  • OpenSSL is one of the most human-security-audited pieces of open-source code ever, so discovering three new vulnerabilities sounds impressive. How impressive exactly, I'm not sure: I'm curious about people's opinions
  • Obviously, vulnerability discovery is a somewhat symmetric capability, so this also gives us some estimate of the offense side
  • This provides concrete evidence for the huge pool of bugs that are findable and exploitable even by current-level AI; in my impression, this is something everyone sane already believed existed
  • On the other hand, it does not neatly support the story where it's easy for rogue AIs to hack anything. Automated systems can also fix the bugs, hopefully systems like this will be deployed defensively, and it seems likely the defense side will start with a large compute advantage
  • It's plausible that the "programs are proofs" limit is defense-dominated. On the other hand, actual programs are leaky abstractions of the physical world, and it's less clear what the limit is in that case.