I don't know if OpenSSL actually goes through the process of minting CVEs for a lot of the security problems they patch, so this may sound more impressive than it is. My company has reported several similar memory corruption primitives, found by our scanner, to OpenSSL in the last month, and I'm not sure we ever got any CVEs for them.
Because AI security startups are trying to attract media attention, they have a habit of crediting findings to an AI when they actually involved a lot of human effort, especially when their tools are not publicly available. You should be healthily skeptical of anything startups report about their own work. For a practitioner's perspective on the state of security scanning, a blog post from last month provides a good independent overview as of that time: https://joshua.hu/llm-engineer-review-sast-security-ai-tools-pentesters
Full disclosure: we've since hired this guy, but we only reached out to him after he posted this blog.
As a rationalist, I also strongly dislike subtweeting.
One small thing I noticed when living in India is how the escalators would stop moving when people got off them, just to save a little power.
As soon as you convincingly argue that there is an underestimation, it goes away.
It's not a belief. It's an entire cognitive profile that affects how they relate to and interact with other people, and the wrong beliefs are adaptive. For nice people, treating other people you know as nice-until-proven-evil opens up a much wider spectrum of cooperative interactions. For evil people, genuinely believing the people around you are just as self-interested gives you a bit more cover to be self-interested too.
You left out the best part:
“Nishin,” you say. “Nobody is accepting your romantic overtures because of Twitter. Nobody is granting you power. Nobody is offering you money.”
Even if this rumor isn't true, it is strikingly plausible, and that alone is worrying.
Obviously the training data of LLMs contains more than human dialogue, so the claim that pretrained LLMs are "strictly imitating humans" is clearly false. I don't know why this was never brought up.
The background of the Stanislav Petrov incident is literally one of the dumbest and most insane things I have ever read.
Two of the bugs AISLE highlighted are memory corruption primitives. In certain situations they could be used to crash a program running OpenSSL (such as a web server), which is a denial-of-service risk. Because of modern compiler and OS mitigations, they can't on their own be used to read data or execute code, but they're still concerning: it sometimes turns out to be possible to chain primitives like these into more dangerous exploits.
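To make the crash-vs-exploit distinction concrete, here's a minimal sketch of the general pattern, not the actual OpenSSL bugs (the `parse_record` function and its unchecked length are hypothetical): the out-of-bounds write trips the compiler's stack protector, which aborts the process, a crash (DoS) rather than a data leak or code execution.

```c
/* Illustrative sketch only -- not the actual OpenSSL findings.
 * Build with a modern gcc/clang, where the stack protector is
 * typically on by default:
 *   cc -O2 -fstack-protector-strong demo.c -o demo
 */
#include <stdio.h>
#include <string.h>

static void parse_record(const char *input, size_t len) {
    char buf[16];
    /* Hypothetical bug: len is attacker-controlled and unchecked,
     * so this is an out-of-bounds write whenever len > 16. */
    memcpy(buf, input, len);
    printf("parsed: %.16s\n", buf);
    /* On return, the canary check detects the corruption and calls
     * __stack_chk_fail(), killing the process: a denial of service,
     * but not -- on its own -- a read or code-execution primitive. */
}

int main(void) {
    char oversized[64];
    memset(oversized, 'A', sizeof(oversized));
    parse_record(oversized, sizeof(oversized));
    return 0;
}
```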
The third bug is a timing side channel in a particular opt-in certificate algorithm that OpenSSL provides, when used on ARM architectures. It's a pretty niche circumstance, but it does look legitimate to me. The only way to know whether it's exploitable would be to try to build some kind of PoC.
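For flavor, here's the generic shape of this bug class (a sketch of the pattern, not the ARM-specific OpenSSL code): a comparison that bails out at the first mismatch leaks, through its running time, how many leading bytes of a guess match the secret, while a constant-time version does the same amount of work regardless of the data.

```c
#include <stddef.h>

/* Leaky: returns at the first mismatching byte, so execution time
 * depends on how long the matching prefix is -- an attacker who can
 * measure timing can recover the secret byte by byte. */
int leaky_compare(const unsigned char *a, const unsigned char *b, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (a[i] != b[i])
            return 0;
    }
    return 1;
}

/* Constant-time: touches every byte and accumulates the differences,
 * so the running time reveals nothing about where the inputs differ. */
int ct_compare(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```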
OpenSSL is a very hardened target, and lots of security researchers look at it. Any security-relevant bugs found in OpenSSL are pretty impressive.