LESSWRONG

lc

Comments

AISLE discovered three new OpenSSL vulnerabilities
lc · 3d

Two of the bugs AISLE highlighted are memory corruption primitives. In certain situations they could be used to crash a program running OpenSSL (such as a web server), which is a denial-of-service risk. Because of modern compiler safety techniques, they can't on their own be used to access data or run code, but they're still concerning: it sometimes turns out to be possible to chain primitives like these into more dangerous exploits.

The third bug is a timing side-channel in a particular opt-in certificate algorithm that OpenSSL provides, when used on ARM architectures. It's a pretty niche circumstance, but it does look legitimate to me. The only way to know whether it's exploitable would be to try to build a proof of concept (PoC).
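For readers unfamiliar with the class of bug being described: a timing side-channel arises when a secret-dependent branch makes an operation's running time vary with the secret. The sketch below is a generic Python illustration of the idea, not the actual OpenSSL/ARM bug; `leaky_compare` and `constant_time_compare` are hypothetical names chosen for this example.

```python
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so running time depends on
    # how many leading bytes match -- the classic timing leak. An attacker
    # who can measure response times can recover the secret byte by byte.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, so timing reveals nothing about the secret.
    return hmac.compare_digest(a, b)

secret = b"supersecretmac!!"
assert leaky_compare(secret, secret)
assert constant_time_compare(secret, secret)
assert not leaky_compare(secret, b"supersecretmac!?")
assert not constant_time_compare(secret, b"supersecretmac!?")
```

Both functions compute the same boolean answer; the difference is only observable through timing, which is exactly why a PoC is usually needed to show whether such a leak is exploitable in practice.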

OpenSSL is a very hardened target that lots of security researchers scrutinize, so any security-relevant bugs found in it are pretty impressive.

AISLE discovered three new OpenSSL vulnerabilities
lc · 3d

I don't know if OpenSSL actually goes through the process of minting CVEs for a lot of the security problems they patch, so this may sound more impressive than it is. My company has reported several similar memory corruption primitives to OpenSSL in the last month, found by our scanner, and I'm not sure we ever got any CVEs for them.

Because AI security startups are trying to attract media attention, they have a habit of crediting findings to an AI when the findings actually involved a lot of human effort - especially when their tools are not publicly available. You should be healthily skeptical of anything startups report about their own tools. For a practitioner's perspective on the state of security scanning, a blog post from last month gives a good independent overview: https://joshua.hu/llm-engineer-review-sast-security-ai-tools-pentesters[1]

  1. ^

    Full disclosure: we've since hired this guy, but we only reached out to him after he posted this blog.

1a3orn's Shortform
lc · 6d

As a rationalist, I also strongly dislike subtweeting.

Cheap Labour Everywhere
lc · 13d

One small thing I noticed while living in India is how the escalators would stop moving when people got off them, just to save a little power.

Shortform
lc · 14d

As soon as you convincingly argue that there is an underestimation, it goes away

It's not a belief. It's an entire cognitive profile that affects how they relate to and interact with other people, and the wrong beliefs are adaptive. For nice people, treating other people you know as nice-until-proven-evil opens up a much wider spectrum of cooperative interactions. For evil people, genuinely believing the people around you are just as self-interested gives you a bit more cover to be self-interested too.

Shortform
lc · 15d

Bad people underestimate how nice some people are and nice people underestimate how bad some people are.

Daniel Kokotajlo's Shortform
lc · 23d

You left out the best part:

“Nishin,” you say. “Nobody is accepting your romantic overtures because of Twitter. Nobody is granting you power. Nobody is offering you mon(ey)."

abramdemski's Shortform
lc · 1mo

Even if this rumor isn't true, it is strikingly plausible and worrying.

On Dwarkesh Patel’s Podcast With Richard Sutton
lc · 1mo

Obviously the training data of LLMs contains more than human dialogue, so the claim that pretrained LLMs are "strictly imitating humans" is clearly false. I don't know why this was never brought up.

Shortform
lc · 1mo

The background of the Stanislav Petrov incident is literally one of the dumbest and most insane things I have ever read.

21 · Beware LLMs' pathological guardrailing · 1mo · 1 comment
53 · Female sexual attractiveness seems more egalitarian than people acknowledge · 2mo · 27 comments
28 · Is the political right becoming actively, explicitly antisemitic? [Question] · 4mo · 16 comments
356 · Recent AI model progress feels mostly like bullshit · 7mo · 85 comments
46 · Virtue signaling, and the "humans-are-wonderful" bias, as a trust exercise · 9mo · 16 comments
132 · My simple AGI investment & insurance strategy · 2y · 28 comments
58 · Aligned AI is dual use technology · 2y · 31 comments
169 · You can just spontaneously call people you haven't met in years · 2y · 21 comments
5 · Does bulemia work? [Question] · 2y · 18 comments
23 · Should people build productizations of open source AI models? [Question] · 2y · 0 comments