kave

Hello! I work at Lightcone and like LessWrong :-). I have made some confidentiality agreements I can't leak much metadata about (like who they are with). I have made no non-disparagement agreements.

Comments (sorted by newest)

On working 80%
kave · 1d

I think retiring is hard for lots of people cos they don't really change their minds about this

How AI Manipulates—A Case Study
kave · 2d

I'd be interested to read the full transcript. Is that available anywhere? Sorry if I missed it

The "Length" of "Horizons"
kave · 2d

Yup. The missing assumption is that setting up and running experiments is inside the funny subset, perhaps because it's fairly routine

The "Length" of "Horizons"
kave · 2d

A version of the argument I've heard:

AI can do longer and longer coding tasks. That makes it easier for AI builders to run different experiments that might let them build AGI. So either it's the case that both (a) the long-horizon coding AI won't help with experiment selection at all and (b) the experiments will saturate the available compute resources before they're helpful; or, long-horizon coding AI will make strong AI come quickly.

I think it's not too hard to believe (a) & (b), fwiw. Randomly run experiments might not lead to anyone figuring out the idea they need to build strong AI.

The Moral Infrastructure for Tomorrow
kave · 3d · Moderator Comment

Mod here. This post violates our LLM Writing Policy for LessWrong, so I have delisted it; it's now only accessible via direct link. I've not returned it to the user's drafts, because that would make the comments hard to access.

@sdeture, we'll remove posting permissions if you post more direct LLM output.

Generalization and the Multiple Stage Fallacy?
kave · 8d

I think the larger effect is treating the probabilities as independent when they're not.

Suppose I have a jar of jelly beans, which are either all red, all green or all blue. You want to know what the probability of drawing 100 blue jelly beans is. Is it $\left(\frac{1}{3}\right)^{100} \approx 2 \cdot 10^{-48}$? No, of course not. That's what you get if you multiply 1/3 by itself 100 times. But you should condition on your results as you go: $P(\text{jelly}_1 = \text{blue}) \cdot P(\text{jelly}_2 = \text{blue} \mid \text{jelly}_1 = \text{blue}) \cdot P(\text{jelly}_3 = \text{blue} \mid \text{jelly}_1 = \text{blue}, \text{jelly}_2 = \text{blue}) \cdots$

Every factor but the first is 1, so the probability is $\frac{1}{3}$.
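For concreteness, here's a minimal sketch in Python of the sequential conditioning described above (the colour labels and the `prob_all_blue` helper are illustrative assumptions, not anything from the thread):

```python
from fractions import Fraction

# Sketch only: the jar is all one colour, with prior 1/3 on each colour.
colours = ["red", "green", "blue"]
prior = {c: Fraction(1, 3) for c in colours}

def prob_all_blue(n_draws: int) -> Fraction:
    """P(first n_draws beans are all blue), conditioning on each draw as we go."""
    p = Fraction(1)
    posterior = dict(prior)
    for _ in range(n_draws):
        p *= posterior["blue"]  # P(next bean is blue | beans seen so far)
        # Having seen a blue bean, only the all-blue jar remains possible.
        posterior = {c: (Fraction(1) if c == "blue" else Fraction(0)) for c in colours}
    return p

print(prob_all_blue(100))  # 1/3, not (1/3)**100 ≈ 2e-48
```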

CFAR update, and New CFAR workshops
kave · 11d

I know many of you folks care a lot about how AI goes. I'm curious how you connect that with – or actively disconnect that from – the new workshops.

The question I'm most interested in: do you have a set of values you intend the workshops to do well by, that don't involve AI, and that you don't intend to let AI pre-empt?[1][2]

I'm also interested in any thinking you have about how the workshops support the role of x-risk, but if I could pick one question, it'd be the former.

[1] At least in a given workshop. Perhaps you'd stop doing the workshops overall if your thoughts about AI changed

[2] Barring edge cases, like someone trying to build an AGI in the basement or whatever

Open Thread Autumn 2025
kave · 13d

It looks like last year it was Fall, and the year before it was Autumn.

Open Thread Autumn 2025
kave · 13d

I agree, but that's controlled by your browser, and not something that (AFAIK) LessWrong can alter. On desktop we have the TOC scroll bar, which shows how far through the article you are. Possibly on mobile we should have a horizontal scroll bar for the article body.

Raemon's Shortform
kave · 16d

(I think, by 'positive', Ben meant "explain positions that the group agrees with" rather than "say some nice things about each group")

Posts (sorted by new)

Karma · Title · Age · Comments
17 · Open Thread Autumn 2025 · 14d · 53
31 · What are the best standardised, repeatable bets? [Question] · 6mo · 10
78 · Gwern: Why So Few Matt Levines? · 1y · 10
62 · Linkpost: Surely you can be serious · 1y · 8
151 · Daniel Dennett has died (1942-2024) · 1y · 5
577 · LessWrong's (first) album: I Have Been A Good Bing · 2y · 183
5 · kave's Shortform [Ω] · 2y · 12
167 · If you weren't such an idiot... · 2y · 76
105 · New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2y · 64
41 · On plans for a functional society · 2y · 8

Wikitag Contributions

Bayes' rule · 2 months ago · (+12/-35)
Vote Strength · 2 years ago · (-35)