
Lucie Philippon

Comments

Sorted by
Newest
2Lucie Philippon's Shortform
3y
10
the void
Lucie Philippon · 26d · 21

IMO Janus's mentoring during MATS 3.0 was quite impactful, as it led @Quentin FEUILLADE--MONTIXI to start his LLM ethology agenda and to cofound PRISM Eval.

I expect there's still a lot of potential value in Janus's work that can only be realized by making it more legible to the rest of the AI safety community, be it through mentoring or posting on LW.

I wish someone in the cyborgism community would pick up the ball of explaining these insights to outsiders. I'd gladly pay for a subscription to their Substack, and help them find money for this work.

the void
Lucie Philippon · 1mo · 10

Yeah, the last post was two years ago. The Cyborgism and Simulators posts improved my thinking and AI strategy. The void may become one of those key posts for me, and it seems it could have been written much earlier by Janus himself.

the void
Lucie Philippon · 1mo · 12

AFAIK Janus does not publish posts on LessWrong detailing what he has discovered and what it implies for AI safety strategy.

the void
Lucie Philippon · 1mo · 74

Positive update on the value of Janus and his crowd.

Does anyone have an idea of why such insights don't usually make it into the AI safety mainstream? It feels like Janus could have written this post years ago, but somehow did not. Do you know of other models of LLM behaviour like this one that still haven't had their "nostalgebraist writes a post about it" moment?

Caleb Biddulph's Shortform
Lucie Philippon · 3mo · 9 · -6

Agreed that the current situation is weird and confusing.

The AI Alignment Forum is marketed as the actual forum for AI alignment discussion and research sharing. However, it seems that the majority of discussion has shifted to LessWrong itself, in part because most people are not allowed to post on the Alignment Forum, and because most AI safety related content is not actual AI alignment research.

I basically agree with Reviewing LessWrong: Screwtape's Basic Answer. It would be much better if AI safety related content had its own domain name and home page, with some curated posts flowing to LessWrong and the EA Forum so the communities stay aware of each other.

Insights from a Lawyer turned AI Safety researcher (ShortForm)
Lucie Philippon · 3mo · 20

I did not know about this either. Do you know whether the EAs in the EU Commission know about it?

Building Communities Beyond the Bay
Lucie Philippon · 3mo · 10

Thanks for the feedback! It made more sense as an event title. I'll edit it.

The Semi-Rational Militar Firefighter
Lucie Philippon · 4mo · 31

It seems your link "How to Save Lives & Offend Generals" is empty.

Drake Thomas's Shortform
Lucie Philippon · 6mo · 10

Earlier discussion on LW about the effectiveness of zinc lozenges mentioned that flavorings which make them taste nice actually prevent the zinc effect.

From this comment by philh (quite a chain of quotes haha):

According to a podcast that seemed like the host knew what he was talking about, you also need the lozenges to not contain any additional ingredients that might make them taste nice, like vitamin C. (If it tastes nice, the zinc isn’t binding in the right place. Bad taste doesn’t mean it’s working, but good taste means it’s not.) As of a few years ago, that brand of lozenge was apparently the only one on the market that would work. More info: https://www.lesswrong.com/posts/un2fgBad4uqqwm9sH/is-this-info-on-zinc-lozenges-accurate

That's why the peppermint zinc acetate lozenge from Life Extension is the recommended one. So your only other option might be somehow finding unflavored zinc lozenges, which might taste even worse? Not sure where those might be available.

Index of rationalist groups in the Bay Area June 2025
Lucie Philippon · 6mo · 10

It seems that @Czynski changed the structure of the website and that entries are now stored in this folder.

Maybe you could DM him?

Wikitag Contributions

Lighthaven · 5mo · (+713)
Journaling · 3y · (+18/-18)

Posts

58 · How I switched careers from software engineer to AI policy operations · 3mo · 1
8 · Building Communities Beyond the Bay · 3mo · 2
40 · Index of rationalist groups in the Bay Area June 2025 · 1y · 14
18 · Determining the power of investors over Frontier AI Labs is strategically important to reduce x-risk · 1y · 7
6 · [Research log] The board of Alphabet would stop DeepMind to save the world · 1y · 0
111 · Introduction to French AI Policy · 1y · 12
19 · Overview of introductory resources in AI Governance · 1y · 0
23 · Thriving in the Weird Times: Preparing for the 100X Economy · 2y · 16
2 · Lucie Philippon's Shortform · 3y · 10
5 · [Rough notes, BAIS] Human values and cyclical preferences · 3y · 0