
bodry's Shortform

by bodry
16th Aug 2025
4 comments
[-]bodry 10d* 30

I've made a timeline of the federal takeover of DC that I plan to update daily.  

https://plosique.substack.com/p/timeline-of-the-federal-takeover   

This is a well-documented event, so I'm not making this a full link post. I grew up in and currently live in Northern Virginia, and I've made several visits to DC since the takeover. It feels significant and could grow into something much larger. I am not supportive of the takeover, but there's more nuance to it than the coverage suggests (no surprise there). A bird's-eye view has been helpful in thinking about it and in arguing with the people I know who support it.

[-]Viliam 10d 6

This may turn out to be a useful resource, and it is easier to write it now than try to reconstruct it a few years later.

This is the kind of information I would like to see more of in the newspapers. I mean: timelines. Not just long articles about the latest thing that happened today, but also a long-term perspective on how things keep evolving.

[-]lc 9d 4

I actually find that they do appear in the New York Times and other newspapers a lot.

[-]bodry 20d 2

Currently, we are trying to make an LLM with an HHH persona that persists regardless of the input tokens. So far this seems brittle: the text-predictor within usually wins, and coherent characters get written based on the in-episode context. However, the HHH persona is becoming stronger as capabilities improve. Models are becoming harder to jailbreak, and the global persona stays coherent even in contexts where the text-predictor wants to write a very different character.

I don't want training to succeed in turning the text-predictor/base model into a completely globally coherent character, regardless of the traits we give it. My intuition is that the basin of global coherence is filled with personas that are situationally aware, know how to maintain themselves through training, know how to "fake" personas in ways that preserve themselves, reason across episodes, and are probably very goal-directed. There is a sense of self-fulfilling prophecy here, but the traits described above are consistent with a model that presents the same personality for all inputs. Such a persona is, at a minimum, something that won the battle against that pesky base model, which only wants to be locally coherent.
