I'm about to go to sleep but I am a bit confused about Epstein stuff.
In 2019, @Rob Bensinger said that "Epstein had previously approached us in 2016 looking for organizations to donate to, and we decided against pursuing the option;"
Looking at the Justice Dept releases (which I assume are real) (eg https://www.justice.gov/epstein/files/DataSet%209/EFTA00814059.pdf)
That doesn't feel like a super accurate description. It seems like there was a discussion with Epstein after it was clear he had been involved in pretty bad behaviour. (In 2008 he pleaded guilty...
Where do you think I was "spamming"?
I've heard many say that "neuralese" is superior to CoT and will inevitably supplant it. The usual justification is that the bandwidth of neuralese is higher, which will make it better. But (1) higher bandwidth isn't always an advantage, and might not be one in this case, and (2) even if it is, there are other factors that could theoretically push in the opposite direction.
Has anyone cleanly made the case for why neuralese is better or asymptotically technically inevitable, at length / clearly?
(In agreement): Neuralese is ~equivalent to wrapping your model as a DEQ (deep equilibrium model) with the residual stream shifted by one on every pass, as far as I can tell, and it's not obvious to me that this is the relevant One Weird Trick. The neural network already has a way to shuttle around vast amounts of cryptic high-dimensional data: the neural network part of the neural network.
It seems much more likely to me that the relevant axis of scaling is something like a byte-latent transformer with larger and larger patches.
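To make the contrast concrete, here's a toy sketch (PyTorch, with a single transformer layer standing in for a full model; all sizes and names are illustrative, not any lab's actual architecture) of token recurrence versus hidden-state recurrence:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for a real LLM; sizes are illustrative only.
d_model, vocab = 64, 100
embed = nn.Embedding(vocab, d_model)
unembed = nn.Linear(d_model, vocab)
block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)

def cot_step(seq_emb):
    # CoT-style recurrence: the step's output is squeezed through one
    # discrete token, i.e. ~log2(vocab) ~= 6.6 bits per step.
    h = block(seq_emb)                      # [batch, seq, d_model]
    tok = unembed(h[:, -1]).argmax(-1)      # hard token bottleneck
    return torch.cat([seq_emb, embed(tok)[:, None]], dim=1)

def neuralese_step(seq_emb):
    # "Neuralese" recurrence: the raw final hidden state (d_model floats)
    # is fed back in -- the residual stream shifted by one per pass,
    # much like a DEQ-style loop.
    h = block(seq_emb)
    return torch.cat([seq_emb, h[:, -1:]], dim=1)

x = embed(torch.randint(0, vocab, (1, 8)))  # a random 8-token "prompt"
for _ in range(4):
    x = neuralese_step(x)                   # or cot_step(x)
print(x.shape)                              # torch.Size([1, 12, 64])
```

The only difference between the two steps is whether the output is squeezed through a discrete token or passed back as d_model raw floats; that gap is the entire bandwidth argument.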
If Hitler is our default example for "someone who killed (caused to die) a lot of people on purpose", who could be our default example for "someone who killed (caused to die) a lot of people by negligence"?
Disclaimer: I am not implying that Hitler killed more people than Stalin or Mao, etc. I am just saying that he is a convenient example; if you mention him, people know what you meant. I am looking for similarly convenient examples of criminal negligence.
(Inspired by this thread, but didn't want to go off topic.)
Suppose we had a functionally infinite amount of high-quality RL-/post-training environments, organized well by "difficulty," and a functionally infinite amount of high-quality data that could be used for pretraining (caveat: from what I understand, the distinction between these may be blurring).
In that case, what pace of model releases from AI labs would one expect in the short term? I ask because I see the argument made that AI could help speed up AI development in the near to medium term. But it seems like the main limiting factor is just the am...
I liked the design when I saw it today, but I would also like aggregate statistics like comment count / post count / recent activity, perhaps even something like GitHub's calendar showing activity for each day. It would also be good to retain a bio with a self-description and, optionally, URLs to websites or social media accounts.
links 2/11/26: https://roamresearch.com/#/app/srcpublic/page/02-11-2026
if longevity ever really works, it would be creating a new sort of creature with a mindset very different from a "normal"-lifespan human
Yeah. It's like, if you imagine a world where everyone dies at the age of 25, it would seem obvious that they would be missing some psychological things, both on the individual level, and on the social level. So the same should be true in the opposite direction.
Though I wonder how much we can extrapolate from the existing data, or whether something unexpected would happen. Not all things are linear. (For example, male crim...
The music industry has already passed the threshold for digital AGI, but seems to be experiencing dramatically less disruption than AI song-generation capabilities might suggest. Anyone can create a song of expert-level quality in under 10s with an automatically generated prompt (or write one themselves if they want). The Chainsmokers took about 8 hours in total to make the single Roses, which has over 1.3bn streams as of 02/10/2026 and I believe is on the quicker end of writing time for pop songs, although I am not sure reliable data exists for this anywh...
the bottleneck of music productivity is in capturing attention
Yeah. I think the same applies to capitalism in general; these days it is relatively simple to produce many things, but the problem is that a thousand competitors can do the same thing, so the key is to convince the customer to buy your product instead of their mostly identical products.
Now I am not saying that good songs are identical, but they are in some sense fungible. Like, if you have a favorite song, it is easy to imagine a parallel universe where it doesn't exist, and something else is your fa...
Model to track: You get 80% of the current max value LLMs could provide you from standard-issue chat models and any decent out-of-the-box coding agent, both prompted the obvious way. Trying to get the remaining 20% that are locked behind figuring out agent swarms, optimizing your prompts, setting up ad-hoc continuous-memory setups, doing comparative analyses of different frontier models' performance on your tasks, inventing new galaxy-brained workflows, writing custom software, et cetera, would not be worth it: it would take too long for too little payoff....
My guess is the average person on LW should be spending around 10 hours a week trying to figure out how to automate themselves or other parts of their job using LLMs.
Yeah. I am nowhere near doing this systematically, but I noticed that whatever I am doing, it makes sense to ask "could I use an LLM to help me with this?" That includes even things like reading Reddit -- now the LLM could read it for me, and just give me a summary. (I haven't tried this yet.)
It is even worth revisiting the old (pre-LLM) question of "could I automate this using a shell/Python ...
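For concreteness, here's a rough sketch of the "let the LLM read Reddit for me" idea (untested; the subreddit and model names are placeholders, and it assumes the openai Python SDK with an API key in the environment):

```python
import requests
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

def top_posts(subreddit: str, limit: int = 10) -> list[str]:
    # Reddit exposes listings as JSON; a custom User-Agent is required.
    url = f"https://www.reddit.com/r/{subreddit}/top.json?limit={limit}&t=day"
    resp = requests.get(url, headers={"User-Agent": "daily-summary-script"})
    resp.raise_for_status()
    posts = resp.json()["data"]["children"]
    return [p["data"]["title"] + "\n" + p["data"].get("selftext", "")[:500]
            for p in posts]

def summarize(subreddit: str) -> str:
    client = OpenAI()
    digest = "\n---\n".join(top_posts(subreddit))
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": "Summarize today's top posts in a few bullet "
                              "points, noting anything unusual:\n" + digest}],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    print(summarize("MachineLearning"))
```

Run it from cron once a day and the "reading Reddit" part of the morning routine becomes a few bullet points in the inbox.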
Every now and then I meet someone who tries to argue that if I don't agree with them, it's because I'm not open-minded enough. Is there a term for this?
Epistemically I'm not convinced by this type of argument, but socially it feels like I'm being shamed, and I hate it.
I also find it hard to call out this type of behaviour when it happens, even when I can tell exactly what is going on. I think if I had a name for this behaviour it would be easier? Not sure though.
Edit to add:
I've now got some more time to figure out what I want and don't want out of this th...
A reply I got in a similar situation, paraphrased: "well, it's you who identifies as a rationalist, you hypocrite."
In other words, as if calling myself a rationalist (not in that specific debate, just generally, something I said in a different context that my opponent knows about) means that I accept an asymmetrical burden.
Toughness is a topic I spent some time thinking about today. The way I think about it is that toughness is one's ability to push through difficulty.
Imagine that Alice is able to sit in an ice bath for 6 minutes and Bob is only able to sit in the ice bath for 2 minutes. Is Alice tougher than Bob? Not necessarily. Maybe Alice takes lots of ice baths and the level of discomfort is only like a 4/10 for her whereas for Bob it's like an 8/10. I think when talking about toughness you want to avoid comparing apples to oranges.
I suspect that toughness depends on t...
I feel like (2) is usually more appropriate than (1) [...] It usually makes sense to make the thing easier than to improve your ability to push past it.
I agree with this as a long-term strategy for dealing with repetitive problems. And I share the suspicion that "tough" people often have it easier (which may or may not be a result of their previous actions).
But sometimes life throws an unexpected thing at you, and then you roll for "toughness" and either pass or fail. (Though maybe you can also prepare for the unexpected by practicing many different things, thereby increasing the chance that some skill you have will be relevant to the new situation.)
Yeah, with all due respect, more people play Magic than know Zvi. (Outside of LessWrong.)
I know many members and influential figures of this community are atheists; regardless, does anyone think it would be a good idea to take a rationalist approach to religious scripture? If anything, doing so might introduce greater numbers of the religious to rationalism. Plus, it doesn't seem like anyone here has done so before; all the posts regarding religion have criticized it from the outside rather than explaining things within the religious framework. Even if you do not believe in said framework, doing so may increase your knowledge of other cultures, provide an interesting exercise in reasoning, and, most importantly, be useful in winning arguments with those who do.
I would find that difficult, because in my understanding, religion requires some sins against rationality: motivated thinking, privileging a hypothesis, writing your bottom line first...
I mean, most religions assume that a book written a few millennia ago, when people knew practically nothing about the world, because the scientific method didn't exist, and "a high-status person made it up" was a likely source of any statement that couldn't be immediately verified... that this book contains the true answers to the secrets of the universe and beyond. The obv...
According to ChatGPT 5.2 Jackson Kernion "is likely the same person" as Foreign Man in a Foreign Land.
I have used the same instance of a chat for many different topics, from music theory to scrabble ttg, from spiders to Lean. Apparently, the chat wanted to see some connection in that mess of random questions, and linked a Nebula subscription I had mentioned long ago to the new question.
A response to someone asking about my criticisms of EA (crossposted from twitter):
EA started off with global health and ended up pivoting hard to AI safety, AI governance, etc. You can think of this as “we started with one cause area and we found another using the same core assumptions” but another way to think about it is “the worldview which generated ‘work on global health’ was wrong about some crucial things”, and the ideology hasn’t been adequately refactored to take those things out.
Some of them include:
Maybe I'm missing something. Why are you comparing to that hypothetical world?
FYI, if anyone read my post The nature of LLM algorithmic progress last week, it’s now a heavily-revised version 2.
A lot of the rationalist discourse around birthrates doesn't seem to square with AGI predictions.
Like, the most negative predictions of AGI destroying humanity in the next century or two leave birthrates completely negligible as an issue. The positive predictions of AGI leave a high possibility of robotic child-rearing and artificial wombs (considering the amount of progress even us puny humans have already made) in the next century or two, which also makes natural birthrates irrelevant because we could just make and raise more humans without the n...
Yes, and I have a major example: one of the leading CEOs in the AI industry. He believes that AI will be more intelligent than all humans currently alive by 2030, while also saying birthrates are a top priority for all countries.
Why pin this one (notably crazy-seeming) guy's take on "a lot of the rationalist discourse"? He doesn't identify as a rationalist or post on LessWrong. And the rationalist discourse has long held that his impact models about AI were bad and wrong (e.g., that founding OpenAI made the situation dramatically worse, not better).
I was thinking of trying out the Sustained Attention to Response Task (SART) with response feedback ("SART 2"). I'm not sure how it compares to dual n-back (see Gwern's Dual n-Back FAQ).
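For anyone curious what the task looks like, here's a bare-bones terminal sketch (Unix-only, and not a validated psychometric implementation; the response window is my approximation of the standard ~1.15 s trial timing, and the feedback line is my guess at what "SART 2" adds):

```python
import random
import select
import sys

# Classic SART: digits 1-9 appear one at a time; respond (here: press
# Enter) to every digit EXCEPT the target 3, where you must withhold.

TARGET = 3
N_TRIALS = 20
WINDOW = 1.15  # response window in seconds, roughly the standard timing

def pressed_enter(timeout: float) -> bool:
    """Return True if the user pressed Enter within `timeout` seconds."""
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    if ready:
        sys.stdin.readline()
        return True
    return False

correct_count = 0
for _ in range(N_TRIALS):
    digit = random.randint(1, 9)
    print(f"\n{digit}", flush=True)
    responded = pressed_enter(WINDOW)
    # Correct = respond on non-targets, withhold on the target.
    correct = responded != (digit == TARGET)
    correct_count += correct
    print("correct" if correct else "WRONG", flush=True)  # trial feedback

print(f"\n{correct_count}/{N_TRIALS} correct")
```

A real implementation would also log reaction times and use a fixed stimulus/mask schedule, but this captures the go/no-go structure.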
My wife, who uses LLMs pretty much all day, says that Claude Opus 4.6 feels more 'mature' than 4.5.
Anthropomorphizing models is dangerous, but it's always a bit of a delight when I notice us talking about software using human personality traits. It clearly points at coherent concepts, pieces of software now have distinct 'personalities' that compare naturally to those of humans.
Anyway, I use LLMs a fair bit too, and tend to agree with my wife's assessment. Anecdotally, it is more cautious, more likely to catch itself going on a tangent, and tends towards more of a neutral stance than 4.5.
Does this match your experience?
The Dilbert Afterlife by Scott Alexander, Jan 16, 2026:
Michael Jordan was the world’s best basketball player, and insisted on testing himself against baseball, where he failed. Herbert Hoover was one of the world’s best businessmen, and insisted on testing himself against politics, where he crashed and burned. We’re all inmates in prisons of different names. Most of us accept it and get on with our lives. Adams couldn’t stop rattling the bars.
The EMH Aten't Dead by Richard Meadows, May 15, 2020:
...Which only leav