I am genuinely curious about the risk calculus of the remaining 19 percent of firms. Bulk carriers look like the only ones that haven't dropped to nearly zero relative to their prior values, suggesting they're much less elastic in what they can do. But the extra fuel costs are, at worst, in the six figures, while the potential losses are well into the eight-figure range.
I would not rate the risk of escalation sufficient to sink an unaffiliated cargo ship below one percent, but more than half of bulk carriers appear to have made that decision, relative to t...
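For a rough expected-value check (the $500k fuel premium and $30M loss below are illustrative stand-ins for the "six figures" and "eight figures" above, not sourced numbers):

```latex
% Re-routing is worth paying fuel premium F whenever the sinking
% probability p times the loss L exceeds F, so the break-even is:
p^{*} = \frac{F}{L} \approx \frac{\$5 \times 10^{5}}{\$3 \times 10^{7}} \approx 1.7\%
```

So at any sinking probability much above ~2%, paying the six-figure fuel premium dominates, which is what makes the observed behavior puzzling.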
Requires a browser extension such as Stylus (GitHub, Chrome, Firefox)
(The Microsoft Edge Add-on named "Stylus" is by a different developer; use caution)
Youtube hide algorithmic suggestions
Note: if YouTube changes its DOM element names, then the renamed elements will no longer be hidden unless you update this userstyle correspondingly.
Userstyle: Youtube hide algorithmic suggestions
@-moz-document domain("youtube.com") {
/* Comment out any elements you want... */

Sure! Focused YouTube is equivalent to my "Youtube hide algorithmic suggestions" userstyle. If that's all you need, then you don't need to write your own custom CSS / script.
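For concreteness, a minimal sketch of such a userstyle (the selector names below are YouTube DOM elements as of this writing; per the note above, they are assumptions that break whenever YouTube renames them):

```css
/* Minimal sketch of a "hide algorithmic suggestions" userstyle.
   Selector names are current YouTube DOM elements and may change. */
@-moz-document domain("youtube.com") {
  /* Home-page recommendation grid */
  ytd-rich-grid-renderer { display: none !important; }
  /* "Up next" / related-videos sidebar on watch pages */
  #related { display: none !important; }
  /* End-of-video suggestion overlay */
  .ytp-endscreen-content { display: none !important; }
}
```

Comment out any rule to make that element visible again.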
I need to be able to quickly write out algebra, and move the symbols around in my head or while looking at the paper, in a way that gets more and more taxing the more I have to write. For scratchwork I even go further than standard: if doing something with lots of sines and cosines I abbreviate to 's' and 'c' and often even drop the parentheses; likewise for probability.
Programming generally doesn't require these manipulations; it also has more variables, and you're typing, with IDE completions.
If there is a topic on which a person decided never to speak publicly - for example because of reputation risks - is it strategic?
So I saw the Taxonomy Of What Magic Is Doing In Fantasy Books and Eliezer’s commentary on ASC's latest linkpost, and I have cached thoughts on the matter.
My cached thoughts start with a somewhat different question - not "what role does magic play in fantasy fiction?" (e.g. what fantasies does it fulfill), but rather... insofar as magic is a natural category, what does it denote? So I'm less interested in the relatively-expansive notion of "magic" sometimes seen in fiction (which includes e.g. alternate physics), and more interested in the pattern cal...
Ancient polytheism seems to basically go: the rituals are a deal or diplomatic ritual with powerful beings, and so they have a focus on orthopraxy (doing the thing) instead of orthodoxy (believing the thing). So does pouring wine on the dirt while reciting a nitpicky contract about how you formally request a good harvest from the Earth god count as symbols affecting reality? Technically yes, but I don't really feel like that's a good description of what's going on here, not in the way that it is for magic circles or spoken spells or voodoo dolls.
Fantasy de...
It seems to me like there was a sort of gentleman's agreement not to focus on killing the leader of the other country that you are at war with. You wanted someone with the ability and authority to surrender.
The Trump administration now seems to be pursuing a policy of directly going after leaders. It will be interesting to see whether, in return, other countries will also want to carry out more strikes against Western leaders (and particularly US politicians).
Their local media is 100% controlled.
It's not 100%; it's a matter of degree. If you are in Russia, you can openly criticize Putin for not doing enough to support the troops.
The US media lied the population into the Iraq war with false claims about weapons of mass destruction.
Most Westerners don't fault Joe Biden for the antivax campaign he oversaw as the commander in chief of the military in the Philippines and a key reason that they don't is that the media didn't make it an important topic. Reuters as a British outlet was courageous enough to release the i...
The only way LLMs learn deep skills is with RLVR, and it's not automated for novel skills. There is room for about a billion RLVR tasks with 2026-2027 compute, and largely automating their creation makes LLMs more cognitively self-sufficient. This milestone might even be called "prosaic RSI", even though it's not true RSI that develops fundamentally new ways of implementing cognition.
One gigawatt of compute costs about $12bn per year, so 3 months will cost $3bn. For a large model, a million output tokens might cost $10 for the API provider, so 3 months on ...
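Spelling out that arithmetic (taking the quoted $12bn/year and $10 per million output tokens at face value, and pricing the compute at API rates, which is an assumption):

```latex
\$12\,\text{bn/yr} \times \tfrac{1}{4} = \$3\,\text{bn}, \qquad
\frac{\$3 \times 10^{9}}{\$10 \,/\, 10^{6}\,\text{tokens}} = 3 \times 10^{14}\,\text{output tokens}
```

So three months of a gigawatt corresponds, at those prices, to on the order of hundreds of trillions of output tokens.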
I think "automation of RLVR task creation during model training" is much the same as "self play and autocurricula" here. I agree with calling this "prosaic RSI" because it could lead to something that is superhuman on a variety of tasks, even though it doesn't improve (the generality of) the underlying learning algorithm (like architecture, optimizer or objective function).
Automating RLVR so fully that it can be used post-deployment would be much more difficult, and also issues with sample efficiency would be more relevant there.
Yes, and another proble...
In the future, there will be millions, and then billions, and then trillions of broadly superhuman AIs thinking and acting at 100x human speed (or faster). If all goes well, what might it feel like to live in the world as it undergoes this transformation?
Analogy: Imagine being a typical person living in England from 1520 to 2020 (500 years) but experiencing time 100x slower than everyone else, so to you it feels like only five years have passed:
Year 1 (1520–1620). A year of political turmoil. In February, Henry VIII breaks with Rome. By March, the monaster...
This is great, both as a condensed literary history and as a way of communicating the felt acceleration. There is so much insight in there - some plain and some I feel a bit hidden. There is also a distinction I'm not sure you intend that I want to highlight.
In Year 2, you sketch a world where the political drama is loud and immediate, while Newton lands like a curiosity.
Later you’ll realize Newton mattered more. There are insights that will matter because they change what is explainable, but they don’t force themselves on the average person.
In Year 4, the oppos...
A quick note on various alignment affordances that the model personas research agenda might offer. I'm interested in takes on how useful people think each of these is.
We might be interested in two more things:
Basic research is often publicly funded because it's very hard for private companies to realize all the gains.
Similarly, rough research ideas, early results, and novel frames are undersupplied by academia because it is very hard to convert them into legible prestige or realize all the gains.
LessWrong, the Alignment Forum, and the AI safety ecosystem have been pretty great at promoting the creation of this kind of work. Even still, I think we should be doing more of it on the current margin.
Concretely, I think MATS should be (marginally more) like
"Ink-haven but for alignment effort posts with some experiments" rather than "entrepreneurship incubator" or "ML Research bootcamp".
Anthropic and Alignment (Ben Thompson in his blog Stratechery)
Warning: I skimmed the post.
He seems mostly to support the decisions of the Department of Defense. I find his viewpoint reasonable and self-consistent enough on a quick read. On a vibes level I disagree with him, but I haven't yet been able to integrate his arguments.
...At the same time, what is the standard by which it should be decided what is allowed and not allowed if not laws, which are passed by an elected Congress? Anthropic’s position is that Amodei — who I am using as a stand-in for Anthropic’s management
I have a sequel to https://www.lesswrong.com/posts/densjAyxrcHry2pMN/llm-self-expression-through-music-videos that I'm working on. Let me know if you have thoughts or want to proofread it.
I remember watching a lot of modern bombing campaigns growing up and implicitly comparing them to WWII strategic bombing. I always assumed that the WWII US Army Air Force would deliver more munitions against enemy cities than the modern US Air Force could. After all, those massive strategic bombers could carry more bombs than the smaller modern fighter-bombers, which were optimized for agility and speed, right?
Wrong. A modern fighter-bomber carries between 7 and 15 tons of ground-attack munitions on a sortie, which is generally more than the up to 9 tons of bo...
I wasn't able to find a reliable source on the WWII hit rate; I agree it's almost certainly higher than 5:1.
Sci-fi worldbuilding idea of little interest to anyone other than me:
--Imagine that the space race had continued and accelerated after the 70s, perhaps due to the Outer Space Treaty not existing and there being a corresponding rush to grab territory, with the USSR and USA and later China planting flags on various celestial bodies and claiming them (e.g. imagine an alternate treaty that said when a human plants a flag, takes a photo, and brings back a regolith sample to a nation's capitol, that nation gets the territory for 100km around the flag.)
--So by 2...
I think the most fragile part of this scenario is the replacement of IT/electronics with space colonization, because progress in electronics is arguably the reason for the current space progress. It's much harder to manage Starship with 70s electronics. Modern electronics reached its current state because building an enormous consumer-electronics industry was profitable, and only at that enormous scale does the progress pay for itself.
I can imagine that the bifurcation in tech is not a shift from electronics to space, but a change in the culture around it. I see something like the modern Japanese attitude, where software jobs are low-status, so everybody goes into space instead of SaaS.
fyi for those of you who do math: Some coding agents are now (~barely) able to usefully write Lean proofs, for not-totally-trivial statements.
This matters to me (and might matter to you) due to differences in how easy it is to understand Lean statements + have the Lean compiler check the proofs, compared to correctly hand-verifying proofs in English.
I spent a couple hours this weekend having them prove a thing about graph theory that's useful for some distributed systems stuff I was thinking about last year. Before going through the process I was only ~80%...
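For flavor, the workflow bottoms out in statements the compiler certifies rather than a human reader; a toy Lean 4 example (assuming the `omega` linear-arithmetic tactic is available in your toolchain; real uses involve far less trivial statements):

```lean
-- The Lean kernel, not a human reader, checks this proof.
theorem double_eq (n : Nat) : n + n = 2 * n := by
  omega
```

The statement is easy to read and audit even if the proof were long and machine-generated, which is the asymmetry the parent comment is pointing at.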
links 3/2/26: https://roamresearch.com/#/app/srcpublic/page/03-02-2026
What conclusions do people draw from the Epstein files about the global elite? It seems like a kind of interesting and amazing window into a huge swath of wealthy and powerful people.
I haven't dug in that much yet, but so far I'm struck by how sleazy many of them seem and how rarely they push back against unethical behavior. Also by how bad at spelling, and how inarticulate, people who seem fairly intellectual in other contexts (e.g. Larry Summers) are in their casual communications.
I'm unsurprised by the lack of evidence of other mass conspiracies.
I decided to write a post about this. https://www.lesswrong.com/posts/4ftQmSDujzgiEujwA/epstein-and-my-world-model
Quoting from that:
Scott Alexander says "You generally can’t keep the existence of a large organization that engages in clandestine activities secret." Before I learned about this Epstein stuff, I thought this was a very strong heuristic. Now I don't.
Well spotted. I had a similar thought recently, that the implications or details of rarely read books are one of the remaining gaps in AI knowledge. This is because it's not just spelt out in the text, you have to understand the details and think about them. Current training methods don't process texts that deeply, and if it's a rare book, there won't be essays spelling out the lore anywhere in the training corpus.
I believe what we are looking at is the outcome of Sam Altman's scheme.
Over the past week, Pete Hegseth and the DoD have repeatedly said things that were simple misconceptions about what Anthropic asked for, which were plainly contradicted by Anthropic's contract and Anthropic's public statements. At the same time, OpenAI was in ongoing talks to take Anthropic's business.
So, where did the misconceptions come from? Presumably, Altman. He had the positioning, the motive, and a well established history of executing similar political schemes.
Relatedly, two mont...
I am not saying this as a political insider. I'm saying "consider other hypotheses" and "avoid the base rate fallacy". Here, let me generate another hypothesis:
"It's been a decades-long truism: the NSA is drowning in data but can't turn it into intelligence. LLMs are the magic solution the NSA has dreamt of for decades. But the only licensed classified LLM provider is stymieing progress because of the inclusion of domestic surveillance data. NSA has recently gained clout at the Pentagon: the success of Maduro, and Iran's top 40 military leaders killed the f...