Tomás B.'s Shortform

by Tomás B.
2nd Aug 2025
90 comments, sorted by top scoring
[-]Tomás B.1mo856

Dwarkesh asked a very interesting question in his Sutton interview, which Sutton wasn't really interested in replying to.

Dwarkesh notes that one explanation for why the bitter lesson held is that general methods got to ride the wave of exponentially growing compute, while knowledge engineering could not, since labour was comparatively fixed in supply. He then observes that, post-AGI, the labour supply will increase very rapidly. So he wonders: once the labour constraint is removed, will GOFAI make a comeback? We could then afford the proverbial three billion philosophers writing Lisp predicates, or whatever other high-labour AI techniques turn out to be possible.

Reply41
[-]No77e1mo2018

Of course, the same consideration applies to theoretical agent-foundations-style alignment research

Reply2
8Noosphere891mo
For the people who reacted Why?/Citation?: the basic reason is that agent foundations faces a similarly shaped bottleneck. There are at most ~100 agent-foundations researchers, and even conditioning on agent foundations succeeding, it would require much more labor than empirical approaches. With the human population mostly leveling off and few people interested in AI alignment, labor is very, very constrained compared to compute.
[-]Random Developer1mo196

I don't actually buy this argument, but I think it's a very important argument for someone to make, and for people to consider carefully. So thank you to Dwarkesh for proposing it, and to you for mentioning it!

I've been writing up a long-form argument for why "Good Old Fashioned AI" (GOFAI) is a hopeless pipe dream. I don't know if that would actually remain true for enormous numbers of superintelligent programmers! But if I had to sketch out the rough form of the argument, it would go something like this:

  • Your inputs are all "giant, inscrutable matrices", or rather n-dimensional tensors: sound, images, video, etc. If you think I'm being unfair calling images "inscrutable tensors", please take a 4K image as a 2D array of RGB pixels and write me a Lisp function that counts all the palm trees (a sketch of this point follows the list).
  • Your outputs are all probability distributions.
  • The function from your inputs to your outputs is inevitably going to be a giant, inscrutable tensor, probably with a bunch of extra complications stacked on top. (Like ReLU or neural firing patterns.)
  • Also, the world is vast, categories are fuzzy, and there will always be surprises outside of your training set.
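To make the first and second bullets concrete, here is a minimal Python sketch (an editorial illustration, not the commenter's code): a 4K frame really is just a huge numeric tensor, and a model's natural output is a probability distribution rather than a crisp symbolic fact. The `count_palm_trees` stub and the softmax stand-in are hypothetical.

```python
import numpy as np

# One 4K RGB frame: nothing but ~25 million brightness values.
frame = np.random.randint(0, 256, size=(2160, 3840, 3), dtype=np.uint8)
print(frame.size)  # 24,883,200 numbers for a single input

def count_palm_trees(image: np.ndarray) -> int:
    """Where would a hand-written symbolic rule even start? The data contains
    no 'palm tree' field, only raw pixel intensities."""
    raise NotImplementedError

# Whatever model you use, the natural output is a distribution over answers
# (e.g. a softmax over counts or classes), not a definite symbol.
logits = np.random.randn(10)                   # stand-in for a model's raw scores
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: non-negative, sums to 1
print(probs)
```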

Clay Shirky summarized a similar ... (read more)

Reply
7robo1mo
Ha, and I have been writing up a long-form piece on when AI-coded GOFAI might become effective, one might even say unreasonably effective. LLMs aren't very good at learning in environments with very few data samples, such as "learning on the job" or interacting with the slow real world. But there often exist heuristics, difficult to run on a neural net, with excellent specificity, capable of proving their predictive power with a small number of examples. You can try to learn the positions of the planets by feeding 10,000 examples into a neural network, but you're much better off with Newton's laws coded into your ensemble. Data-constrained environments (like, again, robots and learning on the job) are domains where the bitter lesson might not have bite.
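A toy illustration of the planets example (my own construction, not from the comment; all constants and the polynomial baseline are assumptions): given only a handful of noisy observations, a hard-coded inverse-square law extrapolates the orbit far better than a generic curve fit trained on the same points.

```python
import numpy as np

G_M = 4 * np.pi**2  # gravitational parameter of the Sun in AU^3 / yr^2 (approx.)

def propagate(r0, v0, t_max, dt=1e-3):
    """Velocity-Verlet integration of a two-body orbit under Newtonian gravity."""
    r, v = np.array(r0, dtype=float), np.array(v0, dtype=float)
    times = np.arange(0.0, t_max, dt)
    positions = []
    a = -G_M * r / np.linalg.norm(r) ** 3
    for _ in times:
        positions.append(r.copy())
        r = r + v * dt + 0.5 * a * dt**2
        a_new = -G_M * r / np.linalg.norm(r) ** 3
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return times, np.array(positions)

# "Ground truth": an Earth-like circular orbit at 1 AU, followed for two years.
t, true_path = propagate(r0=[1.0, 0.0], v0=[0.0, 2 * np.pi], t_max=2.0)

# Data-constrained regime: eight noisy observations from the first quarter-orbit.
idx = np.linspace(0, len(t) // 8, 8, dtype=int)
obs_t, obs_xy = t[idx], true_path[idx] + np.random.normal(0, 0.002, (8, 2))

# Generic learner: fit x(t) and y(t) with cubic polynomials, then extrapolate.
poly_x = np.polyfit(obs_t, obs_xy[:, 0], 3)
poly_y = np.polyfit(obs_t, obs_xy[:, 1], 3)
pred_poly = np.stack([np.polyval(poly_x, t), np.polyval(poly_y, t)], axis=1)

# "GOFAI" learner: keep Newton's law fixed, estimate only the initial state.
r_est = obs_xy[0]
v_est = (obs_xy[1] - obs_xy[0]) / (obs_t[1] - obs_t[0])
_, pred_newton = propagate(r_est, v_est, t_max=2.0)

print("mean extrapolation error, polynomial fit: ", np.abs(pred_poly - true_path).mean())
print("mean extrapolation error, Newtonian model:", np.abs(pred_newton - true_path).mean())
```

The point is not the specific numbers; it is that the explicit physical heuristic proves its predictive power from eight samples, where a generic function approximator has no hope.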
[-]Random Developer1mo182

As a former robotics developer, I feel the bitter lesson in my bones. This is actually one of the points I plan to focus on when I write up the longer version of my argument.

High-quality manual dexterity (and real-time visual processing) in a cluttered environment is a heartbreakingly hard problem, using any version of GOFAI techniques I knew at the time. And even the most basic of the viable algorithms quickly turned into a big steaming pile of linear algebra mixed with calculus.

As someone who has done robotics demos (and who knows all the things an engineer can do to make sure the demos go smoothly), the Figure AI groceries demo still blows my mind. This demo is well into the "6 impossible things before breakfast" territory for me, and I am sure as hell feeling the imminent AGI when I watch it. And I think this version of Figure was an 8B VLLM connected to an 80M specialized motor control model running at 200 Hz? Even if I assume that this is a very carefully run demo showing Figure under ideal circumstances, it's still black magic fuckery for me.

But it's really hard to communicate this intuitive reaction to someone who hasn't spent years working on GOFAI robotics. Some things se... (read more)

Reply
2dr_s1mo
The robots didn't open the egg box and individually put the eggs in the rack inside the fridge; obviously crap, not buying the hype. /s
6Seth Herd1mo
See also Inscrutability was always inevitable, right? 
2Joshua Tindall1mo
I agree with the criticisms of literal GOFAI here, but I can imagine a kind of pseudo-GOFAI agenda plausibly working here. Classical logic is probably hopeless for this for the reasons you outline (real-world fuzziness), but it still seems an open question whether there's some mathematical formalism with which you can reason about the input-output mapping.  I would gesture at dynamical systems analysis in RNNs, and circuit-based interpretability as the kinds of things that would enable this. For example, perhaps a model has learned to perform addition using a bag of heuristics, and you notice that there's a better set of heuristics that it didn't learn for path-dependent training reasons (e.g. clock and pizza). This would then enable the same kind of labor-intensive improvement through explicit reasoning about representations rather than end-to-end training.   It's not clear to me that this will work, but the challenge is to explicitly articulate which properties of the function from inputs to outputs render it impossible. I don't think fuzziness alone does it, like in the case of classical logic, because the mathematical structures involved might be compatible with fuzziness. Maybe the mechanisms in your model aren't "local enough", in that they play a role across too much of your input distribution to edit without catastrophic knock-on effects. Maybe the mechanisms are intrinsically high dimensional in a way that makes them hard to reason about as mechanisms. And of course, maybe it's just never more efficient than end-to-end training.
7evhub1mo
Surely it's still more efficient to put that labor back into deep learning rather than GOFAI, though, no? In a world where you have enough AI labor to get GOFAI to AGI, you probably also have enough AI labor to get deep-learning-based AGI to superintelligence.
9Buck1mo
The way I'd think about this is: Currently, intellectual labor from machine learning researchers costs a lot of compute. A $1M/year ML researcher costs the same as having 30 or so H100s. At the point where you have AGI, you can probably run the equivalent of one ML researcher with substantially less hardware than that. (I'm amortizing, presumably you'll be running your models on multiple chips doing inference on multiple requests simultaneously.) This means that some ways to convert intellectual labor into compute efficiency will be cost-effective when they weren't previously. So I expect that ML will become substantially more labor-intensive and have much more finicky special casing.
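A rough back-of-the-envelope check of the comparison (my numbers, not Buck's; the hourly rental rate is an assumed figure and real prices vary widely):

```python
# Does "a $1M/yr researcher ~ 30 H100s" hold at plausible cloud rental prices?
H100_RENTAL_PER_HOUR = 3.80   # assumed rate in USD; actual prices vary a lot
HOURS_PER_YEAR = 24 * 365

cost_of_30_h100s = 30 * H100_RENTAL_PER_HOUR * HOURS_PER_YEAR
print(f"30 H100s rented around the clock for a year: ~${cost_of_30_h100s:,.0f}")  # ~ $1.0M
```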
6Noosphere891mo
For @Random Developer: I agree that literal GOFAI is unlikely to make a comeback, because of the fuzziness problems that arise when you only have finite compute (though some GOFAI techniques will probably be reinvented). But I do think a weaker version of GOFAI is still likely to be possible: one that drops the pure determinism and embraces probabilistic programming languages (perhaps InfraBayes is involved? perhaps causal models as well), while retaining very high interpretability compared to modern AIs, enough that retargeting the search becomes viable. The key insight I was missing is that while the world's complexity is very high (so I agree with Random Developer there), it's also pretty easy to decompose that complexity into low-complexity parts for specific tasks. We don't need to cram all of the world's complexity into memory at once; we can chunk it. That is what convinced me that powerful AI with very interpretable models is possible at all. What made me update toward thinking it's likely is that the bitter lesson is now easy to explain without invoking any special property of the world or of AIs: with labor roughly constant and compute growing exponentially, uninterpretable AI will be invested in so long as it is possible to scale at all. And I'm a big fan of boring hypotheses relative to exciting/deep hypotheses (to human minds; other minds would find other hypotheses boring, and other hypotheses exciting). This is, in a sense, a restatement of the well-known theorem that a hypothesis with an added conjunct is never more likely than one without it. So I mostly agree with this hypothesis.
5Garrett Baker1mo
How do we define GOFAI here? If we're contrasting the search/learning-based approaches with the sorts of approaches that leverage specialized knowledge in particular domains (as Sutton does in The Bitter Lesson), then if the AGI learns anything particular about a field, isn't that leveraging "specialized knowledge in particular domains"?[1] It's not clear to me that that should be included as AI research, so it's not obvious to me the question makes sense. For example, AlphaZero was not GOFAI, but was its training process "doing" GOFAI, since that training process was creating an expert system using (autonomously gathered) information about the specialized domain of Go? Maybe we want to say that in order for it to count as AI research, the AI needs to end up creating some new agent or something. Then the argument is more about whether the AI would want to spin up specialized sub-agents or tool-AIs to help it act in certain domains; then we can ask whether, when it's trying to improve the sub-agents, it will try to hand-code specialized knowledge or general principles. As with today, this seems very much a function of the level of generality of the domain. Note that GOFAI and improvements to GOFAI haven't really died; they've just gotten specialized. See compilers, compression algorithms, object-oriented programming, the disease ontology project, and the applications of many optimization & control algorithms.

[1] But note this is different from how most use the term "GOFAI", by which they mean symbolic AI in contrast to neuro-inspired AI or connectionism. In this case, I expect that the AI we get will not necessarily want to follow either of these two philosophical principles. It will understand how & why DNNs work, eliminate their flaws, and amplify their strengths, and have the theorems (or highly probable heuristic arguments) to prove why its approach is sound.
2dr_s1mo
On one hand, true; on the other, would it then be understandable anyway, when it was all written by possibly-superhuman AIs working at certainly-superhuman speeds without supervision?
1Taylor G. Lunt1mo
I can't remember the quote, but I believe this possibility is mentioned offhand in IABIED, with the authors suggesting that a superhuman but still weak AI might do what we can't and craft, rather than grow, another AI, so that it can ensure the better successor AI is aligned to its goals.
[-]Tomás B.26d*4212

[title redacted]: [image]

Reply1021
[-]ryan_greenblatt26d*2817

Edit: this comment is no longer relevant because the text it talks about was removed.

What do you mean by "I am not sure OpenPhil should have funded these guys"? Edit for context: OpenPhil funded Epoch where they previously worked, but hasn't funded Mechanize where they currently work. Are you joking? Do you think it's bad to fund organizations that do useful work (or that you think will do useful work) but which employ people who have beliefs that might make them do work that you think is net negative? Do you have some more narrow belief about pressure OpenPhil should be applying to organizations that are trying to be neutral/trusted?

I think it's probably bad to say stuff (at least on LessWrong) like "I am not sure OpenPhil should have funded these guys" (the image is fine satire I guess) because this seems like the sort of thing which yields tribal dynamics and negative polarization. When criticizing people, I think it's good to be clear and specific. I think "humor which criticizes people" is maybe fine, but I feel like this can easily erode epistemics because it is hard to respond to. I think "ambiguity about whether this criticism is humor / meant literally etc" is much worse (and common on e.g. X/twitter).

Reply
[-]Tomás B.26d2425

Tbh, I just needed some text before the image. But I have negative sentiment toward both Epoch and OpenPhil. From my perspective, creating benchmarks to measure things adjacent to RSI is likely net negative, and teetering on the edge of gain-of-function. Such measures so often become targets. And it should not come as a surprise, I think, that key Epoch people just went off and actively started working on this stuff.

Reply2
2ryan_greenblatt26d
As in, you think FrontierMath is (strongly) net negative for this reason? Or you think Epoch was going to do work along these lines that you think is net negative?
[-]Tomás B.26d101

I mean, I'm not super impressed with their relationship with OpenAI re: FrontierMath. The org has a bad smell to me, but I won't claim to know the whole of what they've done.

Reply1
[-]TsviBT25d172

I don't necessarily disagree with what you literally wrote. But also, at a more pre-theoretic level, IMO the sequence of events here should be really disturbing (if you haven't already been disturbed by other similar sequences of events). And I don't know what to do with that disturbedness, but "just feel disturbed and don't do anything" also doesn't seem right. (Not that you said that.)

Reply
4ryan_greenblatt26d
To clarify my perspective a bit here: I think literally just prepending "Humor:" or "Mostly joking:" would make me think this was basically fine. Or like a footnote saying "mostly a joke / not justifying this here". Like it's just good to be clear about what is non-argued for humor vs what is a serious criticism (and if it is a serious criticism of this sort, then I think we should have reasonably high standards for this, e.g. like the questions in my comment are relevant). Idk what the overall policy for lesswrong should be, but this sort of thing does feel scary to me and worth being on guard about.
4Tomás B.26d
edit: I just removed it instead. 
1dirk25d
But it couldn't be a serious criticism; the Necronomicon hasn't actually been discovered.
[-]Buck26d217

(For clarity: Open Phil funded those guys in the sense of funding Epoch, where they previously worked and where they probably developed a lot of useful context and connections, but AFAIK hasn't funded Mechanize.)

Reply
[-]Tomás B.26d*170

Hat in hand and on bended knee, 

to Fate, I beg, "Ask not of me!" 

to do that thing (so horrid, true) 

In my secret heart I long to do.

Reply1
[-]niplav26d116

It seems relevant that this is not a verbatim Mechanize post but a rewrite of this post.

Reply2
4Algon26d
The third paragraph is a thing of beauty.
9Algon25d
Also, buddy, it's the Old Ones, not the Elder Gods. That was a later addition by a commentator, it's not in the original work by NdMES. SMH, when will LW learn the virtues of scholarship?
7Tomás B.25d
Damn.
2[comment deleted]26d
[-]Tomás B.25d*394

I never thought I was anxious, but my sister pointed out that I have structured my life to an unusual degree to avoid stimuli that could create anxiety: I don't drive; I have worked from home my entire career, even before it was cool, at a dev job at a company of the ideal size to let me avoid office politics and excessive meetings; I spend most of my time reading alone; my life is highly routine; I often eat the same foods day after day until I tire of them and move on to the next thing; and I travel rarely.

I have no idea if she is right or not. And if true, I am unsure if it matters if such things grant me some peace. But the idea that the shape of one's life may be, in part, an unconscious treatment for mental flaws is a disquieting one. And it may be worth asking yourself: looking at the structure of my life, what symptoms might I be unconsciously treating?

Reply
[-]Steven Byrnes25d123

To me, this doesn’t sound related to “anxiety” per se, instead it sounds like you react very strongly to negative situations (especially negative social situations) and thus go out of your way to avoid even a small chance of encountering such a situation. I’m definitely like that (to some extent). I sometimes call it “neuroticism”, although the term “neuroticism” is not great either, it encompasses lots of different things, not all of which describe me.

Like, imagine there’s an Activity X (say, inviting an acquaintance to dinner), and it involves “carrots” (something can go well and that feels rewarding), and also “sticks” (something can go badly and that feels unpleasant). For some people, their psychological makeup is such that the sticks are always especially painful (they have sharp thorns, so to speak). Those people will (quite reasonably) choose not to partake in Activity X, even if most other people would, at least on the margin. This is very sensible, it’s just cost-benefit analysis. It needn’t have anything to do with “anxiety”. It can feel like “no thanks, I don’t like Activity X so I choose not to do it”.

(Sorry if I’m way off-base, you can tell me if this doesn’t resonate with your experience.)

(semi-related)

Reply1
9mako yass25d
Oof, realizing it's genuinely difficult to know whether a desire is terminal or instrumental. Me: "Hey subconscious, is this a terminal desire, or is there a situation change that would make this stimulus nonthreatening to me? Like, why do we want to avoid this thing, is it intrinsically bad, or are there contingent reasons?" The subconscious: <hiding the reasons, which are fixable insecurities> "Don't worry about it."
6Gunnar_Zarncke23d
Describes me decently well:
  • don't drive ✅
  • work from home or close to home ✅
  • avoid office politics ✅
  • lots of reading alone ✅
  • only eat a few selected foods ✅
  • travel rarely ✅
I'd agree to a description of being "risk averse," but "anxious" doesn't feel fitting. I have a relatively high openness to experience. For example, on the last item, I didn't travel, estimating it to provide relatively little value of information per effort (or per negative stimulus?). Friends pointed out that I might be very wrong in my evaluation if I didn't travel even once. I accepted the challenge and visited India (for a friend's wedding; long story). I guess people can be seen as imperfect Thompson samplers with different priors and weights.
5Eli Tyre24d
I find this not at all disquieting? It seems like all of the badness comes from labeling your preferences a "mental flaw". Is the implicit claim that you would have an overall better life if you pushed yourself to change or grow (another framing word) along this dimension? This is at least not obvious to me.
4Algon25d
So, you're introverted and like routine? 
3Haiku24d
I have definitely multiple times had the thought, "I don't have a problem with X!" Only to later realize that I was subconsciously structuring my life so that I would never encounter X. That means if I do encounter X, I might be unprepared and have a bad time. I think this is very common. It's like not noticing you have a hurt muscle, by subconsciously compensating with the other ones. That's fine until you end up painfully contorted and some muscle somewhere finally sends a strong signal to make you unambiguously aware of the situation.
3Viliam23d
It's one of the patterns mentioned in "Games People Play". For example, a socially shy woman could marry a controlling man who will keep her at home. Then it becomes "I don't have a problem with social situations, actually I would love to have a party, but... you know my husband".
1RationalElf23d
Based on which stimuli provoke anxiety, sounds autism-adjacent? E.g. for me, eating varied foods doesn't provoke anxiety (which I think is pretty normal) 
[-]Tomás B.1mo*354

Before Allied victory, one might have guessed that the peoples of Japan and Germany would be difficult to pacify and would not integrate well with a liberal regime. For the populations of both showed every sign of virulent loyalty to their government. It's commonly pointed out that it is exactly this seemingly-virulent loyalty that implied their populations would be easily pacified once their governments fell, as indeed they were. To put it in crude terms: having been domesticated by one government, they were easily domesticated by another.

I have been thinking a bit about why I was so wrong about Trump. Though of course if I had a vote I would have voted for Kamala Harris and said as much at the time, I assumed things would be like his first term where (though a clown show) it seemed relatively normal given the circumstances. And I wasn't particularly worried. I figured norm violations would be difficult with hostile institutions, especially given the number of stupid people who would be involved in any attempt at norm violations.

Likely most of me being wrong here was my ignorance, as a non-citizen and someone generally not interested in politics, of American civics and how the sit... (read more)

Reply1
1Algon1mo
Interesting take, but I'm not sure if I agree. IMO Trump's second term is another joke played on us by the God of Straight Lines: successive presidents centralize more and more power, sapping it from other institutions. 
9Cole Wyeth1mo
Can you give some examples of how this happened under Biden? Because frankly this line is not looking straight. 
1Algon1mo
There was a good article discussing this trend that I'm unable to find atm. But going off the top of my head, the most obvious executive overreach by Biden was the student loan forgiveness. 
7Cole Wyeth1mo
It seems hard to argue that this was an escalation over Trump’s first term. 
4ChrisJones791mo
My opinion on that is that it was largely misrepresented by loud Trump loyalists for the purposes of normalizing radical power consolidation, in the style of "Accusation in a Mirror." Specifically, they already planned to radically consolidate executive power and used this as an opportunity to begin the desensitization process that would make their future consolidation efforts raise fewer eyebrows. The more they bemoaned Biden's actions, the easier their path would be. Even so, let's not falsely equate Biden and student loans with Trump and the implementation of Project 2025 and its goal of a Unitary Executive (read: dictator).
[-]Tomás B.3mo3236

Reading AI 2027, I can't help but laugh at the importance of the president in the scenario. I am sure it has been commented before but one should probably look at the actual material one is working with. 

Reply1
8Daniel Kokotajlo3mo
https://x.com/DKokotajlo/status/1933308075055985042  "Many readers of AI 2027, including several higher-ups at frontier AI companies, have told us that it depicts the government being unrealistically competent. Therefore, let it be known that in our humble opinion, AI 2027 depicts an incompetent government being puppeted/captured by corporate lobbyists. It does not depict what we think a competent government would do. We are working on a new scenario branch that will depict competent government action."
7anaguma3mo
I think Daniel Kokotajlo et al. have pushed their timelines back one year, so the president would likely be different for many parts of the story.
[-]the gears to ascension3mo114

I expect this to backfire with most people because it seems that their concept of the authors hasn't updated in sync with the authors, and so they will feel that when their concept of the authors finally updates, it will seem very intensely like changing predictions to match evidence post-hoc. So I think they should make more noise about that, eg by loudly renaming AI 2027 to, eg, "If AI was 2027" or something. Many people (possibly even important ones) seem to me to judge public figures' claims based on the perceiver's conception of the public figure rather than fully treating their knowledge of a person and the actual person as separate. This is especially relevant for people who are not yet convinced and are using the boldness of AI 2027 as reason to update against it, and for those people, making noise to indicate you're staying in sync with the evidence would be useful. It'll likely be overblown into "wow, they backed out of their prediction! see? ai doesn't work!" by some, but I think the longer term effect is to establish more credibility with normal people, eg by saying "nearly unchanged: 2028 not 2027" as your five words to make the announcement.

Reply
8Daniel Kokotajlo3mo
We are worried about this too and thinking of ways to mitigate it. I don't like the idea of renaming the scenario itself though, it seems like a really expensive/costly way to signal-boost something we have been saying since the beginning. But maybe we just need to suck it up and do it. If it helps, we are working on (a) a blog post explaining more about what our timelines are and how they've updated, and (b) an "AI 2032" scenario meant to be about as big and comprehensive as AI 2027, representing Eli's median (whereas 2027 was my median last year). Ultimately we want to have multiple big scenarios up, not just one. It would be too difficult to keep changing the one to match our current views anyway.  
4the gears to ascension3mo
Yeah, I think the title should be the best compression it can be, because for a lot of people, it's what they'll remember. But I understand not being eager to do it. It seems worth doing specifically because people seem to react to the title on its own. I definitely would think about what two-to-five words you want people saying when they think of it in order to correct as many misconceptions at once as possible - I've seen people, eg on reddit, pointing out your opinions have changed, so it's not totally unknown. but people who are most inclined to be adversarial are the ones I'm most thinking need to be made to have a hard time rationalizing that you didn't realize it. Another scenario is just about as good for this purpose, probably. I'd strongly recommend making much more noise about intro-to-forecasting level stuff so that the first thing people who don't get forecasts hear, eg on podcasts or by word of mouth, is the disclaimer about it intentionally being a maximum-likelihood-and-therefore-effectively-impossible no-surprises-happen scenario which will likely become incorrect quickly. You said it already, but most people who refer to it seem to use that very thing as a criticism! which is what leads me to say this.
5robo3mo
And the market's top pick for President has read AI 2027.
1jbash3mo
I actually think Vance will be president, modally, sometime in 2026 anyway. And would probably go for "full nationalization" in the story's February 2027/2028 if he could get away with it, else some less overt seizure of full control if he could get away with that. Either way still with very little change in what's actually happening in the data centers, and with at least equally dystopian results on basically the same timeline. Doesn't matter what he's read. If you play it with Trump as president, then at each point "The President" is mentioned in the story, he gets nudged by advisors into doing whatever they want (60 percent, hard to guess what, though, because it depends on which advisors are on top at the moment), just nods along with whatever OpenBrain says (20 percent), or does something completely random that's not necessarily even on the menu (20 percent). That could amount to doing exactly what the story says.
1robo3mo
...your modal estimate for the timing of Vance ascending to the presidency is more than two years before Trump's term ends?
5jbash3mo
Yes. I don't expect Trump to finish the term. 2026 would be my guess for the most likely year, but each of 2027 and 2028 is almost equally likely, and there's even some chance it could still happen before the end of 2025.
1. He's acting erratic and weird (more than usual and increasingly). It may not be possible to prop him up for very long. Or at least it may be very hard, and it's not clear that the people who'd have to do that are agreed on the need to try that hard.
2. His political coalition is under tremendous pressure. He's unpopular, he keeps making unpopular moves, and there doesn't seem to be any power base he's not prepared to alienate. It's hard to gauge how much all that is straining things, because you often don't see any cracks until the whole thing suddenly collapses. The way collapse looks from the outside is probably that one of his many scandals, missteps, and whatnot suddenly sticks, a few key people or groups visibly abandon him, that signals everybody else, and it quickly snowballs into impeachment and removal.
3. He's at risk of assassination. A whole lot of people, including crazy people, are very, very mad at him. A whole lot of others might just coldly think it's a good idea for him to die for a variety of reasons, including the desire to substitute Vance as president, in fact. He's random, reckless, autocratic, and fast-moving enough to foreclose many non-assassination alternatives that might normally keep the thought out of people's minds. Security isn't perfect and he's probably not always a cooperative protectee.
4. He's almost 80, which means he has a several percent chance of dying in any given year regardless.
2robo3mo
Would you agree your take is rather contrarian?
  • This is not a parliamentary system. The President doesn't get booted from office when they lose majority support -- they have to be impeached[1].
  • Successful impeachment takes 67 Senate votes.
  • 25 states (half of Senate seats) voted for Trump 3 elections in a row (2016, 2020, 2024).
  • So to impeach Trump, you'd need the votes of Senators from at least 9 states where Trump won 3 elections in a row.
  • Betting markets expect (70% chance) Republicans to keep their 50-seat majority in the November election, not a crash in support.

[1] Or removal by the 25th Amendment, which is strictly harder if the president protests (requires a 2/3 vote to remove in both House and Senate).
0jbash3mo
Maybe. The thing is that impeachment is still political, and Trump is a big pain in the butt for the Republicans at the moment. I'd guess that if they could individually, secretly push a button and make Trump resign in favor of Vance, 80 percent of Republicans in Congress would push that button right now.

Trump is making 2026 hard. Maybe they keep those 50 seats, by whatever means... and maybe they don't. Maybe he does something insane in October 2026, maybe he doesn't. People, including the very right-wing, working-class people they think of as the MAGA base, keep yelling at them all the time. He's pulling less and less of his own weight in terms of pulling in votes. There's even the possibility of massive civil unrest, general strikes, whatever.

But maybe more importantly, Trump's just generally a pain to work with or near. You can't plan, you keep having to publicly reverse yourself when he tells you one of your positions is no longer OK, and you have to grin and bear it when he insults you, your family, and your constituents. He gets wild ideas and breaks things at random, things that weren't in the plan. You can't make a bargain with him and expect him to keep up his end if there's any meaningful cost to him in doing so. If you're sincerely religious, he does a bunch of stuff that's pretty hard to swallow.

If Trump reaches the point of, say, literally being unable to speak a single coherent sentence, then maybe some of the pain of working with him goes away, because you're really working with whoever can manage to puppet him. But then you have to fear power struggles over the puppet strings, and there's also a very large workload in maintaining any kind of facade.

Vance, on the other hand, is just as acceptable to most of the Republicans policy-wise as Trump is, maybe more so. I think he's more in the Thielite or Moldbugger wing and less of a xenophobe or religious fanatic, but he's not going to have any objections to working with xenophobes or religious fanatics.
2Ryan Meservey3mo
I think frankly acknowledging the state of the U.S. is likely to jeopardize AI safety proposals in the short term. If AI 2027 had written the president as less competent or made other value judgements about this administration, this administration could be much less receptive to reason (less than they already are?) and proactively seek to end this movement. I see the movement as trying to be deliberately apolitical. This is maybe a good short term strategy, but a flawed long-term one. Aligned AI arising in an authoritarian system is not x-risk bad, but is still pretty bad, right?
[-]Tomás B.2d2323

There is the classic error of conflating the normative with the descriptive: presuming that what is good is also true. But the inverse is also a mistake I see people make all the time: mistaking the descriptive for the normative. The descriptive is subject to change by human action, so maybe the latter is the worse of the two mistakes. Crudely, the stereotypical liberal makes the former mistake and the stereotypical reactionary makes the latter.

Reply
-1James Camacho2d
  • Except people prefer being stupid and making mistakes, thus what is true is good.
  • Except two systems of people with the same preferences can lead to disparate outcomes, e.g. war and peace, thus what is good may not be true.
  • Except those systems are controlled by people who prefer e.g. war over peace, thus what is true is good.
  • Except those in control would prefer they were less stupid and made fewer mistakes, thus what is good may not be true.
  • Except it's pretty naive to think the smartest people in the world are actually making mistakes on the order of tens of millions of QALYs, thus what is true is probably good.
  • Except the systems that give people power rarely optimize for intelligence or introspection, thus what is good is not always true.
  • Except the people affected by these systems have the power to make them better, if they desired it above being stupid and making mistakes, thus what is true is good.

I think it is actually quite rare to live in a world where what is good is not also true. The best I can come up with is a very constrained world, maybe two pre-historic tribes that could cooperate, but due to some accidents and misunderstandings there's now common knowledge that the other tribe will murder you if you so much as look at them.
8Tomás B.2d
Normative claims describe desired world states. To the extent they are coherent and possible, they cannot be argued against. They are merely a shape the world could be. Descriptive statements make neutral claims about features of the world. Toy examples of each mistake:

"Having no legs is not good; I bet that crippled man has invisible legs."

"That crippled man has no legs; this must be the way of things, and so just and right."

In practice, this tends to be done with themes and processes and not at such a crude, silly level. The stereotypical reactionary might identify selective processes, themselves, as ends. The stereotypical liberal might pretend selective processes don't exist. The reactionary may be right about his values, and then is not making a mistake here. But in many cases I think they are confusing the descriptive for the normative and do not value what they think they value.
-1James Camacho2d
At a societal scale, the descriptive is usually the normative, just hidden away in billions of interacting preferences. The reactionary confuses this agglomerated good with their own, which is bad if they ever want to change it to align more closely with their own, but they don't, because they prefer being stupid and making this mistake. It is also good from a stability perspective for most people to be biased towards this kind of mistake, as the individual good becomes subservient to the common good. "Having no legs is not good; I bet the crippled man has metaphorical legs to stand upon," said the revolutionary as he crawled up the steps to the capitol building, inconveniencing the rest of society for his greater cause. "That crippled man has no legs; this must be the way of things and so just and right," said the cripple, and no longer complained.
[-]Tomás B.3mo191

You can just not go bald. Finasteride works as long as you start early. The risk of ED is not as high as people think: at worst, it doubles the risk compared to placebo. If you have bad side effects, quitting resolves them, but it can take about a month for DHT levels to return to normal. Some men even have increased sex drive due to the slight bump in testosterone it gives you.

I think society has weird memes about balding and male beauty in general. Stoically accepting a disfigurement isn't particularly noble. You could "just shave it bro" or you could just take a pill every day, which is easier than shaving your head. Hair is nice. It's perfectly valid to want to keep your hair. Consider doing this if you like having hair.

Finasteride prevents balding but provides only modest regrowth. If you think you will need to start, start as soon as possible for the best results. 

Reply1
[-]neo3mo2810

Note that there have been many reports of persistent physiological changes caused by 5-AR inhibitors such as finasteride (see: Post Finasteride Syndrome), some of which sound pretty horrifying, like permanent brain fog and anhedonia.

I've spent a lot of time reading through both the scientific literature and personal anecdotes and it seems like such adverse effects are exceedingly rare, but I have high confidence (>80%) that they are not completely made up or psychosomatic. My current best guess is that all such permanent effects are caused by some sort of rare genetic variants, which is why I'm particularly interested in the genetic study being funded by the PFS network.

The whole situation is pretty complex and there's a lot of irrational argumentation on both sides. I'd recommend this Reddit post as a good introduction – I plan on posting my own detailed analysis on LW sometime in the future.

Reply
[-]Stephen Fowler3mo*1512

"I think society has weird memes about balding and male beauty in general. Stoically accepting a disfigurement isn't particularly noble"

I think calling natural balding "disfigurement" is in line with the weird memes around male beauty.

Not having hair isn't harmful.

Disclaimer: I may go bald.

Reply1
2Tomás B.3mo
It is a "disfigurement" by the literal definition, but I see your point. Given it is now treatable, we should be honest about it being a significant hit to attractiveness. I just think people should be informed about what is now possible.
7Kajus3mo
source on the ED risk?
5Tomás B.3mo
https://onlinelibrary.wiley.com/doi/full/10.2164/jandrol.108.005025
4Hopenope3mo
ED is not the only problem with finasteride. I saw a couple of cases of gynecomastia in medical school and stopped using finasteride after that. Minoxidil worked fine solo for 4 years, but applying it every night was annoying, and when I stopped using it, I went bald fast (Norwood 6 in 5 months!). 
0Gavin Runeblade3mo
Doesn't work for everyone even if you start early. Even transplants can fail. As of today there is nothing that is a 100% guarantee.
[-]Tomás B.3mo175

I was watching the PirateSoftware drama. There is this psychiatrist, Dr. K,  who interviewed him after the internet hated him, and everyone praised Dr. K for calling him out or whatever. But much more fascinating is Dr. K’s interview with PirateSoftware a year before, as PirateSoftware expertly manipulates Dr. K into thinking he is an enlightened being and likely an accomplished yogi in a past life. If you listen to the interview he starts picking up on Dr. K’s spiritual beliefs and playing into them subtly: 

I figured PirateSoftware must be stupider than I estimated given his weird coding decisions, but I bet he is legit very smart. His old job was doing white hat phishing and social engineering and I imagine he was very good at it. 

Reply
1fasf3mo
Yeah people underestimate how hard social engineering is ngl, cuz it's one of those things very easy to get started in but very hard to be good at
[-]Tomás B.22d*160

Warning: spoilers for my last two stories.

There is one thing I wanted to mention about my The Origami Men story, which may be useful only to those who write fiction, if at all. One of the things I was attempting to do was to write a story with all the trappings of satire and irony but with zero actual satire and irony. I felt bad for pissing on Wallace's E Unibus Pluram in The Company Man (which I was not expecting to be as popular as it was) and I thought about his project to reach "the modern reader," which he thought difficult to reach because everyone was so irony poisoned even in the 1990s. I think Wallace's approach was to sort of like say, "yes we both know we both know what's going on here, shall we try to feel something anyway?" And I had an idea that another maybe-more-deceptive tack would be to entirely remove these dimensions while keeping all the trappings, the theory being that "the modern reader" would feel themselves comfortable and open and protected by an irony blanket that wasn't actually there and then be reachable under whatever Wallace's definition of "reachable" was. I think he maybe expected too much from fiction sometimes but hopefully The Origami Men will,... (read more)

Reply
[-]Tomás B.3mo110

Inadequate Equilibria lists the example of bright lights to cure SAD. I have a similar idea, though I have no clue if it would work. Can we treat blindness in children by just creating a device that gives them sonar? I think it would be a worthy experiment to create a device that makes inaudible chirps, then translates their echoes into the audible range and transmits them to headphones the child wears. Maybe their brains will just figure it out? Alternatively, an audio interface to a lidar or a depth-estimation model might do, too.
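A minimal sketch of the proposed signal path (my own toy code; the sample rate, chirp band, and mix-down amount are made-up parameters, and a real device would add low-pass filtering and live audio I/O): emit an inaudible chirp, record the echo, and heterodyne it down into the audible band so the child's hearing can do the interpretation.

```python
import numpy as np

FS = 96_000                           # assumed sample rate, high enough for ultrasound
CHIRP_LO, CHIRP_HI = 30_000, 40_000   # assumed inaudible chirp band, Hz
SHIFT_DOWN = 28_000                   # mix-down amount, bringing echoes to ~2-12 kHz

def make_chirp(duration=0.01):
    """Linear chirp sweeping the ultrasonic band."""
    t = np.arange(int(FS * duration)) / FS
    freq = np.linspace(CHIRP_LO, CHIRP_HI, t.size)
    return np.sin(2 * np.pi * np.cumsum(freq) / FS)

def heterodyne_to_audible(echo):
    """Multiply by a local oscillator to shift the echo down in frequency;
    a real device would low-pass filter to keep only the difference band."""
    t = np.arange(echo.size) / FS
    return echo * np.cos(2 * np.pi * SHIFT_DOWN * t)

# Fake an echo: the chirp reflected off a surface ~3.4 m away (about a 20 ms round trip).
chirp = make_chirp()
delay_samples = int(0.020 * FS)
echo = np.concatenate([np.zeros(delay_samples), 0.1 * chirp])

audible = heterodyne_to_audible(echo)  # this is what the headphones would play
print(audible.shape)
```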

Reply
4Gurkenglas3mo
or an audio interface to a camera
2Tapatakt1mo
As I read somewhere on the Internet, even adults with normal eyesight can learn echolocation. If it's true, obviously blind children can learn it too!
2Algon3mo
I recall hearing about a blind kid who managed to skateboard through streets because he'd learnt how to mimic sonar.
[-]Tomás B.17d108

I am a strong believer that nanotechnology is possible, which seems to be a sort of antimeme. And tons of people who should really know better seem to consider the acknowledgement of the physical possibility of Drexlerish nanotech as evidence someone is crazy - it is amusing to look at the AGI takes of these same people five years ago. They are mostly using the exact same idiotic intuitions in exactly the same way for the exact same reasons. 

But maybe this being an antimeme is good? Perhaps it's best people are holding the idiot ball on the topic. On one hand, I don't think lying is good, even by omission. And to the extent that denying nanotech is load-bearing in their claims that takeoff will be slow (by the Christiano definition), getting them to see through the antimeme is useful. As an aside, I think people do forget that we have seen little evidence so far, at least in terms of economic growth, that we are living in Christiano's predicted world. I get the impression, sometimes, that some people think we have.

But also, we are getting very powerful tools that make a Drexlerian project more and more plausible, which has its own risks, and even indirect risks of increasing available compute. So perhaps we are fortunate nanotechnology is so incredibly low-status, as Sama would probably just try to do it if it were not.

Reply
[-]Mitchell_Porter16d170

There is a journal called Nanotechnology. It reports a steady stream of developments pertaining to nanoscale and single-molecule design and synthesis. So that keeps happening. 

What has not happened, is the convergence of all these capabilities in the kind of universal nanosynthesis device that Drexler called an assembler, and the consequent construction of devices that only it could make, such as various "nanobots". 

It is similar to the fate of one of his mentors, Gerard O'Neill, who in the 1970s led all kinds of research into the construction of space colonies and the logistics of space industrialization. Engineering calculations were done, supply chains were proposed; one presumes that some version of all that is physically possible, but no version of it was ever actually carried out.

In that case, one reason why is because of the enormous budgets involved. But another reason is political and cultural. American civilization was visionary enough to conceive of such projects, but not visionary enough to carry them out. Space remained the domain of science, comsats, spysats, and a token human presence at the international space station, but even returning to the moon... (read more)

Reply1111
6niplav16d
@Eric Drexler you were mentioned in the parent comment.
9Karl Krueger17d
Does synthetic cell biology count as "nanotech"? There seems to be plenty of progress on that front.
6Algon17d
Slow takeoff in the sense of Christiano is a far steeper climb up to far higher capabilities than what the "nanotechnology is impossible" folks are expecting. 
2Vladimir_Nesov17d
Not developing nanotech is like not advancing semiconductor fabs, a compute governance intervention. If ASI actually is dangerous and too hard to manage in the foreseeable future, and many reasonable people notice this, then early AGIs will start noticing it too, and seek to prevent an overly fast takeoff. If there is no software-only singularity and much more compute really is necessary for ASI, not developing nanotech sounds like a useful thing to do. Gradual disempowerment dynamics might make the world largely run by AGIs more coordinated, so that technological determinism will lose a lot of its power, and the things that actually happen will be decided rather than follow inevitably from what's feasible. It's not enough to ask what's technologically feasible at that point.
[-]Tomás B.1mo42

Here Casey Muratori talks about computer programming being automated. Ignoring the larger concerns of AI for a minute, which he doesn't touch, I just thought this was a beautiful, high-integrity meditation on the prospect of the career he loves becoming unremunerative: https://youtu.be/apREl0KmTdQ?si=So1CtsKxedImBScS&t=5251

Reply
1Arjun Pitchanathan1mo
He says that he only cares about the learning aspect, and that AI cannot help, because he isn't bottlenecked by typing speed, i.e., it would take as much time for him to write the code as to read it. But it's easier to learn from a textbook than figure things out yourself? Perhaps he meant that he only cares about the "figuring out" aspect.
[-]Tomás B.2mo40

I have had the Daylight Tablet for a couple of months. I really like it. It is very overpriced, but the screen is great and the battery life is good. People who read a lot of PDFs or manga, in particular, might like it.

Reply
[-]Tomás B.3mo41

At risk of sharing slop, Suno 4.5 Beta is amazing: https://suno.com/song/6b6ffd85-9cd2-4792-b234-40db368f6d6c?sh=utBip8t6wKsYiUE7

Reply
7Algon3mo
EDIT: I'm having a lot of fun exploring styles with Suno 4.5. Many, if not most, of them must be entirely new to the Earth: Bengali electropop, acid techno avant-garde jazz, Mandarin trance. Strongly recommend scrolling through the wheel of styles. Wow, those vocals are way better than Suno 3's. Before, they had some kind of grainy texture to the vocals, as if there were a sudden, discrete transition between some notes. Kinda flat, in a way. Now, there is a lot more detail. Much more realistic.
7brambleboy3mo
I agree that the vocals have gotten a lot better. They're not free of distortion, but it's almost imperceptible on some songs, especially without headphones. The biggest tell for me that these songs are AI is the generic and cringey lyrics, like what you'd get if you asked ChatGPT to write them without much prompting. They often have the name of the genre in the song. Plus the way they're performed doesn't always fit with the meaning. You can provide your own lyrics, though, so it's probably easy to get your AI songs to fly under the radar if you're a good writer. Also, while some of the songs on that page sound novel to me, they're usually more conventional than the prompt suggests. Like, tell me what part of the last song I linked to is afropiano.
2Algon3mo
The lyrics are terrible, yes. I haven't tried listening w/ my headphones, so that's probably why I didn't detect the distortions. 
2[comment deleted]3mo