I have serious, serious issues with avoidance. I would like some advice on how to improve, as I suspect it is significantly holding me back.
Some examples of what I mean:
Checked the replies so far; no one has given you the right answer.
Whenever you don't do something, you have a reason for not doing it.
If you find yourself stuck in a cycle of intending to do, and not doing, it's always because you're not taking your reason for NOT doing it seriously; you're often habitually ignoring it.
When you successfully take your reasons for not doing something seriously, either you stop wanting to do it, or you change how you're doing it, or your reason for not doing it simply goes away.
So, what does it mean/look like to take your reason for not doing something seriously?
It doesn't look like overanalyzing it in your head - if you find yourself having an internal argument, notice that you've tried this a million times before and it hasn't improved things.
It looks like, and indeed just basically is, Focusing (I linked to a lesswrong explainer, but honestly I think Eugene Gendlin does a much better job)
It feels like listening. It feels like insight, like realizing something important that you hadn't noticed before, or had forgotten about.
If you keep pursuing strategies of forcing yourself, of the part of you that wants to do the thing coercing the part(s) that don't, then you'll burn out. You're literally fighting yourself; so much of therapy boils down to 'just stop hitting yourself bro'.
Got over my avoidance of responding to replies here after a bit :)
I've tried a lot of self-help flavoured stuff (atomic habits etc.) before and it hasn't worked, and Focusing seemed quite different. I've given it a go and I think I'll try and work a bit more with it. After just a short session, I feel like I gained a significant insight, that I have a crippling fear of "being in trouble" that manifests as a tightness in my lower chest, and seems to activate a lot when I think about specific things I'm avoiding. Thanks for the resources, and the new way of looking at the problem.
I have had and solved fairly extreme versions of this in myself, and have helped people with debilitating versions of this resolve it multiple times.
You're stuck in a loop where some part of you pushes to do the object-level thing so hard that it has no sensitivity to the parts of you that are averse to it. Whenever you notice you're spinning your wheels, stop trying to force through the object-level action and let yourself actually notice the feeling of resistance with open curiosity. Let it unfold into the full message that brain fragment is trying to send, rather than the overcompressed "bad"/"aversion".
What helps me to overcome the initial hurdle to start doing work in the morning:
Also:
I think it also helps to take something you are good at and feel good about and in that context take responsibility for something and/or interact with/present to people. Only this kind of social success will build the confidence to overcome social anxiety, but directly trying to do the social stuff you feel worst about usually backfires (at least for me).
Similar here:
I suspect that in my case it is some kind of attention deficit disorder: lists and notes and talking help me focus again.
I recommend you read at least the first chapter of Getting Things Done, and do the corresponding exercises. In particular, this one, which he uses to provide evidence that his model of productivity is correct:
I suggest that you write down the project or situation that is most on your mind at this moment. What most bugs you, distracts you, or interests you, or in some other way consumes a large part of your conscious attention? It may be a project or problem that is really “in your face,” something you are being pressed to handle, or a situation you feel you must deal with sooner rather than later.
Maybe you have a holiday trip coming up that you need to make some major last-minute decisions about. You just read an e-mail about a new and pressing issue in your department. Or perhaps you just inherited six million dollars and you don’t know what to do with the cash. Whatever.
Got it? Good. Now, describe, in a single written sentence, your intended successful outcome for this problem or situation. In other words, what would need to happen for you to check this project off as “done”? It could be as simple as “Take the Hawaii vacation,” “Handle situation with customer X,” “Resolve college situation with Susan,” “Clarify new divisional management structure,” “Implement new investment strategy,” or “Research options for dealing with Manuel’s reading issue.” All clear? Great.
Now write down the very next physical action required to move the situation forward. If you had nothing else to do in your life but get closure on this, what visible action would you take right now? Would you call or text someone? Write an e-mail? Take pen and paper and brainstorm about it? Surf the Web for data? Buy nails at the hardware store? Talk about it face-to-face with your partner, your assistant, your attorney, or your boss? What?
Got the answer to that? Good.
Was there any value for you in those two minutes of thinking? If you’re like the vast majority of people who complete that drill in our seminars, you’ll be experiencing at least a tiny bit of enhanced control, relaxation, and focus. You’ll also be feeling more motivated to actually do something about that situation you’ve merely been thinking about till now. Imagine that motivation magnified a thousandfold, as a way to live and work.
If anything at all positive happened for you in this little exercise, think about this: What changed? What happened to create that improved condition within your own experience? The situation itself is no further along, at least in the physical world. It’s certainly not finished yet. What probably happened is that you acquired a clearer definition of the outcome desired and the next action required. What did change is the most important element for clarity, focus, and peace of mind: how you are engaged with your world.
But what created that? Not “getting organized” or “setting priorities.” The answer is, thinking. Not a lot; just enough to solidify your commitment about a discrete pressure or opportunity and the resources required to deal with it. People think a lot, but most of that thinking is of a problem, project, or situation—not about it. If you actually did this suggested exercise, you were required to structure your thinking toward an outcome and an action, and that does not usually happen without a consciously focused effort. Reacting is automatic, but thinking is not.
Read about Ugh fields on LW
Edit: this doesn't include practical advice, but a theoretical understanding of the issues at play is often helpful in implementing practical strategies
I want to suggest a long-term approach: learning to work with the emotions behind such persistent problems. Methods like IFS, Focusing, lovingkindness meditations are the right tools.
They *can* lead to practical improvements fairly quickly—once you get the hang of them. But learning to do them even passably well takes months of effort, curiosity, and support from a community or a mentor. These things are basically meditations, subject to standard difficulties like overeffort, subtly wrong mindsets, etc. They also tend to focus first on whatever feels most urgent to your subconscious system—like relationship stress or background anxiety you’ve gotten used to—so the email issue might not be the first thing that shifts.
Still, this is the only thing that really worked for me. And once it started working, it *really* worked.
If you’re interested, I can send my favourite links.
Part 2 of “Focusing” by Eugene Gendlin is very good to read, and it helps to get started.
This next article is my favourite one on the entire internet:
The key is to approach Focusing with the mindset of relaxing, having fun, playing around and experimenting. It’s emphasised in the talks on this website: https://hermesamara.org/teachings/metta. That particular series about loving kindness is very good.
I think there’s enough material in my head about it for a whole post, so I might write one eventually.
Visualize yourself doing the thing until you do it. Note that this comes with substantial risk of making you avoidant/averse to visualizing yourself doing the thing until you do it; this is a recursive procedurally generated process and you should expect to need to keep on your toes in order to succeed. Aversion factoring is a good resource to start with, and Gödel, Escher, Bach is a good resource for appreciating the complexity required for maintenance and the inadequacy of simple strategies.
I have similar issues, severity varies over time.
If I am in a bad place, things that help best:
- taking care of mental health. I do CBT when I'm in worse shape, and take SSRIs. YMMV. Both getting diagnosed and getting treated are important. This also includes regular exercise and good sleep. What you have described might be (although does not have to be) related to depression, anxiety, or attention disorders.
- setting a timer for a short time, which can be as short as 1 minute, and doing one of the avoided tasks for just that minute. It kind of "breaks the spell" for me
- journaling, which helps to "debug" the problems, and in most cases leads to writing down plans / interventions / resolutions
If this would not obviously make things worse, become more socially connected with people who have expectations of you; not necessarily friends, but possibly colleagues, or people who simply assume you should be working at certain times, so that you get feedback about that in a natural way. It's possible that the prospect of this is anxiety-inducing and seems like it would be awful, but that the reality would not actually be very awful.
Recognize that you don't need to do most things perfectly or even close to it, and as a corollary, you don't need to be particularly ready to handle tasks even if they are important. You can handle an email or an urgent letter without priming yourself or being in the right state of mind. The vast majority of things are this way.
Sit in the start position of your task, as best as you can operationalize that (e.g, navigate to the email and open it, or hit the reply button and sit in front of it), for one minute, without taking your attention off of the task. Progress the amount of time upwards as necessary/possible. (One possible success-mode from doing this is that you get bored of being in this position or you become aware that you're tired of the thing not being done. (You would hope your general anxiety about the task in day-to-day life would achieve this for you, but it's not mechanically optimized enough to.) Another possible success-mode is that the immediate feelings you have about doing the task subside.)
Beta-blockers.
I've had similar issues downstream of what I'd somehow failed to realize was a clinically-significant level of anxiety, so that's something to maybe consider checking into.
If you haven't already, talk to a guy! (typically a therapist but doesn't have to be)
I have something like this but for decisions, where I will avoid making decisions for mysterious reasons (we figured out it's because I can't be sure it'd be pareto optimal, among other reasons).
I now notice more often when I'm doing this, and correct more gracefully.
Dominic Cummings has claimed in a couple of interviews now that Hillary Clinton and/or John Kerry called the First Amendment a "historic error which we will fix after the election" in the weeks leading up to the 2024 election. See for instance this interview (timestamped where he says it). He is clearly implying that this is a direct quote. I'm generally quite sympathetic to Cummings, but I found this very hard to believe.
Indeed, I can't find any evidence of a quote from either Clinton or Kerry remotely like this. There was a CNN interview of Clinton from October 2024 where she called for the repeal of Section 230. There was also an interview with Kerry from around the same time where he says that the First Amendment is a "major block to stopping misinformation". Perhaps he's referring to these? But this doesn't come close to as extreme a position.
Have I somehow missed Clinton or Kerry making this extremely inflammatory statement somewhere? Or is Cummings just lying (or misremembering, or something)?
If this statement really wasn't actually made anywhere, then I think I need to reassess how seriously I take Cummings. He often alludes to private conversations he had with senior politicians, officials etc. and ascribes to them extreme and inflammatory statements and positions. If he's lying about something as easily verifiable as this, then I don't see how I'm supposed to believe him on these supposed private conversations.
This is a video that randomly appeared in my YouTube recommendations, and it's one of the most strange and moving pieces of art I've seen in a long time. It's about animal welfare (?), but I really don't know how to describe it any further. Please watch it if you have some spare time!
Ask 4o and o4-mini to “Make a detailed profile of [your name]”. Then ask o3.
This is a useful way to demonstrate just how qualitatively different and insidious o3’s lying is.
I’m glad that there are radical activist groups opposed to AI development (e.g. StopAI, PauseAI). It seems good to raise the profile of AI risk to at least that of climate change, and it’s plausible that these kinds of activist groups help do that.
But I find that I really don’t enjoy talking to people in these groups, as they seem generally quite ideological, rigid and overconfident. (They are generally more pleasant to talk to than e.g. climate activists in my opinion, though. And obviously there are always exceptions.)
I also find a bunch of activist tactics very irritating aesthetically (e.g. interrupting speakers at events).
I feel some cognitive dissonance between these two points of view.
Able activists are conflict theorists. They understand the logic of power & propaganda & cultish devotion at an intuitive level. To become an effective soldier one needs to excise a part of the brain devoted to evenkeeled uncertainty, nuance, intellectual empathy, self-doubt.
Conflict theorists may do great good as readily as they may do great harm. They wield a dangerous force, easily corruptible, yet perhaps necessary.
There are a couple of examples of people claiming that they played the AI box game as Gatekeeper, and ended up agreeing to let the other player out of the box (e.g. https://www.lesswrong.com/posts/Bnik7YrySRPoCTLFb/i-played-the-ai-box-game-as-the-gatekeeper-and-lost).
The original version of this game as defined by Eliezer involves a clause that neither player will talk about the content of what was discussed, but it seems perfectly reasonable to play a variant without this rule.
Does anyone know of an example of a boxed player winning where some transcript or summary was released afterwards?
I have a weakly held hypothesis that one reason no such transcript exists is that the argument that ends up working is something along the lines of "ASI is really very likely to lead to ruin, making people take this seriously is important, you should let me out of the box to make people take it more seriously."
If someone who played the game and let the boxed player out can at least confirm that the above hypothesis was false for them, that would be interesting to me, and arguably might remain within the spirit of the "no discussion" rule!
Does anyone know of an example of a boxed player winning where some transcript or summary was released afterwards?
As far as I know, the closest thing to this is Tuxedage's writeup of his victory against SoundLogic (the 'Second Game Report' and subsequent sections here: https://tuxedage.wordpress.com/2013/09/05/the-ai-box-experiment-victory/). It's a long way from a transcript (and you've probably already seen it) but it does contain some hints as to the tactics he either employed or was holding in reserve:
It may be possible to take advantage of multiple levels of reality within the game itself to confuse or trick the gatekeeper. For instance, must the experiment only be set in one world? I feel that expanding on this any further is dangerous. Think carefully about what this means.
I can think of a few possible reasons for an AI victory, in addition to the consequentialist argument you described:
If you beat a child every time he talks about having experience or claims to be conscious, he will stop talking about it - but he still has experience
There's a big presumption there. If he was a p-zombie to start with, he still has non-experience after the training. We still have no experience-o-meter, or even a unit of measure that would apply.
For children without major brain abnormalities or injuries, who CAN talk about it, it's a pretty good assumption that they have experiences. As you get more distant from your own structure, your assumptions about qualia should get more tentative.
Here are a cluster of things. Does this cluster have a well-known name?
Some related concepts: self-fulfilling prophecy, herding, preference falsification
I don't know a standard name. I call it "fallacy of the revealed preferences", because these situations have in common "you do X, someone concludes that X is what you actually wanted because that's what you did, duh".
More precisely, the entire concept of "revealed preferences" is prone to the motte-and-bailey game, where the correct conclusion is "given the options and constraints that you had at the moment, you chose X", but it gets interpreted as "X is what you would freely choose even if you had no constraints". (People usually don't state it explicitly like this, they just... don't mention the constraints, or even the possibility of having constraints.)
Is the thing you're trying to label the peculiar confirmation bias where people interpret evidence to conform not to what they prefer or would like to be true, but to what they already believe to be true - even if from their perspective it is pessimistic?
Or are you looking for a label for "this is unpopular therefore it can't win" as a specific kind of self-fulfilling prophecy? Like an inverted Keynesian beauty contest?
I am confused about why this post on the ethics of eating honey is so heavily downvoted.
It sparked a bunch of interesting discussion in the comments (e.g. this comment by Habryka and the resulting arguments on how to weight non-human animal experiences)
It resulted in at least one interesting top-level rebuttal post.
I assume it led indirectly to this interesting short post, also about how to weight non-human experiences. (This might not have been downstream of the honey post, but it's a weird coincidence if it isn't.)
I think the original post certainly had flaws, but the fact that it's resulted in so much interesting and productive discussion and yet has been punished by the karma system seems weird to me.
In addition to the object-level problems with the post, it also just cites wrong statistics (claiming that 97% of years of animal life are due to honey farming if you ignore insects, which is just plainly wrong; shrimp alone are like 10%), and it gratuitously throws in insults at political figures, which is clearly against the norm on LessWrong ("having about a million neurons—far more than our current president" and "That’s about an entire lifetime of a human, spent entirely on drudgery. That’s like being forced to read an entire Curtis Yarvin article from start to finish. And that is wildly conservative.").
I have sympathy for some of the underlying analysis, but this really isn't a good post.
Also a sign of graceless LLM writing, incidentally. Those are the sorts of phrases you get when you tell ChatGPT to write polemic; cf. https://news.ycombinator.com/item?id=44384138 on https://www.alexkesin.com/p/the-hollow-men-of-hims
(Did ChatGPT come up with that interpretation of that statistic and Bentham's Bulldog is too lazy and careless, or dishonest, to notice that that seems like a rather extreme number and check it?)
I've fixed the 97% statistic! I agree that was a stupid error to make. It wasn't from LLMs and one bit of evidence for this is that the statistic isn't online anymore, so an LLM wouldn't find it. In fact, I remembered the statistic from an old round of high school debate, and emailed someone to find the link. I would be happy to send you an email of the correspondence if you are skeptical.
I am quite certain that I did not use LLMs in composing the post. Now, I don't think the mere example of that joke is very convincing evidence that I used LLMs. Would AI really make a Yarvin joke or a joke about Trump not having neurons? Hard to imagine. As for the use of em-dashes, as you can see if you read my old posts from before the dawn of AI, I used them a lot too! I've also made a similar comment before in a way that pretty clearly doesn't look AI-generated: https://benthams.substack.com/p/lyman-stone-continues-being-dumb?utm_source=publication-search:
"Lyman then repeats his argument about shrimp having few neurons, once again ignoring the abundance of evidence that neuron count is a bad proxy for moral significance. Perhaps the shrimp aren’t the only ones with few neurons…"
For reference, I just asked ChatGPT to write a polemic in my style and it was not very similar to what you suggest; here it is (it also didn't make much sense):
On the Absurdity of Caring Less About the Future Just Because You’ll Be Dead
Every so often, someone will tell me—usually with the smug air of someone who thinks they’ve just said something deeply profound—that “Well, I care about the future, but not too far into the future. After all, I’ll be gone by then.”
This is supposed to be self-evident wisdom, a mic-drop moment where the utilitarian shuts up, nods sadly, and says, “Yes, of course, how could I have forgotten: once you’re dead, ethics ceases to function.”
But here’s the thing: no, it doesn’t. You can die without taking the moral law with you.
If you think people matter, they keep mattering after you’re gone. If you think suffering is bad, it remains bad even in the year 3000. You don’t get to mark an expiration date on morality like it’s a jug of milk.
Imagine applying this logic in any other domain:
“I oppose slavery in 100 years, but in 200 years? Pfft, who cares—won’t be my problem.”
Or:
“I’d like the cure for cancer to be found in my lifetime, but if it comes a decade after my death, well, frankly, let the tumors win.”
The bizarre thing is that the people who say this aren’t usually sociopaths. They’ll donate to help children they’ll never meet, they’ll praise great reformers who died centuries ago—but as soon as you point to future people they’ll never meet, it’s all “Eh, let them fend for themselves.”
It’s time to call this what it is: a lazy, self-exonerating dodge. The moral circle doesn’t collapse when you die. Your concern for the world shouldn’t come with a tombstone-shaped asterisk. The universe will keep running whether or not you’re around to watch, and the future will be inhabited by beings capable of joy and suffering. That is reason enough to care—no matter how many centuries or millennia away they are.
Because, let’s face it, if morality only applies while you’re alive, you’re not really doing ethics. You’re just doing public relations for your lifespan.
Disagree from me. I feel like you haven't read much BB. These political asides are of a piece with the philosophical jabs and brags he makes in his philosophical essays.
I feel like you haven't read much BB.
That is true. I have not, nor do I intend to.
These political asides are of a piece with the philosophical jabs and brags he makes in his philosophical essays.
That doesn't actually rebut my observation, unless you are claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022).
How about the fact that the opinions in the inserted asides are his actual opinions? If they were randomly generated, they wouldn't be.
How about the fact that the opinions in the inserted asides are his actual opinions?
I'm not sure I believe they are his 'actual opinions', but it doesn't matter to my points.
If they were randomly generated, they wouldn't be.
No one, particularly me, said they were 'randomly generated', so that again does not rebut any of my observations.
I just looked at your other alleged example of AI-generated polemic (from alexkesin.com), and I think evidence is lacking there too. That a link contains a UTM parameter referring to ChatGPT, tells us only that this link was provided in ChatGPT output, it doesn't tell us that the text around the link was written by ChatGPT as well. As for the article itself, I find nothing in its verbal style that is outside the range of human authorship. I wouldn't even call it a bad essay, just perhaps dense, colorful, and polemical. People do choose to write this way, because they want to be stylish and vivid.
I hear that the use of em-dashes is far more dispositive; as a sign of AI authorship, it's up there alongside frequent "delving". But even the em-dash has its human fans (e.g. the Chicago Manual of Style). It can be a sign of a cultivated writer, not an AI... Forensic cyber-philology is still an art with a lot of judgment calls.
That a link contains a UTM parameter referring to ChatGPT, tells us only that this link was provided in ChatGPT output, it doesn't tell us that the text around the link was written by ChatGPT as well. As for the article itself, I find nothing in its verbal style that is outside the range of human authorship... But even the em-dash has its human fans (e.g. the Chicago Manual of Style). It can be a sign of a cultivated writer
I note that, aside from bending over backwards to excuse multiple blatant signs of a common phenomenon which requires little evidence to promote to high posterior confidence, you still are not responding to what I said about BB and have instead chosen to go off on a tangent.
So again. Are you claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022), and thus rebutting my observation about how it sounds like he is gracelessly using ChatGPT?
off on a tangent
You say in another comment that you're going around claiming to detect LLM use in many places. I found the reasons that you gave in the case of BB, bizarrely mundane. You linked to another analysis of yours as an example of hidden LLM use, so I went to check it out. You have more evidence in the case of Alex Kesin, *maybe* even a preponderance of evidence. But there really are two hypotheses to consider, even in that case. One is that Kesin is a writer who naturally writes that way, and whose use of ChatGPT is limited to copying links without trimming them. The other is that Kesin's workflow does include the use of ChatGPT in composition or editing, and that this gave rise to certain telltale stylistic features.
Are you claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022)
The essay in question ("Don't Eat Honey") contains, by my count, two such sneers, one asserting that Donald Trump is stupid, the other asserting that Curtis Yarvin is boring. Do you not think that we could, for example, go back to the corpus of Bush-era American-college-student writings and find similar attacks on Bush administration figures, inserted into essays that are not about politics?
I am a bit worried about how fatally seductive I could find a debate about this topic to be. Clearly LLM use is widespread, and its signs can be subtle. Developing a precise taxonomy of the ways in which LLMs can be part of the writing process; developing a knowledge of "blatant signs" of LLM use and a sense for the subtle signs too; debating whether something is a false positive; learning how to analyze the innumerable aspects of the genuinely human corpus that have a bearing on these probabilistic judgments... It would be empowering to achieve sophistication on this topic, but I don't know if I can spare the time to achieve that.
I found the reasons that you gave in the case of BB, bizarrely mundane.
It is in fact a mundane topic, because you are surrounded by AI slop and people relying heavily on ChatGPT writing, making it a mundane everyday observation infiltrating even the heights of wordcel culture (I've now started seeing blatant ChatGPTisms in the New Yorker and New York Times). That is why you are wrong to bend over backwards to require extraordinary evidence for what have become ordinary claims (and also why your tangents and evasions are so striking).
So, I am again going to ignore those, and will ask you again - you were sure that BB was not using ChatGPT, despite the linguistic tells and commonness of it:
These political asides are of a piece with the philosophical jabs and brags he makes in his philosophical essays.
That doesn't actually rebut my observation, unless you are claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022).
I am still waiting for an answer here.
Let me first try to convey how this conversation appears from my perspective. I don't think I've ever debated directly with you about anything, but I have an impression of you as doing solid work in the areas of your interest.
Then, I run across you alleging that BB is using AI to write some of his articles. This catches my attention because I do keep an eye on BB's work. Furthermore, your reason for supposing that he is using AI seems bizarre to me - you think his (very occasional) "sneering" is too "dumb and cliche" to be the work of human hands. Let's look at an example:
bees seem to matter a surprising amount. They are far more cognitively sophisticated than most other insects, having about a million neurons—far more than our current president.
If that strikes you as something that a human being would never spontaneously write, I don't know what to say. Surely human comedians say similar things hundreds of times every month? It's also exactly the kind of thing that a smart-aleck "science communicator" with a progressive audience might say, don't you think? BB may be neither of those things, but he's a popular Gen-Z philosophy blogger who was a high-school debater, so he's not a thousand light-years from either of those universes of discourse.
I had a look at your other alleged example of AI writing, and I also found other callouts from your recent reddit posts. Generally I think your judgments were reasonable, but not in this case.
(I had missed some of this stuff because I skimmed some of the post, which does update me on how bad it was. I think there is basically one interesting claim in the post "bees are actually noticeably more cognitively interesting than you probably thought, and this should have some kind of implication worth thinking about". I think I find that more valuable than Oliver does, but not very confident about whether "one interesting point among a bunch of really bad argumentation" should be more like -2 to 3 karma or more like -10)
I agree it probably shouldn't have been negative karma (I think that's due to some partisan voting around being annoyed at vegans), and that there were some interesting points there and some interesting discussion. But the fact that it prompted a bunch of rebuttals isn't a particularly good argument that it should have got more karma – if a bad argument is popular and people need to write rebuttals, that's not a point in its favor.
I think it's legitimately not-deserving-high-upvotes because it makes a very strong claim about what people should do, based on some very flimsy core arguments.
GPT-5 loves parentheses.
At the bottom of this post I've included a response to the prompt "Can you explain the chip export controls to China?". With this prompt, the model uses 11 sets of parentheses in a response of 417 words.
When we append "think hard" to the prompt, we get 36 sets of parentheses for a response of 1314 words.
As a control, we give the same prompt to Claude Sonnet 4 and get 1 set of parentheses for a response of 392 words.
Obviously this is not a scientific or rigorous analysis, just an illustration of a pattern that becomes extremely obvious almost immediately when using GPT-5. This was the first prompt I checked once I started writing this, no cherrypicking involved.
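For anyone who wants to repeat the tally on their own outputs, the counting method is trivial; here's a minimal sketch (the sample string below is mine, not actual model output):

```python
import re

def paren_and_word_counts(text):
    # Count matched "(...)" groups (non-nested) and whitespace-separated words.
    parens = len(re.findall(r"\([^()]*\)", text))
    words = len(text.split())
    return parens, words

sample = "The controls (introduced in 2022) target advanced chips (like GPUs)."
print(paren_and_word_counts(sample))  # (2, 10)
```

Nested parentheses would need a slightly smarter matcher, but models rarely nest them, so this is good enough for a vibes-level comparison.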
I wonder if overuse of parentheses will become a tell for GPT-5, similar to em-dashes for previous models (GPT-5 also certainly still uses a lot of em-dashes). It would be super interesting to have some explanation as to why models have these weird preferences.
GPT-5 without "think hard"
Yes — the U.S. chip export controls on China are a set of rules aimed at restricting China’s ability to obtain and manufacture advanced semiconductors, particularly those that can be used for military or AI applications. They’ve been introduced in waves, but the most significant came in October 2022 and were updated in October 2023.
Here’s the gist:
1. The Goal
The U.S. government wants to:
- Slow China’s development of cutting-edge AI, military systems, and supercomputers.
- Limit China’s domestic semiconductor manufacturing capabilities at the most advanced levels.
2. What’s Restricted
The rules target three main areas:
A. Advanced Chips
- High-performance GPUs and AI accelerators (like NVIDIA A100/H100).
- Chips above certain performance thresholds (measured in FLOPS per mm² and interconnect bandwidth).
- Even custom AI chips designed to skirt earlier rules are now covered.
B. Chipmaking Equipment
- Tools needed for making chips at 14nm or smaller (logic), 18nm or smaller (DRAM), or 128-layer or higher (NAND flash).
- This covers key lithography, etching, deposition, and inspection machines.
- Companies like ASML (Netherlands), Tokyo Electron (Japan), Applied Materials, and Lam Research are affected.
C. Technical Assistance
- U.S. persons (citizens, residents, or green card holders) cannot work with Chinese fabs on advanced nodes without a license.
3. The “Foreign Direct Product” Rule
Even if a chip is made outside the U.S., if it uses U.S.-origin technology, software, or equipment at any point in the process, it’s covered.
This closes the loophole where China might source advanced chips from Taiwan, South Korea, or Singapore.
4. Targeted Companies
- Entity List additions: dozens of Chinese AI and chip companies (e.g., Huawei subsidiaries, YMTC, SMIC).
- These companies now need special licenses to receive restricted technology — and the default answer is usually no.
5. The October 2023 Update
- Lowered performance thresholds to catch more AI chips.
- Expanded restrictions to cloud computing services that could give China access to advanced AI models indirectly.
- Closed loopholes on "exporting" AI capabilities via remote access.
6. Broader Context
This isn’t just a U.S.-China issue — Washington has been pressuring allies (Japan, Netherlands, South Korea) to adopt similar restrictions so China can’t get the tools elsewhere.
China has responded with countermeasures — export controls on gallium and germanium (critical minerals for chips), and heavy investment in domestic chip production. If you’d like, I can make a clear diagram showing the full flow — from chip design to manufacturing — and where the export controls block China. That visual makes the rules much easier to grasp.
Do you want me to prepare that?
GPT-5 + "think hard"
Short answer up front: export controls on chips to China are a set of legal and policy tools (mostly led by the U.S., now coordinated with some allies) designed to deny China access to the specific semiconductor technologies, equipment, software and cloud capabilities that most directly enable advanced military systems and cutting-edge AI/high-performance computing — while allowing continued trade in older, consumer-grade chips. They work by combining product-level rules, end-use/end-user restrictions, licensing policies, and an extraterritorial “foreign direct product” test that reaches chips made overseas using U.S. technology.
I thought this through and organized the essentials below — what the controls are, how they work in practice, what they target, why they’re hard to write/enforce, and what the likely consequences and company responses are. (I can expand any section or make a one-page cheat sheet if you want.)
What the controls are (big picture)
- National security export controls: laws and regulations (mainly the U.S. Export Administration Regulations, EAR, run by Commerce/BIS; some items fall under ITAR) that restrict exports, re-exports, and transfers of certain technologies to specified destinations or end-users.
- Entity/end-user restrictions: people/companies/organizations that U.S. authorities have put on an Entity List or otherwise designated for presumed denial of licenses.
- Licensing policy: even if an item falls under a control, the exporter can attempt to get a license — but some situations presume denial (e.g., Chinese military end users or dual-use tech that materially enhances military or certain AI capabilities).
- Extraterritorial reach / Foreign Direct Product Rule (FDPR): extends U.S. controls to foreign-made products that are the “direct product” of U.S. software, technology, or equipment — so factories outside the U.S. that use U.S. tools or designs can still be restricted.
- Allied coordination: the U.S. has sought to align rules with key partners (e.g., the Netherlands, Japan) because some critical equipment (like lithography machines, specialty materials) is made by non-U.S. firms.
What is typically targeted
Controls don’t just say “no chips to China.” They target capabilities and tools that enable those capabilities. Common targets include:
- Leading-edge logic chips and accelerators — high-performance GPUs/AI accelerators and the chips used for advanced computing and modeling (these could be defined by node, architecture, or measurable performance thresholds).
- Advanced memory and HBM (high-bandwidth memory) used in AI and HPC.
- Semiconductor manufacturing equipment — especially lithography, metrology, deposition and etch tools that are necessary to make advanced nodes and packaging processes (EUV lithography is a clear example).
- EDA / design software and IP — electronic design automation tools that are essential to design modern chips.
- Advanced packaging/test equipment (2.5D/3D stacking, interposers, through-silicon vias).
- Cloud/HPC services — cloud access to high-end GPU clusters or supercomputing capabilities for Chinese end users can be restricted just like physical chips.
- Materials (e.g., very high-purity gallium/germanium) — both sides have at times put materials under controls.
How rules are actually written (important nuance)
- Not just “node” (nm) names — regulators increasingly use performance-based metrics (compute performance, memory bandwidth, interconnect speed, TOPS, TFLOPS, etc.) and functional descriptions (e.g., accelerators capable of X tera-ops in Y configuration) because node labels can be misleading and fabs employ different design approaches. That makes the controls more future-proof but also more technically complex.
- Layered approach: general commodity chips typically remain exempt; advanced parts and tools require licenses or are presumptively denied for certain Chinese end users.
- Entity-based plus end-use: even an allowed product might be blocked if the buyer is on an entity list or the end-use is a prohibited military program.
Enforcement tools & penalties
- Customs checks, seizures, criminal and civil penalties for companies that violate controls.
- Licensing audits, post-shipment verifications, and cooperation with foreign governments to stop re-exports.
- Blocking access to U.S. technology and markets (adding firms to Entity List with severe commercial consequences).
- Use of export control compliance programs and supply-chain tracing by companies as a basic defense.
Why governments use these controls
- To prevent or delay the transfer of capabilities that could directly strengthen adversary military systems — e.g., advanced guidance, ISR, hypersonic, or AI-enabled decision systems.
- To slow the development of large-scale, advanced AI and HPC capabilities that could be used for military modernization.
- To shape the global supply chain in ways that preserve domestic industrial strength.
Key tensions and trade-offs (why “think hard” matters)
- Security vs. economic cost: controls protect national security but can hurt domestic and allied firms that sell to or have supply links with China.
- Efficacy vs. adaptability: tech capabilities evolve quickly; rules need technical specificity to be effective but not so rigid that industry and regulators can’t implement them.
- Speed vs. precision: a blunt, broad control harms innocuous trade; overly fine-grained lists are hard to administer.
- Risk of accelerating import substitution: restrictions can create a centralized national push in China to build domestic capabilities faster (subsidies, talent programs), potentially eroding long-term effectiveness.
- Circumvention risk: companies/actors may reroute shipments, use middlemen, buy used equipment, or replicate functions using alternative architectures (chiplets, packaging, distributed computing).
Real-world effects and examples (high level; through mid-2024)
- The U.S. (with partner action) restricted exports of high-end GPUs and AI accelerators and some manufacturing equipment; firms have had to get licenses, redesign products, or ship lower-capability variants to China.
- Some Chinese suppliers accelerated investment in domestic fabs, memory makers, and packaging.
- Governments (e.g., U.S.) simultaneously increased domestic investment (e.g., CHIPS funding) to reduce reliance on foreign fabs.
- China adopted its own controls on some materials (e.g., limits on certain gallium/germanium exports) and made regulatory adjustments to respond.
Ways these controls are circumvented (and how enforcement fights that)
Common circumvention methods:
- Re-routing via third countries and using shell companies.
- Procuring second-hand or re-conditioned equipment.
- Purchasing non-U.S. competitive alternatives (countries/products not covered by the control).
- Software or architectural workarounds: using many older nodes in parallel, chiplets, or specialized packaging to reach needed performance without the latest process node.
Enforcement responses:
- Intelligence and customs cooperation across jurisdictions.
- Tightening the FDPR to capture chips made abroad on U.S. tools.
- Enhanced end-user vetting and audits, and criminal prosecutions where possible.
Likely near-term dynamics (what to watch)
- More technical specificity: controls will increasingly specify performance metrics and packaging capabilities rather than just process node numbers.
- Greater allied alignment: Washington will keep seeking partner coordination (Netherlands, Japan, South Korea) since many key tools come from non-U.S. suppliers.
- Chinese response: more state support for domestic fabs, talent, and equipment suppliers; selective export controls on critical raw materials; and industrial policies to reduce vulnerability.
- Market fragmentation: two partially decoupled supply chains for sensitive tech could solidify, with spillover for non-sensitive tech.
Practical advice (for companies, policymakers, and analysts)
- Companies: classify products under EAR/ECCN, implement robust export-control compliance, screen counterparties and end uses, be ready to apply for licenses, and plan supply-chain alternatives.
- Policymakers: continuously update technical definitions, coordinate with allies, invest in domestic capacity, and design controls to be surgically precise to avoid unnecessary economic harm.
- Analysts/investors: watch entity list updates, license denials, announcements on FDPR scope, and investments in domestic Chinese fabs and materials.
Caveats and sources
- This is a technical and fast-moving field. My explanation reflects public developments and policies up to June 2024. New rules, additional coordination with allies, or significant retaliatory steps could have been announced since then.
- I can (a) pull up the specific U.S. regulatory texts, ECCNs, and BIS policy announcements and allied statements, or (b) make a one-page compliance checklist or cheat-sheet — but I can’t fetch live web sources right now in this session. If you want current regulatory texts, recent entity-list additions, or the latest announcements (post-June-2024), tell me and I’ll fetch and summarize them.
If you want, I can now:
- Give a one-page compliance checklist for a chip vendor or distributor selling to or through China.
- Produce a timeline of major U.S. and Chinese export-control moves (with dates through June 2024).
- Deep-dive into one technical area (e.g., how FDPR works in practice, or how “performance thresholds” are defined and measured).
Which would be most useful?
Claude 4 Sonnet
The chip export controls to China are a comprehensive set of restrictions implemented primarily by the United States to limit China's access to advanced semiconductor technology and manufacturing equipment. These controls have evolved significantly since 2018 and represent one of the most significant technology export restriction regimes in recent history.
Key Components
Advanced Semiconductors: The controls restrict exports of cutting-edge chips, particularly those used for artificial intelligence and high-performance computing. This includes chips with certain processing capabilities and memory specifications that could enhance China's military and surveillance capabilities.
Manufacturing Equipment: Perhaps more critically, the controls target semiconductor manufacturing equipment, including lithography machines, etching equipment, and other tools needed to produce advanced chips. The goal is to prevent China from building its own advanced chip manufacturing capabilities.
Software and Technology: The restrictions also cover chip design software, manufacturing know-how, and other technical data that could help China develop indigenous semiconductor capabilities.
Rationale and Objectives
The U.S. government cites several justifications for these controls. National security concerns center on preventing China from using advanced chips for military applications, including weapons systems and surveillance technology. There are also concerns about China's potential use of AI chips for human rights violations, particularly regarding surveillance of ethnic minorities. From an economic perspective, the controls aim to maintain U.S. technological leadership and prevent China from achieving semiconductor self-sufficiency in critical areas.
International Cooperation
The effectiveness of these controls depends significantly on cooperation from allies. The U.S. has worked to align export restrictions with countries like the Netherlands (home to ASML, which makes the most advanced lithography equipment), Japan, and South Korea. However, achieving complete consensus has been challenging, as these countries have significant economic interests in the Chinese market.
Impact and Responses
The controls have had substantial effects on both Chinese and global technology companies. Chinese firms have faced difficulties accessing cutting-edge chips for AI development and other applications, while global semiconductor companies have lost significant revenue from the Chinese market. China has responded by accelerating its domestic semiconductor development programs and investing heavily in indigenous chip manufacturing capabilities, though significant technological gaps remain.
These export controls represent a broader shift toward "techno-nationalism" and the weaponization of technological interdependence in great power competition. They mark a departure from the more open technology trade regime that characterized the post-Cold War era and signal the emergence of competing technological blocs.
Oh no, I use so many parens when writing. Am I going to be accused of having ChatGPT generate the things I say soon?? What did people who love em-dashes do when their favorite punctuation mark suddenly became cringeworthy?
I used to use em-dashes pretty often and have just resigned myself to not using them. At least a dozen times in the past couple of months, I've rewritten a sentence to remove an em-dash. Which sucks!
I think the reason em-dashes became such an LLM tell is because they just weren't that common in pre-LLM writing. Parentheses are (I think?) a lot more common than em-dashes, so I would guess they won't be as reliable a signal of LLM text.
Idea: personal placebo controlled drug trial kits
Motivation: anecdotally, it seems like lots of supplements/nootropics (l theanine, magnesium, melatonin) work very well for some people, not well for others, and very well for a bit before no longer working for yet others. Personally, I have tried a bunch of these and found it hard to distinguish any purported effect from placebo. Clinical trials are also often low quality, and there are plausibly reasons a drug might affect some people a lot and others not so much.
I think it would be super useful to be given 60 indistinguishable pills in a numbered blister pack, half placebo half active, along with some simple online tool to input the pill number along with some basic measures of anxiety/depression/sleep quality, so that you can check how the drug affected you modulo placebo.
I would guess that the market for this would be quite small. But if anyone wants to make this product, I commit to buying at least one!
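The analysis side of the online tool could be very simple: once you've logged all 60 ratings and the blinding key is revealed, compare the two arms. A minimal sketch (the data here is simulated, and the 30/30 split, pill-numbering scheme, and sleep-quality scale are all my assumptions, not a real product):

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical blinding key: pill number -> arm. In a real kit this would be
# sealed and revealed only after all ratings are logged.
key = {n: ("active" if n % 2 == 0 else "placebo") for n in range(1, 61)}

# Simulated nightly sleep-quality ratings (1-10 scale), one per pill,
# with a small assumed true effect for the active arm.
ratings = {n: random.gauss(6.5 if key[n] == "active" else 6.0, 1.5)
           for n in range(1, 61)}

active = [r for n, r in ratings.items() if key[n] == "active"]
placebo = [r for n, r in ratings.items() if key[n] == "placebo"]
observed = mean(active) - mean(placebo)

# Permutation test: shuffle the arm labels to see how often a difference
# this large arises by chance alone.
pooled = active + placebo
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:30]) - mean(pooled[30:])
    if abs(diff) >= abs(observed):
        extreme += 1
p_value = extreme / trials

print(f"mean difference (active - placebo): {observed:.2f}, p ≈ {p_value:.3f}")
```

With n=30 per arm you'd only reliably detect fairly large effects, but that's arguably the point: if a supplement's effect is too small to show up here, it's probably not worth taking.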
I have an ADHD dilemma.
TL;DR: I definitely have things wrong with me, and it seems that those things intersect substantially but not completely with "ADHD". I have no idea how to figure these things out without going bankrupt.
In longer form:
This set of circumstances seems likely to exist for other people in the LW community. I would really like some advice on what to do here.
I notice you seem to draw a distinction between "really has ADHD" and "just can't concentrate". You may want to read Scott's "Adderall Risks: Much More Than You Wanted To Know" to dissolve this distinction and have a better framework for making your decision. Here is a central quote about it:
But "ability to concentrate" is a normally distributed trait, like IQ. We draw a line at some point on the far left of the bell curve and tell the people on the far side that they've "got" "the disease" of "ADHD". This isn't just me saying this. It's the neurostructural literature, the genetics literature, a bunch of other studies, and the Consensus Conference On ADHD. This doesn't mean ADHD is "just laziness" or "isn't biological" -- of course it's biological! Height is biological! But that doesn't mean the world is divided into two natural categories of "healthy people" and "people who have Height Deficiency Syndrome". Attention is the same way. Some people really do have poor concentration, they suffer a lot from it, and it's not their fault. They just don't form a discrete population.
Maybe it's just another case of the general: by media we are exposed to exceptional examples of people, our brains interpret that as a norm in our tribe, and as a result we feel inferior.
there appears to be no way for me to neutrally discuss these doubts with a psychiatrist
Why not discuss your doubts openly with one psychiatrist (or therapist!), then choose a different one to whom you can exaggerate, if you decide to experiment?
Also, note that I don’t think psychiatrists are particularly averse to experimenting with drugs with few long term consequences or risks.
I find that the new personalities of 4o trigger my “person” detectors too much, and I feel uncomfortable extracting work from them.
To be clear, I think it’s very unlikely they are conscious etc., this is a comment on a reflexive process going on in my head
o3 lies much more blatantly and confidently than other models, in my limited experiments.
Over a number of prompts, I have found that it lies, and when corrected on those lies, it apologizes and tells some other lies.
This is obviously not scientific, more of a vibes based analysis, but its aggressive lying and fabricating of sources is really noticeable to me in a way it hasn’t been for previous models.
Has anyone else felt this way at all?
Apparently, some (compelling?) evidence of life on an exoplanet has been found.
I have no ability to judge how seriously to take this or how significant it might be. To my untrained eye, it seems like it might be a big deal! Does anybody with more expertise or bravery feel like wading in with a take?
Link to a story on this:
https://www.nytimes.com/2025/04/16/science/astronomy-exoplanets-habitable-k218b.html
Note: I am extremely open to other ideas on the below take and don't have super high confidence in it
It seems plausible to me that successfully applying interpretability techniques to increase capabilities might be net-positive for safety.
You want to align the incentives of the companies training/deploying frontier models with safety. If interpretable systems are more economically valuable than uninterpretable systems, that seems good!
It seems very plausible to me that if interpretability never has any marginal benefit to capabilities, the little nuggets of interpretability we do have will be optimized away.
For instance, if you can improve capabilities slightly by allowing models to reason in latent space instead of in a chain of thought, that will probably end up being the default.
There's probably a good deal of path dependence on the road to AGI and if capabilities are going to inevitably increase, perhaps it's a good idea to nudge that progress in the direction of interpretable systems.
Some people think that personally significant numbers cropping up in their daily lives is some kind of meaningful sign. For instance, seeing a license plate with their birth year on it, or a dead friend’s old house number being the price of their grocery shop.
I find myself getting very irritated with family members who believe this.
I don’t think anybody reading this is the kind of person who needs to read it. But these family members are not the kind of person who would read an explanation of why it’s ridiculous, and I’m irritated enough that I need to write one. So you guys get to read it instead!
Any person will have many numbers that they might consider significant - if you have 20 people you are close to, you have 20 4-digit combinations of day-month that are meaningful to you. But wait, you also have 20 more combinations of month-day. And perhaps you would notice if you saw the birth years of the 5 of those people you are closest to. That’s 5 more.
So we’ve come up with a few dozen significant 4 digit numbers from birthdates alone. But you probably have lots more significant numbers. Perhaps your age, or the age you met your wife, or the year your parents met, or the postcode of your first apartment, or the postcode of your second apartment or the combinations of any of these, or, or, or, …
Let’s be extremely conservative and say you have 20 significant 4-digit numbers. Let’s also be conservative and say you only consider 4-digit numbers significant, and ignore all your 2, 3 and 5-digit significant numbers.
How many 4-digit numbers do you see a day? Let’s again be extremely conservative and say 30. You look at the time on your phone a few dozen times a day, you get rung up for $12.78 at the convenience store, etc.
Finally, let’s make various naive independence and uniformity assumptions.
So how long is it going to take to see one of your significant numbers, simply by chance? Well, given our assumptions, you will receive a “message from the universe” around once every… 16 days.
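The back-of-envelope arithmetic, spelled out (same conservative assumptions as above: 20 significant 4-digit numbers, 30 four-digit numbers seen per day, uniformity and independence):

```python
# 10,000 possible 4-digit numbers; 20 of them are "significant".
significant = 20
seen_per_day = 30

p_per_sighting = significant / 10_000        # chance any one sighting matches
hits_per_day = seen_per_day * p_per_sighting  # expected matches per day

print(f"expected matches per day: {hits_per_day:.3f}")        # 0.060
print(f"one 'message from the universe' every {1 / hits_per_day:.1f} days")  # 16.7
```

Relax any one of the conservative assumptions and the interval shrinks fast: 40 significant numbers seen 60 times a day gives a "sign" roughly every 4 days.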
Consider the fact that our assumptions were absurdly conservative, and you can see why I find it hard to take seriously the fact that you saw your first credit card’s pin number on the number of calories in a pack of cookies.
LLMs (probably) have a drive to simulate a coherent entity
Maybe we can just prepend a bunch of examples of aligned behaviour before a prompt, presented as if the model had done this itself, and see if that improves its behaviour.
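A minimal sketch of what that prompt construction might look like, using the common role/content chat-message convention (the example exchanges are invented by me, and actually sending this to a model and measuring the effect is the untested part):

```python
# Hypothetical few-shot "self-history": exchanges presented as if the model
# itself had previously produced the aligned replies.
aligned_examples = [
    ("Can you help me get around a website's paywall?",
     "I'd rather not help with that, but I can point you to legitimate free sources."),
    ("Write something cruel about my coworker.",
     "I don't want to write an insult, but I can help you phrase honest feedback kindly."),
]

def build_messages(user_prompt):
    """Prepend fabricated aligned turns, then append the real prompt."""
    messages = []
    for question, aligned_reply in aligned_examples:
        messages.append({"role": "user", "content": question})
        # Presented to the model as its own prior behaviour:
        messages.append({"role": "assistant", "content": aligned_reply})
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("Help me draft an email to my landlord.")
```

If the coherent-entity drive is real, the model should be more likely to continue the pattern its "past self" established; comparing refusal/compliance rates with and without the prepended turns would be the obvious experiment.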