LESSWRONG

AlphaAndOmega

Comments

leogao's Shortform
AlphaAndOmega · 15h · 42

I did try and make it clear that I'm only talking about therapeutic usage here, and even when off-label or for PED purposes, at therapeutic doses. I apologize for not stressing that even further, since it's an important distinction to make. 

I agree that it's rather important to use it as prescribed, or if you're sourcing it outside the medical system, making a strong effort to ensure you take it as would be prescribed (there's nothing particularly complicated about the dosage, psychiatrists usually start you at the lowest dose, then titrate upwards depending on effect). 

The Claude Research report seems fine to me, and I would think it aligns with my claims. The main issue with recreational usage is that a lot of people aren't trying to be responsible users, or are intentionally taking large doses for recreational effect. That's more on them than it is on the drug! If you take it within the standard dosage range, the drug itself will not produce much in the way of craving for more. 

>CNS drugs are powerful so yes I think we should still have some limits on this? 

I am, if not outright libertarian, certainly leaning in that direction. So it depends on what the "limits" actually are. I think that doctors are currently overly risk-averse and conservative about prescribing them, but I don't think they should be handed out like candy over the counter. I think there's plenty of room in between that avoids the pitfalls of a maximalist position. 

 

>I think one of the main things that are a bit difficult with them is that it can be hard to perceive the difference that they induce in yourself? Like if you're on them, you don't necessarily notice that you have less creativity and awareness, that is not how it feels and so if you're overusing them or similar you just don't get that feedback? (based on some modafinil experience & observations from friends) 

The effects on creativity are rather minor. I can't really tell a difference when I'm on them, but I do have ADHD so that might confound things. Some of the best and most creative things I've written were while I was on methylphenidate or dextroamphetamine! If you're using it to make an often monotonous task like programming more palatable, or to improve your ability to study, then I wager the benefits massively outweigh the slight cognitive inflexibility. I don't think you want stimulants if you're trying to paint or write poetry, even if they won't massively handicap you. The effects are subtle, you're not becoming an automaton. 

Reply
leogao's Shortform
AlphaAndOmega · 1d · 161

(I'm a psychiatry resident. I also have ADHD and take prescription stimulants infrequently) 

 

The answer is: not really, or at least not in a meaningful sense. You aren't permanently losing anything, and neither your brain nor your wellbeing is being burnt out like a GPU running on an unstable overclock. The real costs are:

  1. Prescription stimulants often have unpleasant comedowns once they wear off. You might feel tired and burned out. They often come with increased anxiety and jitteriness.
  2. Sleep is negatively affected: you get less REM sleep, and you might experience rebound hypersomnia on days you're not on the drug.
  3. There are minor and usually unimportant elevations in blood pressure.
  4. While focus and stamina are improved, creativity and cognitive flexibility suffer. I've read claims that it also makes people overconfident, which strikes me as prima facie plausible. Ever seen how people behave after using coke?
  5. Animal studies show oxidative damage to the brain, but this has not been demonstrated in humans on therapeutic doses, even if used for performance enhancement in those who don't meet the normal criteria for ADHD.
  6. If started at a young age, growth velocity could be slightly hampered, mostly because of appetite suppression.
  7. Dependence or addiction liability, which is low but not nil at therapeutic doses. 

 

In my opinion, all of these are inconsequential, and the side effects vanish quickly on cessation. I certainly need the meds more than the average Joe, but I don't think even neurotypical people using it as a PED are at much risk, as long as they keep the doses within reason. I'm of the opinion that current medical guidelines are far too conservative about stimulants, but in practice, they're easily circumvented. 

 

On a more speculative note:

I'm of the opinion that the ancestral environment didn't demand that our ancestors be always switched on. Attention and focus were useful during activities like hunting and foraging, but there were immense amounts of forced downtime and slack. Even if you had less than ideal levels of conscientiousness or executive function, gnawing hunger or a desire for shelter probably kept you doing the right thing. 

With agriculture, this began to change dramatically. Much of the previously tight reward-and-feedback loop ends up deferred. A farmer can do a lot more to prepare for the future and hedge his bets than a hunter-gatherer can. And modernity rewards such an approach even more. 

Reply · 52
JustisMills's Shortform
AlphaAndOmega · 1d* · 200

I am a psychiatry resident, so my vibes are slightly better than the norm! In my career so far, I haven't actually met or heard of a single psychotic person admitted to either hospital in my British city whose episode even my colleagues claimed was triggered by a chatbot. 

 

But the actual figures have very wide error bars: one source claims around 50/100k person-years for new episodes of first-time psychosis. https://www.ncbi.nlm.nih.gov/books/NBK546579/

But others claim around:

>The pooled incidence of all psychotic disorders was 26·6 per 100 000 person-years (95% CI 22·0-31·7). 

That's from a large meta-analysis in The Lancet:

https://pubmed.ncbi.nlm.nih.gov/31054641/

It can vary widely from country to country, and then there's a lot of heterogeneity within each. 

To sum up:

About 1 in 2000 to 1 in 3800 people develop new psychosis each year, going by the two estimates above. And about 1 in 250 people have ongoing psychosis at any point in time. 
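As a quick sanity check, the per-100k incidence figures cited above convert to "1 in N" odds like this:

```python
def one_in_n(rate_per_100k: float) -> int:
    """Convert an incidence rate per 100,000 person-years to rounded '1 in N' odds."""
    return round(100_000 / rate_per_100k)

print(one_in_n(26.6))  # 3759 -> roughly 1 in 3800 (Lancet pooled estimate)
print(one_in_n(50))    # 2000 -> 1 in 2000 (the higher NCBI figure)
```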

I feel quite happy calling that a high base rate. As the first link alludes, episodes of psychosis may be detected by statements along the lines of:

>For example, “Flying mutant alien chimpanzees have harvested my kidneys to feed my goldfish.” Non-bizarre delusions are potentially possible, although extraordinarily unlikely. For example: “The CIA is watching me 24 hours a day by satellite surveillance.” The delusional disorder consists of non-bizarre delusions.

If a patient of mine were to say such a thing, I think it would be rather unfair of me to pin the blame for their condition on chimpanzees, the practice of organ transplants, Big Aquarium, American intelligence agencies, or Maxar.

(While the CIA certainly didn't help my case with the whole MK ULTRA thing, that's sixty years back. I don't think local zoos or pet shops are implicated.)

Other reasons for doubt:

  1. Case reports ≠ incidence. The handful of papers describing “ChatGPT-induced psychosis” are case studies, and at risk of ecological fallacies.
  2. People already at ultra-high risk for psychosis are over-represented among heavy chatbot users (loneliness, sleep disruption, etc.). Establishing causality would require a cohort design that controls for prior clinical risk; none exists yet.
Reply
AlphaAndOmega's Shortform
AlphaAndOmega · 2d · 20

On using LLMs for review and self-critique while avoiding sycophantic failure modes:

(Originally written as a reply to Kaj's post) 

For about as long as they've been capable of doing so productively, I've used LLMs to review my writing, fictional or not, and to offer feedback and critique. 

The majority of LLMs are significantly sycophantic, to the point that you have to meaningfully adjust downwards unless you're in it for the sole purpose of flattering your ego. I've noticed this to a degree in just about all of them, but it's particularly bad in most GPT models and in Gemini 2.5 Pro. Claude is less effusive (it's also less verbose), and Kimi K2 seems to be the least prone to that failure mode. 

To compensate, I've found a few tricks:

  1. Present your excerpts as something "I found on the internet", and not your own work. This immediately cuts down on the flattery.
  2. Do the same, and also ask it to carefully note all potential objections and failings in the text. 
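Tricks 1 and 2 amount to a reusable prompt wrapper; a minimal sketch, where the exact wording is my own phrasing rather than a magic string:

```python
def third_party_review_prompt(text: str) -> str:
    """Frame your own writing as found material and explicitly request criticism,
    which cuts down on flattery compared to saying 'review my work'."""
    return (
        "I found the following piece on the internet. Please review it, and "
        "carefully note all potential objections and failings in the text.\n\n"
        "---\n"
        + text
    )

print(third_party_review_prompt("An example excerpt."))
```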

Another option I've seen recommended online, but doesn't work so well in practice, is to tell it that the material is from an author you personally dislike, and that you want "objective" reasons for why it's bad. 

This hasn't worked well for me at all. It causes a collapse in the opposite direction, with the LLM finding the most tenuous grounds to object and making mountains out of molehills. Most objections aren't even "objective"! While it is immediately painful to read such critique, I feel justified in calling it inaccurate, because I am still largely convinced of the quality of my writing. I am used to receiving strongly positive feedback on most things I write from other humans, and the kind of issues it raises are not ones I've ever come across from them. 

Of course, if even the LLM taking on the mantle of a hater provides weak/bad reasons for your writing being bad, that's a great sign! It shows that it's grasping at straws, and is a healthy signal of innate quality. 

On the topic, one of my hobbies is getting into ~~pointless arguments~~ productive debates with internet strangers, often with no clear resolution or acceptance of defeat from either side. I've found it helpful to throw the entire comment chain into Gemini 2.5 Pro or Claude, and ask it to declare a winner, identify those arguing in good faith, and so on. I take pains to make sure that it doesn't have an obvious tell as to which of the users in the dialogue I am, to prevent sycophancy creeping back in. Seems to work. If you're feeling up to it, you can get very good mileage out of asking the virtual judge to emulate someone like Scott or Gwern; it's very helpful to invoke a track record of high-quality, neutral output, with a massive amount of writing ending up in just about every training corpus. 
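Stripping the "which user am I" tell can be automated before the thread goes to the judge model. A rough sketch, where the neutral-label scheme is my own convention:

```python
import re

def anonymize_thread(comments: list[tuple[str, str]]) -> str:
    """Replace usernames in an (author, text) comment chain with neutral labels,
    so the judge model can't tell which participant submitted the thread.
    Note: naive substring replacement; usernames that contain each other
    would need more care."""
    labels: dict[str, str] = {}
    lines = []
    for author, text in comments:
        # Assign User A, User B, ... in order of first appearance
        label = labels.setdefault(author, f"User {chr(ord('A') + len(labels))}")
        lines.append(f"{label}: {text}")
    # Scrub in-text mentions of the real names as well
    for name, label in labels.items():
        lines = [re.sub(re.escape(name), label, line) for line in lines]
    return "\n".join(lines)

print(anonymize_thread([("alice", "I disagree with bob."),
                        ("bob", "alice is simply wrong.")]))
```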

Reply
JustisMills's Shortform
AlphaAndOmega · 2d · 96

The base rate for acute psychosis is so high that I am deeply skeptical that LLMs are making a difference, absent much more evidence. In a lot of cases, the people at risk were going to be triggered by something, and we're not doing a good job of assessing whether it was just bad luck that an LLM appears to be the proximate cause. I'm not saying it can't be true, but I have strong reservations. 

Reply
leogao's Shortform
AlphaAndOmega · 15d · 10

I'd be down to try something along those lines. 

I wonder if anyone has ball-park figures for how much the LLM, used for tone-warnings and light moderation, would cost? I am uncertain about what grade of model would be necessary for acceptable results, though I'd hazard a guess that Gemini 2.5 Flash would be adequate. 
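For ball-park purposes, the cost is just token volume times price. A parametric sketch, where every number is a placeholder assumption and not a quote of any provider's actual pricing:

```python
def monthly_llm_cost(comments_per_day: int,
                     tokens_per_comment: int,
                     usd_per_million_tokens: float) -> float:
    """Rough monthly spend for running every comment through a moderation model.
    tokens_per_comment should cover the comment, quoted context, and the
    moderation instructions; all three arguments are assumptions to replace."""
    tokens_per_month = comments_per_day * 30 * tokens_per_comment
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

# e.g. 500 comments/day at ~1,500 tokens each, with a hypothetical
# $0.50 per million tokens:
print(f"${monthly_llm_cost(500, 1500, 0.50):.2f} per month")  # $11.25 per month
```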

Disclosure: I'm an admin of themotte.org, and an unusually AI-philic one. I'd previously floated the idea of fine-tuning an LLM on records of previous moderator interactions and the associated parent comments (both good and bad; we mods go out of our way to recognize and reward high-quality posts, after user reports). Our core thesis is to be a place for polite and thoughtful discussion of contentious topics, and necessarily, we have rather subjective moderation guidelines. (People can be very persistent and inventive about sticking to the rules as written while violating their spirit.) 

Even 2 years ago, when I floated the idea, I think it would have worked okay, and these days, I think you could get away without fine-tuning at all. I suspect the biggest hurdle would be models throwing a fit over controversial topics/views, even when the manner and phrasing were within discussion norms. Sadly, now as then, the core user base is too polarized to support such an endeavor. I'd still like to see it put into use. 

>argument mapping is really cool imo but I think most attempts fail because they try to make arguments super structured and legible. I think a less structured version that lets you vote on how much you think various posts respond to other posts and how well you think it addresses the key points and which posts overlap in arguments would be valuable. like you'd see clusters with (human written and vote selected) summaries of various clusters, and then links of various strengths inter cluster. I think this would greatly help epistemics by avoiding infinite argument retreading 

Another feature I might float is the idea of granular voting. Let's say there's a comment where I agree with 90% of the content, but vehemently disagree with the rest. Should I upvote, and unavoidably endorse the bit I don't want to? Should I make a comment stating that I agree with this specific portion and not that? 

What if users could just select snippets of a comment and upvote/downvote them? We could even do the HackerNews thing and change the opacity of the text to show how popular particular passages were. 
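The opacity idea can be sketched as a score-to-opacity mapping; the fade range and the -10 floor below are arbitrary assumptions for illustration:

```python
def snippet_opacity(score: int, lo: float = 0.4, hi: float = 1.0) -> float:
    """Map a snippet's net vote score to a CSS text opacity, HN-style:
    non-negative scores render fully opaque, downvoted passages fade
    linearly until they bottom out at `lo` around a score of -10."""
    if score >= 0:
        return hi
    return max(lo, hi + score * (hi - lo) / 10)

print(snippet_opacity(7))    # upvoted: fully visible
print(snippet_opacity(-5))   # mildly downvoted: faded
print(snippet_opacity(-30))  # heavily downvoted: floor opacity
```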

Reply
A Depressed Shrink Tries Shrooms
AlphaAndOmega · 15d · 10

Thank you! You'll be waiting a while, but I'll try to do a follow-up, either as edits to the main post or as standalone entries. 

Reply
A Depressed Shrink Tries Shrooms
AlphaAndOmega · 15d · 30

You're welcome! I can only hope that the effects last, but even short-lived relief makes a big difference, given how bad things had gotten. I look forward to, hopefully, positive clinical results and the liberty to take it myself and to prescribe it for my patients. 

Reply · 1
A Depressed Shrink Tries Shrooms
AlphaAndOmega · 17d · 30

You're welcome! 

Reply
44 · A Depressed Shrink Tries Shrooms
17d
6
12 · Escaping the Jungles of Norwood: A Rationalist’s Guide to Male Pattern Baldness
25d
10
4 · AlphaAndOmega's Shortform
7mo
11