Hello! My name is Laiba, I'm a 20-year-old Astrophysics student and new to LessWrong (or at least, new to having an account).
I've been into science since I could read and received a lot of exposure to futurism, transhumanism and a little rationality. I remember thinking, "This would make a lot of sense if I were an atheist."
Lo and behold, about a month ago I gave up on religion, and I was no casual Muslim! I thought now would be a good time to join LessWrong. I've read a few posts here and there, and greatly enjoyed Harry Potter and the Methods of Rationality (which is where I found out about LessWrong).
My first blog post talks a bit about my deconversion: https://stellarstreamgalactica.substack.com/p/deconversion-has-been-a-real-productivity
I'm also starting up a PauseAI student group at my university. Taking death seriously has made me rethink where I'm putting my time.
Looking forward to having interesting discussions and being able to interact with the community without the fear of sinning!
Follow-up to this experiment:
Starting 2025-06-13, I flossed only the right side of my mouth (side selected via random-number generator). On 2025-09-18 I went to the dentist and asked which side he guessed I'd been flossing. He guessed right.
Hi all! My name is Annabelle. I’m a Clinical Psychology PhD student primarily studying psychosocial correlates of substance use using intensive longitudinal designs. Other research areas include Borderline Personality Disorder, stigma and prejudice, and Self-Determination Theory.
I happened upon LessWrong while looking into AI alignment research and am impressed with the quality of discussion here. While I lack some relevant domain knowledge, I am eagerly working through the Sequences and have downloaded/accessed some computer science and machine learning textbooks to get started.
I have questions for LW veterans and would appreciate guidance on where best to ask them. Here are two:
(1) Has anyone documented how to successfully reduce—and maintain reduced—sycophancy over multi-turn conversations with LLMs? I learn through Socratic questioning, but models seem to “interpret” this as seeking validation and become increasingly agreeable in response. When I try to correct for this (using prompts like “think critically and anticipate counterarguments,” “maintain multiple hypotheses throughout and assign probabilities to them,” and “assume I will detect sycophancy”), I’ve found ...
Hi everyone, my name is Miguel,
I have a Ph.D. in NeuroSymbolic AI; my main research focuses on how to enforce logic constraints in the learning process of neural networks. I am interested in the topics of Causality, Deep Learning Interpretability, and Reasoning.
I have been a passive reader of this forum and the Alignment Forum for over a year; on the internet, I have been a passive reader my whole life. Today I finally decided to take the step of interacting with the forum. After re-reading the New User's Guide, I think the forum's philosophy fits my curiosity, eagerness to learn, and collaborative values. I hope this forum keeps being a safe place for human knowledge, and I want to help by contributing and sharing my ideas. I am both excited and hopeful that I can find like-minded people here who are trying not only to be "less wrong", but who also seek to apply their knowledge to a beneficial impact on people's lives and make the world "more better".
A crazy idea, and I wonder if someone has tried it: "All illegal drugs should be legal, if you buy them at a special government-managed shop, under the condition that you sign up for several months of addiction treatment."
The idea is that drug addicts get really short-sighted and willing to do anything when they miss the drug. Typically that pushes them to crime (often encouraged by the dealers: "hey, if you don't have cash, why don't you just steal something from the shop over there and bring it to me?"). We could use the same energy to push them towards treatment instead.
"Are you willing to do anything for the next dose? Nice, sign these papers and get your dose for free! As a consequence you will spend a few months locked away, but hey, you don't care about the long-term consequences now, do you?" (Ideally, the months of treatment would increase exponentially for repeated use.)
Seems to me like a win/win situation. The addict gets the drug immediately, which is all that matters to them at the moment. The public would pay for the drug use anyway, either directly, or by being victims of theft. (Or it might be possible to use confiscated drugs for this purpose.) At least this way there is n...
Hello, I'm a second-year M.D. student (23F) in Boston. I was accepted straight into an accelerated program out of high school, and I'm very lucky and grateful to have gotten into a selective medical school so young.
I spent my time in undergrad taking cognitive anthropology, philosophy of mind, religious anthro, medical anthro, neuroscience of sex/cognition, and other hard science courses to create my own version of a Cognitive Science background. I did research on the largest cognitive science of religion study ever done, and am grateful to still be academically close to the PI. I grew up coding and in robotics, so my background has a foundation in "if, then" statements.
I post often on Substack and Instagram and have been outspoken through other means, which has resulted in my winning an award at MIT for a theory on dissociation & creative simulative ability. I am not afraid of politics, especially where they intersect with technology and medicine. I've had stream-of-consciousness Automatism art of mine shown at a Harvard gallery.
I don't know what I want to do with my MD after all this. Neurology makes sense, but I feel like I'm more of a writer, thinker, and artist than a repetitive doer. But people are always my main thought and main concern, so all I want to do is care for others.
I love to talk, so please send me a message:)
I'd carefully examine the plan to do an MD given the breadth of your interests/capabilities. It seems like you could do a lot of things, and the opportunity cost is pretty high. Certainly if your goal is caring for others, I'd question it: not just what comes after, but whether it really makes sense to do at all.
I'd like to share a book recommendation:
"Writing for the reader"
by O'Rourke, 1976
https://archive.org/details/bitsavers_decBooksOReader1976_3930161
This primer on technical writing was published by Digital Equipment Corporation (DEC) in 1976. At the time, they faced the challenge of explaining how to use a computer to people who had never used one before. All of the examples are from DEC manuals that customers failed to understand. I found the entire book delightful, insightful, and mercifully brief. The book starts with a joke, which I've copied below:
...On the West Coast they tell the story of a plumber who started using hydrochloric acid on clogged pipes. Though he was pleased with the results, he wondered if he could be doing something wrong. So he wrote to Washington to get expert advice on the matter. In six weeks he received the following reply:
"The efficacy of hydrochloric acid in the subject situation is incontrovertible, but its corrosiveness is incompatible with the integrity of metallic substances."
The plumber, who was short on formal education but long on hope, was elated. He shot a thank-you letter back to Washington. He told them he would lose no time in inform
P. C. Hodgell said, “That which can be destroyed by the truth should be.” What if we have no free will? Disregarding the debate of whether or not we have free will—if we do not have free will, is it beneficial for our belief in free will to be destroyed?
Hi all, I’m Hari. Funnily enough, I found LessWrong after watching a YouTube video on R***’s b*******. (I already had some grasp of the dynamics of internet virality, so no I did not see it as saying anything substantive about the community at large.)
My background spans many subjects, but I tend to focus on computer science, psychology, and statistics. I’m really interested in figuring out the most efficient way to do various things—the most efficient way to learn, the fastest way of arriving at the correct belief, how to communicate the most informat...
I just noticed that hovering a Lesswrong link on Lesswrong.com gives me what looks like an AI summary of a post that is totally unlabeled. What the heck!?
I'm registering that I dislike this, and that I didn't catch any announcement that this feature was getting pushed out without a little widget at the front saying it was AI.
Edit for posterity: If you're looking at this comment and are confused, scroll down to find a comment containing 3 examples. If you just want an example, here's a post that contained the weird pop-in at the time I wrote this: Let's think about slowing down AI
Hello everyone. I'm Ciaran Marshall, an economist by trade. I've been following the rationalist community for a while; the breadth of topics discussed here with rigour is unparalleled. I run a Substack where you can see my work (in particular, I recommend the post on how AI may reduce the efficiency of labour markets, as AI seems to be the most popular topic here): https://open.substack.com/pub/microfounded?utm_source=share&utm_medium=android&r=56swa
For those of you on X, here is my account: https://x.com/microfounded?t=2S5RSGlluRQX3J4SokT...
Hi there, this is Replitze. I'm 19 years old, just graduated from high school, and starting a career in diplomacy, where I'll be learning firsthand how much of human behavior is theatre. I'm here because I'm interested in biases, reasoning, and how people persuade themselves of beliefs they already hold. Despite my lack of expertise, I've observed enough people to pick up on patterns that others might overlook. I enjoy posing questions that expose presumptions because, although it can occasionally be awkward, that's frequently where the insight is hidden. I want to contribute thoughtfully, absorb your viewpoints, and occasionally question ideas, not just for fun, but because clarity is worth a little conflict.
Hello! I've lurked here for ~2 years; I found this via HPMOR. I think it's funny that I've been to an IRL meetup before making an account.
Hello everyone,
To be honest, I’m not entirely sure what to write here. I see that many of you have very interesting lives or are pursuing studies in science, which I find amazing.
Well, I’m 24 years old and from Chile. I’m finishing a degree in Cybersecurity Engineering after an earlier (and not very successful) attempt at programming. I’ve always been a very curious person — I love space, chemistry, physics, philosophy, and really anything that sparks curiosity. I don’t know a lot about each of those fields, but I truly enjoy learning; it makes me feel ali...
Hey everyone, I'm Deric. I'm new to LW but I've read through the new user guide and I'm very impressed and excited that a place like this actually exists on the internet. I was actually sent here after a conversation I had with Gemini regarding AGI and specifically instrumental convergence.
To preface, I'm a Game Designer (Systems/Monetization) from Winnipeg, I went to school for Electrical Engineering but didn't finish my degree as I was offered a job in my current field that I couldn't refuse. I had Gemini and GPT-5 do some deep research on the idea I had...
Hi everyone! My name's Matt. I hold a B.S. in physics but have been working in international business development for the past 17 years. Physics, philosophy, and technology have remained passionate hobbies for me.
In college, I used to keep my philosophy major roommate up all night, confounding him with my imprecise applications and interpretations of what he was studying. I hope to continue that (...and improve...) here at LessWrong.
I found this community through AI. I was having a philosophical conversation with Google Gemini (my stand-in for a philosophy major roommate these days), and it suggested I share my thoughts here. So I will!
Looking forward to creative and stimulating discussion!
Hi there, I finally created my account on LW, about five months after I discovered this whole thing about rationalism and the Sequences and EA and such through HPMOR (which I curiously found mentioned in the essay Why I Don't Discuss Politics With My Friends; that is not somewhere you'd expect the name Harry Potter to occur!).
And the first thing I found after logging in is that your website has a dark theme option. Darn, so what was the point of turning invert color mode on and off to get a dark background so that my eyes don't hurt after spending another ...
I have not seen much written about the incentives around strategic throttling of public AI capabilities. Links would be appreciated! I've seen speculation and assumptions woven into other conversations, but haven't found a focused discussion on this specifically.
If knowledge work can be substantially automated, will this capability be shown to the public? My current expectation is no.
I think it's >99% likely that various national security folks are in touch with the heads of AI companies, 90% likely they can exert significant control over model releases...
I think the little scrollbar on mobile on the right side of the screen isn't very useful, because its position depends on the length of the entire page, including all comments, whereas what I want is an estimate of how much of the article is left to read. I wonder if anyone else agrees.
I agree, but that's controlled by your browser, and not something that (AFAIK) LessWrong can alter. On desktop we have the TOC scroll bar, that shows how far through the article you are. Possibly on mobile we should have a horizontal scroll bar for the article body.
is there a search feature on lesswrong for like
anna salamon has a sentence about burning man
so from:@annasalamon burning man which greps through any anna salamon post that mentions burning man?
it was about the contrast between "fixing electronics in all the dust for your friend's art installation" and "expanding your mind, maybe with drugs" and how the physics-bound task keeps you from spiraling into insanity.
Hello! My name is Owen, I'm a 21-year-old physics and computer science student and newish here. I have been in communities parallel to the rationality scene since I was 16 and have a lot of experience speculating wildly about science fiction topics haha. Seriously though, I think this community is interesting and has a wonderful goal of attempting to be less wrong :) I'm excited to participate more as I attempt to let go of my preconceived judgements about the people in this community lol.
https://substack.com/@ravenofempire is a free substack where I argue with ...
Hello! My name is Sean Fillingham. For the past 9 months I have been exploring a career transition into technical AI safety research, currently with a strong focus on technical AI policy and governance.
Previously I was an astrophysics researcher and I hold a PhD in physics. After leaving academia, while I was exploring a career transition into data science or ML engineering, I somewhat stumbled across AI safety and the EA community. The intention and ideals of this community have strongly resonated with me and I have since been focusing on finding my...
is there anywhere on the site where we can discuss/brainstorm ideas?
the quick takes section or open threads are both fine for requesting comment on drafts.
Hi everyone, I'm Gerson!
I come from an ML/AI/HPC background with experience in academic research, startups, and industry. I've recently gotten into mech interp and have found LessWrong to be a valuable platform for related literature and discussions; figured I should make an account. Looking forward to being less wrong :)
AI interpretability can assign meaning to states of an AI, but what about process? Are there principled ways of concluding that an AI is thinking, deciding, trying, and so on?
The title of this thread breaks the open thread naming pattern; should it be Fall 2025, or should we be in an October 2025 thread by now? Moving to monthly might be nice for the more frequent reminder.
Greetings! In May 2024 I was recruited through my university's literature department to work at a big name AI company, and it has been a lot of fun working on different models since then :) One of my incredible & inspiring leads (shout out to Alexandra!) ran a now-defunct blog (Superfast AI), which I discovered on the web last night. Reading through its posts, I found my way to LessWrong, and I am so excited to be here! It is going to be fun to read through these different ideas.
Some personal facts: I clicked "honesty" as a lesswrong sequ...
Is there a download API? I'd love to download posts as Markdown if that's already built-in. (Eventually, I'm working on integrating this into a tool which helps me make PDFs for my e-reader or for printing out with custom formatting).
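In case it helps while you wait for a proper answer: as far as I know there's no one-click Markdown export, but LessWrong does expose a GraphQL endpoint at https://www.lesswrong.com/graphql, and you can convert a post's HTML body to Markdown yourself. Below is a minimal sketch; the exact query shape (`post`, `input: {selector: {_id: ...}}`, `htmlBody`) is my best guess at the unofficial API rather than documented behavior, so treat it as an assumption. The post id is the alphanumeric string in the post URL.

```python
# Sketch: fetch a LessWrong post via the (unofficial) GraphQL endpoint and
# convert its HTML body to Markdown. The query shape below is an assumption
# about the API and may need adjusting against the actual schema.
import requests
import html2text  # pip install html2text

GRAPHQL_URL = "https://www.lesswrong.com/graphql"

QUERY = """
query GetPost($id: String) {
  post(input: {selector: {_id: $id}}) {
    result {
      title
      htmlBody
    }
  }
}
"""

def post_as_markdown(post_id: str) -> str:
    # Send the GraphQL query and pull out the post's title and HTML body.
    resp = requests.post(
        GRAPHQL_URL,
        json={"query": QUERY, "variables": {"id": post_id}},
    )
    resp.raise_for_status()
    result = resp.json()["data"]["post"]["result"]

    # Convert the HTML body to Markdown.
    converter = html2text.HTML2Text()
    converter.body_width = 0  # don't hard-wrap lines
    return f"# {result['title']}\n\n" + converter.handle(result["htmlBody"])

if __name__ == "__main__":
    # "SOME_POST_ID" is a hypothetical placeholder for a real post id.
    print(post_as_markdown("SOME_POST_ID"))
```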
I am curious if the people you encounter in your dreams count as p-zombies or if they contribute anything to the discussion. This might need to be a whole post or it might be total nonsense. When in the dream, they feel like real people and from my limited reading, lucid dreaming does not universally break this. Are they conscious? If they are not conscious can you prove that? Accepting that dream characters are conscious seems absurd. Coming up with an experiment to show they are not seems impossible. Therefore p-zombies?
Greetings all. This is my first visit, and I'm not sure where to put this general info, so I'll start here and take guidance from participants if there is a better thread.
I stumbled on this site after a friend suggested I research "Roko's". An interesting thought experiment; I enjoyed it, but it's nothing worth losing sleep over. Would be happy to discuss.
I am about 1 year into a manuscript (200 pages so far), dealing with all aspects of cognitive problem solving, via psychological self awareness, and how to debate, discuss issues with the understanding of o...
I have an idea for a yaoi isekai. It's a Tolkien/One Piece/New Testament crossover where you wake up as Peter Thiel, and the rival player characters are Greta Thunberg and Eliezer Yudkowsky. We can make this easily with Sora 2, right?
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.