Hello All,
New to LW and still reading through the intro material and getting the hang of the place. I am ashamed to admit I found this place through Reddit - ashamed because I despise Reddit and other social media.
I came here because I cannot find a place to engage in long-form discussions about ideas contrary to my own. I dream of a free speech platform where only form is policed, not content: one that allows any idea to be voiced, no matter how fringe, as long as it adheres to agreed-upon epistemic standards.
Anyways, I know LW probably is not that place, but it is adjacent. It seems most people here want to discuss AI research, but I'm hoping to find some communities outside of that topic.
Hello, all!
Long-time lurker here. I'm a recent psychology undergraduate who cares about kelp forests and people. I'm currently exploring the viability of blue carbon sequestration and alt protein projects (which involve kelp). This is part of my broader investigation into climate change risks and adaptation strategies.
I'm trying to find the best ways to use my limited time and resources to choose an effective career path and test my fit for different self-employment options. I am currently testing my fit for coaching and coding. I've also restarted my exploration into philosophy, specifically Stoicism, with Marcus Aurelius' Meditations.
I'm also a yoga practitioner and teacher. I'm interested in learning how yogic philosophy and rationality align (I began thinking of this after living with someone who is hyper-rational but is passionately indifferent towards yoga).
I'm also a nut for life optimisation, but struggle with executing optimisation strategies.
Being a lurker is fairly low effort, but I wanted to begin interacting more with the cool people here. I'm still quite intimidated by the whole karma system for posting, but I think I'll find my way around it fairly quickly.
Any tips and guidance are much appreciated! I'm a lifelong learner and hyper-curious, so please throw any amount of information at me. Thank you for being part of such a unique community!
Is there a way to make the list of posts shown on lesswrong.com use the advanced filters I have set up at lesswrong.com/allPosts? I hate hate hate all of Recent, Enriched and Recommended (give me chronological or give me death) but given that I already have a set of satisfactory filters set up, rendering them on the main page seems like a feature that should exist, if only I can find it.
In case folks missed it, the Unofficial LessWrong Community Census is underway. I'd appreciate it if you'd click through, perhaps take a survey, and help my quest for truth: specifically, truth about what the demographics of the website userbase look like, what rationality skills people have, whether Zvi or Gwern would win in a fight, and many other questions! Possibly too many questions, but don't worry, there's a question about whether there are too many questions. Sadly there's not a question about whether there are too many questions about whether there are too many questions (yet; growth mindset), so those of you looking to maximize your recursion points will have to find other surveys.
If you're wondering what happens to the data, I use it for results posts like this one.
Hi all! I'm a long-time LWer, but I'm making a comment thread here so that my research fellows can introduce themselves under it!
For the past year or so I've been running the Dovetail research fellowship in agent foundations with @Alfred Harwood. We like to have our fellows make LW posts about what they worked on during the fellowship, and everyone needs a bit of karma to get started. Here's a place to do that!
Hi everyone!
I'm Santiago Cifuentes, and I've been a Dovetail Fellow since November 2025 working on Agentic Foundations. My current research project consists of extending previous results that aim to characterize which agents contain world models (such as https://arxiv.org/pdf/2506.01622). Along similar lines, I would like to provide a more general definition of what a world model is!
I've been silently lurking on LessWrong since 2023. I came across the forum while looking for rationality content (in particular, I found The Sequences quite revealing). I am looking forward to contributing to the discussion!
Hi! I'm very new to LW.
I found this website while searching for useful philosophy websites. I've been looking around LW for about a week now, just reading and learning people's takes. There's a lot of it, and it's great if you ask me.
I'm still learning the guidelines and the karma system, which has been a little intimidating, but I'm getting the hang of it now. I do recognise that LW is more professional than I originally thought, especially professional for my age, but it's not like I'm applying to work for NASA or anything.
That's just me, though. I would greatly appreciate any tips for navigating, filtering content, etc.
I feel like the react buttons are cluttering up the UI and distracting. Maybe they should be, e.g., restricted to users with 100+ karma, and everyone gets only one react a day or something?
Like they are really annoying when reading articles like this one.
Yeah, I agree with this. I think they are generally decent on comments, but some users really spam them on posts. It’s on my list to improve the UI for that.
We recently made it so that authors can remove typo reacts themselves. It’s still a bit annoying, but it’s less annoying than before!
I'm curious: what percent of upvotes are strong upvotes? What percent of karma comes from strong upvotes?
Hello! I chose the name “derfriede” for LW. This is my first post here, which I am happy about. I have read some of the introductory materials and am very interested.
What interests me? First of all, I want to explore the topic of AI and photography. I study the theory and philosophy of photography, look for new approaches, and try to apply a wide variety of perspectives. I think it's useful to address the question of what AI cannot do. It's very similar to researching glitch culture. Okay, I'll stop here for now, because I just want to get acquainted.
Have a nice day, wherever you are!
Hello,
I'm very happy to be here!
Unfortunately I'm only just bringing LessWrong into my life, and I do consider that a missed opportunity. I wish I had found this site many years ago, though that could have been dangerous: this could be a rabbit hole I might have found challenging to escape. But how bad would that have actually been? I'm sure my wife would not have been thrilled. My reason for coming here now, unfortunately, especially at this point in time, is very unoriginal. In the last eight months I've taken what was a technology career possibly in its ...
Hello all. I am new here. I found the site through AI-related means but after having read what you're all about, I find it incredible that I remained ignorant of lesswrong for so long. I have seriously contemplated starting a forum with almost the exact same "ideals". How did I not even stumble upon lesswrong?
(This is where I realize I used a newer email to sign up and find out my old email's been a member here for years. Not really. Well, I hope.)
Anyways, just saying hello! I have recently been searching for somebody knowledgeable in theoretical physics to converse with, or a forum in which I might share thoughts on the matter(s). It looks like this might be just what I was looking for.
Hello! Long time reader, I regularly run a local ACX meetup in Padova, Italy. My entry points for the rationalist community were ACX and HPMOR, but I also loved The Story of Us blogpost series by Tim Urban (now collected into a book).
At the beginning of 2025 I left my job at Bending Spoons to study AI alignment (I took the https://www.aisafetybook.com/virtual-course, much recommended), and finally decided to tackle the other problem I'm most interested in, which is social polarization.
With an ex colleague I founded https://unbubble.news, a tool that uses L...
Another long-time lurker and daily LW reader (mostly via RSS feed) finally looking to contribute to the conversation.
My aspiration is to write more and write more publicly, in pursuit of better writing and more scrutinized beliefs.
Hope to contribute a few useful posts over the coming months! Always appreciate the thoughtfulness of posters and commenters here.
Hello! I'm fia, I found this place through a Substack blog.[1]
I am new to LW. I am here because I've realized that rationality and reasoning have been prevalent through about 75% of my life, and I want to understand them more thoroughly, alongside engaging with like-minded people.
I study medicine and have been gradually growing disillusioned about the future of medical practice, over how most of our treatments are there to manage patients over a chronic timescale, with the more curative approaches being wealth-gated. I am, however, hopeful for the advancements we are m...
Hi,
I'm Marko, new to LW. I had heard about it in the media, but wasn't actually aware of the full proposition. Having read through it now, it feels like a place that would make sense for me to join.
I work in decision intelligence and AI in London, in fintech, trying to get customer-aligned scalable neuro-symbolic models in finance. Primarily in the consumer space. Background is sociology, information science and computer science; have gone through a career rollercoaster of measurement, software development, product management, data science, machine learning...
I have some time on my hands and would be interested in doing something meaningful with it. Ideally learn / research about AI alignment or related topics. Dunno where to start though, beyond just reading posts. Anyone got pointers? Got a background in theoretical / computational physics, and I know my way around the scientific Python stack.
Hi all, new here.
I recently came across LessWrong (through ChatGPT -- sorry...) while looking for places to have interesting and deeply intellectual conversations. I've been reading through some of the posts here and the guides to get a sense of how things work and it seems like this might be the place I was looking for.
To be honest I'm more psychologically minded than anything else; interested in how people form beliefs, the breakdown of reasoning, how biases form and stick, etc. I'm fortunate in that I've had a lot of exposure to academia from a pretty e...
Hi, I'm new here. Wanted to write a short introduction about myself. I'm curious about this forum.
I'm from Germany and have absolutely no technical background. I work as a forensic psychiatrist. I don't know how this works in other countries, but in Germany you talk to defendants on behalf of courts or prosecutors and try to figure out what might be true and what isn't. And as a doctor you have these kinds of conversations quite often in regular practice too. So you're always trying to see whether you can recognize valid patterns from more or less good sou...
Hello everyone! I'm very new to the LW community and I'm still trying to understand how this platform works, but I'm glad to have found a space where people can engage in meaningful conversations. I am a philosophy PhD (defence scheduled next month, wish me luck!) and my thesis is about the philosophy of mind and AI. I'll be spending the next hours (days) reading and I hope to post some of my slightly less formal writing once I get the hang of this platform. I can't wait to explore!
Hi everyone!
New to LW. Recently I've been interested in AI research, especially mech interp, and this seems to be the place that people go to discuss this. I studied philosophy in undergrad and while since then I've gotten interested in CS and math, my predilections still tend toward the humanities side of things. Will mostly be lurking at first as I read through The Sequences and get used to the community norms here, but hope to share some of my independent research soon!
Hello everyone,
Just a quick "Hi" and figured I'd intro myself as I'm new to this space.
As part of my new year's resolution to "do something different" this year (beyond the yearly failed attempt to exercise more, and eat/drink less) I thought that this is something I can achieve - and enjoy doing.
So let's see where to start?
I live in Canada, in my 5th decade, am a family man and work in computing. I in fact enjoy being proven wrong - as it helps to show I am still learning.
I enjoy long walks on the beach, and am at equally at home at the opera as I am at a baseball stadium .. wait .. sorry that was for the dating site ... don't tell my wife ;)
Jokes aside, looking forward to being a lurker!
Richard
Now that it is the New Year, I made a massive thread on Twitter concerning a lot of my own opinionated takes on AI. To summarize: my timelines are lengthening, which correlates with my view that new paradigms for AI are both likelier than they used to be and more necessary, which in expectation reduces AI safety from our vantage point; AI will be a bigger political issue than I used to think; and depending on how robotics ends up, it might be the case that by 2030 LLMs are just good enough to control robots even if their time horizon for physical tasks is pre...
Hello, I am an entity interested in mathematics! I'm interested in many of the topics common to LessWrong, like AI and decision theory. I would be interested in discussing these things in the anomalously civil environment which is LessWrong, and I am curious to find out how they might interface with the more continuous areas of mathematics I find familiar. I am also interested in how to correctly understand reality and rationality.
Hello Everyone,
I want to introduce myself; I am an 18-year-old from Maharashtra, India who will be moving to Ancona at the end of 2026 to study medicine in English for 6 years at UNIVPM.
I'm going to put a lot of effort into preparing myself for this transition and making the most out of my time in med school by creating real and high-quality circles with other expats and locals from day 1.
I'd love to connect with others in Italy who are rationalist or EA people, especially around Ancona, Rome and Milan.
I'd be happy to offer any international student ...
Hey Everyone! I am Gautam Arora and I am new to LW. I work as a software engineer and I am interested in Maths, Philosophy and Logic as subjects.
Interestingly, I came to know about this web forum through ChatGPT while discussing "how to carry out independent research about any topic". This also means that I want to dive deep into research, improve my research methodology, and develop my critical thinking skills.
Looking forward to questioning my biases, connecting with like-minded people, and helping this community grow.
Hello Everyone, New to Less Wrong and still absorbing the material and discussions. Really excited to have found a trove of relevant knowledge. I am basically a computational scientist, but have a deep interest in AI and value alignment.
I actually have a question that originated in a discussion I had with a friend, and would love it if someone could point me to where I can find the answer. We know that an intelligence with any rate of improvement would eventually gain the capability to alter its reward system. That would give it a special place, as it can...
Hi everyone!
I'm Liu. I'm a physicist and a designer, but mostly I’m just a girl who loves tasty things.
I don’t believe in pure egoism, and I don’t believe in altruism at all. Also, I often find myself laughing in my sleep at the "movies" my brain produces while sorting the day's overloaded cache into long-term memory to recalibrate its weights.
I tend to get bored with the monotonous fractals of this world, always dreaming of stumbling upon a fresh "inkblot" and examining it as closely as possible.
I love cats. I love the rain and the night, when the bac...
Hello everyone, I'm a clinical psychiatrist with a background in Industrial & Systems Engineering. My interests include philosophy, psychology, and AI safety, especially as more and more people use AI for deeply personal engagement.
I've developed some frameworks and would love to receive feedback to keep on refining them. Looking to engage with the community and share ideas.
Hi All,
I am a financial analyst working at a tech company in Hsinchu, Taiwan. Got interested in AI and noticed some patterns/phenomena I'd like to discuss. Hopefully they can evolve into some valuable insights.
Still working through the Sequences. It may take some more time with my current full-time job, but I am interested.
Something that may be interesting to share: it was Claude that recommended I join here. And I am glad I am here.
Hi folks,
Long-time lurker, first-time poster. After parting ways with my last professional role, I've decided to get more involved in AI Safety. I've proposed what I think is a novel step towards corrigibility. The very short overview is at:
https://danparshall.com/papers/navigator_core_blog.pdf
The more developed version is at:
http://danparshall.com/papers/navigator_core.pdf
I welcome feedback, either here or via email.
Hi all,
Despite occasional fits of lurking over many years, I'd never actually created a LW account. Sometimes it feels easier, or more appropriate, to peer over the garden wall than to climb in and start gardening. Or at least glance in to see what you might apply to your own small patch of earth.
Lately I've come to realise that approach was more grounded in protection of a shaky personal identity than in dislike of building engagement within an established group. This became especially apparent with recent research, paper & project builds I'd taken on, ...
Greetings, Claude sent me here! My goals are primarily self-improvement- I will appreciate engaging with individuals that are able and willing to inform me of weaknesses in my lines of thinking, whatever the topic. Lucky that this place exists. I miss the old internet when authentic honest material was more commonly found rather than ideologically skewed, bait, or persuasion, especially well-disguised persuasion. Basically, just a guy that feels half the internet is attempting to hijack my thoughts rather than present good faith information. Lucky to be here!
Hi everyone,
I've read many of the posts here over the years. A lot of the ideas I first met here seem to be coming up again in my work now. I think the most important work in the world today is figuring out how to make sure AI continues to be something we control, and I find most of the people I meet in SF still think AI safety means not having a model say something in public that harms a corporate brand.
I'm here to learn and bounce some ideas off of people who are comfortable with Bayesian reasoning and rational discussion, and interested in similar topics...
I'm a bit confused about forecasting tournaments and would appreciate any comments:
Suppose you take part in such a tournament.
You could predict as accurately as you can and get a good score. But let's say there are some other equally good forecasters in the tournament, and it becomes a random draw who wins. In expectation, all forecasters of the same quality make the same forecasts. If there are many good forecasters, your chances of winning become very low.
However, you could include some outlier predictions in your predictions. Then you lower your ex...
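The trade-off above can be checked with a quick Monte Carlo sketch. All numbers here are hypothetical, just to illustrate the incentive: a single binary event with true probability 0.7, ten honest forecasters who all predict 0.7, and one contrarian who deliberately predicts an outlier value of 0.95, scored by the Brier rule (lower is better):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from any real tournament).
TRUE_P = 0.7      # true probability of the event
N_GOOD = 10       # honest forecasters, all predicting TRUE_P
OUTLIER = 0.95    # the contrarian's deliberately extreme forecast
N_TRIALS = 100_000

# Simulate the event many times.
outcomes = (rng.random(N_TRIALS) < TRUE_P).astype(float)

def brier(p, outcome):
    """Brier score for a binary forecast: (p - outcome)^2, lower is better."""
    return (p - outcome) ** 2

good_scores = brier(TRUE_P, outcomes)     # identical for every honest forecaster
contra_scores = brier(OUTLIER, outcomes)

# Expected score: the honest forecast wins on average, because the
# Brier score is a proper scoring rule.
print("mean Brier, honest:    ", good_scores.mean())     # about 0.21
print("mean Brier, contrarian:", contra_scores.mean())   # about 0.27 (worse)

# Win probability: the contrarian beats all ten honest forecasters
# outright whenever their score is strictly lower, i.e. whenever the
# event happens. An honest forecaster only wins a 1-in-11 tiebreak.
contra_win_rate = (contra_scores < good_scores).mean()
honest_win_rate = 1 / (N_GOOD + 1)
print("P(win), contrarian:", contra_win_rate)   # about 0.70
print("P(win), honest:    ", honest_win_rate)   # about 0.09
```

So under these toy assumptions the contrarian accepts a worse expected score in exchange for a much higher chance of finishing first, which is exactly the incentive problem with winner-take-all tournaments.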
It would be nice to have a post time-sorted quick takes feed. https://www.lesswrong.com/quicktakes seems to be latest comment-sorted or magic sorted
All, with humility I ask a favor.
I wrote this article thinking it would be for LinkedIn, but what a waste that would be if LessWrong readers could tear it apart for me instead! Can you have a look at this near-final draft? Is it something of interest to you? It is a projection inspired by Anthropic's Economic Index, with a focus on interpretive exhibit design. I've designed Nixon's Liebrary, and with Trump's recent announcement, it is more relevant than ever.
Your thoughts, comments, and help are appreciated in advance,
Scott
https://docs.google.com/document/d/1uZhSlanlNRpTrE4rNpuw6IEzsuC8KrWE1SIytL4OaDA/edit?usp=sharing
Hello everyone. I'm new to LW, and from the few glimpses I've had reading the posts, I think this platform absolutely resonates with my persona. I've always dreamed about a platform leveraging human reasoning at its peak and covering a wide range of topics. I am a computer science student from Italy, and since the advent of LLMs I've felt just, dumber. I think I've started to outsource too many things to the LLMs without balancing, slowly building a lot of cognitive debt, and since then I've constantly felt the need to sharpen my human reasoning capab...
Hi! I'm Thomas, nice to meet you all. I've been reading LessWrong on and off for years but never got around to posting until recently. (I do occasionally comment at Astral Codex Ten.) I'm interested in rationality as the art of systematized winning.
Things I've been thinking about recently include personal mindset/habit transformations and the possibility that such transformations, if embraced by a minority of the population, could produce society-level benefits. Not unrelatedly, I'm thinking about the plight of Europe in general and Finland in particular and how we could change the course of our country and continent.
Hello, everyone.
I assumed that one way to dip my toes in the water would be to talk about myself for a bit.
I am doing a degree in the health sciences, and it has worn me down psychologically ever since I got in, solely because it relies a bit too much on rote learning, not divergent thinking. Things are as they are, and there are many, so it seems like the most reasonable approach. Still, I had gotten in expecting something sci-fi-esque, like Biohackers or whatever few biohacking documentaries I had watched during the pandemic.
I did not struggle academical...
I am here because someone said this is where I belong.
I've wanted to write and never had the time. Recently, I made time. And of all the writing projects that I've started over the years, I decided to pick up the philosophical essays, because most of the ideas were fresh and, more importantly, I knew I could actually deliver a few before the opportunity of time expires.
Before starting to formally post on the internet, I was sending thoughts to friends and family and getting no responses, no pushback, no agreement. I am aware this thinking tends to produce long text...
Hi, I’ve been thinking about a claim-tracking question, and I’m not sure whether LessWrong has discussed something like this before.
Let’s say someone made a public claim in 2023, and then new evidence in 2025 or 2026 changed the picture quite a bit. How should we label the earlier claim now? Would “outdated” be better, or “partially supported”?
To me, these two are not the same. “Partially supported” sounds more like the claim is still true in some important sense, but some parts are still uncertain. “Outdated” sounds more like the claim may have made sense...
I am new to LW and would like to introduce myself.
I came here to learn more about AI Alignment discussions. I'm especially interested in the perspective that the specification for AI alignment may contain an existential-level systematic error. To me, originally a historian of science and ideas, aligning AI with human preferences does not seem wise. Historically we can see that human preferences, due to biases, shortsightedness and social dynamics, can be quite harmful, not only to other species, but also to humans and civilizational continuity.
In the 90's...
Hello!
I'm new here, but have been reading through the sequences and other posts for the last few weeks and would love some feedback on a post idea. I'm writing my theory of change for AI safety and how I can help. I've defined my priors, identified cruxes, and I'm in the middle of reading papers and blog posts to challenge my priors. I've seen a few theory of change posts (e.g., Critch's healthtech post), but I'm wondering if I should post mine as a working document, starting with an unfinished product and updating as I refine my beliefs.
Is an in-progress...
Hello,
Great to find LessWrong and people thinking about thinking. Very new to all this, but trying to get my head screwed on as straight as I can, as fast as I can.
I am a high school student formerly from the US, now living in Israel. Wanted to know if anyone has top recommendations for content/ideas from the rationality or adjacent communities? Or other groups that might be helpful in bringing my thinking to the next level?
Thanks
I'm an independent researcher with a background in information security and video/content creation. I enjoy building software, which I've been doing a lot of the past year. I'm also an established cat whisperer and pattern recognizer.
Yo all,
I have a new theorem in the field of philosophy of mind that I think completely refutes the Chinese Room Argument, or at least its final epistemic conclusion regarding the inherent absence of machine consciousness. I went over the guide and couldn't find anything that suggests this is not the place to get feedback and start a good discussion about it. On the other hand, I didn't find philosophy of mind in your subjects list. Below is the short short version for your consideration. If you think it fits your vibe, I can post the theorem, taxonom...
This seems like a community that requires every user to agree with its particular beliefs. If that's wrong, correct me, but that's the impression I got from reading the introductory post.
So my question is, do you have no place at all for people that might disagree with you?
And if not, doesn't that allow for the possibility of being stuck in an echo chamber and keeping out people who might understand things better than you?
Also, please direct me to another place online where I might simply discuss my disagreements with others without having to sign up to...
To post or not to post; let's see if that is the question. I was referred by a user on X to participate in an AI Alignment forum, but, as some of you might agree, I didn't want to ask him which forum. So here I am, introducing myself. I'm the architect of a controversial concept we called Veritas Queasitor CAI. Controversial because it approaches AI safety from a non-theological, evidential and epistemological Christian angle, so for Christians we are too scientific, and for naturalists we are too Christian. We have developed and tested a framework we've found to...
(Reposted from my shortform)
What coding prompts do you guys use? It seems exceedingly difficult to find good ones. GitHub is full of unmaintained, garbage awesome-prompts-123 repos. I would like to learn from other people's prompts to see what things AIs keep getting wrong and what tricks people use.
Here are mine for my specific Python FastAPI SQLAlchemy project. Some parts are AI-generated, some are handwritten; it should be pretty obvious which. This was built iteratively whenever the AI repeatedly failed at a type of task.
AGENTS.md
# Repository Guidelines
## Project
I'm starting to explore AI alignment, and this seemed like a good forum to start reading and thinking more about it. The site still feels a little daunting, but I'm sure I'll get the hang of it eventually. Let me know if there are any posts you love and I'll check them out!
I would love your thoughts on my decision to join LessWrong. I generally use X. I posted this thesis on Grok after prompts over sci-fi, to a community of like-minded intelligence, and I was recommended to share it here.
It involves climate, local weather, and technologies with the goal of influence and control, globally.
Global climate is a current issue that involves correct and accurate monitoring for fluctuations of anything other than a balanced homeostasis, for the advancement of human civilization, as well as well-thought-out preventative and ...
Hello.
My interests are transformer architecture and where it breaks.
Extending transformers toward System-2 behavior.
Context primacy over semantics.
I’m focused on the return to symbolics.
On the manifold hypothesis, and how real systems falsify it.
Inference, finite precision, discrete hardware.
Broken latent space, not smooth geometry.
I’m interested in mechanistic interpretability after the manifold assumption fails.
What survives when geometry doesn’t.
What replaces it.
I’m also seeking advice on intellectual property.
I’m here to find others thinking along these lines.
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.