Hello All,
New to LW and still reading through the intro material and getting the hang of the place. I am ashamed to admit I found this place through Reddit - ashamed because I despise Reddit and other social media.
I came here because I cannot find a place to engage in long-form discussions about ideas contrary to my own. I dream of a free-speech platform where only form is policed, not content: one that allows any idea to be voiced, no matter how fringe, as long as it adheres to agreed-upon epistemic standards.
Anyway, I know LW probably is not that place, but it is adjacent. It seems most people here want to discuss AI research, but I'm hoping to find some communities outside of that topic.
Hello, all!
Long-time lurker here. I'm a recent psychology undergraduate who cares about kelp forests and people. I'm currently exploring the viability of blue carbon sequestration and alt protein projects (which involve kelp). This is part of my broader investigation into climate change risks and adaptation strategies.
I'm trying to find the best ways to use my limited time and resources to choose an effective career path and test my fit for different self-employment options. I am currently testing my fit for coaching and coding. I've also restarted my exploration into philosophy, specifically Stoicism, with Marcus Aurelius' Meditations.
I'm also a yoga practitioner and teacher. I'm interested in learning how yogic philosophy and rationality align (I began thinking of this after living with someone who is hyper-rational but is passionately indifferent towards yoga).
I'm also a nut for life optimisation, but struggle with executing optimisation strategies.
Being a lurker is fairly low effort, but I wanted to begin interacting more with the cool people here. I'm still quite intimidated by the whole karma system for posting, but I think I'll find my way around it fairly quickly.
Any tips and guidance are much appreciated! I'm a lifelong learner and hyper-curious, so please throw any amount of information at me. Thank you for being part of such a unique community!
In case folks missed it, the Unofficial LessWrong Community Census is underway. I'd appreciate it if you'd click through, perhaps take a survey, and help my quest for truth: specifically, truth about what the demographics of the website userbase look like, what rationality skills people have, whether Zvi or Gwern would win in a fight, and many other questions! Possibly too many questions, but don't worry, there's a question about whether there are too many questions. Sadly there's not a question about whether there are too many questions about whether there are too many questions (yet; growth mindset), so those of you looking to maximize your recursion points will have to find other surveys.
If you're wondering what happens to the data, I use it for results posts like this one.
Hi all! I'm a long-time LWer, but I'm making a comment thread here so that my research fellows can introduce themselves under it!
For the past year or so I've been running the Dovetail research fellowship in agent foundations with @Alfred Harwood. We like to have our fellows make LW posts about what they worked on during the fellowship, and everyone needs a bit of karma to get started. Here's a place to do that!
Hi everyone!
I'm Santiago Cifuentes, and I've been a Dovetail Fellow since November 2025 working on Agentic Foundations. My current research project consists of extending previous results that aim to characterize which agents contain world models (such as https://arxiv.org/pdf/2506.01622). Along similar lines, I would like to provide a more general definition of what a world model is!
I've been silently lurking on LessWrong since 2023, and I came across the forum while looking for rationality content (in particular, I found The Sequences quite revealing). I am looking forward to contributing to the discussion!
Is there a way to make the list of posts shown on lesswrong.com use the advanced filters I have set up at lesswrong.com/allPosts? I hate hate hate all of Recent, Enriched and Recommended (give me chronological or give me death), but given that I already have a set of satisfactory filters set up, rendering them on the main page seems like a feature that should exist, if only I could find it.
Hi! I'm very new to LW.
I found this website while searching for useful philosophy websites. I've been looking around LW for about a week now, just reading and learning people's takes. There's a lot of it, and it's great if you ask me.
I'm still learning the guidelines and the karma system, which has been a little intimidating, but I'm getting the hang of it now. I do recognise that LW is more professional than I originally thought, especially for my age, but it's not like I'm applying to work for NASA or anything.
That's just me, though. I would greatly appreciate any tips for navigating, filtering content, etc.
I feel like the react buttons are cluttering up the UI and are distracting. Maybe they should be restricted to, e.g., users with 100+ karma, and everyone gets only one react a day or something?
Like they are really annoying when reading articles like this one.
Yeah, I agree with this. I think they are generally decent on comments, but some users really spam them on posts. It’s on my list to improve the UI for that.
We recently made it so that authors can remove typo reacts themselves. It’s still a bit annoying, but it’s less annoying than before!
Hello,
I'm very happy to be here!
Unfortunately I'm only just bringing LessWrong into my life, and I do consider that a missed opportunity. I wish I had found this site many years ago, though that could have been dangerous, as this could have been a rabbit hole I might have found challenging to escape. But how bad would that actually have been? I'm sure my wife would not have been thrilled. My reason for coming here now, unfortunately, especially at this point in time, is very unoriginal. In the last eight months I've taken what was a technology career possibly in its ...
Hello! Long-time reader here; I regularly run a local ACX meetup in Padova, Italy. My entry points into the rationalist community were ACX and HPMOR, but I also loved The Story of Us blogpost series by Tim Urban (now collected into a book).
At the beginning of 2025 I left my job at Bending Spoons to study AI alignment (I took the https://www.aisafetybook.com/virtual-course, much recommended), and finally decided to tackle the other problem I'm most interested in, which is social polarization.
With an ex colleague I founded https://unbubble.news, a tool that uses L...
I'm curious: what percent of upvotes are strong upvotes? What percent of karma comes from strong upvotes?
Hello! I chose the name “derfriede” for LW. This is my first post here, which I am happy about. I have read some of the introductory materials and am very interested.
What interests me? First of all, I want to explore the topic of AI and photography. I study the theory and philosophy of photography, look for new approaches, and try to apply a wide variety of perspectives. I think it's useful to address the question of what AI cannot do. It's very similar to researching glitch culture. Okay, I'll stop here for now, because I just want to get acquainted.
Have a nice day, wherever you are!
I have some time on my hands and would be interested in doing something meaningful with it. Ideally learn / research about AI alignment or related topics. Dunno where to start though, beyond just reading posts. Anyone got pointers? Got a background in theoretical / computational physics, and I know my way around the scientific Python stack.
Hello everyone! I'm very new to the LW community and I'm still trying to understand how this platform works, but I'm glad to have found a space where people can engage in meaningful conversations. I am a philosophy PhD (defence scheduled next month, wish me luck!) and my thesis is about the philosophy of mind and AI. I'll be spending the next hours (days) reading and I hope to post some of my slightly less formal writing once I get the hang of this platform. I can't wait to explore!
Hi everyone!
New to LW. Recently I've been interested in AI research, especially mech interp, and this seems to be the place that people go to discuss this. I studied philosophy in undergrad and while since then I've gotten interested in CS and math, my predilections still tend toward the humanities side of things. Will mostly be lurking at first as I read through The Sequences and get used to the community norms here, but hope to share some of my independent research soon!
Hello everyone,
Just a quick "Hi" and figured I'd intro myself as I'm new to this space.
As part of my New Year's resolution to "do something different" this year (beyond the yearly failed attempt to exercise more and eat/drink less), I thought that this is something I can achieve - and enjoy doing.
So let's see where to start?
I live in Canada, am in my fifth decade, am a family man, and work in computing. I actually enjoy being proven wrong, as it shows I am still learning.
I enjoy long walks on the beach, and am equally at home at the opera as at a baseball stadium .. wait .. sorry, that was for the dating site ... don't tell my wife ;)
Jokes aside, looking forward to being a lurker!
Richard
Now that it is the New Year, I made a massive thread on Twitter covering a lot of my own opinionated takes on AI. To summarize: my timelines are lengthening, which correlates with my view that new paradigms for AI are both likelier than they used to be and more necessary, which in expectation reduces AI safety from our vantage point; AI will be a bigger political issue than I used to think; and depending on how robotics ends up, it might be the case that by 2030 LLMs are just good enough to control robots even if their time horizon for physical tasks is pre...
Hello, I am an entity interested in mathematics! I'm interested in many of the topics common to LessWrong, like AI and decision theory. I would be interested in discussing these things in the anomalously civil environment which is LessWrong, and I am curious to find out how they might interface with the more continuous areas of mathematics I find familiar. I am also interested in how to correctly understand reality and rationality.
Hi all,
Despite occasional fits of lurking over many years, I'd never actually created a LW account. Sometimes it feels easier, or more appropriate, to peer over the garden wall than to climb in and start gardening. Or at least glance in to see what you might apply to your own small patch of earth.
Lately I've come to realise that approach was more grounded in protection of a shaky personal identity, than dislike of building engagement within an established group. This became especially apparent with recent research, paper & project builds I'd taken on, ...
Greetings, Claude sent me here! My goals are primarily self-improvement: I will appreciate engaging with individuals who are able and willing to point out weaknesses in my lines of thinking, whatever the topic. Lucky that this place exists. I miss the old internet, when authentic, honest material was more commonly found, rather than ideologically skewed content, bait, or persuasion, especially well-disguised persuasion. Basically, I'm just a guy who feels half the internet is attempting to hijack my thoughts rather than present good-faith information. Lucky to be here!
Hi everyone,
I've read many of the posts here over the years. A lot of the ideas I first met here seem to be coming up again in my work now. I think the most important work in the world today is figuring out how to make sure AI continues to be something we control, and I find most of the people I meet in SF still think AI safety means not having a model say something in public that harms a corporate brand.
I'm here to learn and bounce some ideas off of people who are comfortable with Bayesian reasoning and rational discussion, and interested in similar topics...
I'm a bit confused about forecasting tournaments and would appreciate any comments:
Suppose you take part in such a tournament.
You could predict as accurately as you can and get a good score. But suppose there are some other equally good forecasters in the tournament; then it becomes a random draw who wins. In expectation, all forecasters of the same quality produce the same forecasts. If there are many good forecasters, your chances of winning become very low.
However, you could include some outlier predictions in your predictions. Then you lower your ex...
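The incentive being described can be sketched with a toy Monte Carlo simulation. Everything here is made up for illustration (the noise level, the number of forecasters, and the "extremizing" rule are all hypothetical): one forecaster pushes their estimates toward 0 or 1, which worsens their expected Brier score but increases its variance, and we check how often they still come out strictly first in a winner-take-all field.

```python
# Toy model: winner-take-all forecasting tournaments can reward variance.
# Hypothetical setup: every forecaster sees a noisy estimate of each
# event's true probability. Honest forecasters report their estimate;
# one "extremizer" distorts theirs toward 0/1.
import random

random.seed(0)

N_HONEST = 9      # honest rivals
N_EVENTS = 40     # binary events per tournament
N_TRIALS = 2000   # simulated tournaments
NOISE = 0.1       # estimation noise (assumed)
FACTOR = 1.5      # how hard the extremizer pushes toward 0/1 (assumed)

def clip(x):
    return min(0.99, max(0.01, x))

def brier(forecasts, outcomes):
    # Mean squared error between forecasts and 0/1 outcomes; lower is better.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

wins = 0
honest_scores, extreme_scores = [], []
for _ in range(N_TRIALS):
    ps = [random.random() for _ in range(N_EVENTS)]
    outcomes = [1 if random.random() < p else 0 for p in ps]
    rivals = [
        brier([clip(p + random.gauss(0, NOISE)) for p in ps], outcomes)
        for _ in range(N_HONEST)
    ]
    est = [clip(p + random.gauss(0, NOISE)) for p in ps]
    extremized = [clip(0.5 + FACTOR * (q - 0.5)) for q in est]
    score = brier(extremized, outcomes)
    honest_scores.extend(rivals)
    extreme_scores.append(score)
    if score < min(rivals):   # win = strictly best (lowest) Brier score
        wins += 1

win_rate = wins / N_TRIALS
mean_honest = sum(honest_scores) / len(honest_scores)
mean_extreme = sum(extreme_scores) / len(extreme_scores)
print(f"extremizer win rate: {win_rate:.3f} (fair share: {1 / (N_HONEST + 1):.3f})")
print(f"mean Brier  honest: {mean_honest:.4f}  extremizer: {mean_extreme:.4f}")
```

The mean Brier comparison shows the cost of distorting your reports (honest reporting minimizes expected Brier score), while the win rate shows what the winner-take-all payoff actually responds to; how the trade-off nets out depends on the field size and the parameters above.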
It would be nice to have a quick takes feed sorted by post time. https://www.lesswrong.com/quicktakes seems to be sorted by latest comment, or magic-sorted.
(Reposted from my shortform)
What coding prompts do you guys use? It seems exceedingly difficult to find good ones. GitHub is full of unmaintained, garbage awesome-prompts-123 repos. I would like to learn from other people's prompts to see what things AIs keep getting wrong and what tricks people use.
Here are mine for my specific Python FastAPI SQLAlchemy project. Some parts are AI generated, some are handwritten; it should be pretty obvious which. This was built iteratively, whenever the AI repeatedly failed at a type of task.
AGENTS.md
# Repository Guidelines
## Project
I'm starting to explore AI alignment, and this seemed like a good forum to start reading and thinking more about it. The site still feels a little daunting, but I'm sure I'll get the hang of it eventually. Let me know if there are any posts you love and I'll check them out!
Hello.
My interests are transformer architecture and where it breaks.
Extending transformers toward System-2 behavior.
Context primacy over semantics.
I’m focused on the return to symbolics.
On the manifold hypothesis, and how real systems falsify it.
Inference, finite precision, discrete hardware.
Broken latent space, not smooth geometry.
I’m interested in mechanistic interpretability after the manifold assumption fails.
What survives when geometry doesn’t.
What replaces it.
I’m also seeking advice on intellectual property.
I’m here to find others thinking along these lines.
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.