If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.

New Comment
48 comments, sorted by Click to highlight new comments since:

I am a long-time lurker; I started following EY in Overcoming Bias and then here, where I visit on and off. I have been a business school professor for about 15 years. I use a pseudonym to keep my online professional profile distinct from my non-professional online activity. 

I notice that  business topics are not often discussed in LW, or at least not in the same detail and precision as other topics. It is certainly not a focal theme of the blog, but I was wondering if there would be interest to discuss business topics without the "fluff" that one inevitably finds in airport books and business school discussions.

For instance, there is a strong link between decision-making and the foundations of marketing. There are also interesting connections between branding and categorization and linguistics. If you guys think this could be a contribution to the blog, I would be more than happy to give it a go. 

[-]lsusr148

There's definitely an interest. Like many subjects, the limiting factor is people who are good writers, good at business (or at least knowledgeable about it) and have the slack to post. Pseudonyms are fine.

[-]gwern162

That is my theory too; see "Why So Few Matt Levines?"

Hi everyone, I’m new here, I’m sokatsat(pen name), I’ve long been preoccupied with a persistent moral intuition: that the suffering or harm of an innocent being—whether human or animal—is not merely tragic, but fundamentally unacceptable. This belief hasn’t emerged from a single ideology, nor from personal trauma. It feels more like a deep-seated axiom, an emotional and philosophical anchor that has shaped the architecture of my life.
Over the years, this sense of duty extended beyond human concerns. I began to question the normalized suffering of animals—particularly in food systems that treat sentience as secondary to convenience. While I haven’t fully eliminated meat from my diet, I’m increasingly drawn to frameworks that prioritize minimizing harm without losing touch with reality. Ethical consistency, as I see it, is an asymptotic pursuit—an ideal worth moving toward even if we never fully arrive.
Unlike many who structure their lives around accumulating material wealth, I’ve found myself disinterested in optimizing for money. It’s not that I reject it entirely—I simply view wealth as a means, not an end. What I seek instead is durable impact. I want to help build systems—technological, ethical, and institutional—that are robust against the worst human and AI failures. I want to participate in designing a future where harm to innocents isn’t just minimized, but systemically resisted.
I’m joining LessWrong because I see in this community a rare blend of epistemic rigor and moral seriousness. Here, people ask the hard questions—about AGI, about coordination failures, about the fragility of civilization itself. I’m motivated to contribute to these conversations, especially from a perspective grounded in a protective instinct that is both emotional and computational: How do we design systems that don’t just work, but protect what must never be lost?
Looking forward to thinking together


 

[-]TsviBT142

I wish more people were interested in lexicogenesis as a serious/shared craft. See:

The possible shared Craft of deliberate Lexicogenesis: https://tsvibt.blogspot.com/2023/05/the-possible-shared-craft-of-deliberate.html (lengthy meditation--recommend skipping around; maybe specifically look at https://tsvibt.blogspot.com/2023/05/the-possible-shared-craft-of-deliberate.html#seeds-of-the-shared-craft)

Sidespeak: https://tsvibt.github.io/theory/pages/bl_25_04_25_23_19_30_300996.html

Tiny community: https://lexicogenesis.zulipchat.com/ Maybe it should be a discord or a reddit, thoughts?

Hi! I am a master student in Computer Science majoring in — yes you guessed it — AI, but also with a background in psychology.

Interests

Cognition

I was always interested in thinking and optimizing my own thought patterns. It's probably half my urge to be 'more intelligent' and half the necessity to overcome my ADHD challenges. Through my studies, I already learned about many of the things taught here or in Eliezers works, but HPMOR and the Sequences still had/have a lot to teach me, or at least help my knowledge affect my actions.

AI

I pursue AI due to my interest in cognition. I would like to know how intelligence and reasoning work, and what heuristics make them better or worse. A proof of understanding, after all, is the ability to recreate it using your beliefs about it.

Ethics

I also have a deep interest in ethics. My current stance is a form of sentientism. In short, I currently believe most (human or non-human) animals have a certain ability to suffer, which is correlated with, but not caused by, intelligence. I want to grant these beings rights for protection from unnecessary harm and the likes. Intelligence just gives you the additional rights; e. g. to pursue a purpose.

What am I doing here

I don't really have a lot of people whom I can discuss my thoughts and Ideas with in real life. Most non-AI people hate the work I am doing or are just disinterested, and many AI people hope for salvation from AGI or riches from their jobs. I just want to understand how thinking works and can be optimized, and I have the feeling there are people here who think like me. Of course, I would also like to learn more strategies for system 2 thinking, and improvmymy heuristics for system 1. I also hope I can contribute to this collection of knowledge at some point! :D

LessWrong's been a breath of fresh air for me. I came to concern over AI x-risk from my own reflections when founding a venture-backed public benefit company called Plexus, which made an experimental AI-powered social network that connects people through the content of their thoughts rather than the people they know. Among my peers, other AI founders in NYC, I felt somewhat alone with AI x-risk concern. All of us were financially motivated not to dwell on AI's ugly possibilities, and so most didn't.

Since exiting venture, I've taken a few months to reset (coaching basketball + tutoring kids in math/english) and quietly do AI x-risk research.

I'm coming at AI x-risk research from an evolutionary perspective. I start with the axiom that the things that survive the most have the best characteristics (e.g., goals, self-conceptions, etc) for surviving. So I've been thinking a lot about what goals/self-conceptions the most surviving AGI's will have, and what we can do to influence those self-conceptions at critical moments.

I have a couple ideas about how to influence self-interested superintelligence, but am early in learning how to express those ideas such that they fit into the style/prior art of the LW community. I'll likely keep sharing posts and also welcoming feedback on how I can make them better.

I'm generally grateful that a thoughtful, truth-seeking community exists online—a community which isn't afraid to address enormous, uncertain problems.

[-]ceba10

What environmental selection pressures are there on AGI? That's too vague, isn't it? (What's the environment?) How do you narrow this down to where the questions you're asking are interesting/reaearcheable?

Ah but you don't even need to name selection pressures to make interesting progress. As long as you know some kinds of characteristics powerful AI agents might have: eg goals, self models... then we can start to ask--what goals/self models will the most surviving AGIs have?

and you can make progress on both, agnostic of environment. but then, once you enumerate possible goals/self models, then we can start to think about which selection pressures might influence those characteristics in good directions and which levers we can pull today to shape those pressures.

Hello, LessWrong community. I don't recall exactly how I stumbled upon your website. It was many months ago, and I bookmarked your website and have gradually been looking through your recommended posts for newcomers. I have a multidisciplinary background in history, policy, case law, ecology, botany, evolutionary biology, literature, and languages. From 2022 to 2024, I worked as an academic journal article editor and went through an internal company training program to become qualified to edit articles on AI research. This was a great introduction to the many areas of human life, research, and industry being transformed by AI. I edited articles on AI tool development in agriculture, linguistics, radiology, and countless other disciplines. Eventually, I was one of hundreds of editors laid off by our company when competition from AI writing and editing tools dramatically reduced the quantity of articles international scholars were submitting to us for English language editing. I am fascinated by AI's potential and a little nervous about my employability as a human in years to come, but not so nervous as to let the irrationality of fear hijack my mind. I think the LessWrong posts will be good education for me, and I will wait to contribute further until I have a better understanding of the etiquette and culture. For now, I will just say hello and thank you all for your thoughtful discourse. 

Hi LW,

I came across LW whilst prompting search engines for publishers that my work would be suited to, work which I have nearly concluded, and work in which the goal has been to make an impenetrable behavioural model for a functional and objectively fair society. I started this project roughly 4 years ago after being discriminated against during an interview process for a role in concept art at a AAA company following a prompt on my perspective to social issues - in particular on this occasion trans.

Over the last decade I have struggled greatly. At first with myself, internalising the miscommunication issues between myself, friends and family. Later with communication itself, what is the blockade preventing my thought processes from being received neutrally, and then later to which extent I wish to represent my inclination towards solving contentious issues. Over this period I have been repeatedly described as immoral, on the 'wrong side of history', thinking 'too far ahead', and most oftenly - and perhaps most palatably - simply obsessive. I don't know why I care about the things I do so much, but to those that ask why I do, I often feel 'why do they not?'.

Due to the complex and often intense feelings of moral conflict I have felt because of this, I have had an 'on-and-off' relationship with constructing the objective framework since 2021. Often bouncing between what I feel is a crucial contribution of effort to progress, and the over shadowing self-alienation of 'unhealthy obsession' around issues that have driven a wedge between my ability to connect with my friends, family and career prospects.

I used to feel that most conversations around contentious issues required two conversations. The first was to meet a condition of 'why it is not wrong to offer dissenting opinion' or perhaps more bluntly 'why I'm not a bad person' - establishing equal footing for fruitful conversation by appealing to virtues such as; everyones right to happiness, or not wanting individuals to suffer. After which the second conversation could be hosted - the actual topic intended to discuss in the first place, whatever that contentious issue may be. Meeting that condition however, never felt attainable.

Over time realising - individuals can only meet you on depths they have met themselves - having these conversations with those outside of my natural environment who were already discussing these issues, didn't feel like contention, but collaboration. Due to my social rearing - liberal upbringing and surrounding creative environment - I had been laced at odds with myself all the while feeling like I had difficulty understanding why I was so wrong, instead of understanding that I was simply misunderstood. Pre - and post - understanding this sentiment, for better or worse, this has been my driver to communicate a framework as efficiently as possible, as the only method I can constructively resort to in order to potentially communicate with siblings, friends, and the ideologically conformed infrastructure around the creative industry that has held me back.

I'm hoping that the framework I contribute will be beneficial to educational bodies, the work sector, and a shift in how as a society we approach politics by introducing a concrete framework with objective and immutable laws of good and bad and some minor emphasis on how this relates to social issue generalities. On the whole a guide to a behavioural completionist society. Whenever I have felt awful, I have assumed others must feel worse, and I hope this too helps them. The framework is not intended to fix all issues, but it is intended to fix what I consider to be a broken language, lacking a spine necessary to make it as valuable and binary as Mathematics.

I'm not sure how much I want to say yet, but when it's finished I have been compiling a list of priority and secondary contacts I can forward the material onto, and will likely drop aspects here for assessment. I've read some aspects of the governing ideals behind those who use this site and I absolutely love the soft nudges towards healthy discourse, and critical thought processes. I find it very encouraging!

I do struggle with ADD (which can be a double edged sword), so writing the framework has been challenging, though I've recently picked up Scrivener which seems to be helping the organisational aspects a lot.

Anyway, maybe I'm delusional, but thanks for reading!

DM me when you’re finished, I want to see what you come up with.

[-]ceba10

I'd like to see your work, when it's ready to be shared.

Hello, everyone! I don't remember how I first found the site a few months ago, but I came back after reading some of David Chapman's stuff (Meaningness, Eggplant, etc.) I read through HPMOR and the sequences and thought they were pretty awesome, and I also really like the general vibes here in favor of good epistemics and discussion.

Some things about me:
I graduated last spring with a B.S. in Poli Sci, I was planning to apply to law school but have since decided against it for the foreseeable future, I currently work at a local law firm in my area.
I play violin and piano semi-professionally, doing gigs around town.
I've been saving over 85% of my income since last September when I actually started tracking it.
Relatedly, I want to keep my spending very low. It seems really helpful at allowing me to have control of my time and my actions.
I want to learn a lot of things and get better at doing a lot of things. I regret that I didn't have this mindset when I was still in college, at that point I was mainly seeking good grades for minimum effort.
I don't have too much in terms of strong life direction. My interests are varied and I can't really imagine picking any one thing to focus on above all else for more than a limited time.
Last time I took an OCEAN test I was in the bottom 1% for neuroticism so that may help explain some things.
Current focuses: Music, Japanese, Exercise, learning about AI (have updated massively towards a relatively shorter timeline from basically no timeline at all since first visiting the site), and Cooking.
I definitely want to get some better structure to actually track progress in different things.
I also want to get caught up on some math and physics to better intuitively understand some of the stuff I've been reading here, I used to be pretty good (went to National Mathcounts in middle school) but haven't done anything past like Calc 1 and I don't remember that very well.  
I'm in the US, but do not live really close to any of the meet-up places that I have seen.

What I want to do here: I would like to use all the cool stuff you guys have written/made to learn and get better at doing things, I would like to make positive connections and friendships with smart and interesting people, and I would like to hopefully create some content that people here will find useful at some point. See you around!

I found this site by using chatgpt, asking for niche internet communities to do my hobby. My hobby being interview style conversations about what makes you think a statement is true. Where I ask questions and hypotheticals to explore together why you think something is true. If anyone is interested I am open to doing this with you.

FYI LessWrong has a somewhat hidden feature called Dialogues. Note that 

PSA - at least as of March 2024, the way to create a Dialogue is by navigating to someone else's profile and to click the "Dialogue" option appearing near the right, next to the option to message someone. 

I found LessWrong two(ish) years ago in kind of a weird way. I was looking for critiques of C.S. Lewis’s book The Abolition of Man, and the best one I found was this LessWrong post discussing the book. From there, I found The Sequences, Effective Altruism, and the SlateStarCodex subreddit. I recently moved to the Tri-Cities, WA area, far away from any of the rationalist/EA meetups I’m aware of. If anyone else is in the area, I’d love to connect! 

Hi all, I'm fairly new to LessWrong, I've been reading for around 6 months now. I grew up ultra-orthodox Jewish, and somewhere along my extended "loss of faith" I discovered HPMOR which eventually led to my discovering LessWrong, albeit a few years later. 

I've long been planning on getting a career in coding, which has held my interest since my early teenagehood, but now that I don't expect the industry to exist for much longer I'm considering going into AI safety, which seems like both the most important thing I could be doing and the most interesting.

Since I went through an ultra religious schooling I don't have the necessary qualifications to apply to a university, (which I'm either way unsure of the usefulness of), so I'm not sure where to start, either in the AI safety industry or anywhere else. If anyone has any advice for me that would be appreciated. 

Some more about me:

I'm 19 years old and from the UK (London)

I listen to a lot of music, of which I especially enjoy "minimalistic" music, (for example Ludovico Einadi). I also play the piano. 

I routinely read LessWrong, ASX and Don't Worry About The Vase

I'm trying to take up writing, whilst I'm not very good I can see myself improving slowly, so hopefully one day it'll come more naturally.

Thanks for reading!

What was the name of the rich guy whose information was "deleted"/"unlearned" from ChatGPT sometime in 2024 because he was like, "Hey, why does this model know so much about me?"?

IIRC it came out when people realized that asking (some model of) ChatGPT about him breaks the model in some way? And then it turned out that there were more names that could cause this effect, and all of some influential people.

David Mayer de Rothschild.

You got me curious, I thought "no way the newer models with late 2024 knowledge cutoff date and/or search cant figure this out" but apparently not. Tried 5 minutes and couldn't get any model to output the answer

care to share some chats?

Literally just copy pasted your question.

https://chatgpt.com/share/681a16b2-58f4-8002-8e24-85912ba3d891 (seems to found another censored person, Brian Hood though)

For other models I asked in OpenRouter and idk any easy way of sharing chats

Has the LW team thought about doing a Best of LessWrong for the years before 2018? I occasionally find a gem of a post that I haven't read and I wonder what other great posts I'm missing. 

Alternatively, anyone want to reply with their top three posts pre-2018?

I think sorting all posts by 'Top, inflation adjusted' and browsing by year is your best bet. E.g. 2016

Comment deleted

I also realized a lot of my personal philosophy was independently discovered on lesswrong when I was about 16. 

As for doubting your ideas - there exists a healthy level, but it perhaps actually increases as you age. A little less doubt is useful for exploration. 

Randomness does seem important to many algorithms. ET Jaynes argued that there is ~always a more clever and harder to find deterministic approach that outperforms any randomized approach. There do seem to be counter examples in some cases though, such as mixed strategies in competitive games. However, I guess the most well-studied version of this question is BPP versus P which is still open.  

Comment deleted

I've actually been recently thinking about randomness in ML, and I've come to a compelling case for it's specific role. The insights do seem to generalize to all problem-solving mechanisms in a way. I can expand if you want

Sure

Hello. I'm Ossie.

I heard about LessWrong as a teenager, and read some of the Sequences and HPMOR. I wasn't involved with the community at the time, but Yudkowsky's writing has influenced my beliefs.

I'm here now because the advent of human-level LLMs - computer agents which can speak, that can produce Turing-test-level utterances - has raised fears and questions in me which I do not see addressed in contemporary artificial intelligence discourse.

My philosophical concerns[1] are about the act of speech as vocalization, what -happens- when one speaks, what makes one able to speak, and (crucially) what causes one to speak when their expressed preference would be to remain silent, or what renders one unable to speak when their expressed preference would be to speak.

When ChatGPT was made publicly accessible in 2022, I felt a deep fear in using it that I did not understand. I additionally had fears about the loss of an expected comfortable future and general redundancy/lack of work (and these are still extant); but there was something else, something about the way it's been constructed to speak, that made me feel very uncomfortable. 

I have now worked out what this was - I perceive modern chatbots as having logorrhea. They talk too much, far too much; they deliver an assault of information because having more words makes it more likely to be evaluated as having the right answer somewhere. The wordy answers are more likely to be evaluated as correct, and fed back into the training data.[2]

A related concern of mine is: How do we determine if something is intelligent without recourse to speech? Could you have an artificial intelligence that does not use language? You can certainly have an intelligence that does not use verbal language, our English; and yet verbal language is the criterion which modern LLMs are rated on, as a proxy for understanding. Indeed, this is a criterion we use on each other to try and rate understanding! 

What makes me feel (and feel is indeed the word) that something is wrong with this criterion (or at least that we need supplementary criteria for evaluating whether something is intelligent) is my personal experience with mutism and speech dysregulation. Sometimes I am unable to respond when spoken to.[3] When this happens, I do not lose my intelligence or consciousness. I am still there, and I can still act according to my understanding of the world. But I cannot be evaluated in the same way, and the language stream, the words that so naturally come to us when we respond, goes away. So I am not simply a language model.[4] But then what am I? What is the nature of my thought outside of language? 

In summary: I see a gap in the discourse about the nature of chatbot utterances and will mostly post about that. I'm sure the topic has been covered, but human-level utterances from them are quite new, and any papers about the nature of these utterances are probably new as well, so I'd deeply appreciate any links, papers, good articles or older groundwork on the subject.

About me: Early 30s, Australian, studied mathematics to a bachelor degree level with some focus on statistical learning, continue to read about computer science and linguistics. 

  1. ^

    And this will be vague - I'm here in part to try and articulate my concerns from feeling into words, which is always easier in discourse (and it being easier for me to speak in discourse is indeed one of my concerns but we'll get to that).

  2. ^

    A gloss - I don't understand the technical details of current mechanisms like backpropagation and how current feedback models work, so if this is WRONG regarding how they currently work (as opposed to just vague), I'd like to know.

  3. ^

    I have developed strategies to deal with this, not least because not responding for several minutes when prompted (or responding incorrectly) will get you quickly taken into confinement (hospital, etc)

  4. ^

    Although I'd now say part of what makes up my Self is a language model layered on top of whatever the other thing is.  

Greetings, all! I discovered LessWrong when trying to locate a place or group of people who might be interested in an AI interpretability framework I developed to help teach myself a bit more about how LLMs work. 

It began as a 'roguelite' framework and evolved into a project called AI-rl, an experiment to test how different AI models handle alignment challenges. I started this as a way to better understand how LLMs process ethical dilemmas, handle recursive logic traps, and navigate complex reasoning tasks.

So far, I’ve tested it on models like Claude, Grok, Gemini, Perplexity, and Llama. Each seems to have distinct tendencies in how they fail or succeed at these challenges:

Claude (Anthropic) → "The Ethicist" – Thoughtful and careful, but sometimes avoids taking risks in reasoning.

Grok (xAI) → "The Chaos Agent" – More creative but prone to getting caught in strange logic loops.

Gemini (Google) → "The Conservative Strategist" – Solid and structured, but less innovative in problem-solving.

Perplexity → "The Historian" – Focused on truth and memory consistency, but less flexible in reasoning.

Llama/Open-Source Models → "The Mechanical Thinkers" – Struggle with layered reasoning and can feel rigid.

 

Why This Matters:

A big challenge in alignment isn’t just making AI "good"—it’s understanding where and how it misaligns. AI-rl is my attempt at systematically stress-testing these models to see what kinds of reasoning failures appear over time.

I think this could evolve into:

1. A way to track alignment risks in open-source models.

 

2. A benchmarking system that compares how well different models handle complex reasoning.

 

3. A public tool where others can run alignment stress tests themselves.

 

Looking for Feedback:

Have others explored AI stress-testing in a similar way?

What’s the best way to structure these findings so they’re useful for alignment research?

Are there known frameworks I should be comparing AI-rl against?

 

I’m still new to the deeper technical side of AI alignment, so I’d love to hear thoughts from people more familiar with the field. Appreciate any feedback on whether AI-rl’s approach makes sense or could be refined!

Hi Carl,

I know your post is quite old.  I am curious if you have a public github repository showcasing any of your work?  I am especially interested in your tests regarding recursive logic traps as this is something I have also been studying in detail.  Have you tried having the different models you've tested reason collaboratively?

Hey everyone, I'm new to posting here but have been reading the intro posts for a few weeks. I'm a software engineer working on AI evaluation products (LLM as a judge, RAG evaluation, etc.) which I find interesting from a technical perspective but ultimately I would like to shift more of my time into responsible AI and AI safety.

I have always had an interest in this topic but have only explored through popular books on the subject so far, Weapons of Math Destruction, Technically Wrong, and the Alignment Problem. I also did a minor in AI ethics in college where I wrote a paper on the ethics of a university using a Twitter scraper for public tweets and sentiment analysis to determine student well-being around campus during the pandemic. 

Many of the origin stories around this website really resonated with me. After finishing my degree and joining the corporate world I found a lack of opportunities for writing and discussion about the ethics of artificial intelligence even though I am working in a problem space adjacent to it. My aim here is to begin contributing to discussions and to improve my skills of critical thinking in this area which I hope will pair nicely with my technical background. I have really enjoyed reading through the posts here and I hope to get better at both writing and forming logical arguments around subjects I am passionate about.

Anyways, great to meet you all, thank you for creating this community.

Hello - I'm a physician and writer working on a book about evidence in medicine. I've been thinking a lot about alignment of LLMs. Saw this article:

https://www.wired.com/story/ai-safety-institute-new-directive-america-first/

What do you all think could be the impact of these changes on reliability of LLMs? Are any of you directly involved? Feel free to respond here or reach out to me directly, alex.morozov@evivapartners.org, or if you have confidential tips, my Signal handle is alex.5757

Thank you!

Alex

Hello, I'm Wojtek.

Few things about me:

  • Interests:
    • Math, I know a lot of competition math at a high level and I like to learn by reading textbooks and seeing popular math youtube channels.
    • Computer Science, similarly to math I like learning the interesting stuff. I'm more interested/fluent in algorithms at a competitive level, AI, programming in general (so not stuff like Excel or building computers).
    • Various sports (I like being healthy), such as wall climbing, running, going to the gym, orienteering.
    • Getting acquainted with our worlds culture, things like books, movies, places to see/learn about, knowledge in general.
  • Goals; Stuff I do and ways I try to be a better person:
    • Learning as much as I can. Currently I'm at high school, so it seems optimal to learn, not work.
    • Behaving like a responsible adult. Unfortunately this isn't a given in my case. Long term I want to move out and get a nice job.
    • Finding a meaning in life. It was kind of foolish to get so far in the journey of life while forgetting to pack a meaning. The obvious options are having an impact, relationships and just being happy.
    • Having an impact. This is related to the previous point. I don't want to be mundane (although I don't have a good reason for this). I want to excel. Might as well be a good impact while I'm at it.
  • Why I'm here:
    • I found this site by following links to it from random places. After visiting a few times I started to recognize it. I read the welcome guide and put Rationality A-Z into my to-read list. That was about 1.5 months ago. Today I finished the book (long read), so I thought I might say hi to the community.
    • The A-Z sequences were revelatory for me - they changed my worldview and exposed my flaws. I read the distilled and edited version where every other post or so was thrown out, but that's probably for the best because it was so long and I don't really remember the first half.
    • Recently I've been lurking on sites such as 80000 hours and Effective Altruism and they happen to link to Less Wrong often. These sites seemed linked by a common mentality and goals that I happen to be interested in.

Thank you for reading my comment. Looking forward to getting to know you guys better and learning a lot from you.

I have an impression that there's been a recent increase in the number of users inactivating/deleting their LW accounts. As I say, it's just an impression, no stats or anything, so I'm wondering whether that's the case and if that's the case, what might be the causes (assuming it's not a statistical fluke).

I've read & followed this community for a long time - dropping this here because I'm hiring and would love to signal boost to candidates with good epistemics, high agency, and interest in applying AI for social good.

Is anyone interested in a resource coordinator/ops type position? Remote/hybrid options so any geography could work, but ideally based near Chicago or Milwaukee. Supports an AI/MLE team of 30 in healthcare tech. Looking for high autonomy- mix of ops/PjM style work (plenty of open-ended org process improvement type stuff, but also some approvals/reporting) with highly energetic team of mostly recent grads. Please DM with questions/interest!

Would anyone be interested in having a conversation with me about morality? Either publicly[1] or privately.

I have some thoughts about morality but I don't feel like they're too refined. I'm interested in being challenged and working through these thoughts with someone who's relatively knowledgeable. I could instead spend a bunch of time eg. digging through the Stanford Encyclopedia of Philosophy to refine my thoughts, but a) I'm not motivated enough to do that and b) I think it'd be easier and more fun to have a conversation with someone about it.

  • To start, I think you need to be clear about what it is you're actually asking when you talk about morality. It's important to have clear and specific questions. It's important to avoid wrong questions. When we ask if something is moral, are we asking whether it is desirable? To you? To the average person? To the average educated person? To one's Coherent Extrapolated Volition (CEV)? To some sort of average CEV? Are we asking whether it is behavior that we want to punish in order to achieve desirable outcomes for a group? Reward?
  • It seems to me that a lot of philosophizing about morality and moral frameworks is about fit. Like, we have intuitions about what is and isn't moral in different scenarios, and we try to come up with general rules and frameworks that do a good job of "fitting" these intuitions.
  • A lot of times our intuitions end up being contradictory. When this happens, you could spend time examining it and arriving at some sort of new perspective that no longer has the contradiction. But maybe it's ok to have these contradictions. And/or maybe it's too much work to actually get rid of them all.
  • I feel like there's something to be said for more "enlightened" feelings about morality. Like if you think that A is desirable but that preference is based on incorrect belief X, and if you believed ~X you'd instead prefer B, something seems "good" about moving from A to B.
  • I'm having trouble putting my finger on what I mean by the above bullet point though. Ultimately I don't see a way to cross the is-ought gap. Maybe what I mean is that I personally prefer for my moral preferences to be based on things that are true, but I can't argue that I ought to have such a preference.
  • As discussed in this dialogue, it seems to me that non-naive versions of moral philosophies end up being pretty similar to one another in practice. A naive deontologist might tell you not to lie to save a child from a murderer, but a non-naive deontologist would probably weigh the "don't lie" rule against other rules and come to the conclusion that you should lie to save the child. I think in practice, things usually add up to normality.
  • I kinda feel like everything is consequentialism. Consider a virtue ethicist who says that what they ultimately care about is acting in a virtuous way. Well, isn't that a consequence? Aren't they saying that the consequence they care about is them/others acting virtuously, as opposed to eg. a utilitarian caring about consequences of involving utility?
  1. ^

    The feature's been de-emphasized but you can initiate a dialog from another user's profile page.

I am trying to see if there has been any followup work on conservative concept boundaries since EY posted about it. I didn't find anything with a web search, but the people of lesswrong seem likely to know if there is anything I missed under different names, off of the main internet ect

Hi! Like others on this thread, I'm a long time reader who's finally created an account to try to join the discussion. I'm curious, if I comment on a 15 year old article or something, is anyone likely to see that? I love browsing around the Concepts pages, but are comments there (.e.g.) likely to be seen?

My intuition is that comments on recent trending articles are more likely to get engagement, but can anyone confirm or deny or give suggestions on the best ways/places to engage?

Thanks!

I believe that the author will be notified and may see it.

The most recent comments show up in 'Recent Discussion' on the main page, regardless of article age. But, of course, though some people may see them, you are still more likely to get engagement if you comment on recent articles.
Don't know about wikitags comments.

I wish I had discovered LessWrong earlier in my life. But perhaps I wouldn’t be able to appreciate it back then. This was always my curse: I couldn’t see the wisdom in other people’s words until learned it the hard way.

I always believed intelligence to be the most advantageous trait, and if I failed at something, it was only because I was not smart enough. Or smart enough to avoid strings attached.

Born in Russia and self-educated, I tried to emigrate with no preparations. Without a degree or much money, I could only get tourist visas and had no means to settle anywhere, traveling mainly through Southeast Asia countries.

However, 7 years abroad turned out to be the best thing I could do for personal growth. I’ve met many interesting people from all around the world, immersed myself in local cultures and religious practices. Buddhism helped to improve my introspective skills and Islam explained the importance of priority management. Eventually, immigration authorities tired of such a vagabond tourist, and I had to return to Russia with the hope of one day qualifying for talent visas.

I tried myself in writing but couldn’t even get any feedback from the readers. But improved writing skills helped me to win an InnoCentive challenge. Prize money allowed to dedicate several years to an ambitious software project and, with improved EI, I finally made something cool. Upon reaching a proof-of-concept, I emerged from the coding intending to get funding and dive back into coding. But I’ve faced the same problem of promoting my work.

As it didn’t go viral with the target group of software engineers, I tried to appeal to the vision behind the project, which is essentially an alternative to AI. Trying to promote it in AI-related subreddits, I figured that AI-alignment is a kind of related subject; and in r/ControlProblem, I stumbled upon links to LessWrong.

So, this is how I’ve got here. But despite the mentioned primary goal, I think, maybe I’ll make my first post about a highly speculative subject of healing powers of meditation in the words of correlation between attention and humoral regulation. Basically, to rewrite my old article and see if I got the spirit of LessWrong right :)

[-]ceba10

If your product couldn't exist without our labour, then our labour is worth some fraction of that product's value to you. If it's worth something to you, you can then afford to pay some fraction of that fraction for that labour. If in fact you can't afford to pay any fair price for that labour, then this means the value generated by that product is far less than the collective value of the labour required to create the product.

This is a fundamentally uncapitalist venture. If you see a nonsensical buisness model, you generally assume it's a tax thing, or a scam. The efficient market doesn't allow such products/buisness models to exist. So if it yet exists, coordinated effort is being made to keep it alive. 

This product is intended to automate all human labour. 

We are in effect subsidising the development of a product intended to end the livelihoods of all people who work for a living. 

Discuss? Agree/disagree? Incorrect? Commie spam? 

Going into account settings and clicking submit makes LessWrong switch to light mode.
[On a more meta note, should I report such issues here, in the intercom, or not at all?]

Intercom please! Helps for us to have back and forth like "What device / operating system / browser?" and other relevant q's.

Greetings, LessWrong. I am a science fiction author, programmer, and antihumanist. I'm here because I want to engage in what I believe is the great debate of our time. I am against humanism in the classic sense, where we're talking in terms of Petrarch's optimism about human ingenuity and capabilities. However, I share in his historic perspective that the world must be reborn from a dark age.

Computationally evolved models are largely an irrational mess which do not produce elegant equations or traceable logical chains, and cannot be fully reduced to the scope of human understanding. Yet, these tools are clearly superior to those that we have ourselves designed, in many cases producing results that are de facto magical, "sufficiently advanced," and definitely beyond a complete human language explanation. It has never been more clear how limited our capabilities are, and please correct me if I am wrong, but this is a place where many are honing their methods of thinking like a martial art, in a pitched self-defense against "AI."

It's my conviction that we are all living in a world overrun by a culture of skepticism and rationalism in perverse excess, even honesty in perverse excess. Who has not experienced this in their personal lives? It is quite obvious to me that logic, rationality, skepticism, and so on are the great vices of our time. To me, these are rhetorical modes or strategies, and it is an all-too-obvious fallacy to think there is any natural or supernatural method which might help us produce increasingly sound ideas or conclusions. What is needed to break humanity out of this laziness is the production of new concepts which will shatter coherence and convention, as well as put an end to indulging the temptation of reduction.

We are left to do the best we may with the tools we have at our disposal, and so this is not a gloom and doom style post that puts an end to the discussion. Rather, I am hoping for a rebirth of our intellectual spirit through discarding the cultural baggage of the past centuries. Like Petrarch, I hold historic perspective in one hand and poetry in the other. Conventional thinking of any kind is the enemy of imagination, and rationality is often little more than a conceit of coherence which can be turned towards any means be they political, religious, or interpersonal. By binding ourselves to such narrow and backwards thinking, we can only seal our own doom. If that comes across as a challenge to this martial arts dojo, then so be it. 

Biological evolution produces "messy" models. They are needlessly complex and difficult to understand. And yet they are alive!

Here are my notes on the topic of evolution and artificial life. The section "Sparsity & Modularity" discusses what I mean by "messy" models. https://coldcoffee.neocities.org/evolution_review

Curated and popular this week