Hello everyone!
I am new here and thought I should introduce myself. I am currently reading the Highlights from the Sequences, and it has been giving me a sharper worldview; I do feel myself becoming more rational. I think a lot of people who call themselves rational are motivated by biases and emotions more than they realize, but it is good to be aware of that and work to be better, so that is what I am doing.
I am 17 years old, from Iraq. I found the forum through Daniel Schmachtenberger; I am not sure how well known he is here.
I am from a very Muslim country, and like most people I was brainwashed by it growing up. At 11 I started questioning and reading books as well, which was very hard, since the fear of "hell" is imprinted on anyone growing up in this environment, but by 14 I had broken free. As a result I went through a three-month existential crisis where I felt like I didn't exist and was in anxiety 24/7.
At that point I got interested in the New Age movement, Eastern religions, and spirituality, especially Buddhism and certain strands of Hinduism. I wasn't interested in taking them as dogmas or absolute views. I also got into Western philosophy later, especially the Idealism vs. Realism...
Hi! I joined LW in order to post a research paper that I wrote over the summer, but I figured I'd post here first to describe a bit of the journey that led to this paper.
I got into rationality around 14 years ago when I read a blog called "You Are Not So Smart", which pushed me to audit potential biases in myself and others, and to try to understand ideas/systems end-to-end without handwaving.
I studied computer science at university, partially because I liked the idea that with enough time I could understand any code (unlike essays, where investigating bibliographies for the sources of claims might lead to dead ends), and also because software pays well. I specialized in machine learning because I thought that algorithms that could make accurate predictions based on patterns in the world that were too complex for people to hardcode were cool. I had this sense that somewhere, someone must understand the "first principles" behind how to choose a neural network architecture, or that there was some way of reverse-engineering what deep learning models learned. Later I realized that there weren't really first principles regarding optimizing training, and that spending time trying to har...
I'm planning to run the unofficial LessWrong Community Census again this year. There's a post with a link to the draft and a quick overview of what I'm aiming for here, and I'd appreciate comments and feedback. In particular, if you
then I want to hear from you. I care a lot about rationality skills but don't know how to evaluate them in this format, though I have some clever ideas if I could find a signal to sift out of the survey. I don't care about politics, but lots of people do and I don't want to spoil their fun.
You can also propose other questions! I like playing with survey data :)
I found the site a few months ago via a link from an AI-themed forum. I read the Sequences and developed the belief that this was a place for people who think in ways similar to me. I work as a nuclear engineer. When I entered the workforce, I was surprised to find that there weren't people as disposed toward logic as I was. I thought perhaps there wasn't really a community of similar people, and I had largely stopped looking.
This seems like a good place for me to learn, for the time being. Whether or not this is a place for me to develop community remains to be seen. The format seems to promote people presenting well-formed ideas. This seems valuable, but I am also interested in finding a space to explore ideas which are not well-formed. It isn’t clear to me that this is intended to be such a space. This may simply be due to my ignorance of the mechanics around here. That said, this thread seems to be inviting poorly formed ideas and I aim to oblige.
There seem to be some writings around here which speak of instrumental rationality, or “Rationality Is Systematized Winning”. However, this seems to beg...
Hello everyone!
My name is José, 23 years old, Brazilian, and finishing (in July) a weird interdisciplinary undergraduate degree at the University of Sao Paulo (2 years of math, physics, computer science, chem, and bio + 2 years of do-whatever-you-want - I did things like optimization, measure theory, decision theory, advanced probability, Bayesian inference, algorithms, etc.)
I've been reading stuff on LW about AIS for a while now, and have taken some steps to change my career toward AIS. I met EA/AIS in 2022 via Condor Camp, a camp focused on AIS for Brazilian students, and since then I have participated in a bunch of those camps, created a uni group, done ML4Good, and attended a bunch of EAGs/EAGxs.
I recently started an Agent Foundations fellowship run by Alex Altair and am writing a post about the Internal Model Principle. I expect to release it soon!
Hope you all enjoy it!
I think there is a 10-20 percent chance we get digital agents in 2025 that produce a holy-shit moment as big as the launch of ChatGPT.
If that happens, I think it will produce another round of questions that sound approximately like “how were we so unprepared for this moment?”
Fool me once, shame on you…
I don't know exactly when this was implemented, but I like how footnotes appear to the side of posts.
I am a university dropout who wants to make an impact in the AI safety field. I am a complete amateur in the field, just starting out, but I want to learn as much as possible in order to make an impact. I studied software engineering for a semester and a half before realizing that there was a need for more people in AI safety, and that's where I want to give all my attention. If you are interested in connecting, DM me; if you have any advice for a newcomer, post a comment below. I am located in Hønefoss, Norway.
Site update: the menu bar is shorter!
Previously I found it overwhelming when I opened it, and many of the buttons were getting extremely little use. It now looks like this.
If you're one of the few people who used the other buttons, here's where you can find them:
Does someone have a guesstimate of the ratio of lurkers to posters on LessWrong, with 'lurker' defined as someone who habitually reads content but never posts (or posts only clarification questions)?
In other words, what is the size of the LessWrong community relative to the number of active contributors?
You could check out the LessWrong analytics dashboard: https://app.hex.tech/dac32525-33e6-44f9-bbcf-65a0ba40152a/app/9742e086-54ca-4dd9-86c9-25fc53f90f80/latest
In any given week there are around 40k unique logged-out users, around 4k unique logged-in users, and around 400 unique commenters (with about 1-2k comments). So the ratio of lurkers to commenters is about 100:1, though more like 20:1 if you compare people who visit regularly against people who comment.
If spaced repetition is the most efficient way of remembering information, why do people who learn a music instrument practice every day instead of adhering to a spaced repetition schedule?
Spaced repetition is the most efficient way in terms of time spent per item. That doesn't make it the most efficient way to achieve a competitive goal. For this reason, SRS systems often include a 'cramming mode', where review efficiency is ignored in favor of maximizing memorization probability within X hours. And as far as musicians go - orchestras don't select musicians based on who spent the fewest total hours practicing but still manage to sound mostly-kinda-OK, they select based on who sounds the best; and if you sold your soul to the Devil or spent 16 hours a day practicing for the last 30 years to sound the best, then so be it. If you don't want to do it, someone else will.
That said, the spaced repetition research literature on things like sports does suggest you still want to do a limited form of spacing in the form of blocking or rotating regularly between each kind of practice/activity.
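To make the trade-off concrete, here is a minimal toy sketch (not any particular SRS implementation; the interval multiplier, horizon, and cramming timings are made-up assumptions): an expanding-interval schedule touches an item only a handful of times per year, while a cramming mode ignores per-review efficiency and packs in as many passes as fit before the deadline.

```python
# Toy comparison of an expanding-interval review schedule vs. cramming.
# All parameters below are illustrative assumptions, not real SRS defaults.

def spaced_schedule(first_interval_days=1.0, ease=2.5, horizon_days=365):
    """Review days under a toy expanding-interval rule."""
    day, interval, reviews = 0.0, first_interval_days, []
    while day + interval <= horizon_days:
        day += interval
        reviews.append(round(day, 1))
        interval *= ease  # each successful review multiplies the gap
    return reviews

def cram_schedule(hours_available=16, minutes_per_pass=20):
    """Cramming: as many passes as fit before the deadline, efficiency ignored."""
    passes = int(hours_available * 60 // minutes_per_pass)
    return [round(i * minutes_per_pass / 60, 2) for i in range(1, passes + 1)]

if __name__ == "__main__":
    spaced = spaced_schedule()
    crammed = cram_schedule()
    print(f"Spaced: {len(spaced)} reviews over a year, on days {spaced}")
    print(f"Crammed: {len(crammed)} passes in the final 16 hours")
```

The spaced schedule hits the item only about half a dozen times in a year; the cram schedule burns dozens of passes in a day. Which one "wins" depends entirely on whether you're optimizing hours-per-item or performance at a fixed date.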
Declarative and procedural knowledge are two different memory systems. Spaced repetition is good for declarative knowledge, but for procedural (like playing music) you need lots of practice. Other examples include math and programming - you can learn lots of declarative knowledge about the concepts involved, but you still need to practice solving problems or writing code.
Edit: as for why practice every day - the procedural system requires a lot more practice than the declarative system does.
Hello! I've just found out about LessWrong and I immediately feel at home. I feel this is what I was looking for on medium.com and never found there: a website to learn about things, about improving oneself, and about thinking better. Medium proved to be very useful for reading about how people made 5 figures using AI to write articles for them, but not so useful at providing genuinely valuable information.
One thing I usually say about myself is that I have "learning" as a hobby. I have only very recently given a name to things and now I know that it's ADH...
Hello Everyone!
I am a Brazilian AI/ML engineer and data scientist. I have been following the rationalist community for around 10 years now, originally as a fan of Scott Alexander's Slate Star Codex, where I came to know of Eliezer and LessWrong as a community, along with the rationalist enterprise.
I only recently created my account and started posting here. Currently, I’m experiencing a profound sense of urgency regarding the technical potential of AI and its impact on the world. With seven years of experience in machine learning, I’ve witnessed how the ...
Re: the new style (archive for comparison)
Not a fan of
1. the font weight: everything seems semi-bolded now and a little bit more blurred than before. I do not see myself getting used to this.
2. the unboxed karma/agreement vote. It is fine per se, but the old one is also perfectly fine.
Edit: I have to say that the font on Windows is actively slightly painful and I need to reduce the time spent reading comments or quick takes.
Once upon a time, there were Rationality Quotes threads, but they haven't been done for years. I'm curious if there's enough new, quotable things that have been written since the last one to bring back the quote posts. If you've got any good lines, please come share them :) If there's a lot of uptake, maybe they could be a regular thing again.
Possible bug report: today I've been seeing errors of the form
Error: Cannot query field "givingSeason2024VotedFlair" on type "User". Did you mean "givingSeason2024DonatedFlair"?
that tend to go away when the page is refreshed. I don't remember if all errors said this same thing.
Hi! My name is Clovis. I'm a PhD student studying distributed AI. In my spare time, I work on social science projects.
One of my big interests is mathematically modelling dating and relationship dynamics. I study how well people's stated and revealed preferences align. I'd love to chat about experimental design and behavioral modeling! There are a couple of ideas around empirically differentiating models of people's preferences that I'd love to vet in particular. I've only really read the Sequences though, and I know that there's a lot of prior discussion ...
Hi everyone,
I have been a lurker for a considerable amount of time but have finally gotten around to making an account.
By trade I am a software engineer, primarily interested in PL, type systems, and formal verification.
I am currently attempting to strengthen my historical knowledge of pre-fascist regimes, with a focus on 1920s/30s Germany & Italy. I would greatly appreciate either specific book recommendations or reading lists for this topic - while I approach it from a distinctly “not a fascist” viewpoint, I am interested in books from both side...
I've noticed that the karma system makes me gravitate towards posts of very high karma. Are there low-karma posts that impacted you? Maybe you think they are underrated or that they fail in interesting ways.
Hello.
I have been adjacent to, but not participating in, rationality-related websites and topics since at least middle-school age (homeschooled and with internet), and I had a strong interest in science and science fiction long before that. Relevant pre-LessWrong readings probably include old StarDestroyer.Net essays and rounds of New Atheism that I think were age and time appropriate. I am a very long-term reader of Scott Alexander and have read at least extensive chunks of the Sequences in the past.
A number of factors are encouraging me to become more active...
I've been lurking for years. I'm a lifelong rationalist who was hesitant to join because I didn't like HPMOR. (Didn't have a problem with the methods of rationality; I just didn't like how the characters' personalities changed, and I didn't find them relatable anymore.) I finally signed up due to an irrepressible urge to upvote a particular comment I really liked.
I struggle with LW content, tbh. It takes so long to translate it into something readable, something that isn't too littered with jargon and self-reference to be understandable for a generalist wi...
Should AI safety people/funds focus more on boring old human problems like (especially cyber- and bio-) security instead of flashy ideas like alignment and decision theory? The possible impact of vulnerabilities will only increase in the future with all kinds of technological progress, with or without a sudden AI takeoff, and those vulnerabilities are much of what makes AGI dangerous in the first place. Security has clear benefits regardless, and people already have a good idea of how to do it, unlike with AGI or alignment.
If any actor with or without AGI can quickly gain lots of ...
Are there any mainstream programming languages that make it ergonomic to write high level numerical code that doesn't allocate once the serious calculation starts? So far for this task C is by far the best option but it's very manual, and Julia tries and does pretty well but you have to constantly make sure that the compiler successfully optimized away the allocations that you think it optimized away. (Obviously Fortran is also very good for this, but ugh)
What happens if and when a slightly unaligned AGI crowds the forum with its own posts? I mean, how strong is our "are you human?" protection?
Hey, everyone! Pretty new here and first time posting.
I have some questions regarding two odd scenarios. Let's assume there is no AI takeover to the Yudkowsky-nth degree and that AGI and ASI go just fine. (Yes, that's already a very big ask.)
Scenario 1: Hyper-Realistic Humanoid Robots
Let's say AGI helps us get technology that allows for the creation of humanoid robots that are visually indistinguishable from real humans. While the human form is suboptimal for a lot of tasks, I'd imagine that people still want them for a number of reasons. If there's ...
Is anyone from LW going to the Worldcon (World Science Fiction Convention) in Seattle next year?
ETA: I will be, I forgot to say. I also notice that Burning Man 2025 begins about a week after the Worldcon ends. I have never been to BM, I don't personally know anyone who has been, and it seems totally impractical for me, but the idea has been in the back of my mind ever since I discovered its existence, which was a very long time ago.
I'm really interested in AI and want to build something amazing, so I’m always looking to expand my imagination! Sure, research papers are full of ideas, but I feel like insights into more universal knowledge spark a different kind of creativity. I found LessWrong through topics like LLMs, but the posts here give me the joy of exploring a much broader world!
I’m deeply interested in the good and bad of AI. While aligning AI with human values is important, alignment can be defined in many ways. I have a bit of a goal to build up my thoughts on what’s right or wrong, what’s possible or impossible, and write about them.
Hi! New to the forums and excited to keep reading.
Bit of a meta-question: given the proliferation of LLM-powered bots on social media like Twitter etc., do the LW mods/team have any concerns about AI-generated content becoming an issue here in a more targeted way?
...For a more benign example, say one wanted to create multiple "personas" here to test how others react. They could create three accounts, and respond to posts always with all three accounts- one with a "disagreeable" persona, one neutral, and one "agreeable".
A malicious example would be if someone
I think there might be a lesswrong editor feature that allows you to edit a post in such a way that the previous version is still accessible. Here’s an example—there’s a little icon next to the author name that says “This post has major past revisions…”. Does anyone know where that option is? I can’t find it in the editor UI. (Or maybe it was removed? Or it’s only available to mods?) Thanks in advance!
I am very interested in mind uploading.
I want to do a PhD in a related field and comprehensively go through "Whole Brain Emulation: A Roadmap", taking notes on what has changed since it was published.
If anyone knows relevant papers/researchers that would be useful to read for that, or that would help me make an informed decision on where to apply to grad school next year, please let me know.
If someone has already done a comprehensive update on brain emulation, I would like to know about it, though I would still like to read more papers before I apply to grad school.
Are there good and comprehensive evaluations of COVID policies? Are there countries that really tried to learn, also for the next pandemic?
When rereading [0 and 1 Are Not Probabilities], I thought: can we ever specify our amount of information in infinite domains, perhaps with something resembling hyperreals?
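For context, the piece of that post my question builds on, as I understand it: mapping probabilities to log-odds sends 0 and 1 to infinities, which is where the "infinite information" framing comes from.

$$\operatorname{logit}(p)=\log\frac{p}{1-p},\qquad \operatorname{logit}(p)\to-\infty \ \text{as}\ p\to 0^{+},\qquad \operatorname{logit}(p)\to+\infty \ \text{as}\ p\to 1^{-}$$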
I've noticed that when writing text on LessWrong, there is a tendency for the cursor to glitch out and jump to the beginning of the text. I don't have the same problem on other websites. This most often happens after I've clicked to try to insert the cursor in some specific spot. The cursor briefly shows where I clicked, but then the page lags slightly, as if loading something, and the cursor jumps to the beginning.
The way around this I've found is to click once. Wait to see if the cursor jumps away. If so, click again and hope. Only start typing once you've seen multiple blinks at the desired location. Annoying!
Hello,
Longtime lurker, more recent commenter. I see a lot of rationality-type posters on Twitter, and in the past couple of years became aware of "post-rationalists." The term is somewhat ill-defined, but essentially they are former rationalists who are more accepting of "woo," to be vague about it. My questions are: 1) What level of engagement is there (if any) between rationalists and post-rationalists? 2) Is there anyone who dabbled in or fully claimed post-rationalist positions and then reverted to rationalist positions? What was that journey like, and what made you switch between these beliefs?
In Fertility Rate Roundup #1, Zvi wrote
"This post assumes the perspective that more people having more children is good, actually. I will not be engaging with any of the arguments against this, of any quality, whether they be ‘AI or climate change is going to kill everyone’ or ‘people are bad actually,’ other than to state here that I strongly disagree."
Does any of you have an idea where I can find arguments related to, or a more detailed discussion of, this disagreement (with respect to AI or maybe other global catastrophic risks; t...
Is there an explanation somewhere how the recommendations algorithm on the homepage works, i.e. how recency and karma or whatever are combined?
Quick note: there's a bug I'm sorting out for some new LessWrong Review features for this year, hopefully will be fixed soon and we'll have the proper launch post that explains new changes.
Possible bug: Whenever I click the vertical ellipsis (kebab) menu option in a comment, my page view jumps to the top of the page.
This is annoying, since if I've chosen to edit a comment I then need to scroll back down to the comment section and search for my now-editable comment.
[Bug report]: The Popular Comments section's comment preview ignores spoiler tags
As seen on Windows/Chrome
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.