If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.

[-]Sage500

Hello everyone!

I am new here and I thought I should introduce myself. I am currently reading the Highlights from the Sequences, and it has been giving me a sharper worldview; I do feel myself becoming more rational. I think a lot of people who call themselves rational are motivated by biases and emotions more than they think, but it is good to be aware of that and try to work to be better, so I am doing that.

I am 17 years old, from Iraq. I found the forum through Daniel Schmachtenberger; I am not sure how well known he is here.

I am from a very Muslim country, and I was brainwashed by it growing up, like most people. At 11 I started questioning and reading books as well, which was very hard, since the fear of "hell" is imprinted in anyone growing up in this environment. By 14 I broke free. As a result I had a three-month existential crisis where I felt like I didn't exist, and was in anxiety 24/7.

At that point I got interested in the New Age movement, Eastern religions and spirituality, especially Buddhism and certain strands of Hinduism, though I wasn't interested in taking them as dogmas or absolute views. I also got into Western philosophy later, especially the Idealism vs. Realism... (read more)

5Ruby
Welcome! Sounds like you're on the one hand at the start of a significant journey, but also that you've come a long distance already. I hope you find much helpful stuff on LessWrong. I hadn't heard of Daniel Schmachtenberger, but I'm glad to have learned of him and his works. Thanks.
6MalcolmOcean
Daniel Schmachtenberger has lots of great stuff.  Two pieces I recommend: 1. this article Higher Dimensional Thinking, the End of Paradox, and a More Adequate Understanding of Reality, which is about how just because two people disagree doesn't mean either is wrong 2. this Stoa video Converting Moloch from Sith to Jedi w/ Daniel Schmachtenberger, which is about races-to-the-bottom eating themselves Also hi, welcome Sage!  I dig the energy you're coming from here.
1Sage
Thank you! I hope I do, yes. I am still learning how the forum works :) And you are welcome as well.

Hi! I joined LW in order to post a research paper that I wrote over the summer, but I figured I'd post here first to describe a bit of the journey that led to this paper.

I got into rationality around 14 years ago when I read a blog called "You Are Not So Smart", which pushed me to audit potential biases in myself and others, and to try to understand ideas/systems end-to-end without handwaving.

I studied computer science at university, partially because I liked the idea that with enough time I could understand any code (unlike essays, where investigating bibliographies for the sources of claims might lead to dead ends), and also because software pays well. I specialized in machine learning because I thought that algorithms that could make accurate predictions based on patterns in the world that were too complex for people to hardcode were cool. I had this sense that somewhere, someone must understand the "first principles" behind how to choose a neural network architecture, or that there was some way of reverse-engineering what deep learning models learned. Later I realized that there weren't really first principles regarding optimizing training, and that spending time trying to har... (read more)

2julius vidal
Hi! I think I'm probably in a pretty similar position to where you were maybe a few months/a year ago, in that I am a CS grad (though sadly no ML specialisation) working in industry who recently started reading a lot of mechanistic interpretability research, and is starting to seriously consider pursuing a PhD in that area (and also looking at how I could get some initial research done in the meantime). Could I DM you to maybe get some advice?
1bensenberner
Sure!

I'm planning to run the unofficial LessWrong Community Census again this year. There's a post with a link to the draft and a quick overview of what I'm aiming for here, and I'd appreciate comments and feedback. In particular, if you

  • Have some political questions you want to get into detail with or
  • Have experience or opinions on the foundational skills of rationality and how to test them on a survey

then I want to hear from you. I care a lot about rationality skills but don't know how to evaluate them in this format, though I have some clever ideas if I could sift a signal out of the survey. I don't care about politics, but lots of people do and I don't want to spoil their fun.

You can also propose other questions! I like playing with survey data :) 

[-]Lerk161

I found the site a few months ago due to a link from an AI-themed forum.  I read the Sequences and developed the belief that this was a place for people who think in ways similar to me.  I work as a nuclear engineer.  When I entered the workforce, I was surprised to find that there weren’t people as disposed toward logic as I was.  I thought perhaps there wasn’t really a community of similar people, and I had largely stopped looking.

 

This seems like a good place for me to learn, for the time being.  Whether or not this is a place for me to develop community remains to be seen. The format seems to promote people presenting well-formed ideas.  This seems valuable, but I am also interested in finding a space to explore ideas which are not well-formed.  It isn’t clear to me that this is intended to be such a space.  This may simply be due to my ignorance of the mechanics around here. That said, this thread seems to be inviting poorly formed ideas and I aim to oblige.

 

There seem to be some writings around here which speak of instrumental rationality, or “Rationality Is Systematized Winning”.  However, this seems to beg... (read more)

8gilch
I mean, what do you think we've been doing all along? I'm at like 90% in 20 years, but I'm not claiming even one significant digit on that figure.

My drastic actions have been to get depressed enough to be unwilling to work in a job as stressful as my last one. I don't want to be that miserable if we've only got a few years left. I don't think I'm being sufficiently rational about it, no. It would be more dignified to make lots of money and donate it to the organization with the best chance of stopping or at least delaying our impending doom. I couldn't tell you which one that is at the moment though.

Some are starting to take more drastic actions. Whether those actions will be effective remains to be seen.

In my view, technical alignment is not keeping up with capabilities advancement. We have no alignment tech robust enough to even possibly survive the likely intelligence explosion scenario, and it's not likely to be developed in time. Corporate incentive structure and dysfunction makes them insufficiently cautious. Even without an intelligence explosion, we also have no plans for the likely social upheaval from rapid job loss. The default outcome is that human life becomes worthless, because that's already the case in such economies.

Our best chance at this point is probably government intervention to put the liability back on reckless AI labs for the risks they're imposing on the rest of us, if not an outright moratorium on massive training runs. Gladstone has an Action Plan. There's also https://www.narrowpath.co/.
4Lerk
So, the short answer is that I am actually just ignorant about this.  I’m reading here to learn more but I certainly haven’t ingested a sufficient history of relevant works.  I’m happy to prioritize any recommendations that others have found insightful or thought provoking, especially from the point of view of a novice.

I can answer the specific question “what do I think” in a bit more detail.  The answer should be understood to represent the viewpoint of someone who is new to the discussion and has only been exposed to an algorithmically influenced, self-selected slice of the information.

I watched the Lex Fridman interview of Eliezer Yudkowsky and around 3:06 Lex asks about what advice Eliezer would give to young people.  Eliezer’s initial answer is something to the extent of “Don’t expect a long future.”  I interpreted Eliezer’s answer largely as trying to evoke a sense of reverence for the seriousness of the problem.  When pushed on the question a bit further, Eliezer’s given answer is “…I hardly know how to fight myself at this point.”  I interpreted this to mean that the space of possible actions that is being searched appears intractable from the perspective of a dedicated researcher.  This, I believe, is largely the source of my question.  Current approaches appear to be losing the race, so what other avenues are being explored?

I read the “Thomas Kwa's MIRI research experience” discussion and there was a statement to the effect that MIRI does not want Nate’s mindset to be known to frontier AI labs.  I interpreted this to mean that the most likely course being explored at MIRI is to build a good AI to preempt or stop a bad AI.  This strikes me as plausible because my intuition is that the LLM architectures being employed are largely inefficient for developing AGI.  However, the compute scaling seems to work well enough that it may win the race before other competing ideas come to fruition.

An example of an alternative approach that I read
5gilch
Besides STOP AI, there's also the less extreme PauseAI. They're interested in things like lobbying, protests, lawsuits, etc.
4gilch
Yep, most of my hope is on our civilization's coordination mechanisms kicking in in time. Most of the world's problems seem to be failures to coordinate, but that's not the same as saying we can't coordinate. Failures are more salient, but that's a cognitive bias. We've achieved a remarkable level of stability, in the light of recent history. But rationalists can see more clearly than most just how mad the world still is. Most of the public and most of our leaders fail to grasp some of the very basics of epistemology.

We used to think the public wouldn't get it (because most people are insufficiently sane), but they actually seem appropriately suspicious of AI. We used to think a technical solution was our only realistic option, but progress there has not kept up with more powerful computers brute-forcing AI. In desperation, we asked for more time. We were pleasantly surprised at how well the message was received, but it doesn't look like the slowdown is actually happening yet.

As a software engineer, I've worked in tech companies. Relatively big ones, even. I've seen the pressures and dysfunction. I strongly suspected that they're not taking safety and security seriously enough to actually make a difference, and reports from insiders only confirm that narrative. If those are the institutions calling the shots when we achieve AGI, we're dead. We desperately need more regulation to force them to behave or stop. I fear that what regulations we do get won't be enough, but they might.

Other hopes are around a technical breakthrough that advances alignment more than capabilities, or the AI labs somehow failing in their project to produce AGI (despite the considerable resources they've already amassed), perhaps due to a breakdown in the scaling laws or some unrelated disaster that makes the projects too expensive to continue. I have a massive level of uncertainty around AGI timelines, but there's an uncomfortably large amount of probability mass on the possibility tha
5Lerk
This is where most of my anticipated success paths lie as well.

I do not really understand how technical advance in alignment realistically becomes a success path.  I anticipate that in order for improved alignment to be useful, it would need to be present in essentially all AI agents or it would need to be present in the most powerful AI agent such that the aligned agent could dominate other unaligned AI agents.  I don’t expect uniformity of adoption and I don’t necessarily expect alignment to correlate with agent capability.  By my estimation, this success path rests on the probability that the organization with the most capable AI agent is also specifically interested in ensuring alignment of that agent.  I expect these goals to interfere with each other to some degree such that this confluence is unlikely.  Are your expectations different?

I have not been thinking deeply in the direction of a superintelligent AGI having been achieved already.  It certainly seems possible.  It would invalidate most of the things I have thus far thought of as plausible mitigation measures.

Assuming a superintelligent AGI does not already exist, I would expect someone with a high P(doom) to be considering options of the form:

  • Use a smart but not self-improving AI agent to antagonize the world with the goal of making advanced societies believe that AGI is a bad idea and precipitating effective government actions.  You could call this the Ozymandias approach.
  • Identify key resources involved in AI development and work to restrict those resources.  For truly desperate individuals this might look like the Metcalf attack, but a tamer approach might be something more along the lines of investing in a grid operator and pushing to increase delivery fees to data centers.

I haven’t pursued these thoughts in any serious way because my estimation of the threat isn’t as high as yours.  I think it is likely we are unintentionally heading toward the Ozymandias approach anyhow.
4gilch
ChaosGPT already exists. It's incompetent to the point of being comical at the moment, but maybe more powerful analogues will appear and wreak havoc. Considering the current prevalence of malware, it might be more surprising if something like this didn't happen.

We've already seen developments that could have been considered AI "warning shots" in the past. So far, they haven't been enough to stop capabilities advancement. Why would the next one be any different? We're already living in a world with literal wars killing people right now, and crazy terrorists with various ideologies. It's surprising what people get used to. How bad would a warning shot have to be to shock the world into action given that background noise? Or would we be desensitized by then by the smaller warning shots leading up to it? Boiling the frog, so to speak. I honestly don't know. And by the time a warning shot gets that bad, can we act in time to survive the next one?

Intentionally causing earlier warning shots would be evil, illegal, destructive, and undignified. Even "purely" economic damage at sufficient scale is going to literally kill people.

Our best chance is civilization stepping up and coordinating. That means regulations and treaties, and only then the threat of violence to enforce the laws and impose the global consensus on any remaining rogue nations. That looks like the police and the army, not terrorists and hackers.
4gilch
The instrumental convergence of goals implies that a powerful AI would almost certainly act to prevent any rivals from emerging, whether aligned or not. In the intelligence explosion scenario, progress would be rapid enough that the first mover achieves a decisive strategic advantage over the entire world. If we find an alignment solution robust enough to survive the intelligence explosion, it will set up guardrails to prevent most catastrophes, including the emergence of unaligned AGIs.

Alignment and capabilities don't necessarily correlate, and that accounts for a lot of why my p(doom) is so high. But more aligned agents are, in principle, more useful, so rational organizations should be motivated to pursue aligned AGI, not just AGI. Unfortunately, alignment research seems barely tractable, capabilities can be brute-forced (and look valuable in the short term), and with corporate incentive structures being what they are, what we're seeing in practice is a reckless amount of risk taking. Regulation could alter the incentives to balance the externality with appropriate costs.
3gilch
We have already identified some key resources involved in AI development that could be restricted. The economic bottlenecks are mainly around high energy requirements and chip manufacturing.

Energy is probably too connected to the rest of the economy to be a good regulatory lever, but the U.S. power grid can't currently handle the scale of the data centers the AI labs want for model training. That might buy us a little time. Big tech is already talking about buying small modular nuclear reactors to power the next generation of data centers. Those probably won't be ready until the early 2030s. Unfortunately, that also creates pressures to move training to China or the Middle East where energy is cheaper, but where governments are less concerned about human rights.

A recent hurricane flooding high-purity quartz mines made headlines because chip producers require it for the crucibles used in making silicon wafers. Lower purity means accidental doping of the silicon crystal, which means lower chip yields per wafer, at best. Those mines aren't the only source, but they seem to be the best one. There might also be ways to utilize lower-purity materials, but that might take time to develop and would require a lot more energy, which is already a bottleneck.

The very cutting-edge chips required for AI training runs require some delicate and expensive extreme-ultraviolet lithography machines to manufacture. They literally have to plasmify tin droplets with a pulsed laser to reach those frequencies. ASML Holdings is currently the only company that sells these systems, and machines that advanced have their own supply chains. They have very few customers, and (last I checked) only TSMC was really using them successfully at scale.

There are a lot of potential policy levers in this space, at least for now.
6Morpheus
For not well-formed ideas, you can write a Quick Take (can be found by clicking on your profile name in the top right corner) or starting a dialogue if you want to develop the idea together with someone (can be found in the same corner).
6gilch
This is a question for game theory. Trading a state of total anarchy for feudalism, a family who will avenge you is a great deterrent to have. It could even save your life. Revenge is thus a good thing. A moral duty, even. Yes, really. For a smaller scope, being quick to anger and vindictive will make others reluctant to mess with you.

Unfortunately, this also tends to result in endless blood feuds as families get revenge for the revenge for the revenge, at least until one side gets powerful enough to massacre the other. In the smaller scope, maybe you exhaust yourself or risk getting killed fighting duels to protect your honor.

We've found that having a central authority to monopolize violence rather than vengeance, and courts to settle disputes rather than duels, works better. But the instincts for anger and revenge and taking offense are still there. Societies with the better alternatives now consider such instincts bad.

Unfortunately, this kind of improved dispute resolution isn't available at the largest and smallest scales. There is no central authority to resolve disputes between nations, or at least not ones powerful enough to prevent all wars. We still rely on the principle of vengeance (second strike) to deter nuclear wars. This is not an ideal situation to be in. At the smaller scale, poor inner-city street kids join gangs that could avenge them, use social media to show off weapons they're not legally allowed to have, and have a lot of anger and bluster, all to try to protect themselves in a system that can't or won't do that for them.

So, to answer the original question, the optimal balance really depends on your social context.
5gilch
Yes. And yes. See You Need More Money for the former, Effective Altruism for the latter, and Earning to give for a combination of the two.

As for which to focus on, well, rationality doesn't decide for you what your utility function is. That's on you. (Surprise! You want what you want.) My take is that maybe you put on your own oxygen mask first, and then maybe pay a tithe to the most effective orgs you can find. If you get so rich that even that's not enough, why not invest in causes that benefit you personally, but society as well? (Medical research, for example.) I also don't feel the need to aid potential future enemies just because they happen to be human. (And feel even less obligation for animals.)

Folks may legitimately differ on what level counts as having taken care of themselves first. I don't feel like I'm there yet. Some are probably worse off than me and yet giving a lot more. But neglecting one's own needs is probably not very "effective" either.
3Raemon
I'm interested in knowing which AI forum you came from.
3Lerk
I believe it was the Singularity subreddit in this case.  I was more or less passing through while searching for places to learn more about principles of ANN for AGI.

Hello everyone! 

My name is José, 23 years old, Brazilian, and finishing (in July) a weird interdisciplinary undergraduate degree at the University of Sao Paulo (2 years of math, physics, computer science, chem and bio + 2 years of do-whatever-you-want - I did things like optimization, measure theory, decision theory, advanced probability, Bayesian inference, algorithms, etc.).

I've been reading stuff on LW about AIS for a while now, and took some steps to change my career to AIS. I met EA/AIS via a camp focused on AIS for Brazilian students called Condor Camp in 2022, and since then I have participated in a bunch of those camps, created a uni group, and done ML4Good and a bunch of EAGs/Xs.

I recently started an Agent Foundations fellowship by Alex Altair and am writing a post about the Internal Model Principle. I expect to release it soon!

Hope you all enjoy it!

I think there is a 10-20 percent chance we get digital agents in 2025 that produce a "holy shit" moment as big as the launch of ChatGPT.

If that happens, I think that will produce another round of questions that sound approximately like “how were we so unprepared for this moment”.

Fool me once, shame on you…

2Sherrinford
Expecting that, how do you prepare?
8ChristianKl
One way would be to think up, now, ways you could monetize the existence of effective digital agents. There are probably a bunch of business opportunities.
[-]lsusr1119

I don't know exactly when this was implemented, but I like how footnotes appear to the side of posts.

[-]derzhi111

I am a university dropout who wants to make an impact in the AI safety field. I am a complete amateur in the field, just starting out, but I want to learn as much as possible in order to make an impact. I studied software engineering for a semester and a half before realizing that there was a need for more people in the AI safety field, and that's where I want to give all my attention. If you are interested in connecting, DM me; if you have any advice for a newcomer, post a comment below. I am located in Hønefoss, Norway.

7Screwtape
I'm not in AI Safety so if someone who is in the field has better suggestions, assume they're right and I'm wrong. Still, I hang out adjacent to AI Safety a lot. The best, easily accessible on-ramp I'm aware of is AiSafety.Quest. The best program I'm aware of is probably AI Safety Fundamentals, though I think they might get more applications than they can take.  Best of luck and skill, and I'm glad to have people working on the problem.

Site update: the menu bar is shorter!

Previously I found it overwhelming when I opened it, and many of the buttons were getting extremely little use. It now looks like this.

If you're one of the few people who used the other buttons, here's where you can find them:

  • New Question: If you click on "New Post", it's one of the types of post available at the top of the page.
  • New Dialogue: If you go to any user page, you can see an option to invite them to a dialogue at the top, next to the option to send them a message or subscribe to their posts.
  • New Sequence: You can make your first one when you scroll down the Library page, and once you've got one you can also make a new one from your profile page.
  • Your Quick Takes: Your shortform post is pinned to the top of your posts on your profile page.
  • Bookmarks: This menu icon will re-appear as soon as you have any bookmarks, which you set in the same way (from the triple-dot menu on posts).
4Yoav Ravid
Maybe it can be good to have a "add post to sequence" option when you click the context menu on a post. That's more intuitive than going to the library page.
3papetoast
I want to use this chance to say that I really want to be able to bookmark a sequence
[-]Embee102

Does someone have a guesstimate of the ratio of lurkers to posters on lesswrong? With 'lurker' defined as someone who has a habit of reading content but never posts stuff (or posts only clarification questions)

In other words, what is the size of the LessWrong community relative to the number of active contributors?

You could check out the LessWrong analytics dashboard: https://app.hex.tech/dac32525-33e6-44f9-bbcf-65a0ba40152a/app/9742e086-54ca-4dd9-86c9-25fc53f90f80/latest 

In any given week there are around 40k unique logged-out users, around ~4k unique logged-in users, and around 400 unique commenters (with about ~1-2k comments). So the ratio of lurkers to commenters is about 100:1, though more like 20:1 for people who visit more regularly and people who comment.
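As a rough sanity check on those figures (my arithmetic, not the dashboard's):

$$\frac{40{,}000 + 4{,}000 \ \text{weekly visitors}}{\approx 400 \ \text{weekly commenters}} \approx 110 \approx 100{:}1$$

The ~20:1 figure presumably comes from narrowing both sides to the more regular visitors and commenters.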

3selador
That link appears not to work. I'd be quite interested in what those numbers were 10 years ago, when DeepMind and the like were getting excited about DNNs but it wasn't that interesting to the wider world, who generally didn't believe that something like what is happening could happen.  (I believed in the theory of superintelligence, like there's an exponential that's going to go past this arbitrary point eventually, and IIRC had wildly, wildly too distant expectations of when it might begin to happen in any meaningful way. Just thinking back to that time makes the last couple of years shocking to comprehend.)
3Embee
Thank you so much.
1lesswronguser123
It would be an interesting meta post if someone did an analysis of each of those traction peaks due to various news or other articles.
8Stephen McAleese
There's a rule of thumb called the "1% rule" on the internet that 1% of users contribute to a forum and 99% only read the forum.
3gilch
The mods probably have access to better analytics. I, for one, was a long-time lurker before I said anything.

If spaced repetition is the most efficient way of remembering information, why do people who learn a music instrument practice every day instead of adhering to a spaced repetition schedule?

[-]gwern3611

Spaced repetition is the most efficient way in terms of time spent per item. That doesn't make it the most efficient way to achieve a competitive goal. For this reason, SRS systems often include a 'cramming mode', where review efficiency is ignored in favor of maximizing memorization probability within X hours. And as far as musicians go - orchestras don't select musicians based on who spent the fewest total hours practicing but still manage to sound mostly-kinda-OK, they select based on who sounds the best; and if you sold your soul to the Devil or spent 16 hours a day practicing for the last 30 years to sound the best, then so be it. If you don't want to do it, someone else will.

That said, the spaced repetition research literature on things like sports does suggest you still want to do a limited form of spacing in the form of blocking or rotating regularly between each kind of practice/activity.

4Sheikh Abdur Raheem Ali
Thank you, this was informative and helpful for changing how I structure my coding practice.
3herschel
This feels like a simplistic model of what's going on with learning an instrument. IIRC, in the "principles of SR" post from 20 years ago, Wozniak makes the point that you essentially can't start doing SR until you've already learned an item, this being obviously for purely "fact"-based learning. SR doesn't apply in the way you've described to all of the processes of tuning, efficiency, and accuracy gains that you need for learning an instrument. My sloppy model here is that formal practice, e.g. for music, is something like priming the system to spend optimization cycles on that, etc. I assume cognitive scientists claim to have actual models here, which I suppose are >50% fake lol.

Also, separately, professional musicians in fact do a cheap SR for old repertoire, where they practice only intermittently to keep it in memory once it's been established.
3Bohaska
What about a goal that isn't competitive, such as "get grade 8 on the ABRSM music exam for <instrument>"? Plenty of Asian parents have that particular goal and yet they usually ask/force their children to practice daily. Is this irrational, or is it good at achieving this goal? Would we be able to improve efficiency by using spaced repetition in this scenario as opposed to daily practice?
7gwern
The ABRSM is in X days. It too does not care how efficient you were time-wise in getting to grade-8 competency. There are no bonus points for sample-efficiency. (And of course, it's not like Asian parents are doing their kids much good in the first place with that music stuff, so there's even less of an issue there.)

Declarative and procedural knowledge are two different memory systems. Spaced repetition is good for declarative knowledge, but for procedural (like playing music) you need lots of practice. Other examples include math and programming - you can learn lots of declarative knowledge about the concepts involved, but you still need to practice solving problems or writing code.

Edit: as for why practice every day - the procedural system requires a lot more practice than the declarative system does.

6cubefox
Do we actually know procedural knowledge is linear rather than logarithmic, unlike declarative knowledge?
4ChristianKl
I'm not sure that linear vs. logarithmic is the key.  With many procedural skills learning to apply the skill in the first place is a lot more central than not forgetting the skill. If you want to learn to ride a bike, a little of the practice is about repeating what you already know to avoid forgetting what you already know.  "How can we have the best deliberate practice?" is the key question for most procedural skills and you don't need to worry much about forgetting. With declarative knowledge forgetting is a huge deal and you need strategies to counteract it. 
8ZY
(Like the answer on declarative vs procedural). Additionally, reflecting on practicing Hanon for piano (which is almost a pure finger strength/flexibility type of practice) - might be also for physical muscle development and control.
[-][anonymous]90

Hello! I've just found out about LessWrong and I immediately feel at home. I feel this is what I was looking for on Medium.com and never found there: a website to learn about things, about improving oneself, and about thinking better. Medium proved to be very useful for reading about how people made 5 figures using AI to write articles for them, but not so useful at providing genuinely valuable information.

One thing I usually say about myself is that I have "learning" as a hobby. I have only very recently given a name to things and now I know that it's ADH... (read more)

Hello Everyone!

I am a Brazilian AI/ML engineer and data scientist, I have been following the rationalist community for around 10 years now, originally as a fan of Scott Alexander's Slate Star Codex where I came to know of Eliezer and Lesswrong as a community, along with the rationalist enterprise. 

I only recently created my account and started posting here. Currently, I'm experiencing a profound sense of urgency regarding the technical potential of AI and its impact on the world. With seven years of experience in machine learning, I've witnessed how the ... (read more)

Re: the new style (archive for comparison)

Not a fan of

1. the font weight: everything seems semi-bolded now and a little bit more blurred than before. I do not see myself getting used to this.

2. the unboxed karma/agreement vote. It is fine per se, but the old one is also perfectly fine.

 

Edit: I have to say that the font on Windows is actively slightly painful and I need to reduce the time spent reading comments or quick takes.

4habryka
Are you on Windows? Probably an OS-level font-rendering issue which we can hopefully fix. I did some testing on Windows (using Browserstack) but don't have a Windows machine for detailed work. We'll look into it in the next few days.
3papetoast
I overlayed my phone's display (using scrcpy) on top of the website rendered on Windows (Firefox). Image 1 shows that they indeed scaled to align. Image 2 (Windows left, Android right) shows how the font is bolder on Windows and somewhat blurred. The monitor is 2560x1440 (website at 140%) and the phone is 1440x3200 (100%) mapped onto 585x1300.
3papetoast
I am on Windows. This reply is on Android and yeah definitely some issue with Windows / my PC
3kave
I don't think we've changed how often we use serifs vs sans serifs. Is there anything particular you're thinking of?
2papetoast
I hallucinated

Once upon a time, there were Rationality Quotes threads, but they haven't been done for years. I'm curious if there's enough new, quotable things that have been written since the last one to bring back the quote posts. If you've got any good lines, please come share them :) If there's a lot of uptake, maybe they could be a regular thing again.

2Gunnar_Zarncke
Maybe create a Quotes Thread post with the rule that quotes have to be upvoted and if you like them you can add a react.
0Screwtape
Listen people, I don't want your upvotes on that post, I want your quotes. Well, not your quotes, you can't quote yourself, but for you to submit posts other people have made. XD

Possible bug report: today I've been seeing errors of the form

Error: Cannot query field "givingSeason2024VotedFlair" on type "User". Did you mean "givingSeason2024DonatedFlair"?

that tend to go away when the page is refreshed. I don't remember if all errors said this same thing.

Hi! My name is Clovis. I'm a PhD student studying distributed AI. In my spare time, I work on social science projects.

One of my big interests is mathematically modelling dating and relationship dynamics. I study how well people's stated and revealed preferences align. I'd love to chat about experimental design and behavioral modeling! There are a couple of ideas around empirically differentiating models of people's preferences that I'd love to vet in particular. I've only really read the Sequences though, and I know that there's a lot of prior discussion ... (read more)

1Sodium
Hi Clovis! Something that comes to mind is Zvi's dating roundup posts in case you haven't seen them yet. 

Hi everyone,

I have been a lurker for a considerable amount of time but have finally gotten around to making an account.

By trade I am a software engineer, primarily interested in PL, type systems, and formal verification.

I am currently attempting to strengthen my historical knowledge of pre-fascist regimes with a focus on 1920s/30s Germany & Italy. I would greatly appreciate either specific book recommendations or reading lists for this topic - while I approach this topic from a distinctly “not a fascist” viewpoint, I am interested in books from both side... (read more)

4Kieran Knight
Hi there, I have some background in history, though mostly this is from my own study. There are some big ones on Nazi Germany in particular.

William L Shirer's "The Rise and Fall of the Third Reich" is an obvious choice. Worth bearing in mind that his background was journalism, and his thesis of Sonderweg (the idea that German history very specifically had an authoritarian tendency that inevitably prefigured the Nazis) is not considered convincing by most of the great historians.

Anything by Richard J Evans is highly recommended, particularly his trilogy on the Third Reich. He also regularly appears in many documentaries on the Nazis.

As regards Russia, you would have to ask someone else. Serhii Plokhy is well regarded, though he mostly focuses on Ukraine and the Soviet period.

I've noticed that the karma system makes me gravitate towards posts of very high karma. Are there low-karma posts that impacted you? Maybe you think they are underrated or that they fail in interesting ways.

Hello.

I have been adjacent to but not participating in rationality related websites and topics since at least Middle School age (homeschooled and with internet) and had a strong interest in science and science fiction long before that. Relevant pre-Less Wrong readings probably include old StarDestroyer.Net essays and rounds of New Atheism that I think were age and time appropriate. I am a very long term reader of Scott Alexander and have read at least extensive chunks of the Sequences in the past.

A number of factors are encouraging me to become more active... (read more)

5Screwtape
Hello, and welcome! I'm also a habitual roleplayer (mostly tabletop RPGs for me, with the occasional LARP) and I'm a big fan of Alexander and Yudkowsky's fiction. Does any particular piece of fiction stand out as your favourite? It isn't one of theirs, but I love The Cambist and Lord Iron. I've been using Zvi's articles on AI to try and keep track of what's going on, though I tend to skim them unless something catches my eye. I'm not sure if that's what you're looking for in terms of resources.

I've been lurking for years. I'm a lifelong rationalist who was hesitant to join because I didn't like HPMOR. (Didn't have a problem with the methods of rationality; I just didn't like how the characters' personalities changed, and I didn't find them relatable anymore.) I finally signed up due to an irrepressible urge to upvote a particular comment I really liked.

I struggle with LW content, tbh. It takes so long to translate it into something readable, something that isn't too littered with jargon and self-reference to be understandable for a generalist wi... (read more)

6niplav
The obvious advice is of course "whatever thing you want to learn, let an LLM help you learn it". Throw that post in the context window, zoom in on terms, ask it to provide examples in the way the author intended it, let it generate exercises, let it rewrite it for your reading level. If you're already doing that and it's not helping, maybe… more dakka? And you're going to have to expand on what your goals are and what you want to learn/make.
6lsusr
No idea. My favorite stuff is cryptic and self-referential, and I think IQ is a reasonable metric for assessing intelligence statistically, for a group of people.
1Aristotelis Kostelenos
I've been lurking for not years. I also have ADHD and I deeply relate to your sentiment about the jargon here, and it doesn't help that when I manage to concentrate enough to get through a post, read the 5 Substack articles it links to, and skim the 5 Substack articles they link to, it's... pretty hit or miss. I remember reading one saying something about moral relativism not being obviously true, and it felt like all the jargon and all the philosophical concepts mentioned only served to sufficiently confuse the reader (and I guess the writer too) so that it's not. I will say though that I don't get that feeling reading the Sequences. Or stuff written by other rationalist GOATs. The obscure terms there don't serve as signals of the author's sophistication or ways to make their ideas less accessible. They're there because there are actually useful bundles of meaning that are used often enough to warrant a shortcut.

Should AI safety people/funds focus more on boring old human problems like (especially cyber-and bio-)security instead of flashy ideas like alignment and decision theory? The possible impact of vulnerabilities will only increase in the future with all kinds of technological progress, with or without sudden AI takeoff, but they are much of what makes AGI dangerous in the first place. Security has clear benefits regardless and people already have a good idea how to do it, unlike with AGI or alignment.

If any actor with or without AGI can quickly gain lots of ... (read more)

1ZY
I personally agree with you on the importance of these problems. But I myself might also be a more general responsible/trustworthy AI person, and I care about other issues outside of AI too, so not sure about a more specific community, or what the definition is for "AI Safety" people. For funding, I am not very familiar and want to ask for some clarification: by "(especially cyber-and bio-)security", do you mean generally, or "(especially cyber-and bio-)security" caused by AI specifically?

Are there any mainstream programming languages that make it ergonomic to write high level numerical code that doesn't allocate once the serious calculation starts? So far for this task C is by far the best option but it's very manual, and Julia tries and does pretty well but you have to constantly make sure that the compiler successfully optimized away the allocations that you think it optimized away. (Obviously Fortran is also very good for this, but ugh)
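To make the trade-off concrete, here is a minimal sketch (my own illustration in plain C, not from any particular codebase) of the "very manual" style the question alludes to: every buffer is allocated once up front, and the hot loop itself performs no allocation.

```c
#include <stdio.h>
#include <stdlib.h>

/* y = A*x for an n x n row-major matrix. The kernel writes into a
   caller-provided buffer, so it never allocates on its own. */
static void matvec(const double *A, const double *x, double *y, size_t n) {
    for (size_t i = 0; i < n; i++) {
        double acc = 0.0;
        for (size_t j = 0; j < n; j++)
            acc += A[i * n + j] * x[j];
        y[i] = acc;
    }
}

int main(void) {
    const size_t n = 1000;

    /* All allocation happens here, before the serious calculation starts. */
    double *A = malloc(n * n * sizeof *A);
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    if (!A || !x || !y) return 1;

    for (size_t i = 0; i < n * n; i++) A[i] = 1.0 / (double)(i + 1);
    for (size_t i = 0; i < n; i++) x[i] = 1.0;

    /* Hot loop: pure arithmetic on preallocated buffers, zero allocations. */
    for (int iter = 0; iter < 100; iter++)
        matvec(A, x, y, n);

    printf("y[0] = %f\n", y[0]);
    free(A); free(x); free(y);
    return 0;
}
```

Julia and Fortran can express the same pattern (in-place kernels over preallocated arrays), but as the question notes, in Julia you then have to check that the compiler really did elide the temporaries, whereas in C the only allocations are the ones you wrote.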

What happens if and when a slightly unaligned AGI crowds the forum with its own posts? I mean, how strong is our "are you human?" protection?

Hey, everyone! Pretty new here and first time posting.

I have some questions regarding two odd scenarios. Let's assume there is no AI takeover to the Yudkowsky-nth degree and that AGI and ASI go just fine. (Yes, that's already a very big ask.)

Scenario 1: Hyper-Realistic Humanoid Robots

Let's say AGI helps us get technology that allows for the creation of humanoid robots that are visually indistinguishable from real humans. While the human form is suboptimal for a lot of tasks, I'd imagine that people still want them for a number of reasons. If there's ... (read more)

8gilch
The questions seem underspecified. You haven't nailed down a single world, and different worlds could have different answers. Many of the laws of today no longer make sense in worlds like you're describing. They may be ignored and forgotten or updated after some time.

If we have the technology to enhance human memory for perfect recall, does that violate copyright, since you're recording everything? Arguably, it's fair use to remember your own life. Sharing that with others gets murkier. Also, copyright was originally intended to incentivize creation. Do we still need that incentive when AI becomes more creative than we are? I think not.

You can already find celebrity deepfakes online, depicting them in situations they probably wouldn't approve of. I imagine the robot question has similar answers. We haven't worked that out yet, but there seem to be legal trends towards banning it, but without enough teeth to actually stop it. I think culture can adapt to the situation just fine even without a ban, but it could take some time.
1daijin
TL;DR: I think increasing the fidelity of partial reconstructions of people is orthogonal to legality around the distribution of such reconstructions, so while your scenario describes an enhancement of fidelity, there would be no new legal implications.

---

Scenario 1: Hyper-realistic humanoid robots

CMIIW, I would resummarise your question as 'how do we prevent people from being cloned?' Answer: a person is not merely their appearance + personality, but also their place-in-the-world. For example, if you duplicated Chris Hemsworth but changed his name and popped him in the middle of London, what would happen?

  • It would likely be distinctly possible to tell the two Chris Hemsworths apart based on their continuous stream of existence and their interaction with the world.
  • The current Chris Hemsworth would likely order the destruction of the duplicated Chris Hemsworth (maybe upload the duplicate's memories to a databank), and I think most of society would agree with that.

This is an extension of the legal problem of 'how do we stop Bob from putting Alice's pictures on his dorm room wall', and the answer is generally 'we don't put in the effort, because the harm to Alice is minimal and we have better things to do.'

Scenario 2: Full-Drive Virtual Reality Simulations

1. Pragmatically: they would be unlikely to be able to replicate the Beverly Hills experience by themselves - even as technology improves, it's difficult for a single person to generate a world. There would likely be some corporation behind creating Beverly-Hills-like experiences, and everyone can go and sue that corporation.
2. Abstractly: maybe this happens and you can pirate Beverly Hills off Piratebay. That's not significantly different to what you can do today.
3. I can't see how what you're describing is significantly different to keeping a photo album, except technologically more impressive. I don't need legal permission to take a photo of you in a public space. Perplexity AI gives:

```
In the United States, y
```

Is anyone from LW going to the Worldcon (World Science Fiction Convention) in Seattle next year?

ETA: I will be, I forgot to say. I also notice that Burning Man 2025 begins about a week after the Worldcon ends. I have never been to BM, I don't personally know anyone who has been, and it seems totally impractical for me, but the idea has been in the back of my mind ever since I discovered its existence, which was a very long time ago.

2lsusr
I didn't know about that. That sounds like fun!

Man, politics really is the mind killer

I'm really interested in AI and want to build something amazing, so I’m always looking to expand my imagination! Sure, research papers are full of ideas, but I feel like insights into more universal knowledge spark a different kind of creativity. I found LessWrong through things like LLM, but the posts here give me the joy of exploring a much broader world!

I’m deeply interested in the good and bad of AI. While aligning AI with human values is important, alignment can be defined in many ways. I have a bit of a goal to build up my thoughts on what’s right or wrong, what’s possible or impossible, and write about them.

Hi! New to the forums and excited to keep reading. 

Bit of a meta-question: given proliferation of LLM-powered bots in social media like twitter etc, do the LW mods/team have any concerns about AI-generated content becoming an issue here in a more targeted way?

For a more benign example, say one wanted to create multiple "personas" here to test how others react. They could create three accounts, and respond to posts always with all three accounts- one with a "disagreeable" persona, one neutral, and one "agreeable".

A malicious example would be if someone

... (read more)
5Screwtape
Not a member of the LessWrong team, but historically the site had a lot of sockpuppetting problems that they (as far as I know) solidly fixed and keep an eye out for.
1halinaeth
Makes sense, thanks for the new vocab term!

I think there might be a lesswrong editor feature that allows you to edit a post in such a way that the previous version is still accessible. Here’s an example—there’s a little icon next to the author name that says “This post has major past revisions…”. Does anyone know where that option is? I can’t find it in the editor UI. (Or maybe it was removed? Or it’s only available to mods?) Thanks in advance!

4Raemon
It's available for admins at the moment. What post do you wanna change?
2Steven Byrnes
Actually never mind. But for future reference I guess I’ll use the intercom if I want an old version labeled. Thanks for telling me how that works.  :) (There’s a website / paper going around that cites a post I wrote way back in 2021, when I was young and stupid, so it had a bunch of mistakes. But after re-reading that post again this morning, I decided that the changes I needed to make weren’t that big, and I just went ahead and edited the post like normal, and added a changelog to the bottom. I’ve done this before. I’ll see if anyone complains. I don’t expect them to. E.g. that same website / paper cites a bunch of arxiv papers while omitting their version numbers, so they’re probably not too worried about that kind of stuff.)
4Raemon
I think probably we don't have that great a reason not to roll this out to more users; it's mostly a matter of managing UI complexity.

I am very interested in mind uploading

I want to do a PhD in a related field and comprehensively go through "Whole Brain Emulation: A Roadmap" and take notes on what has changed since it was published.

If anyone knows relevant papers/researchers that would be useful to read for that, or so I can make an informed decision on where to apply to grad school next year, please let me know.

Maybe someone has already done a comprehensive update on brain emulation; I would like to know. And I would still like to read more papers before I apply to grad school.

4Garrett Baker
Those invited to the Foresight workshop (also the 2023 one) are probably a good start, as well as Foresight’s 2023 and 2024 lectures on the subject.
2Steven Byrnes
Good luck! I was writing about it semi-recently here. General comment: It’s also possible to contribute to mind uploading without getting a PhD—see last section of that post. There are job openings that aren’t even biology, e.g. ML engineering. And you could also earn money and donate it, my impression is that there’s desperate need.

Are there good and comprehensive evaluations of covid policies? Are there countries who really tried to learn, also for the next pandemic?

When rereading [0 and 1 Are Not Probabilities], I thought: can we ever specify our amount of information in infinite domains, perhaps with something resembling hyperreals?

  1. A uniformly random rational number from  is taken. There's an infinite number of options, meaning that the prior probabilities are all zero (), so we need an infinite amount of evidence to single out any number.
    (It's worth noting that we have codes that can encode any specific rational number with a finite word - for instance, first apply a bijection of rationals to nat
... (read more)
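(An illustration of that parenthetical point, my own construction rather than wherever the truncated text was going: write the rational in lowest terms as $p/q$, map the pair to a single natural number with the Cantor pairing function

$$\langle p, q \rangle = \frac{(p+q)(p+q+1)}{2} + q,$$

and then encode that natural number with any prefix-free binary code. Every individual rational thus has a finite codeword, even though a uniform distribution over the whole set assigns each of them probability 0.)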

I've noticed that when writing text on LessWrong, there is a tendency for the cursor to glitch out and jump to the beginning of the text. I don't have the same problem on other websites. This most often happens after I've clicked to try to insert the cursor in some specific spot. The cursor briefly shows where I clicked, but then the page lags slightly, as if loading something, and the cursor jumps to the beginning.

The way around this I've found is to click once. Wait to see if the cursor jumps away. If so, click again and hope. Only start typing once you've seen multiple blinks at the desired location. Annoying!

7habryka
We used to have a bug like this a long time ago; it was related to an issue at the intersection of our rich-text editor library and our upgrade from React 17 to React 18 (our front-end framework). I thought that we had fixed it, and it's definitely much less frequent than it used to be, but it's plausible we are having a similar bug.  It's annoyingly hard to reproduce, so if you or anyone else finds a circumstance where you can reliably trigger it, that would be greatly appreciated.

Hello,

Longtime lurker, more recent commenter. I see a lot of rationality-type posters on Twitter and in the past couple of years became aware of "post-rationalists." It's somewhat ill-defined, but essentially they are former rationalists who are more accepting of "woo," to be vague about it. My questions are: 1) What level of engagement is there (if any) between rationalists and post-rationalists? And 2) Is there anyone who dabbled in or full-on claimed post-rationalist positions and then reverted back to rationalist positions? What was that journey like and what made you switch between these beliefs?

2ChristianKl
One aspect of LessWrongers is that they often tend to hold positions that are very complex. If you think that there are a bunch of positions that are rationalist and a bunch of positions that are post-rationalist, and there are two camps that each hold the respective positions, you miss a lot of what rationalism is about.

You will find people on LessWrong for whom doing rituals like the Solstice events or doing Circling (which, for example, people at CFAR did a lot) feels too woo. Yet CFAR was the premier organization for the development of rationality, and for the in-person community the Winter Solstice event is a central feature.

At the recent LessWrong Community Weekend in Europe, Anna Riedl gave the keynote speech about 4E-rationality. You could call 4E-rationality post-rational, in the sense that it moves past the view of rationality you find in the sequences on LessWrong.
2Screwtape
From my observations it's fairly common for post-rationalists to go to rationalist events and vice-versa, so there's at least engagement on the level of waving hello in the lunchroom. There's enough overlap in identification that some people in both categories read each other's blogs, and the essays that wind up at the intersection of both interests will have some back and forth in the comments. Are you looking for something more substantial than that? I can't think of any reverting rationalists off the top of my head, though they might well be out there.
1ZY
I am interested in learning more about this, but not sure what "woo" means; after googling, is it right to interpret as "unconventional beliefs" of some sort?
4gilch
It's short for "woo-woo", a derogatory term skeptics use for magical thinking. I think the word originates as onomatopoeia from the haunting woo-woo Theremin sounds played in black-and-white horror films when the ghost was about to appear. It's what the "supernatural" sounds like, I guess. It's not about the belief being unconventional as much as it being irrational. Just because we don't understand how something works doesn't mean it doesn't work (it just probably doesn't), but we can still call your reasons for thinking so invalid. A classic skeptic might dismiss anything associated categorically, but rationalists judge by the preponderance of the evidence. Some superstitions are valid. Prescientific cultures may still have learned true things, even if they can't express them well to outsiders.
3ZY
Ah thanks. Do you know why these former rationalists were "more accepting" of irrational thinking? And to be extremely clear, does "irrational" here mean not following one's preference with their actions, and not truth seeking when forming beliefs?

In Fertility Rate Roundup #1, Zvi wrote   

"This post assumes the perspective that more people having more children is good, actually. I will not be engaging with any of the arguments against this, of any quality, whether they be ‘AI or climate change is going to kill everyone’ or ‘people are bad actually,’ other than to state here that I strongly disagree." 

Does anyone of you have an idea where I can find arguments related to or a more detailed discussion about this disagreement (with respect to AI or maybe other global catastrophic risks; t... (read more)

-1Richard_Kennaway
Look up anti-natalism, and the Voluntary Human Extinction Movement. And random idiots everywhere saying "well maybe we all deserve to die", "the earth would be better off without us", "evolution made a huge mistake in inventing consciousness", etc.
1Sherrinford
So you think that looking up "random idiots" helps me find "arguments related to or a more detailed discussion about this disagreement"?
1Richard_Kennaway
No, I just threw that in. But there is the VHEM, and apparently serious people who argue for anti-natalism. Short of those, there are also advocates for "degrowth". I suspect the reason that Zvi declined to engage with such arguments is that he thinks they're too batshit insane to be worth giving house room, but these are a few terms to search for.
0Sherrinford
I appreciate that you posted a response to my question. However, I assume there is some misunderstanding here. Zvi notes that he will not "be engaging with any of the arguments against this, of any quality" (which suggests that there are also good or relevant arguments). Zvi includes the statement that "AI is going to kill everyone", and notes that he "strongly disagrees".  As I asked for "arguments related to or a more detailed discussion" of these issues, you mention some people you call "random idiots" and state that their arguments are "batshit insane". It thus seems like a waste of time trying to find arguments relevant to my question based on these keywords.  So I wonder: was your answer actually meant to be helpful?

Why are comments on older posts sorted by date, but comments on newer posts are sorted by top scoring?

[-]Raemon145

The oldest posts were from before we had nested comments, so the comments there need to be in chronological order to make sense of the conversation.

Is there an explanation somewhere how the recommendations algorithm on the homepage works, i.e. how recency and karma or whatever are combined?

4kave
The "latest" tab works via the hacker news algorithm. Ruby has a footnote about it here. I think we set the "starting age" to 2 hours, and the power for the decay rate to 1.15.
1Sherrinford
Very helpful, thanks! So I assume the parameter b is what you call starting age? I ask because I am a bit confused about the following:

  • If you apply this formula, it seems to me that all posts with karma = 0 should have the same score, that score should be higher than the score of all negative-karma posts, and negative-karma posts should get a higher score if they are older.
  • All karma>0 posts should appear before all karma=0 posts, and those should appear before all negative-karma posts. However, when I expand my list a lot, until it includes four posts with negative karma (one of them is 1 month old), I still do not see any post with zero karma. (With "enriched" sorting, I found two recent ones with 0 karma.)

Moreover, this kind of sorting seems to give really a lot of power to the first one or two people who vote on a post, if their votes can basically let a post disappear?
2kave
A quick question re: your list: do you have any tag filters set?
1Sherrinford
I don't think so. But where could I check that?
2kave
Click on the gear icon next to the feed selector 
1Sherrinford
No, all tags are on default weight.
2habryka
Could you send me a screenshot of your post list and tag filter list? What you are describing sounds really very weird to me and something must be going wrong.
1Sherrinford
The list is very long, so it is hard to make a screenshot. Now, with some hours of distance, I reloaded the homepage, tried again, and one 0 karma post appeared. (Last time it definitely did not; I searched very rigorously.) However, according to the mathematical formula, it still seems to me that all 0 karma posts should appear at the same position, and negative karma posts below them?
2habryka
We have a few kinds of potential bonus a post could get, but yeah, something seems very off about your sort order, and I would really like to dig into it. A screenshot would still be quite valuable.

Quick note: there's a bug I'm sorting out for some new LessWrong Review features for this year, hopefully will be fixed soon and we'll have the proper launch post that explains new changes.

Possible bug: Whenever I click the vertical ellipsis (kebab) menu option in a comment, my page view jumps to the top of the page. 

This is annoying, since if I've chosen to edit a comment I then need to scroll back down to the comment section and search for my now-editable comment.

[Bug report]: The Popular Comments section's comment preview ignores spoiler tags

As seen on Windows/Chrome