If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.

[anonymous]:

My interest in AI and AI alignment has converged from multiple angles and led me here.

I also make chatbots for call centers, but I'm not exactly proud of it.

Have you considered reading HPMOR? The first 10 chapters are a bit slow, but after ch. 20 it really takes off, and it comes highly recommended as one of the best works of science fiction ever; the kind of thing that would transform the world if it replaced The Great Gatsby and Romeo and Juliet in most high schools.

There are some other texts that are also pretty well-regarded for skilling up. Scott's blog (The Codex) is like cocaine for sci-fi lovers, and the Superintelligence FAQ is currently the main resource on the situation with AI.


Request to the LW team: Clean up the list of open Github issues. In the past I've frequently looked at that page and declined to submit an issue because that seemed pointless with 600+ open ones. Eventually I did begin to submit issues and even got quick responses, which I found surprising given the backlog. But now that I've actually begun skimming the backlog (oldest first), it appears that a significant fraction of old open issues are resolved but unclosed.

In any case, a stale Github backlog makes it hard for outside contributors to a) check, before submitting a new issue, whether there's already an open issue on the topic; b) contribute their own pull requests (as they can't be sure that an issue is actually unresolved, rather than merely left open; example); c) judge whether the state of development is healthy (e.g. there's no point in submitting bug reports to a Github repository that looks unmaintained); etc.

If it's infeasible or prohibitively expensive to identify these issues by hand, presumably either AI or outside contributors could assist in that. For example, I just skimmed a few pages of Github issues and made suggestions to close 12 of them.

I've now also posted this request as a Github issue here.

Addendum: Each Github issue gets synced to a non-public Asana card (previously to Trello, but the same logic applies) via the automation service Unito. So as a comparatively efficient solution, one which hopefully wouldn't take too much time, you could presumably set up a reverse Unito automation which closes Github issues when their corresponding Asana card is concluded, ideally with an automatic Github comment (and maybe automatic tags) explaining why and how the issue was closed (e.g. resolved, not planned).

This sounds relatively straightforward to do: Unito allows setting up two-way sync workflows (rather than the current one-way sync workflow), and neither of their pages on Asana and Github implies any significant limitation to this two-way sync functionality.

They even have a full help article on this exact integration here, and its motivation matches my original request:

create tasks in Asana based on specific GitHub issues that will then be synced together in real-time, so that as you make changes in one place, you’ll see them reflected in the other.

You can use this workflow to streamline software development management; the ticketing process; or to align comments, assignees, custom fields, and more.

EDIT: Possibly this 2-way sync with Asana already exists, and what's needed instead is to manually or automatically update all the old stale tasks which were still synced to Trello. Unito has a help article for the Github-Trello integration here.
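Whichever sync route ends up being used, the closing step itself is small. As a minimal sketch (not a recommendation of any particular setup): a script using the public GitHub REST API that leaves an explanatory comment and then closes an issue with a state_reason. The owner/repo path, issue number, and token handling below are placeholders/assumptions.

import os
import requests

OWNER, REPO = "ForumMagnum", "ForumMagnum"  # placeholder repo path
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # assumes a token with repo scope
    "Accept": "application/vnd.github+json",
}

def close_issue(number, reason="completed", note="Closing: resolved per the internal tracker."):
    base = f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{number}"
    # Leave an explanatory comment, then close the issue with a state_reason
    # of "completed" or "not_planned".
    requests.post(f"{base}/comments", headers=HEADERS, json={"body": note}).raise_for_status()
    requests.patch(base, headers=HEADERS, json={"state": "closed", "state_reason": reason}).raise_for_status()

close_issue(1234, reason="completed")  # hypothetical issue number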

kave:

(As far as I know, that integration is a zombie and no one uses the Asana)

I think the EA Forum uses it actively. But the LW team doesn't at all.

kave:

I personally never check the Github issues (though I am not the person on the team who would be most likely to do so). I think the Intercom is the most reliable channel for reporting bugs, though the Open Thread is also good.

If the Github issue tracker is indeed not in use, then I find that very disappointing.

Intercom may be a more reliable channel for reporting bugs than the alternatives (though even on Intercom, I've still had things slip through the cracks), but it can't replace an issue tracker. Besides, not all feedback constitutes a bug report; some items require a back-and-forth, or input from multiple people, or follow-up to ask "hey, what's the status here?", and all of that works much better when it's asynchronous and in public, not in a private chat interface.

And this very comment thread is a good illustration of why open thread comments also don't work for this purpose: they might not get noticed; the threads are kind of ephemeral; feedback is mixed with non-feedback; the original poster has no way to keep track of their feedback (I had to skim through all my recent comments to find the ones that were feedback); not everyone related to an issue gets notified when someone comments on it; if issues are discussed in disparate threads, there's no bi-directional crosslinking (if Github issue A links to B, then B displays the link, too); etc.

Ultimately whatever tools the LW team use to manage the website development may work well for them. But when I want to help as an outsider, I feel like the tools I'm given are not up to snuff.

It seems to me like a public issue tracker is an obvious solution to this problem, so I'm kind of incredulous that there isn't really one. What gives?

kave:

It's (as a descriptive fact) not a priority to support external contributions to the codebase. My guess is that it's also correct not to prioritise that.

I understand that that's obviously the counter-perspective; it just seems so wild to me. I'd love to see or do a dialogue on this, with anyone on the team for whom a change of mind on deprioritising this topic would matter.

What's the best resource to learn about the various site features on LW and/or the EA Forum? I.e. not LW/EA content, but neat semi-hidden features like the Bookmarks page; or e.g. the "Enter focus mode" option to hide everything except the comment box, which I can't see on normal posts but which is available on Q&A posts. (I guess there's the FAQ, but that's from 2019 and Ruby recently described it as getting a bit out of date.)

If there's no such thing, is there any interest among the LW / EA teams to commission something like this? I'm thinking of writing a post like "An Overview of LW Features" which would provide an overview of the various features, including screenshots or gif recordings where that makes sense.

I notice that I'm aware of neither the Bookmarks page nor the "Enter focus mode" option. Given that I might be the person who has spent the most time on LW (I've written the most comments), I would expect most people to know fewer features of LW than me.

That means most people would likely learn something new from an "Overview of LW Features" post.

The Bookmarks page is apparently quite new; I also only learned about it a few days ago. Whereas I discovered the "Enter focus mode" feature on Q&A posts by accident; I don't know how old it is, or why it's only available for Q&A posts. Maybe it's an experimental feature.

I have two three-year-old bookmarks, so while some aspects of the page might be new (like being able to see the posts on which you voted), bookmarks themselves seem to have existed for longer.

Hello! I'm not really sure which facts about me are useful in this introduction, but I'll give it a go:
I am a Software QA Specialist / SDET, I used to write songs as a hobby, and my partner thinks I look good in cyan.

I have found myself drawn to LessWrong for at least three reasons:

  1. I am very concerned about existential and extinction risk from advanced AI
  2. I enjoy reading about interesting topics and broadening and filling out my world model
  3. I would very much like to be a more rational person

Lots of words about thing 1: In the past few months, I have deliberately changed how I spend my productive free time, which I now mostly occupy by trying to understand and communicate about AI x-risk, as well as helping with related projects.
I have only a rudimentary / layman's understanding of Machine Learning, and I have failed pretty decisively in the past when attempting mathematical research, so I don't see myself ever being in an alignment research role. I'm focused on helping in small ways with things like outreach, helping build part of the alignment ecosystem, and directing a percentage of my income to related causes.
(If I start writing music again, it will probably either be because I think alignment succeeded or because I think that we are already doomed. Either way, I hope I make time for dancing. ...Yeah. There should be more dancing.)

Some words about thing 2: I am just so glad to have found a space on the internet that holds its users to a high standard of discourse. Reading LessWrong posts and comments tends to feel like I have been prepared a wholesome meal by a professional chef. It's a welcome break from the home-cooking of my friends, my family, and myself, and especially from the fast-food (or miscellaneous hard drugs) of many other platforms.

Frankly just a whole sack of words about thing 3: For my whole life until a few short years ago, I was a conservative evangelical Christian, a creationist, a wholesale climate science denier, and generally a moderately conspiratorial thinker. I was sincere in my beliefs and held truth as the highest virtue. I really wanted to get everything right (including understanding and leaving space for the fact that I couldn't get everything right). I really thought that I was a rational person and that I was generally correct about the nature of reality.
Some of my beliefs were updated in college, but my religious convictions didn't begin to unravel until a couple years after I graduated. It wasn't pretty. The gradual process of discovering how wrong I was about an increasingly long list of things that were important to me was roughly as pleasant as I imagine a slow death to be. Eventually coming out to my friends and family as an atheist wasn't a good time, either. (In any case, here I still am, now a strangely fortunate person, all things considered.)
The point is, I have often been caught applying my same old irrational thought patterns to other things, so I have been working to reduce the frequency of those mistakes. If AI risk didn't loom large in my mind, I would still greatly appreciate this site and its contributors for the service they are doing for my reasoning. I'm undoubtedly still wrong about many important things, and I'm hoping that over time and with effort, I can manage to become slightly less wrong. (*roll credits)

What % of alignment is crucial to get right?

Most alignment plans involve getting the AI to a point where it cares about human values and then uses its greater intelligence to solve problems in ways we didn't think of.

Some alignment plans literally involve finding clever ways to get the AI to solve alignment itself in some safe way. [1]

This suggests something interesting: Every alignment plan, explicitly or not, is leaving some amount of "alignment work" for the AI (even if that amount is "none"), and thus leaving the remainder for humans to work out. Generally (but not always!), the idea is that humans must get X% of alignment knowledge right before launching the AI, lest it become misaligned.

I don't see many groups lay out explicit reasons for selecting which "built-in-vs-learned alignment-knowledge-mix" their plan aims for. Of course, most (all?) plans already have this by default, and maybe this whole concept is sorta trivial anyway. But I haven't seen this precise consideration expressed-as-a-ratio anywhere else.

(I got some feedback on this as a post, but reviewers noted that the idea is probably too abstract to be useful for many plans. Sure enough, when I helped with the AI-plans.com critique-a-thon, most "plans" were actually just small things that could "slot into" a larger alignment plan. Only certain kinds of "full stack alignment" plans could be usefully compared with this idea.)


  1. For a general mathematization of something like this, see the "QACI" plan by Tamsin Leake at Orthogonal. ↩︎

I like your observation. I didn't realize at first that I had seen it before, from you during the critique-a-thon! (Thank you for helping out with that, by the way!)

A percentage or ratio of the "amount" of alignment left to the AI sounds useful as a fuzzy heuristic in some situations, but I think it is probably a little too fuzzy to get at the failure mode(s) of a given alignment strategy. My suspicion is that which parts of alignment are left to the AI will have much more to say about the success of alignment than how many of those checkboxes are checked. Where I think this proposed heuristic succeeds is when the ratio of human/AI responsibility in solving alignment is set very low. By my lights, that is an indication that the plan is more holes than cheese.

(How much work is left to a separate helper AI might be its own category. I have some moderate opinions on OpenAI's Superalignment effort, but those are very tangential thoughts.)

I'm an AI alignment researcher who will be moving to the bay area soon (the two facts are unconnected--I'm moving due to my partner getting a shiny new job). I'm interested in connecting with other folks in the field, and feeling like I have coworkers. My background is academic mathematics.

1) In academia, I could show up at a department (or just look at their website) and find a schedule of colloquia/seminars/etc., ranging in frequency from one a month to 10+ a week (depending on department size, etc.). Are there similar things I should be aware of for AI folks near the bay?

2) Where (if anywhere) do independent alignment people tend to work (and how could I go about renting an office there?) I've heard of Constellation and Rose Garden Inn as the locations for several alignment organizations/events--do they also have office spaces for independent researchers?

3) Anything else I should know about?

How many LessWrong users/readers are there total?

If you go to the user tab on the search page with no search term, you can see there are currently 113,654 users (of course, how many of those are active or are 'readers' is a completely different question).

I stumbled upon LessWrong via AI & SlateStarCodex, but quickly noticed the rationality theme. My impression of rationality was something Sheldon Cooper-esque (from The Big Bang Theory), and I had put it aside as entertainment. I had read some of Eliezer's work earlier, such as Staring into the Singularity, and saw these things called "Sequences" under the Library section. The first few posts I read made me think "Oh! Here are my people! I get to improve!"

But of course the Library made it easy to notice HPMOR, and that's where I actually "began". I've listened to it twice so far. I have begun suggesting that friends give it to their kids in the rare cases where that is possible (language barriers and general orientation being the primary barriers).

I grew up in Kutch. Looking back, I might have been an outlier as a kid, but then again, maybe not. I don't meet many "rationally oriented" people around here, and among the few I do know, I'd say I'm well-acquainted with many of them.

It is great to have the Sequences, and the posts from all of you. I feel this is one of the rare places where I get to refine my thinking. I am going through the Sequences slowly. I've noticed that if I actually talk the way the Sequences talk when reasoning with people (i.e., practicing rationality), they feel awkward. This has led me to search for sentences and analogies to use when talking to friends. It has not been too hard in the past, but there are some more focused updates going on in my brain now, and asking things like "Is this discussion an instance of the availability heuristic?" seems to put people off. The process is great fun!

Thanks! And Hi!

Hello there! What are you reading now?

Hey! I'm reading Lawful Uncertainty and Replacing Guilt, and once again listening to HPMOR. I started out reading Meditations on Moloch this weekend but got steered to Replacing Guilt. Replacing guilt is something I have not been able to help others with. So far, the tools suggested fit quite well with what I have figured out, but I have never been so clear as to be able to say "refinement, internalisation, realism". Given Nate's clarity, there are many things I had not thought about. I am having fun thinking about guilt with this much concreteness :D

What about you?

FYI, if you like the Replacing Guilt sequence but find it doesn't fully resonate with you, the recent Replacing Fear sequence is a complementary take on motivation.

Thank you!

Since I've been reading so much about guilt, I have been thinking about how many emotions I feel at once when something undesirable happens. It is no simple task for a human to handle such a huge set of variables. And yet somehow, these sequences are helpful.

I've actually been working to start writing down some notes about my version of a lot of these ideas (as well as my version of ideas I've not seen floating around yet). I think it would be a good opportunity to solidify my thoughts on things, notice new connections, and give back to the intellectual culture.

You get to cheat on your first readthrough and leave fun comments :)


Noticed you (the LW team) updated the search page. Thank you! It is much nicer now. Are you going to add more features to it like filtering by people, searching within specific time frames, excluding tags, etc?

What's the minimum one would have to learn to productively contribute to open Github issues in the LW / EA Forum codebase?

I've noticed that the EA Forum turns "->" into an arrow symbol, same with "<->". Notion does the same, and I like the behavior there, too.

a) Is there any reason why the LW editor doesn't do that?

b) Does the EA Forum editor have some other cool general features which the LW editor lacks?

Oh, huh, it also only does this in the comment editor and not the post editor. My guess is that it's actually a feature of their font and has nothing to do with the editor: their font probably has some ligatures that make these things look nicer. We use a system font stack which doesn't currently have those ligatures (it seems like a very font-specific thing).
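(For contrast, a toy illustration of the distinction: an editor-level transformation, which is roughly what Notion does, rewrites the stored text itself, whereas a font ligature only changes how "->" is drawn. The sketch below is purely hypothetical and isn't how either forum's editor actually works.)

def replace_arrows(text: str) -> str:
    # Rewrite the characters themselves; order matters so "<->" is handled before "->".
    return text.replace("<->", "\u2194").replace("->", "\u2192")

print(replace_arrows("A -> B <-> C"))  # prints: A → B ↔ C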

I'm looking into diving deep into the Sequences (already finished HPMoR), and I'm the type of person who cannot stand missing even one part of something. Weird bug in my broken hardware. I'm looking for two things:

1) The full Sequences, in chronological order (I poked around a bit and got some dead links). An EPUB would be ideal but is icing, not the cake.

2) The full Sequences, organized by subject matter (ie. Map and Territory, Quantum Stuff, etc.). Again, an EPUB would be preferred but not crucial. The versions I saw all cut out at least some parts. 
 

I looked at this, but it seemed unclear to me whether, for example, Fun Theory was included under Quantum Physics.

I saw the MIRI-curated From AI to Zombies version, but I want the whole text, without anything being excluded.

Are these already available?

Also, side question #1, how much does LessWrong still discuss things other than AI? (not bashing it, but it seemed to me like the community was most active several years in the past)

Also, side question #2, how do you bookmark something on LW?

how much does LessWrong still discuss things other than AI?

There's a huge focus on AI nowadays. If, like me, you aren't interested in random AI posts and want to lower their relative frequency, you can set a karma filter on the "AI" tag in the "Latest Posts" feed on the homepage. I have it set to -96, so I barely see AI posts, i.e. only ones with 96+ karma.

how do you bookmark something on LW?

Besides bookmarking posts in your browser, you can use the context menu below a post title (the ellipsis in this screenshot). You can then find your bookmarked posts by hovering over your username in the top right (see this screenshot).

I'm wondering if we want "private notes about users within the app", like Discord has.

Use case: I don't always remember, over time, the loose "weight assignments" I've made for different people. If someone seems to prefer making mistakes of class A over class B in one comment, that'll be relevant a few months later if they're advocating a position on A vs B tradeoffs (I'll know they practice what they preach). Or maybe they repeated a sloppy or naive view about something that I think should be easy enough to dodge, so I want to just take them less seriously in general going forward. Or maybe they mentioned once, in a place most people didn't see, that they spent 10 years working on a particularly niche technical subtopic, and they're guaranteed to be the only person any of us knows with a substantial inside view on it; I'll want to remember that later in case it comes up and I need their advice.

I would like to have that feature (though I don't see it as high priority).

Does anyone know about an update to GPT-4 that was deployed to the https://chat.openai.com/ version in the last 1-3 weeks? It seems to have gotten significantly better.

Ann:

Not sure about the model, but they might've fixed something important around the 9th-10th; I haven't gotten any notices of issues since then, and it stopped crashing on long code blocks (which was a brief issue after the update on the 3rd).

One particular application of GPT-4 (or other LLMs) that seems really valuable to me is as a fact check on claims about what other people think (e.g., "what's the scientific consensus on whether exercise lowers the risk of dementia?"). As long as the topic isn't about political correctness, I pretty much trust GPT-4 to represent the consensus fairly, and that's a pretty amazing tool to have. Like, it's not uncommon that people disagree about what the scientific consensus is, and we didn't really have a way to answer these questions before.

Sometimes I even feel like it should be an epistemic norm that you fact check important claims with GPT-4 when applicable. E.g., whenever you say "this is considered a fringe idea" or "these two books are the most widely known on the subject", or even "the argument this person makes is only xyz", if it's in the context of a serious post, you should check with GPT-4 whether that's actually true and perhaps link to a transcript. Maybe that's going too far but like, restricting the extent to which people can bend reality to fit their needs seems pretty great.

What GPT-4 gives you is the establishment consensus. I would expect that there are many scientific questions that multiple scientific paradigms touch on. If one of the paradigms is higher status, then I think there's a good chance that GPT-4 will give you that paradigm's position.

If you ask GPT-4 some questions about hypnosis, for example, I think you often get back something that's more of an establishment position than what you would get from reading the Oxford Handbook of Hypnosis, which describes the scientific consensus among the scientists who study hypnosis.

Fair point, but there's still a strong correlation between established consensus and expert consensus. In most cases, they're probably gonna be similar.

I wrote the previous comment mainly based on my experience from a few months ago. I just tested it by asking about Post-Treatment Lyme Disease Syndrome, and GPT-4 seems to be more open now. Part of the response was:

Differing Views among Medical Professionals: Some healthcare professionals and researchers question whether PTLDS is a distinct medical condition, while others argue that it is a legitimate syndrome requiring further study and a specific approach to treatment.

Patient Advocacy: Patient advocacy groups, particularly those representing individuals who have experienced persistent symptoms after Lyme disease treatment, may have views that differ from some medical professionals, adding to the complexity of the conversation around PTLDS.

This seems to be a good summary that does not ignore the views of patient advocates when they disagree with the orthodox CDC position. 

Can anyone poke Jai (of blog.jaibot.com) to post the "500 Million But Not A Single One More" essay on LessWrong? I just noticed the original link has lapsed and been purchased or something: https://blog.jaibot.com/500-million-but-not-a-single-one-more/

[This comment is no longer endorsed by its author]

This is a weird question I want to ask someone with more experience on LessWrong, and I can't find a better place to ask it than here: there are some comments out there on certain posts, usually replies to previous comments, that are just *blank*. These tend to have a huge number of downvotes. What are these comments? Accidents? Intentional spam? Retracted comments that avoid outright deleting or "retracting" a message?

Retracted comments that avoid outright deleting or "retracting" a message? 

Yep, that's approximately it. We don't allow deletion of comments if there are replies to your comment (since that would hide the replies). So some people work around that by blanking out their comments.

Visiting London and kinda surprised by how there isn't much of a rationality community there relative to the bay area (despite there being enough people in the city who read LessWrong, are aware of the online community, etc.). Especially because the EA community seems pretty active there. The rationality meetups that do happen seem to have a different vibe. In the bay, it is easy to get invited to interesting rationalist-adjacent events every week by just showing up. Not so in London.

Not sure how much credit to give to each of these explanations:

  • Berkeley just had a head start and geography matters more than I expected for communities
  • Berkeley has Lightcone Infrastructure, but the UK doesn't have a similar rationalist organisation (though it has a bunch of EA orgs)
  • The UK is just different culturally from the bay area, people are less weird or differ in some other trait that makes having a good rationality community here harder 

While reading Project Lawful, I've come across the concept of algorithmic commitments and can't find any other mentions of them.

https://www.glowfic.com/replies/1770000#reply-1770000

Bots for the prisoner's dilemma, defined as functions (other bot -> decision):

let CooperateBot(_) = Cooperate
let DefectBot(_) = Defect
let FairBot(X) = if Provable("X(FairBot) == Cooperate") then Cooperate else Defect
let Provable-1("X") = Provable("~Provable(0=1) -> X")
let PrudentBot(X) = if (Provable("X(PrudentBot) == Cooperate") && Provable-1("X(DefectBot) == Defect")) then Cooperate else Defect

(Here Provable means provability in a fixed formal theory such as PA, and Provable-1 is, roughly, provability under the added assumption that the theory is consistent.)
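Provable(...) can't be executed directly, so as a crude runnable sketch, proof search is swapped here for bounded mutual simulation with an optimistic default when the budget runs out. That is a genuinely different (and weaker) mechanism; it does not reproduce all the Löbian results of the provability-logic version, only the general shape of the recursion:

COOPERATE, DEFECT = "C", "D"

def cooperate_bot(opponent, budget):
    # CooperateBot(_) = Cooperate
    return COOPERATE

def defect_bot(opponent, budget):
    # DefectBot(_) = Defect
    return DEFECT

def fair_bot(opponent, budget):
    # Stand-in for: cooperate iff "opponent(FairBot) == Cooperate" is provable.
    # "Provable" is approximated by simulating the opponent against fair_bot
    # with a smaller budget, defaulting to Cooperate when the budget runs out.
    if budget <= 0:
        return COOPERATE
    return COOPERATE if opponent(fair_bot, budget - 1) == COOPERATE else DEFECT

def prudent_bot(opponent, budget):
    # Stand-in for: cooperate iff the opponent cooperates with PrudentBot and
    # defects against DefectBot (the latter check is why PrudentBot exploits CooperateBot).
    if budget <= 0:
        return COOPERATE
    cooperates_with_me = opponent(prudent_bot, budget - 1) == COOPERATE
    punishes_cooperate_bot = opponent(defect_bot, budget - 1) == DEFECT
    return COOPERATE if (cooperates_with_me and punishes_cooperate_bot) else DEFECT

bots = {"CooperateBot": cooperate_bot, "DefectBot": defect_bot,
        "FairBot": fair_bot, "PrudentBot": prudent_bot}
for name_a, bot_a in bots.items():
    for name_b, bot_b in bots.items():
        print(f"{name_a} vs {name_b}: {bot_a(bot_b, 5)} / {bot_b(bot_a, 5)}")

For the faithful treatment, see the Robust Cooperation paper mentioned in the reply below.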

The topic looks pretty interesting; are there any further examples of such commitments?

UPD: I'm offering a Manifold bounty for finding such examples!

Robust Cooperation in the Prisoner's Dilemma

and other MIRI-ish writing under that post's tags.