If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Hello! I've mostly been lurking around on LessWrong for a little while and have found it to be a good source of AI news and other things. I like these posts; other parts of the site can feel somewhat intimidating. I hope to be commenting more on LessWrong in the future!

Welcome! Glad to have you here and am looking forward to reading your comments!

Confession: I've sometimes been getting LessWrong users mixed up, in a very status-damaging way for them.

Before messaging with lc, I mixed his writings and accomplishments with lsusr (e.g. I thought the same person wrote Luna Lovegood and the Chamber of Secrets and What an actually pessimistic containment strategy looks like). 

I thought that JenniferRM did demon research and used to work at MIRI, but I had mixed her up with Jessicata.

And, worst of all, I mixed up Thane Ruthenis with Thoth Hermes, causing me to think that Thane Ruthenis wrote Thoth's downvoted post The truth about false.

Has this happened to other people? The main thing is that I just didn't notice the mixup at all until ~a week after we first exchanged messages. It was just a funny manifestation of me not really paying much attention to some new names, and it's an easy fix on my end, but the consequences are pretty serious if this happens in general.

That's funny. When I read lc's username I think "that username looks similar to 'lsusr'" too.

Yep, happened to me too. I like LW aesthetic so I wouldn't want profile pics, but I think personal notes on users (like discord has) would be great.

Ben Pace (12d):
Someone told me that they like my story The Redaction Machine.
The secret is out. Ben's secret identity is Ben Pace.
Well, that's cause I'm his alt
Charlie Steiner (3mo):
There are at least two Steves, and also at least two Evans. But I don't know if anything embarrassing happened; I just mixed some people up.
This happens to me too. IMO it's one of the best arguments for something like profile pictures. Not enough entropy in name space.
This could also be mitigated by automatic images like Gravatar icons or the SSH key randomart visualization. I wonder if they can be made small enough to add to usernames everywhere while maintaining enough distinguishable representations.
I often accidentally mix you up with the Trevor from Open Phil! More differentiation would be great, especially in the case where people share the same first name.
Nathan Helm-Burger (2mo):
I have been around a long while, so the names are mostly familiar to me. I did make a minor embarrassing mistake a few months ago, thinking that Max H (on the East Coast) was the account of my friend Max H (on the West Coast). East Coast Max H added a note to his profile to disambiguate. Do you read people's profiles before first messaging them?
Yes, it happened before for me as well. I think it would be good to have profile pictures to make it easier to recognize users.
Nathan Helm-Burger (2mo):
Maybe to make it uniform and non-distracting it could just be small grayscale pattern icons next to names based on a hash of the name.
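The hash-based icon idea above can be sketched in a few lines. This is purely a hypothetical illustration (the function name, grid size, and glyphs are all made up, not an actual site feature): hash the username, use the hash bytes to fill half of a small grid, and mirror it so the pattern is symmetric and easy to recognize at a glance.

```python
import hashlib

def identicon(username: str, size: int = 5):
    """Derive a small, symmetric pattern grid from a username hash (sketch)."""
    digest = hashlib.sha256(username.encode("utf-8")).digest()
    half = (size + 1) // 2
    grid = []
    for row in range(size):
        # Parity of successive hash bytes decides filled vs empty cells.
        left = [digest[(row * half + col) % len(digest)] % 2
                for col in range(half)]
        # Mirror the left half (dropping the center column) for symmetry.
        grid.append(left + left[-2::-1])
    return grid

def render(grid):
    return "\n".join("".join("#" if cell else "." for cell in row) for row in grid)

print(render(identicon("lc")))
print()
print(render(identicon("lsusr")))
```

The same username always yields the same pattern, so two similar-looking names like "lc" and "lsusr" would still get visually distinct icons.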

Hello everyone!

After several years seeing (and reading) links to LessWrong posts scattered in other areas of the internet, I decided to sign up for an account today myself and see if I can't find a new community to contribute to here :)

I look forward to reading, writing, and thinking with you all in the future!

It would save me a fair amount of time if all LessWrong posts had an "export BibTeX citation" button, exactly like the feature on arXiv.  This would be particularly useful for Alignment Forum posts!
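For illustration, such an exported entry could be generated from post metadata along these lines. This is a hedged sketch: the `@misc` field choices and the helper function are one plausible convention, not an actual LessWrong or arXiv format, and the example metadata is made up.

```python
def bibtex_for_post(author, title, year, url, key):
    """Render a @misc BibTeX entry for a forum post (hypothetical convention)."""
    lines = [
        "@misc{" + key + ",",
        "  author = {" + author + "},",
        "  title = {" + title + "},",
        "  year = {" + str(year) + "},",
        "  howpublished = {\\url{" + url + "}},",
        "  note = {LessWrong post},",
        "}",
    ]
    return "\n".join(lines)

print(bibtex_for_post("Some Author", "Example Post", 2024,
                      "https://www.lesswrong.com/posts/example", "author2024example"))
```

A real export button would presumably fill these fields from the post's stored metadata, which is exactly why doing it by hand is tedious.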

I want to express appreciation for a feature the Lightcone team implemented a long time ago: Blocking all posts tagged "AI Alignment" keeps this website usable for me.

Hello, I came across this forum while reading an AI research paper where the authors quoted from Yudkowsky's "Hidden Complexity of Wishes." The linked source brought me here, and I've been reading some really exceptional articles ever since. 

By way of introduction, I'm working on the third edition of my book "Inside Cyber Warfare" and I've spent the last few months buried in AI research, specifically in the areas of safety and security.  I view AGI as a serious threat to our future for two reasons. One, neither safety nor security has ever been prioritized over profits by corporations, dating all the way back to the start of the Industrial Revolution.  And two, regulation has only ever come to an industry after a catastrophe or a significant loss of life, not before.

I look forward to reading more of the content here, and engaging in what I hope will be many fruitful and enriching discussions with LessWrong's members. 

Hi Jeffrey! Glad to see more cybersecurity people taking the issue seriously.  Just so you know, the best way I know of to introduce laymen to AGI risk is to have them read Scott Alexander's Superintelligence FAQ. This will come in handy down the line.
Thanks, Trevor. I've bookmarked that link. Just yesterday I started creating a short list of terms for my readers so that link will come in handy. 
@Raemon is the superintelligence FAQ helpful as a short list of terms for Caruso's readers?
Welcome! Hope you have a good time!

I notice I am confused.

I have written what I think is a really cool post: Announcing that I will be using prediction markets in practice in useful ways, and asking for a little bit of help with that (mainly people betting on the markets). But apparently the internet/LessWrong doesn't feel that way. (Compare to this comment of mine which got ~4.5 times the upvotes, and is basically a gimmick—in general I'm really confused about what'll get upvoted here and what will be ignored/downvoted, even after half a decade on this site).

I'm not, like, complaining about this, but I'd like to understand why this wasn't better received. Is it:

  • The post is confusingly written, with too much exposition in the beginning (starts with a long quote)
  • The post promises to do something without having done it, so people judge it as a pipe dream
  • The idea isn't actually that interesting: We've had replication markets, and the idea of using prediction markets to select experiments was proposed at least in 2013, and the idea is straightforward, as is the execution
  • The title makes it sound like just another proposal and nothing that will actually be executed in practice
  • Stuff like this doesn't matter because TA
…

Feedback from me: I started reading the post, but it had a bunch of huge blockquotes and I couldn't really figure out what the post was about from the title, so I navigated back to the frontpage without engaging. In particular I didn't understand the opening quote, which didn't have a source, or how it was related to the rest of the post (in like the 10 seconds I spent on the page).

An opening paragraph that states a clear thesis or makes an interesting point or generally welcomes me into what's going on would have helped a lot.

Okay, thanks for the feedback! So a more informative title would be better. I've been using quotes to denote abstracts (or opening paragraphs), but maybe that's a bit confusing. I've changed the title now, and changed the abstract from a quote to bolded text.
  • The actual quote was also so long that I would have stopped reading if I wasn't trying to analyse your post.
  • The quote is also out of context, in that I was very confused about what the author was trying to say from the first paragraph. Because I was skimming, I didn't really understand the quote until the market section.
Okay, lesson learned: Don't start a blogpost with a long-ass quote from another post out of context. Put it later after the reader is in flow (apparently the abstract isn't enough). Don't do what's done here.
To be clear, I totally didn't parse the opening blockquote as an abstract. I parsed it as a quote from a different post, I just couldn't figure out from where.
FWIW I was going to start betting on Manifold, but I have no idea how to deal with meditative absorption as an end-state. Like, there are worlds where -- for instance -- Vit D maybe helps this, or Vit D maybe hurts, and it might depend on you, or it depends on what kind of meditation really works for you. So it takes what is already a pretty hard bet for me -- just calling whether nicotine is actually likely to help in some way -- and makes it harder -- is nicotine going to help meditation. Just have no idea.
Yeah, that makes sense. (I think I saw you bet on one of the markets! (And then maybe sell your stake?)) Thanks for trying anyway. Maybe the non-meditation related markets are easier to predict? I'd like to encourage best-guess-betting, but I understand that there are better opportunities out there.
It's good to use prediction markets in practice, but most people who read the post likely don't get that much value from reading it.  Larry McEnerney is good at explaining that good writing isn't writing that's cool or interesting, but simply writing that provides value to the reader.  As far as the actual execution goes, it might have been better to create fewer markets and focus on fewer experiments, so that each one gets more attention.
I wrote two comments about why people don't read your post, but as I was betting I realized another two problems with the markets:

1. (Not your fault) The Manifold betting integration kind of sucks. Clicking "See 2 more answers" does nothing, and the options are ordered by percentage.
2. There isn't enough liquidity in your markets. It makes betting difficult because even M5 increments change the price too much. idk, maybe buy some mana to subsidize your markets? It would also make people seeing your market from Manifold more interested in betting, as they would have more to gain from the prediction.
Both make sense. I spent ~all my mana on creating the markets, and as more mana rolls in from other bets I am subsidizing them.
The title doesn't set a good expectation of the contents. If I am a person interested in "Please Bet On My Quantified Self Decision Markets", I want to bet. I won't expect to (and shouldn't be expected to) read all your lengthy experimental details. It took a while for me to find the markets.
That's funny, I've already changed the title from "Using Prediction Platforms To Select Quantified Self Experiments". I guess the problem is really the block quote, which I'll move somewhere later in the post.
Olli Järviniemi (2mo):
I looked at your post and bounced off the first time. To give a concrete reason, there were a few terms I wasn't familiar with (e.g. L-Theanine, CBD Oil, L-Phenylalanine, Bupropion, THC oil), but I think it was overall some "there's an inferential distance here which makes the post heavy for me". What also made the post heavy was that there were lots of markets, which I understand makes conceptual sense, but makes it heavy nevertheless.

I did later come back to the post and did trade on most of the markets, as I am a big fan of prediction markets and also appreciate people doing self-experiments. I wouldn't have normally done that, as I don't think I know basically anything about what to expect there. E.g. my understanding of Cohen's d is just "it's effect size, 1 d basically meaning one standard deviation", and I haven't even played with real numerical examples. (I have had this "this assumes a bit too much statistics for me / is heavy" problem when quickly looking at your self-experiment posts. And I do have a mathematical background, though not from statistics.)

I'd guess that you believe that the statistics part is really important, and I don't disagree with that. For exposition I think it would still be better to start with something lighter. And if one could have a reasonable prediction market on something more understandable (to laypeople), I'd guess that would result in more attention and still possibly useful information. (It is unfortunate that attention is very dependent on the "attractiveness" of the market instead of the "quality of operationalization".)
Thank you so much for trading on the markets! I guess I should've just said "effect size", and clarified in a footnote that I mean Cohen's d. And if the nootropics post was too statistics-heavy for someone with a math background, I probably need to tone it down/move it to an appendix. I think I can have quality of operationalization if I'm willing to be sloppy in the general presentation (as people probably don't care as much whether I use Cohen's d or Hedges' g or whatever).
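For readers who, like the commenter above, know Cohen's d only as "it's effect size", here's a minimal numerical sketch. It uses the standard pooled-standard-deviation formula; the example data is made up.

```python
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    # Pooled variance weights each group's sample variance by its degrees of freedom.
    pooled_var = ((na - 1) * variance(group_a)
                  + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Two toy groups whose means differ by 1 while each has sample SD ~1.29:
print(cohens_d([5, 6, 7, 8], [4, 5, 6, 7]))  # ≈ 0.775
```

So "1 d basically meaning one standard deviation" is right: d is the mean difference measured in units of the (pooled) standard deviation.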
Nathan Helm-Burger (2mo):
The opening was off-putting to me. I think a shorter post, with the details placed in a linked separate post marked as an appendix, would get more engagement. Also, bold and caps text is off-putting. But as of the time I checked, it had 32 upvotes, which is pretty good. I usually think of less than 10 as meaning nobody cares, 10 - 20 as ok, and more than 20 as pretty good. Only really popular posts are above 50, generally.
Yeah, maybe I'll amend my comment above—after some help from the Manifold team I've gotten enough interest/engagement on my markets that I'm not as worried anymore—except maybe the LSD microdosing one, which is honestly a steal. (At the time my markets were pretty barren in terms of engagement, which was my main optimization target). I dunno about the upvote count though, two posts about the results of self-experiments have been pretty popular (if anyone wants a way to farm LW karma, that'd be a way to do it…) I think this endeavour is much cooler than ~most of my past posts, and not particularly complicated (I understand why the posts in this sequence aren't very upvoted, since people justifiedly just want to upvote what they've read and evaluated), so I was confused.
At this moment, the post has 25 karma, which is not bad. From my perspective, positive karma is good, negative karma is bad, but 4x higher karma doesn't necessarily mean 4x better -- it could also mean that more people noticed it, more people were interested, it was short to read so more people voted, etc. So I think that partially you are overthinking it, and partially you could have made the introduction shorter (basically to reduce the number of lines someone must read before they decide that they like it).
Yeah, when I posted the first comment in here, I think it had 14? I was maybe just overly optimistic about the amount of trading that'd happen on the markets.
Nathan Helm-Burger (2mo):
The off-putting part about betting to me was the non-objective measure of meditative engagement. Gwern's n-back test was better for being objective and precise.
Hm, interesting. I think there's more α in investigating meditative performance, and thought it'd be not as bad through subjectivity because I randomize & blind. But I get why one would be skeptical of that. I do test for a bunch of other stuff though, e.g. flashcard performance (more objective I think?), which was surprisingly unmoved in my past two experiments. But I don't resolve the markets based on that. I very briefly considered putting up markets on every affected variable and every combination of substance, but then decided that nobody was going to forecast that.

Hey, I've been reading stuff from this community since about 2017. I'm now in the SERI MATS program where I'm working with Vanessa Kosoy. Looking forward to contributing something back after lurking for so long :P

I hope it's not too late to introduce myself, and I apologize if it is. I'm Miguel, a former accountant who decided to focus on researching and upskilling to help solve the AI alignment problem.

Sorry if I confused people here about what I was trying to do in the past months, posting about my explorations in machine learning.

Can I interest you in working in AI policy if technical alignment doesn't work out? You'll want to visit DC and ask a ton of people there if you seem like a good fit (or ask them who can evaluate people). Or you can apply for advising on 80k or use the Lesswrong intercom feature in the bottom-right corner. I know that technical alignment is quant and AI policy is not, and accounting is quant, but my current understanding is that >50% of accountants can be extremely helpful in AI policy whereas <50% of accountants can do original technical alignment research.  More ML background is a huge boost in both areas, not just alignment. People good at making original discoveries in alignment will be able to reskill back to alignment research during crunch time, but right now is already crunch time for AI policy.
Hi @trevor! I appreciate the ideas you shared, and yeah, I agree that most accountants are probably better off helping via the AI policy route! But to point out, I'm doing some AI policy work back home in the Philippines as part of the newly formed Responsible AI committee, so I think I am not falling short on this end.

I have looked at the AI safety problem deeply, and my personal assessment is that it is difficult to create workable policies that route to the best outcomes, because we (as a society) lack an understanding of the mechanisms that make transformer tech work. My vision of AI policies that can work would somehow capture a deep level of the lab work being done by AI companies, like learning-rate standardization or the number of epochs allowed, associated hopefully with a robust and practical alignment theory, something that we do not have at the moment. Because of this view, I chose to help in the pursuit of solving the alignment problem instead. The theoretical angle I am pursuing is significant enough to push me to learn machine learning, and so far I was able to create RLFC and ATL through this process. But yeah, maybe an alternative scenario for me is doing 100% AI policy work; I'm open to it if it will produce better results in the grand scheme of things.

(Also, regarding the LessWrong Intercom feature in the bottom-right corner: I did have many discussions with the LW team, something I wished was available months ago, but I think one needs a certain level of karma to get access to this feature.)
Welcome! Glad to have you here.
Charlie Steiner (3mo):

Some thoughts about e/acc that weren't worthy of a post:

  • E/acc is similar to early 2010s social justice, in that it's little more than a war machine; they decided that injustice was bad and that therefore they were going to fight it, and that anyone who criticized them was either weakening the coalition or opposing the coalition.
  • Likewise, E/acc decided that acceleration was good and anyone opposing them was evil luddites, and you had to use the full extent of your brain to "win" each confrontation as frequently as possible.
  • E/acc people like Beff Jezos colli
…
Gerald Monroe (3mo):
The crazy thing is that e/acc, meme cult that it is, maybe has a more realistic view of the world. If you assume that there's no way you can dissuade others from building AI (this includes wealthy corporations who can lobby with lots of money and demand the right to buy as many GPUs as they want, nuclear-armed smaller powers, and China) what do you do?

Imagine 2 simple scenarios. World A: they built AI. Someone let it get out of hand. You have only pre-AI technology to defend yourself. World B: you kept up in the arms race but did a better job on model security and quality.

Some of the low-hanging fruit for AI includes things like self-replicating factories and gigascale surveillance. Against a hostile superintelligence you may ultimately lose, but do you want the ability to surveil and interpret a vast battlespace for enemy activity, and to coordinate and manufacture millions or billions of automated weapons, or not? You absolutely can lose, but in a future world of escalating threats your odds are better if your country is strapped with the latest weapons. Do you agree or disagree?

I am not saying e/acc is right, just that historically no arms agreement has ever really been successful. SALT wasn't disarmament, and the treaty has effectively ended. Were Russia wealthier, it would be back to another nuclear arsenal buildup.

What do you estimate the probability that a global multilateral AI pause could happen? Right now, based on the frequentist view that such an event has never been seen in history, should it rationally be 0 or under 1 percent? (Note: this last sentence isn't my opinion; imagine you are a robot using an algorithm. What would the current evidence support? If you think my statement that an international agreement to ban a promising strategic technology and all equivalent alternatives has never happened is false, how do you know?)

I've been a lurker here for a long time. Why did I join?

I have a project I would like to share and discuss with the community. But first, I would like to hear from you guys. Will my project fit in here? Is there interest?

My project is: I wrote a book for my 6yo son. It is a bedtime-reading kind of book for a reasonably nerdy intelligent modern child.

Reading to young kids is known to be very beneficial to their development. There are tons of great books for any age and interests. My wife and I have read and enjoyed a lot of them with our boy.

However, I sti…

Yoav Ravid (10d):
Welcome to LessWrong! Your story sounds fitting to me. I'd love to read it :)

I'm not a fan of @Review Bot because I think that when people are reading a discussion thread, they're thinking and talking about object-level stuff, i.e. the content of the post, and that's a good thing. Whereas the Review Bot comments draw attention away from that good thing and towards the less-desirable meta-level / social activity of pondering where a post sits on the axis from "yay" to "boo", and/or from "popular" to "unpopular".

(Just one guy's opinion, I don't feel super strongly about it.)

I think the bot is currently more noticeable than it will be once we have cleared out the 2023/2024 backlog. Usually the bot just makes a comment on a post when it reaches 100 karma, but since we are just starting it, it's leaving a lot of comments at the same time whenever older posts that don't yet have a market get voted on.

The key UI component I care about is actually not the comment (which was just the most natural place to put this information), but the way the post shows up in post-lists: the karma number gets a slightly different (golden-ish) color, and then you can see the likelihood that it ends up at the top of the review on hover, as well as at the top of the post.

The central goal is both to allow us to pull forward a bunch of the benefits of the review, and to create a more natural integration of the review into the everyday experience of the site.
That's plausible. The counter hope for the markets is that they are less "yay"/"boo" because the review is (hopefully) less "yay"/"boo". Also, it will be less active in "Recent Discussion" soon; currently there's a bit of a backlog of eligible posts that it's getting triggered for.

Feature suggestion: unexplained strong downvotes have bothered people for a long time, and requiring a comment to strong-downvote has been suggested several times before. I agree that this is too much to require, so I have a similar but different idea. When you strong-vote (whether up or down), you'd get a popup with a few reasons to pick from for why you chose to strong-vote (a bit like the new reacts feature). For strong downvotes it might look like this:

  • This post is overrated, This post is hazardous, This post is fal
…

Yeah, I do think that's not super crazy. I do think that it needs some kind of "other" option, since I definitely vote for lots of complicated reasons, and I also don't want to be too morally prescriptive about the reasons for why something is allowed to be downvoted or upvoted (like, I think if someone can think of a reason something should be downvoted that I didn't think of, I think they should still downvote, and not wait until I come around to seeing the world the way they see it). 

Seems worth an experiment, I think.

Yoav Ravid (3mo):
Yep, the purpose is providing the author with information, without making it too burdensome to strongvote, and without restricting when a strongvote is allowed.

Hello! I'm Andy - I've recently become very interested in AI interpretability, and am looking forward to discussing ideas here!

Apparently the most reliable way to make sure feature requests are seen is to use the Intercom.

Feature proposal: Highlights from the Comments, similar to Scott Alexander's version

You make a post containing what you judge to be the best of other people's comments on a topic, or on an important period like the OpenAI incident. The comments' original karma isn't shown, but people can give them new votes, and the positive votes will still accrue to the writer instead of the poster.

This is because, like dialogues, writing lesswrong comments is good for prompting thought.

I don't know about highlighting other people's successful comments because the…

I'm tentatively tempted to start doing this in a shortform. I notice I feel like it's fine to highlight someone's comment? They put it on the site, so it's not private. I'd be keeping it on the same site, not taking it somewhere else without attribution. I wouldn't generally like my contributions moved between places or attributed to me on other pseudonyms, and maybe there's a stronger argument here than I'm thinking.
How do shortforms work? Doesn't virtually nobody see them?
My understanding is shortforms have next to no visibility unless people are already subscribed to a particular person's shortform feed. That seems about right for me? If I'm interested in what say, Scott thinks the best comments are but not interested in what Ray thinks the best comments are, then I subscribe to one but not the other. I'm not saying this is the best possible UX, I'm just noting I'm tempted to try this with the affordances I have.
As a quick note, I think it's pretty likely we will copy the EA Forum's Quick Takes section: https://forum.effectivealtruism.org/  I quite like how it works, and I think it gives about the right level of visibility to shortform posts.
Tangential question: I know how to view all the posts by karma or by other criteria. Is there a way to view all comments by karma or other criteria? It occurs to me that part of the reason I don't usually read comment threads except on my own posts is that I don't know where the good discussion is happening.
Oh boy, I can't wait for this.
It's done as of yesterday!
Apparently the most reliable way to make sure feature requests are seen is to use the Intercom. Apart from that, I like the suggestion. There are many LW comments that warrant being turned into full posts, and this seems like a neat complementary suggestion. If the feature was implemented, there would have to be a moderation policy requiring posters not to use this feature to pull comments you disagree with and turn them into top-level disagreements with individuals (if the original commenter wanted to do that, they could dialogue with you), nor to use it for witch hunts ("look at all the bad takes of this guy!").
Nathan Helm-Burger (2mo):
Well, you can already visit the profile of someone you disagree with and just scroll through a list of all the comments they've made. So maybe if it's a public comment we don't need to worry about the privacy aspects? When I want to make private comments on a post, I private-message the author. Public comments are for everyone to read.

Hello! I'm a young accountant, studying to be a CPA. I've messed around in similar epistemic sandboxes all my life without knowing this community ever existed. This is a lovely place, reminds me of a short story Hemingway wrote called A Clean, Well-Lighted Place. 

I came from r/valueinvesting. I'm very much interested in applying LW's latticework of knowledge towards improving the accounting profession. If there are Sequences and articles you think are relevant to this, I would eat it up. Thank you! 

Maybe the series starting with You Need More Money

I think the Dialogue feature is really good. I like using it, and I think it nudges community behavior in a good direction. Well done, Lightcone team.

Thank you! I also am very excited about it, though sadly adoption hasn't been amazing. Would love to see more people organically produce dialogues!

LWCW 2024 Save The Date

tl;dr: This year’s LWCW happens 13-16th September 2024. Applications open April/May. We’re expanding to 250 attendees and looking for people interested in assisting our Orga Team.

The main event info is here:


And fragments from that post:

Friday 13th September to Monday 16th September 2024 is the 11th annual LessWrong Community Weekend (LWCW) in Berlin. This is the world's largest rationalist social gathering, which brings together 250 aspiring rationalists fro…

I'm going to be in Berkeley February 8 - 25. If anyone wants to meet, hit me up!

If you watch the first episode of Hazbin Hotel (quick plot synopsis, Hell's princess argues for reform in the treatment of the damned to an unsympathetic audience) there's a musical number called 'Hell Is Forever' sung by a sneering maniac in the face of an earnest protagonist asking for basic, incremental fixes.

It isn't directly related to any of the causes this site usually champions, but if you've ever worked with the legal/incarceration system and had the temerity to question the way things operate the vibe will be very familiar.  

Hazbin Hotel Official Full Episode "OVERTURE" | Prime Video (youtube.com)

Almost all the blogs in the world seem to have switched to Substack, so I'm wondering if I'm the only one whose browser is very slow in loading and displaying comments from Substack blogs. Or is this a Firefox problem?

Said Achmiz (2mo):
No, it’s not just you, and it’s not just Firefox. Substack comments really are hideously slow to load. (That’s one of the reasons why they don’t all load at once—which really only makes them worse, UX-wise.)
I don't really understand why Substack became so popular compared to, e.g., WordPress. Is Substack writing easier to monetize?
Yes. Substack has Stripe billing built in, and a user base which both accepts monetization culturally and is probably already subscribed to another Substack, so it's much easier to subscribe to a second.
Getting new updates via email matters a lot for user retention. Sending emails in bulk that get past spam filters is not something WordPress can easily do out of the box.

Weird idea: a Uber Eats-like interface for EA-endorsed donations.

Imagine: You open the app. It looks just like Uber Eats. Except instead of seeing the option to spend $12 on a hamburger, you see the option to spend $12 to provide malaria medicine to a sick child.

I don't know if this is a good idea or not. I think evaluating the consequences of this sort of stuff is complicated. Like, maybe it ends up being a PR problem or something, which hurts EA as a movement, which has large negative consequences.

Would more people donate to charity if they could do so in one click? Maybe...

I am confused by the dialogue system. I can't quite tell whether it's telling me the truth but being maddeningly vague about it, or whether it's lying to me, or whether I'm just misunderstanding something.

Every now and then I get a notification hanging off the "bell" icon at top right saying something like "New users interested in dialoguing with you".

On the face of it, this means: at least one specific person has specifically nominated me as someone they would like to have a dialogue with.

So I click on the thing and get taken to a page which shows me (if ...

It does mean that there are real users who checked you. I think the notifications are plausibly too "scammy dating site" regardless, but they are not false.
I realise that there's another thing in this area that I'm possibly confused about. I think I'm not confused and it's just that there isn't a good way to present the relevant information.

So, if I get the notification, that means that at least one person wants to talk to me. So far, so good. And then I go to the dialogue page and see a list of users. But it's not necessarily true that at least one of them wants to talk to me, right? (Because the list I see is filtered by my having upvoted things they wrote, but AIUI not symmetrically by their having upvoted things I wrote. So maybe user X liked things I wrote, went to the dialogue page, saw my name, and checked the checkbox, causing me to get notified ... but I haven't read what X wrote, or happened not to upvote it -- I don't vote all that much, either up or down -- and so X is not on the list I see. So poor X will be waiting for ever for my response, since I never get presented with the option to suggest dialogue with X.)

This could be "fixed" by including people on the list I see if they've checked my box, but that's no good because then in some cases I can tell that someone's checked my box without ever having to check theirs. (I'm not sure this mechanic actually makes sense for dialogues in the way it maybe does for dating, but it's obviously a very deliberate decision.) Or it could be "fixed" by including people on the list I see if they've upvoted things I wrote, but that's also no good because that leaks information about who's upvoted me. Or it could be "fixed" by including people on the list both at random and if they've checked my box, or both at random and if they've upvoted me, or something, but that's probably no good either because it still leaks some information and many ways of doing it leak way too much information, and because it clutters up the list of potential dialogue partners, and clutters it worse the less information it leaks.

None of these "fixes" seems at all attractive. But the alter
No, they will appear on the list somewhere, because the last section on the dialogue matching page is "Recently active on dialogue matching", which shows all users who have checked checkboxes within some recent time interval. So if they don't appear in any of the previous lists, they will appear there.
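To make the resolution concrete, here is a toy model of the visibility rules as described in this thread (my reading of the behaviour, with hypothetical data shapes -- not the actual site code): the main list shows users you have upvoted, and the "Recently active on dialogue matching" section catches everyone else who has checked boxes recently.

```python
def visible_candidates(me, upvoted_by, recently_checked):
    """Toy model of the dialogue-matching page, per this thread's description.

    upvoted_by: dict mapping a user to the set of users they have upvoted
    recently_checked: set of users who recently checked any matching box
    """
    main_list = set(upvoted_by.get(me, set()))
    # Users missing from the main list still surface in the
    # "Recently active on dialogue matching" fallback section.
    fallback = recently_checked - main_list - {me}
    return main_list, fallback

# User "x" checked my box but I never upvoted them, so they only
# appear in the fallback section rather than the main list.
main, fallback = visible_candidates("me", {"me": {"alice"}}, {"x", "alice"})
```

So under this reading, poor X is not invisible forever -- they just show up lower on the page.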
Ah, so it is. Thanks.
Yeah, it does seem like a tricky design problem. Some discussion of it in the thread here. My current guess is that it would be better to have a casual-feeling non-anonymous "invite to dialogue" than the dating-style algorithm. I also guess it won't be implemented soon (for a combination of things like its marginal value given matching being smaller and how long I expect dialogues to be an organisational priority).
Thanks for the clarification! I think there would be some value in either putting some message to that effect on the dialogue page, or else having a page linked from there that provides more explanation of what's going on and what everything means. (The former might be tricky, since what it would be useful to see there might depend on what's in the user's notifications and maybe also on whether they got to the dialogue page by clicking on one of those notifications or by other means. Or maybe it would be bad for it to depend on that since then the contents of the page would change in not-so-predictable ways, which would be confusing in itself. But maybe a message along the lines of "At least one other user has checked the box to mark you as a user they would like to dialogue with. The most recent time this happened was about two days ago." Or something; I haven't really thought this through.)
That seems like a good idea! (I don't know exactly when we'll get to it). (Also, sorry for the brevity of my messages; I am grateful for the details in yours)
Brevity is fine. I'm sure you have other things to do besides replying to my comments.
Yeah, users can't currently see that list for themselves (unless of course you create a new account, upvote yourself, and then look at the matching page through that account!). However, the SQL for this is actually open source, in the function getUserTopTags: https://github.com/ForumMagnum/ForumMagnum/blob/master/packages/lesswrong/server/repos/TagsRepo.ts

What we show is "the tags a user commented on in the last 3 years, sorted by comment count, and excluding a set of tags that I deemed as less interesting to show to other users, for example because they were too general (World Modeling, ...), too niche (Has Diagram, ...) or too political (Drama, LW Moderation, ...)."
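As a rough illustration of that selection logic -- not the actual ForumMagnum SQL, just a hypothetical Python sketch with made-up data shapes -- count a user's comments per tag over the last three years, drop the excluded tags, and sort by count:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Illustrative subset of the excluded tags mentioned above
EXCLUDED_TAGS = {"World Modeling", "Has Diagram", "Drama", "LW Moderation"}

def top_tags(comments, now=None, years=3, limit=5):
    """comments: iterable of (tag_name, created_at) pairs -- a hypothetical
    shape, not the real schema. Returns most-commented tags, highest first."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=365 * years)
    counts = Counter(
        tag for tag, created_at in comments
        if created_at >= cutoff and tag not in EXCLUDED_TAGS
    )
    return [tag for tag, _ in counts.most_common(limit)]
```

The real query does this in SQL against the comments and tags tables; the shape of the filter (time window, denylist, sort by count) is what carries over.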
Just out of curiosity, is the name "ForumMagnum" an anatomical pun?
Lol, no, but that is kind of hilarious. I think it's a reference to Francis Bacon's "Instauratio Magna" ("The Great Instauration"), though I am not sure why we would have chosen "Magnum" instead of "Magna" as the spelling.

The Latin noun “instauratio” is feminine, so “magna” uses the feminine “-a” ending to agree with it. “forum” in Latin is neuter, so “magnum” would be the corresponding form of the adjective. (All assuming nominative case.)

Huh, I learned something today about the name of my own Forum. Thank you!

Long time lurker introducing myself.

I'm a Music Video Maker who is hoping to use Instrumental Rationality towards accomplishing various creative-aesthetic goals and moving forward on my own personal Hamming Question. The Hammertime sequence has been something I've been very curious about but unsuccessful in implementing.

I'll be scribbling shortform notes which might document my grappling with goals. Most of them will be in some way related to motion picture production or creativity in general. "Questions" as a topic may creep in, it's one of my favorit...

Welcome! Hope you have a good time. Asking good questions is quite valuable, and I think a somewhat undersupplied good on the site, so am glad to have you around!
Thank you, then I will try to ask good questions when I feel I am in possession of one.

I don't like that when you disagree with someone, as in hitting the "x" for the agree/disagree voting, the "x" appears red. It makes me feel on some level like I am saying that the comment is bad when I merely intend to disagree with it.

The new comments outline feature is great! Thanks, LW team :)

One idea for improving the floating ToC comment tree: use LLMs to summarize them. Comments can be summarized into 1-3 emoji (GPT-3 was very good at this back in 2020), and each separate thread can be given a one-sentence summary. As it is, it's rather bare: you can get some idea of the structure of the tree and, e.g., who is bickering with whom, but nothing else.

Curious about people's guesses in this market: 

Hey there! I just got curious while reading Steven Pinker's book on rationality about the "rationality community" he keeps referring to. Then I saw him mention trying to be "less wrong", searched it up, and stumbled upon this place. You guys read and write a lot, judging from just browsing here; maybe I should focus on increasing my attention span even more.

Yoav Ravid:
Welcome! I think you may be interested in a review of Steven Pinker's book on rationality.

Whatever happened to AppliedDivinityStudies, anyway? Seemed to be a promising blog adjacent to the community but I just checked back to see what the more recent posts were and it looks to have stopped posting about a year ago?

https://www.applieddivinitystudies.com/hiatus/ resumed, I would assume.

Hi LessWrong! I am Ville, I have been reading LW / ACX and other rationalish content for a while and was thinking of joining into the conversation. I have been writing on Medium previously, but have been struggling with the sheer amount of clickbait and low-effort content on the platform. I also don't really write frequently enough to justify a Substack or other dedicated personal blog.

However, as LW has a very high standard for content, I am unsure if my writing would be something people here would enjoy. Most recently, I wrote a series of two fables about...

I didn't read either link, but you can write whatever you want on LessWrong! While most posts you see are very high quality, this is because there is a distinction between frontpage posts (promoted by mods) and personal blogposts (the default). See Site Guide: Personal Blogposts vs Frontpage Posts. And yes, some people do publish blogposts on LessWrong, jefftk being one that I follow.

I had a discussion recently where I gave feedback to Ben P. about the dialogue UI. This got my brain turning, and a few other recommendations for UI changes bubbled up to top of mind.

Vote display (for karma and agree/disagree)

Histogram of distribution of votes (tiny, like sparklines, next to the vote buttons). There should be four bars: strong negative vote count, negative vote count, positive vote count, strong positive vote count. The sum of all votes is less informative and interesting to me than the distribution. I want to know the difference between s...

Nathan Helm-Burger:
Oh yeah, and the order of interacting with a post should be: read post, vote, comment. So why is the vote button at the top? We don't want to encourage people to vote before reading! So why have them read the post, scroll to the top, vote, scroll back to the bottom, comment....
Posts that have more than like 3 paragraphs of text also have vote buttons at the bottom. It's just very short posts where it looks really weird to have two vote sections right next to each other where we omit one of them.
Nathan Helm-Burger:
Yes, I'm aware of that. I'm saying that they shouldn't have them at the top. Why let someone vote on a post if they haven't made it to the bottom?

Dear LW team, I have found that I can upvote/agreement-vote deleted comments, and doing so gives karma to the author of the deleted comment. Is it supposed to work like this?

Seems kinda fine. Seems like a weird edge-case that doesn't really matter that much. I would consider it a bug, but not a very important one to fix.

Did anyone around here try Relationship Hero and has opinions?

Presumably @Liron but he is of course biased :P 

I am seeing a new "Quick Takes" feature on LessWrong. However, I can't find any announcement or documentation for the feature. I tried searching for "quick takes" and looking in the FAQ. Can someone describe "Quick Takes"?

They are just a renaming of "shortform", with some new UI. "Quick Take" sort of conveyed what we were actually going for which is more like "you wrote it down quickly" than "it was literally short".
The EA Forum came up with the name when they adopted the "shortform" feature, and it seemed like a better name to me, so we copied it.

By now there are several AI policy organizations. However, I am unsure what the typical AI safety policy is that any of them would enforce if they had unlimited power. Is there a summary of that?

Surprisingly enough, this question actually has a really good answer. Given unlimited power, you create dath ilan on Earth. That's the most optimal known strategy given the premise. Yudkowsky's model is far from perfect (other people like Duncan have thought about their own directions), but it's the one that's most fleshed out by far (particularly in projectlawful), and it's an optimal state in that it allows people to work together and figure out for themselves how to make things better.
Okay, maybe I should rephrase my question: What is the typical AI safety policy they would enact if they could advise president, parliament and other real-world institutions?
Initial ask would be compute caps for training runs. In the short term, this means that labs can update their models to contain more up-to-date information but can't make them more powerful than they are now. This need only apply to nations currently in the lead (mostly the U.S.A.) for the time being, but will eventually need to be a universal treaty backed by the threat of force. In the longer term, compute caps will have to be lowered over time to compensate for algorithmic improvements increasing training efficiency. Unfortunately, as technology advances, enforcement would probably eventually become too draconian to be sustainable.

This "pause" is only a stopgap intended to buy us more time to implement a more permanent solution. That would at least look like a lot more investment in alignment research, which unfortunately risks improving capabilities as well. Having spent a solid decade already, Yudkowsky seems pessimistic that this approach can work in time and has proposed researching human intelligence augmentation instead, because maybe then the enhanced humans could solve alignment for us.

Also in the short term, there are steps that could be taken to reduce lesser harms, such as scamming. AI developers should have strict liability for harms caused by their AIs. This would discourage the publishing of the weights of the most powerful models. Instead, they would have to be accessed through an API. The servers could at least be shut down or updated if they start causing problems. Images/videos could be steganographically watermarked so abusers could be traced. This isn't feasible for text (especially short text), but servers could at least save their transcripts, which could be later subpoenaed.
Thank you very much. Why would liability for harms caused by AIs discourage the publishing of the weights of the most powerful models?

It should probably say "2023 Review" instead of "2022" at the top of LessWrong.

It is terribly confusing, but it should not. Each year we review the posts that are at least one year old, as such, at the end of 2023, we review all posts from 2022, hence "2022 Review".

For the voting system's point cost, that was the function that outputs the point costs (1,10,45) from the vote count (1,4,9), which is basically the same as (1,2,3)?
The first option costs 1, the second costs sum(1..4), and the third costs sum(1..9). So the idea is that each additional vote costs 1 more vote point than the previous one, and the cost for n votes is simply $\text{cost}(n)=\sum_{k=1}^{n}k$. I don't know where the formula comes from, however.
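A minimal sketch of that cost function (a triangular-number sum, assuming my reading of the scheme is right; note it reproduces the point costs 1, 10, 45 for 1, 4, and 9 votes quoted above):

```python
def vote_cost(n: int) -> int:
    """Total point cost of casting n votes on one item: 1 + 2 + ... + n = n(n+1)/2."""
    return n * (n + 1) // 2

# Point costs for 1, 4, and 9 votes, matching the thread above:
costs = [vote_cost(n) for n in (1, 4, 9)]  # [1, 10, 45]
```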
It's quadratic voting: https://vitalik.eth.limo/general/2019/12/07/quadratic.html 
My thought process on writing that comment was roughly: "This is quadratic voting, right? Let me check the Wikipedia page. Huh, that page suggests a formula where vote cost scales quadratically with vote number. Maybe I misremembered what quadratic voting is? Let me just comment with what I do remember." So the problem was that I'd only glanced at the Wikipedia article, and didn't realize that the simplified formula there, $\text{cost}(n)=n^2$, is either an oversimplification or an outright editing error where they drop a factor of $\frac{1}{2}$. The actual approximation of the quadratic voting formula (as explained in the linked Vitalik essay, which I'd apparently also read years ago but had mostly forgotten since) is $\frac{n^2}{2}$, as per this:

$$\text{cost}(n)=\sum_{k=1}^{n}k=\frac{n\times(n+1)}{2}\approx\frac{n^2}{2}$$

And @trevor, here's a quote from that essay on the motivation for this formula:
That is a surprisingly satisfying answer, thank you.
Ah, sorry for the confusion. Thanks!

I remember a Slate Star Codex post about a thought experiment that goes approximately like this:

  • In the past, AI systems have taken over the universe and colonized it completely
  • Those AI systems are in extremely strong multipolar competition with one another, and the competitive dynamics are incredibly complex
  • In fact, those dynamics are so complex and inviolable that they constitute whole new physical laws
  • So in fact we are just living "on" those competitive AI systems as a substrate, similar to how ecosystems have competing and cooperating cells as a su
...
Here you go: https://slatestarcodex.com/2014/07/13/growing-children-for-bostroms-disneyland/
Thank you!

@Habryka @Raemon I'm experiencing weird rendering behavior on Firefox on Android. Before voting, comments are sometimes rendered incorrectly in a way that gets fixed after I vote on them.

Is this a known issue?

I have not seen this! Could you post a screenshot?
before: [screenshot] after: [screenshot]

Here the difference seems only to be spacing, but I've also seen bulleted lists appear. I think, but can't recall for sure, that I've seen something similar happen to top-level posts.
Thank you! I will have someone look into this early next week, and hopefully fix it.

Hello, my name is Peter and recently I read Basics of Rationalist Discourse and iteratively checked/updated the current post based on the points stated in those basics:

I (possibly falsely) feel that moral (i.e. "what should be") theories should be reducible because I see the analogy with the demand that "what is" theories be reducible due to Occam's razor. I admit that my feeling might be false (and I know an analogy might not be a sufficient reason), and I am ready to admit that it is. However, despite reading the whole Mere Goodness from RAZ I cannot remem...

Hello! First of many comments as I dive into the AI Alignment & Safety area to start contributing.  

Very new to this forum and to AI in general; about to start my AI Safety & Alignment course to get familiar. Many posts on this forum feel advanced to me, but I guess that's how it is at the beginning.

Welcome! I hope you have a good time here, and if you run into any problems, feel free to ping the admin team on the Intercom chat in the bottom right corner.