Hello! I’ve mostly been lurking around on LessWrong for a little while and have found it to be a good source of AI news and other stuff. I like these posts; other parts of the site can sometimes feel somewhat intimidating. I hope to be commenting more on LessWrong in the future!
Confession: I've sometimes been getting LessWrong users mixed up, in a very status-damaging way for them.
Before messaging with lc, I mixed up his writings and accomplishments with lsusr's (e.g. I thought the same person wrote Luna Lovegood and the Chamber of Secrets and What an actually pessimistic containment strategy looks like).
I thought that JenniferRM did demon research and used to work at MIRI, but I had mixed her up with Jessicata.
And, worst of all, I mixed up Thane Ruthenis with Thoth Hermes, causing me to think that Thane Ruthenis wrote Thoth's downvoted post The truth about false.
Has this happened to other people? The main thing is that I just didn't notice the mixup at all until ~a week after we first exchanged messages. It was just a funny manifestation of me not really paying much attention to some new names, and it's an easy fix on my end, but the consequences are pretty serious if this happens in general.
Yep, happened to me too. I like the LW aesthetic, so I wouldn't want profile pics, but I think personal notes on users (like Discord has) would be great.
It would save me a fair amount of time if all LessWrong posts had an "export BibTeX citation" button, exactly like the feature on arXiv. This would be particularly useful for Alignment Forum posts!
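For instance, the exported entry for a post might look something like this (a sketch only; the entry type and field choices are my guesses, since BibTeX has no standard format for forum posts, and the key and URL are placeholders):

```bibtex
% Hypothetical export for an example post; key, fields, and URL are placeholders.
@misc{author2024example,
  title        = {Example Post Title},
  author       = {Author, Example},
  year         = {2024},
  howpublished = {LessWrong},
  url          = {https://www.lesswrong.com/posts/abc123/example-post-title}
}
```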
Hello everyone!
After several years seeing (and reading) links to LessWrong posts scattered in other areas of the internet, I decided to sign up for an account today myself and see if I can't find a new community to contribute to here :)
I look forward to reading, writing, and thinking with you all in the future!
I want to express appreciation for a feature the Lightcone team implemented a long time ago: Blocking all posts tagged "AI Alignment" keeps this website usable for me.
Hello, I came across this forum while reading an AI research paper where the authors quoted from Yudkowsky's "The Hidden Complexity of Wishes." The linked source brought me here, and I've been reading some really exceptional articles ever since.
By way of introduction, I'm working on the third edition of my book "Inside Cyber Warfare," and I've spent the last few months buried in AI research, specifically in the areas of safety and security. I view AGI as a serious threat to our future for two reasons. One, neither safety nor security has ever been prioritized over profits by corporations, dating all the way back to the start of the Industrial Revolution. And two, regulation has only ever come to an industry after a catastrophe or a significant loss of life has occurred, not before.
I look forward to reading more of the content here, and engaging in what I hope will be many fruitful and enriching discussions with LessWrong's members.
I notice I am confused.
I have written what I think is a really cool post: Announcing that I will be using prediction markets in practice in useful ways, and asking for a little bit of help with that (mainly people betting on the markets). But apparently the internet/LessWrong doesn't feel that way. (Compare to this comment of mine, which got ~4.5 times the upvotes and is basically a gimmick. In general, I'm really confused about what will get upvoted here and what will be ignored or downvoted, even after half a decade on this site.)
I'm not, like, complaining about this, but I'd like to understand why this wasn't better received. Is it:
Feedback from me: I started reading the post, but it had a bunch of huge blockquotes and I couldn't really figure out what the post was about from the title, so I navigated back to the frontpage without engaging. In particular, I didn't understand the opening quote, which didn't have a source, or how it was related to the rest of the post (in like the 10 seconds I spent on the page).
An opening paragraph that states a clear thesis or makes an interesting point or generally welcomes me into what's going on would have helped a lot.
Hey, I've been reading stuff from this community since about 2017. I'm now in the SERI MATS program where I'm working with Vanessa Kosoy. Looking forward to contributing something back after lurking for so long :P
I hope it's not too late to introduce myself, and I apologize if it is. I'm Miguel, a former accountant who decided to focus on researching/upskilling to help solve the AI alignment problem.
Sorry if I confused people here about what I was trying to do these past months with my posts exploring machine learning.
Feature suggestion: unexplained strong downvotes have bothered people for a long time, and requiring a comment in order to strongly downvote has been suggested several times before. I agree that this is too much to require, so I have a similar but different idea. When you strong-vote (in either direction), you'll get a popup with a few reasons to pick from for why you chose to vote strongly (a bit like the new reacts feature). For strong downvotes it may look like this:
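Roughly like this, in code (a sketch only; the specific reason strings below are placeholders I made up, not an actual mock-up):

```python
# Placeholder sketch of the reason-tagging idea; the reason strings are
# invented examples, not a real LessWrong feature.
STRONG_DOWNVOTE_REASONS = [
    "Poorly reasoned or factually wrong",
    "Unclear or hard to follow",
    "Uncivil or bad-faith",
    "Off-topic for this thread",
]

def record_strong_downvote(voter_id: str, comment_id: str, reason: str) -> dict:
    """Attach the chosen reason to a strong downvote (sketch only)."""
    if reason not in STRONG_DOWNVOTE_REASONS:
        raise ValueError(f"unknown reason: {reason!r}")
    return {"voter": voter_id, "comment": comment_id,
            "vote": "strong_down", "reason": reason}
```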
Yeah, I do think that's not super crazy. I do think that it needs some kind of "other" option, since I definitely vote for lots of complicated reasons, and I also don't want to be too morally prescriptive about the reasons for why something is allowed to be downvoted or upvoted (like, I think if someone can think of a reason something should be downvoted that I didn't think of, I think they should still downvote, and not wait until I come around to seeing the world the way they see it).
Seems worth an experiment, I think.
I've been a lurker here for a long time. Why did I join?
I have a project I would like to share and discuss with the community. But first, I would like to hear from you guys. Will my project fit in here? Is there interest?
My project is: I wrote a book for my 6yo son. It is a bedtime-reading kind of book for a reasonably nerdy, intelligent, modern child.
Reading to young kids is known to be very beneficial to their development. There are tons of great books for every age and interest. My wife and I have read and enjoyed a lot of them with our boy.
However, I sti...
I'm not a fan of @Review Bot because I think that when people are reading a discussion thread, they're thinking and talking about object-level stuff, i.e. the content of the post, and that's a good thing. Whereas the Review Bot comments draw attention away from that good thing and towards the less-desirable meta-level / social activity of pondering where a post sits on the axis from "yay" to "boo", and/or from "popular" to "unpopular".
(Just one guy's opinion, I don't feel super strongly about it.)
Some thoughts about e/acc that weren't worthy of a post:
Hello! I'm Andy - I've recently become very interested in AI interpretability, and am looking forward to discussing ideas here!
Feature proposal: Highlights from the Comments, similar to Scott Alexander's version.
You make a post containing what you judge to be the best of other people's comments on a topic, or on an important period like the OpenAI incident. The comments' original karma isn't shown, but people can give them new votes, and the positive votes will still accrue to the writer instead of the poster.
This is because, like dialogues, writing LessWrong comments is good for prompting thought.
I don't know about highlighting other people's successful comments because the...
Hello! I'm a young accountant, studying to be a CPA. I've messed around in similar epistemic sandboxes all my life without knowing this community ever existed. This is a lovely place, reminds me of a short story Hemingway wrote called A Clean, Well-Lighted Place.
I came from r/valueinvesting. I'm very much interested in applying LW's latticework of knowledge toward improving the accounting profession. If there are Sequences or articles you think are relevant to this, I would eat them up. Thank you!
I think the Dialogue feature is really good. I like using it, and I think it nudges community behavior in a good direction. Well done, Lightcone team.
tl;dr: This year’s LWCW happens 13th-16th September 2024. Applications open April/May. We’re expanding to 250 attendees and are looking for people interested in assisting our Orga Team.
The main event info is here:
https://www.lesswrong.com/events/tBYRFJNgvKWLeE9ih/less-wrong-community-weekend-2024
And fragments from that post:
Friday 13th September - Monday 16th September 2024 is the 11th annual LessWrong Community Weekend (LWCW) in Berlin. This is the world’s largest rationalist social gathering, which brings together 250 aspiring rationalists fro...
If you watch the first episode of Hazbin Hotel (quick plot synopsis, Hell's princess argues for reform in the treatment of the damned to an unsympathetic audience) there's a musical number called 'Hell Is Forever' sung by a sneering maniac in the face of an earnest protagonist asking for basic, incremental fixes.
It isn't directly related to any of the causes this site usually champions, but if you've ever worked with the legal/incarceration system and had the temerity to question the way things operate, the vibe will be very familiar.
Hazbin Hotel Official Full Episode "OVERTURE" | Prime Video (youtube.com)
Almost all the blogs in the world seem to have switched to Substack, so I'm wondering if I'm the only one whose browser is very slow to load and display comments on Substack blogs. Or is this a Firefox problem?
Weird idea: an Uber Eats-like interface for EA-endorsed donations.
Imagine: You open the app. It looks just like Uber Eats. Except instead of seeing the option to spend $12 on a hamburger, you see the option to spend $12 to provide malaria medicine to a sick child.
I don't know if this is a good idea or not. I think evaluating the consequences of this sort of stuff is complicated. Like, maybe it ends up being a PR problem or something, which hurts EA as a movement, which has large negative consequences.
I am confused by the dialogue system. I can't quite tell whether it's telling me the truth but being maddeningly vague about it, or whether it's lying to me, or whether I'm just misunderstanding something.
Every now and then I get a notification hanging off the "bell" icon at top right saying something like "New users interested in dialoguing with you".
On the face of it, this means: at least one specific person has specifically nominated me as someone they would like to have a dialogue with.
So I click on the thing and get taken to a page which shows me (if ...
The Latin noun “instauratio” is feminine, so “magna” uses the feminine “-a” ending to agree with it. “forum” in Latin is neuter, so “magnum” would be the corresponding form of the adjective. (All assuming nominative case.)
Long time lurker introducing myself.
I'm a Music Video Maker who is hoping to use Instrumental Rationality towards accomplishing various creative-aesthetic goals and moving forward on my own personal Hamming Question. The Hammertime sequence has been something I've been very curious about but unsuccessful in implementing.
I'll be scribbling shortform notes which might document my grappling with goals. Most of them will be in some way related to the motion picture production or creativity in general. "Questions" as a topic may creep in, it's one of my favorit...
I don't like that when you disagree with someone, as in hitting the "x" for the agree/disagree voting, the "x" appears red. It makes me feel on some level like I am saying that the comment is bad when I merely intend to disagree with it.
One idea for improving the floating ToC comment tree: use LLMs to summarize the comments. Each comment can be summarized into 1-3 emoji (GPT-3 was very good at this back in 2020), and each separate thread can be given a one-sentence summary. As it is, the tree is rather bare: you can get some idea of its structure and, e.g., who is bickering with whom, but nothing else.
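A minimal sketch of the per-comment emoji step (assuming the OpenAI Python client; the model name and prompt are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def emoji_summary(comment_text: str) -> str:
    """Compress one comment into 1-3 emoji for the floating ToC (sketch only)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Summarize the following comment as 1-3 emoji. "
                        "Reply with the emoji only."},
            {"role": "user", "content": comment_text},
        ],
    )
    return response.choices[0].message.content.strip()
```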
Hello! I have been reading and lurking around this place for a long time. It seemed different from other social media and forums because of the level of discussion held here. It's daunting to finally create an account, but I hope to start commenting and posting later.
Also, I find it funny to consider websites as "places," although it makes sense to call them that.
Hey!
I'm an IT consultant who works very closely with an innovative AI-driven product. Or, to cut the bullshit, I help deploy and integrate customer service platforms filled to the brim with those very chatbots that annoy endless customers daily.
But that's just my day job. I'm working on a novel (or perhaps a piece of serialized fiction) that just might have silicon shoggoths in it. That's the kind of content the local fauna enjoys, right? It's a little too satirical to be entirely rational, but some recent twitter-chatter out of thi...
Hey there! I just got curious while reading Steven Pinker's book on rationality about the "rationality community" he keeps referring to; then I saw him mention trying to be "less wrong," searched it up, and stumbled upon this place. Just from browsing, you guys read and write a lot; maybe I should focus on increasing my attention span even more.
Whatever happened to AppliedDivinityStudies, anyway? It seemed to be a promising blog adjacent to the community, but I just checked back to see what the more recent posts were, and it looks to have stopped posting about a year ago.
Hi LessWrong! I am Ville. I have been reading LW/ACX and other rationalish content for a while and was thinking of joining the conversation. I have been writing on Medium previously, but have been struggling with the sheer amount of clickbait and low-effort content on the platform. I also don't really write frequently enough to justify a Substack or other dedicated personal blog.
However, as LW has a very high standard for content, I am unsure if my writing would be something people here would enjoy. Most recently, I wrote a series of two fables about...
I had a discussion recently where I gave feedback to Ben P. about the dialogue UI. This got my brain turning, and a few other recommendations for UI changes bubbled up to top of mind.
Vote display (for karma and agree/disagree)
Histogram of distribution of votes (tiny, like sparklines, next to the vote buttons). There should be four bars: strong negative vote count, negative vote count, positive vote count, strong positive vote count. The sum of all votes is less informative and interesting to me than the distribution. I want to know the difference between s...
Dear LW team, I have found that I can upvote/agreement-vote deleted comments, and doing so gives karma to the author of the deleted comment. Is it supposed to work like this?
By now there are several AI policy organizations. However, I am unsure what AI safety policy any of them would enforce if they had unlimited power. Is there a summary of that?
It is terribly confusing, but it should not be. Each year we review the posts that are at least one year old; as such, at the end of 2023 we review all posts from 2022, hence the "2022 Review".
I remember a Slate Star Codex post about a thought experiment that goes approximately like this:
Hello there. This seems to be a quirky corner of the internet that I should've discovered and started using years ago. Looking forward to reading these productive conversations! I am particularly interested in information, computation, complex systems, and intelligence.
Hello! I'm building a tool with a one-of-a-kind UI for LessWrong-style deep, rational discussions. I've always loved how writing forces a deeper clarity of thinking and focuses on getting to the right answer. The tool is called CQ2. It has a sliding-panes design with quote-level threads. There's a concept of "posts" for more serious discussions with many people, and there's "chat" for less serious ones, but both have a UI crafted for deep discussions. It's open source as well.
I simulated some LessWrong discussions there – they turned out to be mor...
Hello, my name is Peter, and recently I read Basics of Rationalist Discourse and iteratively checked/updated the current post based on the points stated there:
I (possibly falsely) feel that moral (i.e. "what should be") theories should be reducible, because I see an analogy with the demand that "what is" theories be reducible due to Occam's razor. I admit that my feeling might be false (and I know an analogy might not be a sufficient reason), and I am ready to admit that it is. However, despite reading the whole of Mere Goodness from RAZ, I cannot remem...
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.