If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Arbital links now come with fancy hover-previews!

Looks good! Wikipedia would also be nice :)

I'm Dale Udall, a self-taught GenX philosopher and Grey-tribe quokka. For twenty years, I've been living my life informed by an original naive philosophy I call Triessentialism. I plan to start making it public under this persona on this site, to distance the philosophy from all the other footprints I've left on the Internet.

Triessentialism is a fractal ontology. It can be used for philosophical realizations and reorganization. I've applied it to ethics, erisology, AI safety, economics, music theory, marketing, sociology, self-help psychology, and more.

I believe it could revolutionize the field of teaching people like myself on the autism spectrum how to thrive in society, and not just fail at passing as normal.

I believe it could bring some balance to our public discourse through greater inter-tribe understanding, for those willing to listen and think.

I believe it exists as the hidden bedrock of all solid, time-tested institutions and systems, and I consider myself a paleontologist of philosophy, finding the bones of the past, not an inventor.

My favorite fiction authors from my youth are Isaac Asimov and C.S. Lewis, and my favorite fiction authors in adulthood are Matthew Woodring Stover, Robert Heinlein, George Orwell, and comic book writer Joe Kelly. I've read and enjoyed HPMOR, Worm, The Last Unicorn and Watership Down. I'm an idealist and a romantic in the colloquial senses of those terms.

Well this is quite a tantalizing introduction.

Thank you. I'm currently playing with Excalidraw to create basic diagrams, since Venn diagrams are the best way to introduce the concepts. In fact, whenever I describe it with words, my goal is to simulate these Venns in my listeners' minds, so I'm better off just plopping them into the post.

Now I just have to figure out the best way to include these drawings in the posts. SVG? PNG? Excalidraw native JSON? I'm lurking and reading the faqs to figure that out.

When I turn it into a blog, it might be best to have my own little wiki because of the way my content and terminology are interconnected.

There are a bunch of sequences, like the value learning sequence, that have structured formatting in the sequence overview (the page the link goes to): something like a headline, a bunch of posts, another headline, a bunch more posts.

How is this done? When I go into the sequence editor, I only see one text field where I can write something which then appears in front of the list of posts.

Yeah, we have some truly horrifying and confusing admin-only UI that we haven't gotten around to polishing up for general consumption that allows us to set that styling. If you ever want anything like this for a sequence of yours, just ping us on Intercom, and we'll be happy to set it up for you.

I'm curious about that, but I assume that it's done by the LW team for sequences that they put the spotlight on. It happens for all three sequences that started the AF (including the one you're linking to).

I wrote a post about my (good) schooling experience in a democratic school, and I'd like to have someone read and comment on the draft before I publish it. If you're interested, send me a PM or reply in the thread. Thanks! :)

I have difficulty doing things on my own, but talking to people is easy. Would anyone like to regularly talk in ways that further AI safety research?

I remember a post where people discussed how much people's sense of consciousness differs, here on LessWrong. Does anybody know which post I'm pointing to?

A riddle (maybe trivial for you, LessWrongers, but I am still curious about your answers/guesses):

It is neither truth nor lie. What is it?

A rock. A question. A command. A mistake. A dream. "2+2=4" spoken by a zombie. The output of GPT-3. Simulacrum level 4. The fit of water to a puddle. The peacock's tail.

"Not even wrong."
"This sentence is a lie."

Folklore, propaganda, string theory and irresolvable self-referential statements.

The sentence, "The present king of France is bald."

The intercom seems to be down right now, but here's a bug: images can show up as very, very large in private message conversations: 

The gray box is the area of the conversation itself; the image is huge.

Yeah, sorry, I noticed this too a bit ago, but hadn't gotten around to a fix. It should be very easy to fix, though.

I'm in Moscow and, despite the free healthcare system and the first-search-result commercial clinic being heavily booked up for vaccination, lots of the non-first-search-result clinics (by the results of my quick roll call) actually have free slots available... the next business day. Call your Russian friends and get them vaccinated, or something.

https://www.mos.ru/city/projects/covid-19/privivka/ also contains places where you can get vaccinated, including some shopping malls where you can get vaccinated without booking in advance.

I saw Eliezer link a few times to an essay of his called The Meaning That Immortality Gives to Life, but it seems the essay has been removed. Does anyone know if he retracted it? If so, do you know why? If he didn't, do you know where to find it?

Archived. (My guess is that no one bothered to preserve all content/links from the old Singularity Institute website when moving to the new post-MIRI-rebranding website; your intelligence.org link was presumably the product of a search-and-replace operation and probably never worked.)


Perhaps this essay should be crossposted to LW to have a home (and so the links can lead somewhere).

It seems to already be on LW.

Edit: oops, looks like the essay was posted on LW in response to this comment.