Open & Welcome Thread - January 2021

by jsalvatier · 1 min read · 4th Jan 2021 · 26 comments


Open Threads · Site Meta · Personal Blog

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Hello everybody!

I have done some commenting & posting around here, but I think a proper introduction is never bad.

I was a Marxist for a few years, then I fell out of it, discovered SSC and thereby LW three years ago, and started reading the Sequences and the Codex (yes, you now name them together). I very much enjoy the discussions around here, and the fact that LW got resurrected.

I sometimes write things for my personal website about forecasting, obscure programming languages and [REDACTED]. I think I might start cross-posting a bit more (the two last posts on my profile are such cross-posts).

I endorse spending my time reading, meditating, and [REDACTED], but my motivational system often decides to waste time on the internet instead.

I'm looking for a science fiction novel that I believe I first saw mentioned on LessWrong. I don't remember the author, the title, or any of the characters' names. It's about a robot whose intelligence consists of five separate agents, serving different roles, which have to negotiate with each other for control of the body they inhabit and to communicate with humans. That's about all I can remember.

You may be thinking of Crystal Society? Best wishes, Less Wrong Reference Desk


While I'm here, if someone likes Ted Chiang and Greg Egan, who might they read for more of the same? "Non-space-opera rationalist SF that's mainly about the ideas" would be the simplest characterisation. The person in question is not keen on "spaceship stories" like (in his opinion) Iain M. Banks, and was unimpressed by Cixin Liu's "Three-Body Problem". I've suggested HPMoR, of course, but I don't think it took.

The tags / concepts system seems to be working very well so far, and the minimal tagging overhead is now sustainable as new posts roll in. Thank you, mod team!

I am doing an art criticism project that’s very important to me, and I’m looking for high-res digital versions of the art in the following books.

Help getting these via a university library, or pointers to where I could buy an electronic copy of any of these, would be much appreciated.

You wrote in markdown, but we have a WYSIWYG editor! Just highlight a piece of text to see the edit menu popup, and you can put the link in that way. Or use cmd-k. Anyway, FTFY.

Thanks, I forgot to make it clear I'm looking for digital versions.

I'm making an online museum of ethos (my ethos). I'm using good and bad art and commentary to make my ethos very visible through aesthetics.

Dileep George's "AGI comics" are pretty funny! He's only made ~10 of them ever; most are in this series of tweets/comics poking fun at both sides of the Gary Marcus - Yoshua Bengio debate ... see especially this funny take on the definition of deep learning, and one related to AGI timelines. :-)

I would love to have variable voting so I could give (or take) anywhere between one and my maximum vote strength. The way I'd do it: each click increases the vote strength by one, and a long press sets it to max strength (keep the current tooltip so people know). Then, to cancel the vote (whether positive or negative), there would be a small X to the side of the up/down buttons.

I know it has been discussed already, but I just wanted to give this as another data point. It happens to me a lot that I want to give a post more than 1 karma but less than 5, so I would use this a lot if it were possible.
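As a rough sketch of the interaction being proposed (all names here are invented for illustration; this is not LessWrong's actual voting code), the click/long-press/cancel behavior could be modeled as three pure state transitions:

```typescript
// Hypothetical model of the proposed variable-vote behavior.
type Vote = { direction: 1 | -1 | 0; strength: number };

// Each click in a direction bumps strength by one, capped at the user's
// maximum vote strength; clicking the opposite direction starts over at 1.
function click(vote: Vote, direction: 1 | -1, maxStrength: number): Vote {
  if (vote.direction === direction) {
    return { direction, strength: Math.min(vote.strength + 1, maxStrength) };
  }
  return { direction, strength: 1 };
}

// A long press jumps straight to the maximum strength.
function longPress(direction: 1 | -1, maxStrength: number): Vote {
  return { direction, strength: maxStrength };
}

// The small "X" clears the vote entirely.
function cancel(): Vote {
  return { direction: 0, strength: 0 };
}
```

One nice property of this design is that every vote state is reachable with at most a few clicks, while the long press preserves the current one-gesture strong vote.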

I've started browsing and posting here a bit so I should introduce myself.

I've been writing online for around five months and put some draft chapters of a book on my website. The objective is to think about how to immunise a society from decline, which basically means trying to find the right balance between creativity and cohesion (not that they are inversely related—it’s quite possible to have neither). Because I can’t buy into any worldview out there today, I’ve tried to systematise my thoughts into a philosophy I call Metasophism. It’s a work in progress, and most of what I read and write links into that in some way.

Prediction mechanisms are something commonly discussed here which I’ve partly integrated, but I need to think more about them, which I expect this site will help with.

How did I end up here? A commenter on an early post of mine mentioned LW, which I didn’t then frequent even though I was familiar with some of the writers here. That caused me to check it out, and the epistemic culture caused me to stick around.

Do the newest numbers indicate that the new Covid strain isn't that bad after all, for whatever reason? If not, why not?

Edit: Zvi gave a partial answer here.

Hello, lesswrongsters (if I can call you that),

What do you think about the following statement: "You should be relatively skeptical about each of your past impressions, but you should be absolutely non-skeptical about your most current one at a given moment. Not because it was definitely true, but because there is practically no other option."

Please, give me your opinions, criticism, etc. about this.

The first sentence of the quote sounds like a mix of the Buddhist concept of the now plus the financial concept of how the current price of a security reflects all information about its price.

OK, I'll put it a bit more straightforwardly.

My Christian friend claimed that atheists/rationalists/skeptics/evolutionists cannot trust even their own reason, because (in his view) it is the product of imperfect brains.

So I wanted to counter-argue reasonably, and my statement above seems to me relatively reasonable and relevant. I don't know whether it would convince my Christian friend, but it at least convinces me :) .

Thanks in advance for your opinions, etc.

atheists/rationalists/skeptics/evolutionists cannot trust even their own reason

Well, I don't. But at the end of the day, some choices need to be made, and following my own reason seems better than... well, what is the alternative here... following someone else's reason, which is just as untrustworthy.

Figuring out the truth for myself, and convincing other people are two different tasks. In general, truth should be communicable (believing something for mysterious inexplicable reasons is suspicious); the problem is rather that the other people cannot be trusted to be in a figuring-out-the-truth mode (and can be in defending-my-tribe or trying-to-score-cheap-debate-points mode instead).

Part of being a good skeptic is being skeptical of one's own reasoning. You need to be skeptical of your own thinking to be able to catch errors in it.

Consider how justified trust can come into existence. 

You're traveling through the forest and come to a moldy-looking bridge over a ravine. It looks a little sketchy, so naturally you feel distrustful of it at first. You look at it from different angles, shake it a bit, and put a bit of weight on it. Eventually, some deep unconscious part of you will decide either that it's untrustworthy and you'll find another route, or that it's trustworthy and you'll cross the bridge.

We don't understand that process, but it's reliable anyway.

Yes, but it can happen that, over the course of our individual existence, two "justified opinions" inconsistent with each other occur in our minds. (And if they didn't, we would be doomed to believe all the flawed opinions from our childhood, with no possibility of updating them, because we would reject any new, inconsistent opinion.)

And moreover, we are born with some "priors" which are not completely true but relatively useful.

And there are some perceptual illusions. 

And Prof. Richard Dawkins claims that hallucinations that could make us think a miracle is happening are relatively frequent (if I understood him correctly). By relatively frequent I mean that probably any healthy person could experience a hallucination at least once in a lifetime (often without realizing it).

And of course, there are mental fallacies and biases.

And if the process is reliable, why do different people have different opinions and inconsistent "truths"?

Thus, I think that the process is relatively reliable but not totally reliable. 


PS: I am relatively new here. So hopefully, my tone is not aggressively persuasive. If any of you have a serious problem with my approach, please criticize me.

>Thus, I think that the process is relatively reliable but not totally reliable. 

Absolutely. That's exactly right. 

>My Christian friend claimed that atheists/rationalists/skeptics/evolutionists cannot trust even their own reason, because (in his view) it is the product of imperfect brains.

It sounds like there's a conflation between 'trust' and 'absolute trust'. Clearly we have some useful notion of trust, because we can navigate potentially dangerous situations relatively safely. So using plain language, it's false to say that atheists can't trust their own judgement; clearly they can in some situations. Are you saying atheists can't climb a ladder safely?

It sounds like he wants something to trust in absolutely. Has he faced the possibility that that might just not exist?

The Invisible People YouTube channel interviews homeless people. At the end, the interviewer always asks what the interviewee would do if they had three wishes. Usually the answers are about receiving help with money, drugs, or previous relationships. Understandably.

But in Working Actor Now Homeless in Los Angeles, the guy's mind immediately went to things like ending deadly disease and world peace. Things that would help others, not himself.

And it wasn't as if he debated doing things for himself vs. for others. My read is that the thought of doing things for himself didn't even really get promoted to conscious attention. It didn't really occur to him. It looked like it was an obvious choice to him that he would use the wishes to help others.

One of the more amazing things I can recall experiencing. It gave me a much needed boost in my faith in humanity.

Another interpretation would be that the system trains people in Los Angeles such that only certain answers to the question "what would you do if you had a wish" are allowed, and the allowed answers aren't selfish things.

If an actor goes to a casting and gets asked for wishes, ending disease and world peace are the safe wishes; drugs aren't.

Hm, that does seem plausible. I'm curious what others think.

There's hypertext, but there's no link

Thanks. Fixed.