Open & Welcome Thread September 2021

by Horatio Von Becker · 1 min read · 1st Sep 2021 · 67 comments


Open Threads, Welcome Threads
Personal Blog

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Hey everyone! I'm Ayelet, and I'm currently in my second-to-last year of Germany's equivalent of high school.

I discovered LessWrong only about two months ago, after I saw someone mention HPMOR in their "top ten life-changing books ever" list in a reddit thread. Needless to say, I was really confused and curious: what crazy kind of fanfiction permanently affects people's lives? So I looked it up online and started reading. I stumbled upon LessWrong shortly after, while going down the rationality rabbit hole a bit further. And so, here I am, and I genuinely believe that discovering this place is one of the greatest things to happen to me so far.

Arriving here felt like seeing sense in the world for the first time; my parents and brother aren't involved in science or academics at all (unless you count the "alternative medicine" and pseudo-science my mum regularly gets from facebook). I genuinely wasn't aware there even was a place like LessWrong, or that discussions could even be so civil, reasonable and informative.

I know I still have a lot to learn, even more to un-learn, but I'm looking forward to the journey. Two months already made me notice countless small, positive changes in the way I think and see myself and the world. (The only troublesome side effect: school has become much less tolerable as a whole. I'm truly trying to get through it with top grades, but now that I see how much time I waste there, it's much harder to try and be interested in the actual material...)

When I was fourteen, I decided to become a politician, mostly out of frustration with where the world is headed and how little I could do to prevent it. I'm still very much interested in trying to help save the world from going to hell in the next few decades, but I'm very uncertain as to whether my current job aspirations are really the best way to reach that goal.

Regardless; I'm very glad to be here, and excited to contribute in whichever way I can.

Hello and welcome!

I felt much warmth reading your intro. I remember how magical LessWrong was for me when I first discovered it. (Now, almost a decade in, I have a different feeling towards it, but I remain deeply proud to participate in this community.)

All of which is to say that I feel vicarious excitement for the experiences you have ahead of you. I look forward to meeting you in person one day. : )

(The only troublesome side effect: school has become much less tolerable as a whole. I'm truly trying to get through it with top grades, but now that I see how much time I waste there, it's much harder to try and be interested in the actual material...)

I think this would not have helped me very much, so YMMV, but one frame you might want to consider is that of half-assing [school] with everything you've got.

Thanks a lot for the kind words!

I looked into the half-assing-thing, and found that it might actually be somewhat helpful for me (in the sense that I'll stop putting so much effort in the subjects that aren't as relevant/rewarding when it's not necessary). This is something I've struggled with for quite a while, so thank you for the resources as well, I appreciate the effort :)

Welcome from a fellow German here! IIRC I also stumbled on Less Wrong via HPMoR, though back then the story wasn't even finished yet.

I must say, I'm impressed with the quality of your English writing at that age!

If you're ambitious and driven to choose a career to make the world a better place, check out the resources at 80,000 Hours from the Less-Wrong-adjacent Effective Altruism community. They've done lots of research and thinking into various career paths and their expected impacts, requirements, etc. They're not perfect, in that they e.g. expect a lot from their readers, and below a certain level of ambition and conscientiousness much of their advice might not be particularly applicable. But now might be a good time to check whether their resources could be useful to you.

If you think you could benefit from chatting with someone to get a rough overview of the landscapes of Less Wrong or effective altruism, I'm available to chat. I'm mostly a longtime lurker in the community, but I do have enough familiarity with it that I can at least point towards further resources on most topics.

Thanks for the offer; if I end up having any questions, I'll take you up on it.

I also looked into the 80,000 Hours community, and although I didn't get very far yet, it seems quite promising. It's definitely a lot to take in, but I think you're right; it would be useful for me now to at least dive into it for a few hours and then decide whether or not to continue.

I appreciate the compliment, as well -- I've been working on developing sufficient writing skills for a while now, and am very happy to hear it pays off.

Politicians still have a lot of power in our society, so it's one way to create change. 

Given what you wrote about your background, I think there's a good chance that you currently don't have a good source of information about how people become politicians in Germany.

German politics differs from US politics in that money isn't central to getting a job as a politician. What's central is how the people who go to the meetings of the party for which you want to be elected see you.

If you want to become a politician, it's good to join one of the parties that have representatives in your state (Bundesland) early, and to participate in discussions.

There's a lot of tension between moving toward the views held by the other people in your party, which is partly necessary to be accepted and seen as trustworthy, and contributing your own views. If you have detailed ideas and write them up in a motion, and the other people support that motion, that's one of the ways to earn a reputation as someone valuable to have around. Depending on the local environment, it can also be very important whom you build relationships with, in addition to your general reputation for being thoughtful.

You're right - I don't have even half as much of a clue about the whole process as I'd like to have, yet. I very much appreciate that you took the time to explain the basics to me.

Looking for reasonably reliable sources, joining a party, and building a certain reputation there should be extremely high on my list of priorities right now. I'll be looking to check them off as soon as possible.

Thanks a lot!

Welcome! That's very similar to how I arrived here (also discovered HPMOR in german high-school, also ran into LessWrong afterwards and started everything else Eliezer had written), so I hope you end up having a good time. I hope I get to see you around more! :)

Hello there! I'm Kaloyan ("Kalo") and I recently joined LessWrong. I was reminded of the platform's existence in an episode of the Your Undivided Attention podcast. I actually first found the site a couple of years ago (can't even remember how -- searching for Zettelkasten content perhaps?), but did not get involved because I found it very overwhelming. In fact, I still do -- there is so much content, on so many topics I believe to be important, that it feels impossible to become a part of the community. I realize that's just my little voice of worry talking, so now I'm on a mission to prove myself wrong, starting with this post.

I was born and raised in Sofia, Bulgaria and recently graduated from the University of Southampton (UK) with a BSc in Computer Science. After working on my dissertation in my final year, I was inspired to further my research into complex networks and evolutionary game theory, which is what I am doing right now. I am also applying for PhDs and Masters in Europe, hoping to move to a new country soon.

Other than that, I spend a lot of time working on my personal development and the quality of my work. I enjoy experimenting with my productivity, I feel in my element when working to understand and explain complex topics, and I'm just starting to dip my toes into some popular philosophy. I enjoy writing and want to become a better communicator (I've started off by writing on Medium).

Now before I go, here's a flurry of random facts: I did Kung Fu for two years, I am addicted to (read: in love with) green tea, the best shows I've seen in the past few years are Dark and Lupin, I have started my own company that failed silently, and if I wasn't doing research I'd become a data artist.

Looking forward to taking part in the conversations on LessWrong. See you in the comments!

there is so much content, on so many topics I believe to be important

To get through the historical content faster, I would suggest reading the original "Sequences" in the book form, and then the 2018 community essays. (That's still a lot of text, but ultimately less than trying to drink from the firehose of LessWrong front page and wondering how much you still missed.)

The Sequences written by Eliezer Yudkowsky are available here. Note that you can also "buy" the e-books for $0.

The 2018 community essays are here as a paper book, but you can find the list of contents here, and then find the links to the individual essays here.

Thank you for the advice!

Greetings, LWers!

I've finally found the time (read: made up my mind) to write this, so here I am.

I've noticed that many new members have stumbled upon the rationalist community because of HPMOR. As I never read fanfiction sites (or sites talking about fanfiction sites), my case was quite different. For some reason I distinctly remember the ridiculously long chain of links that brought me here, so I'll post the whole list just to give an idea of how long it can take to realize the existence of a site like LessWrong:

  1. Search for insights about the P=NP conjecture during my PhD.
  2. Find the P-versus-NP page, a very good summary that also links to this excellent post by Scott Aaronson.
  3. Start reading Scott Aaronson's blog.
  4. Scott Aaronson mentions Unsong (in this post).
  5. Start reading Unsong.
  6. Return to reading Scott Aaronson's blog.
  7. Scott Aaronson dedicates this post to the infamous NYT article about Scott Alexander.
  8. Fail to realize that Scott Alexander is the author of Unsong.
  9. Scott Aaronson directly quotes I Can Tolerate Anything Except The Outgroup (in this post).
  10. Follow the link and read my first SSC post.
  11. Start reading SSC from some top posts.
  12. Still fail to realize that Scott Alexander is the author of Unsong.
  13. Finally notice the "Scott also writes Unsong" note in the about page.
  14. Continue reading SSC.
  15. SSC mentions LessWrong.
  16. Finally land on LW frontpage.
  17. Start reading the Sequences.
  18. Start reading the Codex.
  19. Start reading HPMOR (directly from LW).
  20. Finally sign up (after several months of lurking).

I'm not sure which conclusion we can draw from this. Maybe that wondering about P=NP has a small chance of making you a better rationalist. Maybe that you can spend more than a year following a computer science professor who declares himself on the fringes of the “rationalist movement” without realizing that a rationalist movement even exists (in my defense, I started reading Shtetl-Optimized in mid-2019, and I didn't exactly dig through the older posts... still, it took me more than a year to finally land on LW). In hindsight, many posts from Scott Aaronson are quite obviously related to rationalist concepts. For example, I first learned about the classical paperclip maximizer from Shtetl-Optimized (here), but even googling "paperclip maximizer" I didn't land on the rationalist blogosphere; I just learned the classical paperclip maximizer description. It may be worth mentioning that after reading the relevant Wikipedia entry, my first thought was "an amoral paperclip maximizer can fit perfectly into my Planescape campaign", which indicates that maybe I'm a bit too addicted to D&D.

Welcome! That chain of links was fun to read :)

What that guy said!

Coming across Scott Aaronson by way of searching for info about P=NP: that happened to me a long time ago. At the time, my reaction to his proposal of adopting 'physics doesn't enable P=NP' as a law was something like 'What? Don't you need some reason to assert that it's impossible?' (Though I did wonder if that's where thermodynamics came from.)

Welcome! I always enjoy reading people's journey to here, and am looking forward to seeing you around on here and other rationalist places on the internet! (or in person, if that ever occurs) :)

Hi! I'm Helaman Wilson, I'm living in New Zealand with my physicist father, almost-graduated-molecular-biologist mother, and six of my seven siblings.

I've been homeschooled, as in "given support, guidance, and library access," for essentially my entire life, which currently clocks in at nearly twenty-two years from birth. I've also been raised in the Church of Jesus Christ of Latter-Day Saints, and, having done my best to honestly weigh the evidence for its doctrine-as-I-understand-it, I find myself a firm believer.

I found the Rational meta-community via the TvTropes>HPMOR chain, but mostly stayed peripheral due to Reddit's TOS, the lack of fiction community on LessWrong, and somewhat-borne-out concerns that I would not actually be accepted here. I was an active participant in Marked for Death, but left over GMing disagreements about two years in.

My biggest present concern with LessWrong as a community is the Karma system, which is not only one-dimensional, but not even a specific axis. I don't mind one-dimensional praise, but I hate inarticulate criticism. Deeply awful feeling. I always try to give my best effort, you know?


If you want to place me elsewhere, it's almost always a variant of Horatio Von Becker, or LordVonBecker on Giant in the Playground, due to the shorter character limit.

Karma for most things is just pretend points (a perk of our small size), so don't feel too stressed. For new-ish posts, though, votes should be primarily interpreted as voting on what you want to appear highly when people look at the front page.

My biggest present concern with LessWrong as a community is the Karma system, which is not only one-dimensional, but not even a specific axis. I don't mind one-dimensional praise, but I hate inarticulate criticism. Deeply awful feeling. I always try to give my best effort, you know?

I share this concern, but am also at a loss for what might be better. I thought, briefly, of Slashdot's system where there are various reasons for upvotes (funny, insightful, etc), but that always turned out to be a bit messy. 

I've suggested before that when someone downvotes it might prompt to enter a reason, which is what I'm more curious about. 

I've also wondered before if I could get admin feedback on why something wasn't (or was) Frontpaged. But, as if they were reading my mind, a feature like that launched this week. :)

I would like it if there were a well-researched LessWrong post on the pros and cons of different contraceptives. Same deal with a good post on how to treat or prevent urinary tract infections, although I'm less excited about that.

  • I'd be willing to pay some from my private money for this to get done. Maybe up to £1000? Open to considering higher amounts.
  • It would mostly be a public service as I'm kind of fine with my current contraception. So, I'm also looking for people to chip in (either to offer more money or just to take some of the monetary burden off me!)

Examples of content that I would like to see included:

  • Clarity on the contraception and depression question. e.g. apparently theory says that hormonal IUDs should give you less depression risk than pills, but in empirical studies it looks like it's the other way around? Can I trust the studies?
  • Some perspective on the trade-offs involved. E.g. maybe I can choose between a 5% increased chance of depression vs. a 100% increased chance of blood clots. But maybe basically no one gets blood clots anyway, and then I'd rather take the increased blood clot risk! But because the medical system cares more about death than me, my doctor will never recommend me the blood clot one, or something like that.
  • If there wasn't already a post on this (but I think there is), info that it's totally fine to *not* take 7-day pill breaks every month, and that you can just take the pill continuously. (Although I think it might be recommended to take a short break every X months.)
  • Some realistic outlook on how much pain and effects on menstruation I should expect
  • Various potential benefits from contraceptives aside from contraception
  • On the UTI side: Is the cranberry stuff a myth or is it a myth that it's a myth or is it a myth that it's a myth that it's a myth?

Alternatively: If there actually already are really good resources on this topic out there, please let me know!

I think this would be really valuable and would be happy to pay $500 to a post that is good here.

This is a public service. I think you could write this up as a post/question for more visibility.

Thanks! I felt kind of sheepish about making a top-level post/question out of this but will do so now. Feel free to delete my comment here if you think that makes sense.

Hello! I've been lurking for a little while but finally decided to create an account, mostly because I had questions. But before I ask them: my name is Max, I'm 18 years old, and I want to do science for a living. I haven't decided yet what exact area is the most appealing to me, but one that I really like is theoretical astronomy (not sure if I spelled it right, since English isn't my native language). I came here from the HPMOR podcast, and I'm really glad to have discovered this community of like-minded people.

So, to my questions. What are the posts here? Are they just random users' thoughts, scientific articles, or both? I've read "Humans are not automatically strategic" and the post it was referring to, and from that I got the idea that people here exchange their thoughts on certain subjects, trying to learn more about them. But I still don't exactly understand how posts work; some of them are pinned and recommended, some aren't. Anyway, if you could explain how things work around here, I'd really appreciate it. Thank you all once again.

Hey, welcome. You might want to check out the About Page and FAQ.


I'm Daniel. I'm living in Japan and currently working on a SaaS product as the CTO of a startup. I have a blockchain background as well; specifically, I used to develop smart contracts on Ethereum, which kind of led me to this community. I found this community through the podcast Rationally Speaking, which I discovered when Vitalik Buterin (co-founder of Ethereum) was on it.

I’m a self-taught programmer so I don’t have experience in academia but I would like to be involved in this community and academia in general.  

I'm interested in a lot of topics that are talked about in this community, but I would especially like to learn more about how academia works, and what the dynamics are like in the context of the relation to startups, scientific/technological evolution, and evolution of society in general.
I was born in Canada and moved to Japan when I was one year old, so I'm looking forward to being involved with the LW community in Japan as well!


Do you have thoughts on Solidity as opposed to Vyper? I've been learning Chialisp, and after which I want to focus on Solidity.


The last time I worked on smart contracts was almost 2 years ago, so I'm definitely not qualified to give you advice now, but I hope this will be useful in some way.

I think Solidity has the most mature ecosystem of libraries/development tools, but newer languages like Vyper have additional security/modern features that were adopted by learning from (the mistakes of) older languages. (I might be wrong.)

Solidity shouldn't be a hard language to learn, so just giving it a try and seeing how you feel about it could be a good option!

Hey! Fellow crypto-native here, nice to meet you! Have intro-ed myself further down in this thread, happy to connect 1-to-1 if you feel that'd be helpful.

Hi Samuel! Nice to meet you too!

Yes, it would be nice if we can connect. 

I do have a similar experience in turning down a job. I got a job offer in the DeFi space, but I turned it down since it wasn't much aligned with what I want to do long term.

You can DM me anytime!

Are we going to be doing Petrov Day this year? I don't see anything currently about it here.

My guess is we are going to do some Petrov Day thing again, but not confirmed. We tend to usually plan it a week or two before it goes live.

FWIW I like this idea, and would be cool if there was some fanfare on the site for it.

Hey everyone! My name is Samuel, and I'm a 20-year-old biochemical engineering student from India. I've been fascinated by the positive and negative implications of superintelligence ever since I became a teen, hence my internet username ghosts_in_the_code, alluding to the movie I, Robot.


I have spent my lockdown deeply engrossed in / addicted to cryptocurrency and "DeFi", and am now quite knowledgeable on the same (my crypto twitter). I was even offered a full-time position with generous financial compensation, but ended up turning it down because I was growing increasingly unsure about whether that was the highest-societal-impact thing I could be doing.


Now trying to take a break and learn more about existential risks humanity faces. In general yeah just trying to navigate and figure out where I want to be in life and what I wish to optimise for.

I've recently become interested in DeFi, but I'm not entirely sure where to start. What exactly have you been doing with it?

Happy to connect on twitter. The short version is it's basically just leverage and shorting of crypto assets, with a lot of exotic speculative assets thrown in, some with a social status or community component.

Some people hope it can evolve into more.

Can you short a crypto asset on DeFi without exposing yourself to unlimited risk? How can you trust a dapp isn't a scam, or buggy or insecure? Are there any trustworthy derivatives like futures or options?

Futures aren't live on ethereum - maybe check Solana or something. Covered options are trivial to build on ethereum, but no liquidity yet - check out Hegic and Opyn.

There are plenty of risks; maybe you can check the "What are the risks?" section I wrote in this article:

Risk analysis in DeFi is new and needs to evolve. Smart contract risk relies on audits and time the app has been live without being hacked. Oracle risk I have no clue how you'll price. Default risk is a bit hard to study in current lending markets - Gauntlet's agent-based simulations for Aave's default risk are probably the closest you'll get. 

I'm trying to find an article on LessWrong. I swear I read it, but I can't find it via Google.

It was a different analogy around Chesterton's fence, where the town comes together to discuss a recently erected lamp post. Everyone is unhappy for different reasons: some people want it to be taller and brighter, some people want it to be shorter and dimmer, and some people want it removed so they can do evil things in the dark. Then a monk appears and tells everyone that what they need to do is think about what it means to have light.

Then a mob forms, tears the light post down, someone gets stabbed maybe or robbed. And then everyone has to sit there and think about what happened, and what it means to have light, but now they have to do it in the dark.


Did I dream this up? I can't find it anywhere.

This is a long shot, and a completely different metaphor, but are you perhaps thinking about the Parable of the Dammed?

It wasn't but you helped!

All I needed to fix my googling was the word Parable :). Turns out it was from Chesterton's own writings:

"Suppose that a great commotion arises in the street about something, let us say a lamp-post, which many influential persons desire to pull down. A grey-clad monk, who is the spirit of the Middle Ages, is approached upon the matter, and begins to say, in the arid manner of the Schoolmen, “Let us first of all consider, my brethren, the value of Light. If Light be in itself good—” At this point he is somewhat excusably knocked down. All the people make a rush for the lamp-post, the lamp-post is down in ten minutes, and they go about congratulating each other on their un-mediaeval practicality. But as things go on they do not work out so easily. Some people have pulled the lamp-post down because they wanted the electric light; some because they wanted old iron; some because they wanted darkness, because their deeds were evil. Some thought it not enough of a lamp-post, some too much; some acted because they wanted to smash municipal machinery; some because they wanted to smash something. And there is war in the night, no man knowing whom he strikes. So, gradually and inevitably, to-day, to-morrow, or the next day, there comes back the conviction that the monk was right after all, and that all depends on what is the philosophy of Light. Only what we might have discussed under the gas-lamp, we now must discuss in the dark."


I discovered LessWrong last week, after coming across a link to this post by @johnswentworth: Core Pathways of Aging. It was sent to me by a kind stranger on the lifespan Discord, where I was looking for science-based methods of increasing healthy lifespan.

So far I have found the field of longevity extremely difficult to navigate. Research papers, commercial interests, and anecdotal evidence are mixed together in one big bowl. Everyone is selling a book, YouTube channel, podcast, or supplement.

It reminds me of walking down a street of restaurants with barkers trying to entice passersby (usually tourists) to come in and dine. As one could expect, the food is overpriced and of poor quality.

The excellent text by johnswentworth led me to read many more of the articles posted on LessWrong, and I truly enjoy the calm, rational, and intelligent texts: searching for the truth, admitting when one is in doubt, and striving to be objective.

Truly a breath of fresh air in my badly polluted environment.

I intend to use this site for self-improvement, especially my English, and to try to learn methods for solving complex problems and optimizing processes in the factory where I work.

Thank you all. 

Some readers might have noticed that “Rough notes on the Sam Altman Q&A: GPT and AGI” is not currently on the site. The LW team has taken it down as a default while we and the author can decide whether it should be posted or not, given that maybe Sam requested that things like this not be shared and we generally ought to respect such requests.

There are a few important principles in conflict around the publishing of this post. I'm trying to figure out where the balance lies. Ideally I’d write up the current state of my thinking, but doing so is proving to be equivalent to reaching the final state of my thinking, so it’ll have to wait a bit longer.

Did Sam ask at the Q&A that it not be shared, or did he contact LW and ask that it be removed? If that's top secret, I'm okay without an answer. More just curious.

Question on LW norms: When do you strongly upvote your own comments? Never? Always? If you're very confident in the comment? If you think the comment is particularly valuable? If the comment was time-consuming to write?

Posts are strong-upvoted by default and comments are not. I usually stick with the defaults. I have strong-upvoted my own comments, because this is allowed, but I do so pretty rarely, much less often than I strong-upvote comments from others. You don't get any extra Karma for it, and may get downvoted even more if people think the score is too high. I feel like I need a higher threshold for mine. Strong upvotes as a feature are valuable (in part) because they are optional and rare. I don't strong upvote because a comment was time consuming, for myself or others. I might if I think the comment is particularly valuable and wouldn't be noticed otherwise, or if I feel it was downvoted unfairly, to give others a chance to notice it and vote.

I personally have never upvoted my own comment, though not because of some principled objection to doing it. I think as long as you don't do it all the time, it can be useful when you think a comment is particularly important/relevant/whatever and you think people should read it. Being confident in the comment or the comment being time-consuming don't seem like good reasons to upvote your own comment. Also, my guess is you might get more downvotes if people think you shouldn't have strongly upvoted your own comment; I'm not sure to what extent, though.

Of course, the norm would be very different if comments were automatically strongly upvoted like posts, so even if this is the current norm it doesn't mean it's the one that "should" be.

Something like this seems right. It's not the worst thing ever to do it, but it's a bit of a faux pas in my books I'd only do if it really did seem important.

(mod note: I edited this post to have the standard Open Thread text)

I remember someone (Paul Christiano, I think?) commenting somewhere on LessWrong that Ian Goodfellow got the first GAN working on the same day he had the idea, with a link to an article.

Does anyone happen to remember that comment, or have a link to that article?

Not an article, but I have a link to an interview where Ian tells that story (timestamp around 3:40 if you only want that part, 2:44 if you want it as part of the complete story).

Any chance we could get a "book review" icon to decorate post titles in lists so that people don't feel they need to flag them with "[book review]..."? This could be based on the presence of the "book review" tag.

That's an interesting idea! I'll think about it.

Hello, I would like to ask whether there is any summary/discussion of the necessary/sufficient criteria by which a reason for something (a belief, action, goal, ...) counts as sufficient. If not, I would like to discuss it.

I'm sure there are people here who could give a better answer. My take, from the rationalist/Bayesian perspective, is that you have a probability assigned to each belief based on some rationale, which may be subjective and involve a lot of estimation.

The important part is that when new relevant evidence about that belief is brought to your attention, you "update": in the Bayesian sense, asking "given the new evidence B, and the prior probability of my old belief A, what is the probability of A given B?"

But in practice that's really hard to do because we have all of these crazy biases. Scott's recent blog post was good on this point
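The update step described above can be sketched in a few lines of code; this is a minimal illustration with made-up numbers, not a prescription for how to reason:

```python
# Minimal sketch of a single Bayesian update. The probabilities here are
# hypothetical, chosen only to illustrate the mechanics of Bayes' rule.

def bayes_update(prior_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B) from a prior on A and the likelihoods of evidence B."""
    # Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
    p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    return p_b_given_a * prior_a / p_b

# Prior belief of 0.3; the evidence is 4x more likely if A is true.
posterior = bayes_update(0.3, 0.8, 0.2)
print(round(posterior, 3))  # 0.632
```

Note that moderately strong evidence moved the belief from 0.3 to about 0.63, but not to certainty; each new piece of evidence just shifts the probability again.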

OK, thanks, but then one of my additional questions is: what is the reasonable threshold for the probability of my belief A given all available evidence B1, B2, .., Bn? And why?

Are you suggesting that beliefs must be binary? Either believed or not? E.g. if the probability of truth is over 50% then you believe it and don't believe if it's under 50%? Dispense with the binary and use the probability as your degree of belief. You can act with degrees of uncertainty. Hedge your bets, for example.

OK, thanks. This is very interesting, and correct in theory (I guess), and I would be very glad to apply it. But before taking my first steps on my own by trial and error, I would like to know some best practices, if any are available at all. I strongly doubt this is common practice in the general population, and I slightly doubt it is common practice even for a "common" attendee of this forum, but I would still like to make it my usual habit.

And the greatest issue I see is how to talk to the common people around me about common uncertain things that are probabilistic, when they actually think of those things as if they were certain. Should I try to gradually and unnoticeably change their paradigm? Or should I use double language: probabilistic inside, but confident outside?

(I am aware that these questions might be difficult, and I don't necessarily expect direct answers.)

I'm not sure what to say besides "Bayesian thinking" here. This doesn't necessarily mean plugging in numbers (although that can help), but develop habits like not neglecting priors or base rates, considering how consistent the supposed evidence is with the converse of the hypotheses and so forth. I think normal, non-rationalist people reason in a Bayesian way at least some of the time. People mostly don't object to good epistemology, they just use a lot of bad epistemology too. Normal people understand words like "likely" or "uncertain". These are not alien concepts, just underutilized.

I'm not sure what you mean by “threshold for the probability of belief in A.”

Say A is “I currently have a nose on my face.” You could assign that .99 or .99999, and either one expresses a lot of certainty that it's true; there's not really a threshold involved.

Say A is “It will snow in Denver on or before October 31st 2021.” Right now, I would assign that a .65 based on my history of living in Denver for 41 years (it seems like it usually does).

But I could go back and look at weather data and see how often that actually happens. Maybe it’s been 39 out of the last 41 years, in which case I should update. Or maybe there’s an El Niño-like weather pattern this year or something like that… so I would adjust up or down accordingly.

The idea being, over time, by encountering evidence and learning to evaluate its quality, you would get closer to the “true probability” of whatever A is.
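The "go back and look at the weather data" step has many possible formalizations; one of the simplest is Laplace's rule of succession, sketched below using the hypothetical 39-out-of-41 snow count mentioned above:

```python
# One simple way (of many) to turn a historical count into a forecast:
# Laplace's rule of succession, equivalent to a uniform Beta(1, 1) prior
# on the yearly rate, updated on the observed counts.

def rule_of_succession(successes, trials):
    """Estimated probability the event happens next time: (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# Snow in Denver by Oct 31st in 39 of the last 41 years (numbers from above):
print(round(rule_of_succession(39, 41), 3))  # 0.93
```

So the historical data alone would push the .65 gut estimate up to roughly .93, before adjusting for anything year-specific like an El Niño pattern.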

Maybe you’re more asking about how certain kinds of evidence should change the probability of a belief being true? Like how much to update based on the evidence presented?