If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Good afternoon, everyone. I'm happy to be here.

I've been following the rationality movement for a few years now, and I've been going back and forth on joining for about as long. My first introduction to LessWrong was through the posts on akrasia and techniques that might help with that. I followed that with reading some of SSC's greatest hits. Meditations on Moloch haunts me and I now see Molochian influence in a lot of places these days.

I think I'm joining now because I want to handle uncertainty better. Uncertainty gives me a knot in my chest and a buzzing noise in my mind, it makes me uncomfortable and demands my attention. I want the universe to have clear, sharp, definitive answers on everything that could be found with just enough experimentation, logical thinking, and equipment sensitivity... but that's not the way things work. I want to learn to sit with uncertainty, to not tie myself in knots trying to find the answer.

I'll likely read more than write. I'm just glad that a place like this exists.

Helion Energy has recently announced that their most recent fusion generator prototype has been running for 16 months at 100 million degrees with 10,000 operational cycles, with “upwards of 95%” energy harvesting and reclamation efficiency. This is an unusual setup because it recovers most of the energy used to compress the plasma, which means that the break-even point for net energy gain (i.e. Q > 1) is much lower. Some people on r/fusion have estimated Q factors of around 4, given public information, but, and here's the catch, Helion has not given any Q value for the machine, nor said whether it breaks even.

If these commenters are right—and I'm not going to dismiss that possibility outright, given that a lot of the subreddit members are professionals in the field—this would represent perhaps the weirdest possible timeline for fusion development I can imagine: that a public startup had achieved Q > 1 while people are still mocking fusion as forever decades out, and for well over a year they've just... chosen not to tell anybody.

Of course, it could just be that Q is less than 1, and this is merely a very exciting prelude.
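To make the recirculation point concrete, here is a toy energy-balance sketch. The 90% direct-conversion and 40% thermal-conversion figures are illustrative assumptions, not numbers from Helion's announcement:

```python
def required_q(recirculation_eff, conversion_eff):
    """Toy model: minimum fusion gain Q for net electricity out,
    if a fraction `recirculation_eff` of the input energy is
    recovered directly and fusion output is converted to
    electricity at efficiency `conversion_eff`."""
    return (1 - recirculation_eff) / conversion_eff

# With 95% recirculation and an assumed 90% direct conversion,
# engineering breakeven needs only a tiny fusion gain:
print(required_q(0.95, 0.90))  # ~0.056

# A conventional thermal plant with no recirculation and an
# assumed ~40% conversion would instead need Q of roughly 2.5:
print(required_q(0.0, 0.40))
```

Under these (assumed) numbers, recovering most of the compression energy lowers the Q needed for net electricity by more than an order of magnitude, which is why the 95% figure matters so much.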

Helion has now raised $500 million for a net-positive reactor with a target date of 2024.

this would represent perhaps the weirdest possible timeline for fusion development I can imagine: that a public startup had achieved Q > 1 while people are still mocking fusion as forever decades out, and for well over a year they've just... chosen not to tell anybody.

I mean, it's a really weird timeline if they saved this announcement up because it'd make a bigger splash post-covid.

UK neuroimaging study notes loss of brain tissue for even mild cases of COVID-19, particularly in regions relating to smell and emotional recall. How concerned should we be?

What might be the mechanism of action? Direct destruction of neurons by the virus? Reversible atrophy due to temporary loss of the sense of smell? Oxygen deprivation? Some kind of toxicity that penetrates the blood-brain barrier? Should we be worried about spike protein toxicity affecting the brain even for the vaccines?

Loss of smell was something that was reported for patients infected with SARS-CoV-2; on the other hand, I'm not aware of any similar reports of smell loss for the vaccines.

One of the key reasons why I used free text instead of a poll in Which rationalists faced significant side-effects from COVID-19 vaccination? was that free text makes it more likely that unforeseen symptoms like loss of smell get reported than if I just use a poll. Exposing more people to that post to report side effects is one straightforward thing you can do if you are a bit worried.

Although my sense of smell isn't totally reliable to begin with (for pre-existing reasons) my sense of taste and smell didn't seem particularly affected by the vaccine. I haven't noticed anything unusual here.

I'm planning to post a few essays on LW soon, on my experiences with topics like productivity systems, inner resistance, akrasia etc.

Do you have any tips for posting your first essay to the site, e.g. re: style, or re: obvious things to do or to avoid, in order to get a better reception and maybe some comments?

Currently I'm planning to at least reread the SSC essay on Nonfiction Writing Advice.

Having someone else proofread it, or just generally give style feedback, is often quite valuable. You can post in the Open Thread with a link to a Google Doc, or you can ping me on Intercom, and I can take a look and suggest any obvious improvements that stand out.

Also, there is value to independently posting a link to your essay via other non-LW channels. E.g. sharing it on Facebook or Discord or wherever you might find others interested in it.

I have heard the following hack mentioned (not sure where - it might have been in response to a similar question, or an observation):

Start your own blog and write stuff on it. (While this advice might not take into account the cost, or explain how to mitigate it, it seems to suggest that a) writing helps you get better at writing, and b) it can be a way of getting feedback in order to improve.)

This has the following advantage: (unless you were to implement such a thing, somehow) no one can downvote stuff on your blog. (Even if you accidentally post a draft.*)

*This is in fact my main advice: avoid accidentally posting a draft. If you're not sure about the interface:

  • don't draft it here
  • 'name' the post something like 'testing saving draft of post on LW' (and remember to remove that before actually posting it)

Do you guys think it is worth it to learn Chinese if I'm planning a career in science?

China is becoming more and more influential in the world; plus, in 2020 it published more scientific papers than the USA, most of which are not translated, so being able to read them would be an advantage. (https://www.scimagojr.com/countryrank.php?year=2020)

I'm not sure how to find information about which country puts more money and is progressing faster in molecular biology/biophysics though.

I have undergraduate degrees in physics and mathematics. I taught myself business, entrepreneurship, computer science, machine learning, web development and Chinese. I have run my own consumer hardware startup.

The Chinese word for kinetochore is 动粒. If you go to the English Wikipedia page on kinetochores it's all in English. If you go to the Chinese Baidupedia page on 动粒 the first sentence lists the English word "kinetochore". That's because English is the lingua franca of science.

Learning Chinese because you love China and Chinese culture is a stupendous idea. Learning Chinese because you want to break out of your Western cultural assumptions is a great idea. Learning Chinese because China is the center of the world is perfectly reasonable. Learning Chinese because you want to advance your scientific career is inefficient.

Learning Chinese is harder than learning chemistry. It is harder than learning business and entrepreneurship. Learning Chinese has a difficulty comparable to maybe 4 years of full-time technical training in physics. That's twice as long as a Master's Degree in computer science. If you're already planning to get a graduate degree in molecular biology then learning Chinese too basically amounts to doubling your workload. You could get bigger bang for your buck teaching yourself to program.

I expect that the biggest use of Chinese would be if you wanted to do business in China or with Chinese companies. If you want to do this then learning even a little Chinese is a really good idea (though somehow not mandatory). If you are not interested in either of these things then Chinese is unlikely to help you (directly) in career success.

Learning Chinese should be thought of as part of a liberal education. You should learn Chinese for the same reason you should learn about fiction, art, history, physics, anthropology, math and psychology—because it broadens your understanding of the world. This sort of thing is very useful, but it can be hard to pin down exactly how it's useful.

If you're willing to throw years of effort into something with no (immediate) career payoff then yeah, you should learn Chinese. But you should not learn Chinese (just) so you can read biology papers written in Chinese.

That's because English is the lingua franca of science.

That's true today. The question is whether it will still be true in two or three decades. The Chinese government can simply decide to fund a certain sub-section of biology with a lot more money than exists outside of China for that part of biology, and have the relevant papers published in Chinese.

I am curious too how this will play out. Lingua francas tend to be sticky, but they also tend to follow the world's dominant power.

On the lingua franca of science issue, I get the impression that for scientific careers over the last few generations, going out of one's way to learn foreign languages to read/communicate with non-English-speakers seems to have become less prevalent, rather than more, among English speakers.

For instance, mandatory foreign language requirements in US PhD programs are rarer and rarer (perhaps only in elite schools, and more or more restricted to humanities, not STEM) for fields like hard science.

Of course this is in comparison to and a holdover from when non-English European languages like French, German, Russian etc. made up a larger share of the scientific literature in past generations if not centuries, and may not apply to the rise of Asia.

But I do wonder: has the relative importance in science of the rise of China or Asia (say, since Japan rose to prominence a generation or two ago) convinced more people to learn non-Western languages the way people once did with French, German, Russian etc. when continental Europe was a scientific center, in a way that can be seen in language-learning trends?

Most discussion of language learning centers around business, international relations, and geopolitical stuff, with science relatively little discussed, but that might be because scientists make up only a small proportion of the populace.

https://onlinelibrary.wiley.com/doi/full/10.1002/leap.1089 is a paper that describes language distribution in scientific papers and the share of non-English papers is currently falling. Knowing non-English languages falls in importance for science. 

It's worth noting that Chinese is an impractical language for science. When coining a new term in English, a reader has a good idea of how to pronounce it, while the same isn't true in Chinese, as far as I understand.

Given the political environment in China, however, the government can decide to set standards even if those aren't good. It wouldn't be the first time that internal politics reduced China's technological capacity ;)

Historically, yes, it has been hard to figure out how to pronounce scientific neologisms in Chinese. (The Periodic Table of the Elements is especially full of unique characters.) These days, I don't think that is much of an issue. If you coin a new term from commonly-used characters then its pronunciation tends to be obvious. For example, 高能加速器 (high-energy particle accelerator) is composed entirely of well-known characters with single pronunciations.

High-energy particle accelerator is a phrase that's made up out of other building blocks. A word like entropy, on the other hand, isn't.

I don't think we are at a time where everything that could be discovered on a basic level has words. New scientific paradigms usually need new words and for a Chinese research community to form, funding a community to gather around a new paradigm would be a way to do it.

Learning Chinese because you love China and Chinese culture is a stupendous idea

There seems to be a definite shift in the last decade or two (or maybe generation): from the perception that people who are into Chinese-related things like culture and language are doing it for heritage and cultural-interest reasons, to the perception that they're doing it because of the perceived importance of China geopolitically, business-wise, science-wise etc., and because China is seen as "the future".

Whether it's really practical or not, it appears that claimed practical (careerist) reasons have increasingly taken over from cultural/liberal-arts reasons for being interested in China.

By contrast, it's interesting that learning, say, French or Japanese is still more associated with interest in and appreciation for the culture than with hardheaded pragmatism. Or even stuff like learning Korean because K-pop is seen as cool now.

Here is just one example (from a fairly mainstream media source, NPR) of what I was thinking about when it comes to motivation, titled A Daughter's Journey To Reclaim Her Heritage Language, discussing a third-generation Chinese American who never previously spoke a Chinese language trying to learn at age 30 to reconnect with her roots.

Back in the day (perhaps even as recently as the '90s), it feels like this -- along with liberal-arts folks and cultural intellectuals like humanities professors -- was far closer to an archetype, if not one of the central examples, of the average American interested in Chinese culture or language.

Now this sort of thing is heavily swamped by the perception that interest in China is all political/business/realpolitik related. The heritage/culture side -- both Chinese Americans interested in so-called "reconnecting with their roots" or anyone of any heritage for that matter interested in the subject -- seems pretty drowned out by comparison.

Yeah, I largely agree with lsusr. According to my mom (whose career has focused on second language acquisition and Chinese-American cultural exchange), basically no student gets past second year Chinese at a university level unless they're majoring in it. Like, even business majors who plan to work in China. When I took university-level Chinese it really shocked me how much harder it was than other languages I'd learned – after nine months of five hours a week of quality university-level instruction, reading-wise I could barely understand books aimed at toddlers, and speaking-wise I could theoretically order food in a restaurant but wouldn't be able to understand any responses to what I said.

And it would be harder than other languages even if you were just learning to speak, but learning to read basically doubles the difficulty (if not more). My mom is quite fluent in speaking and listening – she worked for years as a Mandarin-English medical interpreter, and lived and worked in China (and Japan, which uses some Chinese characters) for a decade long before Google Translate existed – but she's almost entirely illiterate in Chinese. Many if not most people in the village where my dad grew up were illiterate as well. 

Point being, your question was whether it's worth it for you to learn (to read) Chinese, and I think the answer to that is no for almost anyone in almost any situation. Not because it wouldn't be great to know Chinese, but because the time investment is so shockingly huge.

I thought a bit more about it, and given the rate at which automatic translation is getting better, I would expect that in 1-2 decades it will be no problem to read biology papers written in Chinese even if you don't know Chinese.

China's sciences are not very good, and relatedly most of those papers are likely of extremely low quality. I know Chinese, and it's a wonderful language, but I wouldn't recommend learning it for that purpose. My 2c

While most of the papers don't get translated, Chinese authors generally get rewarded more for publications in high-impact-factor journals. That means that a good Chinese scientist currently publishes in English, and the Chinese-language papers will on average be crap.

On the other hand, China is progressing and very nationalist, so there's a good chance that some fields will progress to publishing high-quality research in Chinese sooner or later.

Anyone have reading recommendations for fiction or even just a summary description of what a positive future with AI looks like? I've been trying to decide what to work on for the rest of my career. I really want to work on genetics, but worry that, like every other field, it's basically going to become irrelevant since AI will do everything in the future.

I like to think that depictions of good life after AGI are just called slice of life stories. Just find a story about three friends baking a cake and add "also, most of the production of ingredients was handled by robots." Any story that doesn't hinge on someone being poor or in danger is valid post-scarcity. This eliminates a huge fraction of all stories we tell, but a much smaller fraction of the stories you'd actually like to have happen to you.

I'm not sure of any slice of life stories that actually do have the "also, robots" conceit, though. Maybe Questionable Content?

It seems unlikely to me that the things we do post-AGI would remain the same. If you had the lamp from Aladdin and the genie actually worked as described, would your life remain the same? Would you still spend your time baking cakes?

I know for myself personally I would try to enhance the capabilities of myself and those I care about (assuming they were amenable). To comprehend calculus as Newton did, or numbers as Ramanujan did would I think be an experience far more interesting than baking cakes or taking part in my usual hobbies. And there are thousands of other ways in which I would use my allotment of AI power to change my own experience.

I suspect this would be true for many people, so that self-augmentation via AGI would fundamentally change the experience of being human.

What does such a world look like? I have a very hard time visualizing it. Would power tend to concentrate even more than it does now? How would AI deal reconcile competing human interests?

Good points. I was imagining some successful slow takeoff scenario where there's a period of post-scarcity with basically human control of the future (reminds me of the Greg Egan story Border Guards.). But late into a slow takeoff, or full-on post-singleton, the full transhumanist package will be realizable.

I'm not so sure that learning to love numbers at the expense of my current hobbies is all that great an idea. Sure, my future self would like it, but right now I don't love numbers that much. I think a successful future would need some anti-wireheading guardrails that would make it difficult to learn to love math in a way that really eclipsed all your previous interests.

Eh. I think it might fit in nicely under the time you might currently spend doing a crossword puzzle or sudoku. Living for longer arguably allows for 'doing something just for a little while' paying off in a bigger way (where it was previously more constrained by lifetime length).

Also to some extent there's 'integration' - better memory doesn't necessarily mean you love memorizing things for contests and 'that's the only thing you do now'. Maybe instead you just bake without a recipe if you want to make something you've done before. Or you remember more sports plays and use them despite continuing to play 'just for fun'.


If you gained more appreciation for artwork, that wouldn't necessarily 'change your entire life'. Instead you might go to an art museum once in a while.

(I also don't see why you're afraid of becoming an artist. Oh no, my values might change and I might become Michelangelo! What? Why are you already worried about that? Do you think you are predisposed to getting addicted to that? Why?

Are you a recovering mathematician or something? I also don't know what your hobbies are, and why they wouldn't mix with each other - math problems have to come from somewhere.)


Would you still spend your time baking cakes?

Me, no. People who like doing that, yes. That's not to say it would necessarily last forever, but things (and people) change over time. I also think there's something different about people who, for example:

1. Buy furniture, versus

2. Go out and make it.

Arguably, people having less constraints might mean more of 2.

How would AI deal reconcile competing human interests?

What does this mean?

How would AI deal reconcile competing human interests? What does this mean?

It was a typo. It was meant to say "How would AI reconcile competing human interests?"

Everyone says the Culture novels are the best example of an AI utopia, and even though it's a cliché to mention the Culture, it's a cliché for a good reason. Don't start with Consider Phlebas (the first one), but otherwise just dive in. My other recommendation is the Commonwealth Saga by Peter F. Hamilton and the later Void Trilogy - it's not on the same level of writing quality as the Culture, although still a great story, but it depicts an arguably superior world to that of the Culture - with more unequivocal support of life extension and transhumanism.

The Commonwealth has effective immortality; a few downsides of it are even noticeable (their culture and politics are a bit more stagnant than we might like), but there's never any doubt at all that it's worth it, and it's barely commented on in the story. The latter-day Void Trilogy Commonwealth is probably the closest a work of published fiction has come to depicting a true eudaemonic utopia that lacks the problems of the Culture.

Potentially trivial math question:

Imagine that you have a computer with the following properties:

  • flat
  • black
  • hot
  • speck of red on one side

And your prior at this point for p(slow given it's a computer) = 0.5.

If an article about computers said p(slow given flat, black, and speck of red) = 0.25, would you use the new number as your prior, or combine the two pieces of information to calculate p(slow given flat, black, hot, and speck of red)?

I'm inclined to say that I should use 0.25 as the new prior and forget about 0.5, but I may also be making a silly logical error here.

o_1 = (observed: flat, black, with red speck)

p_1 = p(slow given o_1) = 0.5

o_2 = (observed: article claiming p_1 = 0.25)

p_2 = p(slow given o_1, o_2) = ?

I wouldn't call this trivial, it depends on the particulars (which is why in the notation above, there's a contradiction around the value of p_1). The easiest resolution involves following up on the article by looking for data/causal models. Like using the computer, and acquiring evidence.
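One way to make the "it depends" concrete: rather than either keeping 0.5 or replacing it with 0.25, you can treat the article as an additional (noisy) source and combine the two estimates in log-odds space. This is a toy sketch; the trust weights are illustrative assumptions, not anything prescribed by probability theory alone:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

p_mine = 0.5      # your estimate: p(slow | flat, black, hot, red speck)
p_article = 0.25  # article's estimate: p(slow | flat, black, red speck)

# Weights encode how much you trust each source; 2:1 in favor of the
# article is an illustrative choice, not a derived value.
w_mine, w_article = 1.0, 2.0
combined = sigmoid(
    (w_mine * logit(p_mine) + w_article * logit(p_article))
    / (w_mine + w_article)
)
print(round(combined, 3))  # ~0.325, between the two estimates
```

If both sources conditioned on exactly the same evidence you would just have a contradiction to resolve; here the extra "hot" observation behind your own estimate is one reason not to discard it entirely.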

The solstice is in 8 days. Are there any plans for a Summer Solstice this year? I'm interested in meeting in person in Northern California / the Bay (and am fully vaxxed, of course).

There is a private FB event I am invited to and I can invite you to. I don't think we're friends on FB, so if you have FB, you can add me, and then I can invite you: https://www.facebook.com/oliver.habryka/ 

👍 Added