If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Why don't female organisms insert more of their own DNA into the offspring?

I don't think Fisher's principle explains it, because it only applies at the level of which sex your offspring is, and there's a clear advantage to having male offspring when there are many females around.

But if you're female, and sperm and egg mix, you could theoretically control how much of the male's DNA gets to contribute to the embryo. And there's not much to stop you, since the male can't check beforehand how much of its DNA you will use. But if a gene causes the female organism to insert more of its own DNA into the offspring, that gene also increases its own chance of continuing to exist.

Sure, the offspring will be less fit on average, but I'm astonished that the equilibrium is that the female organism doesn't "cheat" at all, instead of there being a 2/3 female 1/3 male offspring genome or something.

Maybe this is a situation where every gene benefits by not cheating, but individual genes defecting would make sense? And if so, how does the reproductive process prevent this?

It's worth starting by noting that male and female births are not 50-50. While conceptions are 50-50, births aren't, and there are mechanisms that terminate pregnancies whose likelihood differs depending on the sex of the offspring.

While it makes sense that the value is near 50% for humans, it isn't exactly 50%, both in reality and in the computer models of human evolution I built (which surprised me).

Sexual selection is very useful. In humans, mitochondrial DNA is only passed maternally and we see that evolution reduced the number of mitochondrial genes to a minimum. 

For each of the chromosomes we get one copy from our mother and one from our father. There's no easy way to get 1.25 from the mother and 0.75 from the father. If we get two copies of a chromosome from one parent, or none, in most cases the pregnancy terminates unsuccessfully, and in the few remaining cases it produces severe harm (like Down syndrome).

What asymmetries did you introduce into your simulations that led to a difference? Models with no gender differences but with mandatory sexual reproduction usually tend to be 50/50 in my experience.
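
For what it's worth, here is a minimal sketch of the kind of symmetric model I mean - a toy Fisherian dynamic in which a maternally expressed allele sets the offspring sex ratio. Everything in it (population size, mutation rate, the starting bias of 0.3) is an arbitrary illustration, not the model you describe:

```python
import random

def simulate(generations=300, pop_size=1000, start_bias=0.3, mutation=0.01, seed=0):
    """Toy Fisherian sex-ratio model: each individual is (sex, p), where p is a
    heritable probability of producing a male offspring, expressed in mothers."""
    rng = random.Random(seed)
    pop = [("M" if rng.random() < 0.5 else "F", start_bias) for _ in range(pop_size)]
    for gen in range(generations):
        males = [p for (s, p) in pop if s == "M"]
        females = [p for (s, p) in pop if s == "F"]
        if not males or not females:        # population crash; not modelled further
            break
        offspring = []
        for _ in range(pop_size):
            mom = rng.choice(females)        # every child has exactly one mother...
            dad = rng.choice(males)          # ...and one father, so the rarer sex
            p = (mom + dad) / 2              # gets more matings per capita
            p = min(1.0, max(0.0, p + rng.uniform(-mutation, mutation)))
            sex = "M" if rng.random() < mom else "F"   # mother's allele sets the sex
            offspring.append((sex, p))
        pop = offspring
        if gen % 50 == 0:
            mean_p = sum(p for _, p in pop) / len(pop)
            frac_m = sum(s == "M" for s, _ in pop) / len(pop)
            print(f"gen {gen:3d}  mean p = {mean_p:.3f}  fraction male = {frac_m:.3f}")

simulate()
```

Even with the allele starting at 0.3, the mean drifts toward 0.5: alleles that produce the rarer sex end up in offspring of the rarer sex, who get more matings per capita.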

My models had humans with their full 46 chromosomes and multiple genes per chromosome. In addition, they included transposons on those chromosomes. I also tried to model mating behavior, in which males and females obviously have different roles. 

Mutations on the X chromosome lead more frequently to pregnancy termination in male offspring. This is pretty obvious, given that female offspring have more redundancy when it comes to the X chromosome. 

The transposon-related pregnancy terminations that in turn terminate more female pregnancies than male ones are less obvious. I think I have some insight there that could be publishable. If anyone wants to collaborate on a paper I'm happy to say more privately. 

Transposons and their effects are often not taken as seriously as they should be. 

I think this makes sense because eggs are haploid (they already have only 23 chromosomes), but a natural next question is: why are eggs haploid if there is a major incentive to pass on more of the 46 chromosomes?

If the egg carried two copies of chromosome 11 and the sperm none, you would lose sexual selection on chromosome 11.

From a message on reddit by /u/eniteris:

I think the biggest reason why you don't commonly see the selective incorporation of male DNA is because the machinery to do the selection would be too costly compared to just transitioning to asexual reproduction.

That being said, there's a wide breadth of parthenogenesis strategies, of which the most relevant are the kleptons, which can sometimes incorporate some of the male DNA.

Individual genes defecting are probably closest to transposons and other selfish genetic elements, and those are in competition with systems that silence them to prevent them from defecting.

Polyploidy probably have greater flexibility for the non-balanced incorporation of DNA, but I'm not familiar enough to comment any more.

From another message:

I guess selective incorporation machinery might not be too costly, but why selectively incorporate when you can just turn to full asexual reproduction, or have both sexual and asexual reproduction? (ie: virgin birth in sharks, some reptiles, etc.)

I guess it's the difference between having each offspring being an 90/10 split of genetics, or having 80% of your population asexually reproduce and 20% sexually reproduce.

(I am satisfied with this as an explanation)

To put it another way, if a female inserts more of her DNA into an offspring then she loses out on the benefits of sexual reproduction.

Polyploidy probably have greater flexibility for the non-balanced incorporation of DNA, but I'm not familiar enough to comment any more.

It's not clear this is the case when the sum isn't 1. (i.e. 1/2 + 1/2 = 1, versus 1/2 + 1 = 3/2*)

*I'm guessing that's the breakdown.

(A suggestion for the forum)

You know that old post on r/ShowerThoughts which went something like "People who speak somewhat broken english as their second language sound stupid, but they're actually smarter than average because they know at least one other language"?

I was thinking about this. I don't struggle with my grasp of English the language so much, but I certainly do with what might be called an American/Western cadence. I'm sure it's noticeable occasionally, inducing just the slightest bit of microcringe in the typical person that hangs around here. Things like strange sentence structure, or weird use of italics, or overuse of a word, or over/under hedging... all the writing skills you already mastered in grade school. And you probably grew up seeing that the ones who continued to struggle with it often didn't get other things quickly either. 

Maybe you notice some of these already in what you're reading right now (despite my painstaking efforts otherwise). It's likely to look "wannabe" or "amateurish", because it is: one learns language and rhythm by imitating. But this imitation game is confined to language and rhythm, and it would be a mistake to infer from it that the ideas behind them are unoriginal or amateurish.

I'd like to think it wouldn't bother anyone on LW because people here believe that linguistic faux pas, as much as social ones, ought to be screened off by the content. 

But it probably still happens. You might believe it but not alieve it. Imagine someone saying profound things but using "u" and "ur" everywhere, even for "you're". You could actually try this (even though it would be a somewhat shallow experiment, because what I'm pointing at with "cadence" is deeper than spelling mistakes) to get a flavor for it.

A solution I can think of: make a [Non-Native Speaker] tag and allow people to self-tag. Readers could see it and shoot for a little bit more charity across anything linguistically-aesthetically displeasing. The other option is to take advantage of customizable display names here, but I wonder if that'd be distracting if mass-adopted, like twitter handles that say "[Name] ...is in New York".

I would (maybe, at some point) even generalize it to [English Writing Beginner] or some such, which you can self-assign even if you speak natively but are working on your writing skills. This one is more likely to be diluted though.

Could this be accomplished using custom commenting guidelines? Perhaps just adding a sentence about whether one wants to opt into or out of linguistic-aesthetic feedback would suffice if one has strong feelings on the matter.

This would work for top level posts, but for comment replies, the commenting guidelines feature would need to be expanded to show the guidelines of the person being replied to as well as the author of the main post. For instance, when writing this reply I see only Raemon's commenting guidelines.

I just realized that jasoncrawford and joshwentworth are two different people. Explains so many things.

...you are also probably thinking of johnswentworth.

His username is actually johnswentworth. Easy to remember if you know his real name, John Wentworth.

Huh.

Read HPMOR (3 times start to finish), found and am currently reading the Sequences, and am now actually venturing into LessWrong. Read maybe 10 articles as well as a lot of comments.

And I'm a bit intimidated :D. But that's fine LOL. Shut up and do the impossible! 

Welcome! I agree it can be a bit intimidating at first, but I expect you'll find your bearings. And don't feel hesitant to ask questions here in this thread.

Has anyone tried to map relationships between (at least some) LessWrong posts?

What I'm looking for would be some kind of overview of what connects to what that could be parsed without clicking through links recursively. If I assume a chronologically earlier post cannot refer to a later one, I expect this type of structure to have interesting properties (for one, it would be a directed acyclic graph). A practical application that comes to mind is informing an algorithm that decides what to give attention to.

To define a relationship, links from one post to another would be a good if imperfect metric (not all links represent the same type of relationship, and not all relationships are represented by explicit links).
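
If someone wanted to prototype this, here is a rough sketch of what the extraction could look like. It assumes you already have each post's HTML body and publication date in hand (e.g. from the GraphQL API or a crawl - the fetching step is omitted), and that post URLs carry the post ID after /posts/; the function names are just placeholders:

```python
import re
from collections import defaultdict

POST_LINK = re.compile(r"https?://(?:www\.)?lesswrong\.com/posts/([A-Za-z0-9]+)")

def build_link_graph(posts):
    """posts: dict mapping post_id -> {"date": ..., "html": ...}, assumed already fetched.
    Keeps only links that point backward in time, so the result is a DAG by construction."""
    edges = defaultdict(set)
    for pid, post in posts.items():
        for target in POST_LINK.findall(post["html"]):
            if target != pid and target in posts and posts[target]["date"] < post["date"]:
                edges[pid].add(target)
    return edges

def most_linked(edges, top=10):
    """Crude attention heuristic: which posts do later posts link to most often?"""
    indegree = defaultdict(int)
    for targets in edges.values():
        for t in targets:
            indegree[t] += 1
    return sorted(indegree.items(), key=lambda kv: -kv[1])[:top]
```

Because only backward-in-time links are kept, the graph has no cycles, and something like `most_linked` would be a first stab at the "what to give attention to" heuristic.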

I'm not sure if this is the right place to ask this, but does anyone know what point Paul's trying to make in the following part of this podcast? (Relevant section starts around 1:44:00)

Suppose you have a P probability of the best thing you can do and a one-minus P probably the worst thing you can do, what does P have to be so it’s the difference between that and the barren universe. I think most of my probability is distributed between you would need somewhere between 50% and 99% chance of good things and then put some probability or some credence on views where that number is a quadrillion times larger or something in which case it’s definitely going to dominate. A quadrillion is probably too big a number, but very big numbers. Numbers easily large enough to swamp the actual probabilities involved

[ . . . ]

I think that those arguments are a little bit complicated, how do you get at these? I think to clarify the basic position, the reason that you end up concluding it’s worse is just like conceal your intuition about how bad the worst thing that can happen to a person is vs the best thing or damn, the worst thing seems pretty bad and then the like first-pass responses, sort of have this debunking understanding, or we understand causally how it is that we ended up with this kind of preference with respect to really bad stuff versus really good stuff.

If you look at what happens over evolutionary history. What is the range of things that can happen to an organism and how should an organism be trading off like best possible versus worst possible outcomes. Then you end up into well, to what extent is that a debunking explanation that explains why humans in terms of their capacity to experience joy and suffering are unbiased but the reality is still biased versus to what extent is this then fundamentally reflected in our preferences about good and bad things. I think it’s just a really hard set of questions. I could easily imagine maybe shifting on them with much more deliberation.

It seems like an important topic but I'm a bit confused by what he's saying here. Is the perspective he's discussing (and puts non-negligible probability on) one that states that the worst possible suffering is a bajillion times worse than the best possible pleasure, and wouldn't that suggest every human's life is net-negative (even if your credence on this being the case is ~.1%)? Or is this just discussing the energy-efficiency of 'hedonium' and 'dolorium', in which case it's of solely altruistic concern & can be dealt with by strictly limiting compute?
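
(For concreteness, here is my attempt to reconstruct the arithmetic behind the quoted 50%-99% range; this is my own back-of-the-envelope reading, not something Paul spells out:)

```python
# If the best outcome is worth +1 and the worst is R times as bad (worth -R), a gamble
# of P*(best) + (1-P)*(worst) only beats a barren, zero-value universe when
#     P - (1 - P) * R > 0,  i.e.  P > R / (1 + R).
for R in [1, 3, 10, 99, 1e15]:   # 1e15 ~ "a quadrillion times larger"
    print(f"R = {R:g}: need P > {R / (1 + R):.6f}")
```

So R between 1 and 99 corresponds to the quoted 50%-99% range, and an R of a quadrillion pushes the threshold so close to 1 that it swamps any realistic probability.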

 

Also, I'm not really sure if this set of views is more "a broken bone/waterboarding is a million times as morally pressing as making a happy person", or along the more empirical lines of "most suffering (e.g. waterboarding) is extremely light, humans can experience far far far far far^99 times worse; and pleasure doesn't scale to the same degree." Even a tiny chance of the second one being true is awful to contemplate. 

Here's a model that might simplify things:

Really negative events can affect people's lives for a long time afterward.


From that model, it's easier to have large utility effects by, say, reducing extreme negative events than by, say, making someone who is 'happy' a little bit happier. So while the second thing may seem easier to do (lower cost), the first thing may still be more impactful even after you divide by its cost.


The obvious connection is how things play out within a person's life. If, say, you break your arm, maybe it'll be harder to do other things because:

  • it's in a cast and you can't use it while it heals
  • You're in pain. Maybe you don't enjoy things, like watching a movie, as much when you're in a lot of pain.

[Insert argument for wearing a helmet while riding a bike or motorcycle even if it's mildly inconvenient - because it helps reduce/prevent stuff that's way more inconvenient.]


and pleasure doesn't scale to the same degree

It's easy to scale pain? This just seems like an argument that 'Becoming slightly happier' is less pressing morally than 'reducing the amount of torture* in the world'.

*Might be worth noting that if this is about extreme pain, then this implies 'improving access to medical care' can be a very powerful intervention, i.e., effective altruism.

Thanks for the response; I'm still somewhat confused though. The question was to do with the theoretical best/worst things possible, so I'm not entirely sure whether parallels to (relatively) minor pleasures/pains are meaningful here. 

 

Specifically I'm confused about:

Then you end up into well, to what extent is that a debunking explanation that explains why humans in terms of their capacity to experience joy and suffering are unbiased but the reality is still biased

I'm not really sure what's meant by "the reality" here, nor what's meant by biased. Is the assertion that humans' intuitive preferences are driven by the range of possible things that could happen in the ancestral environment & that this isn't likely to match the maximum possible pleasure vs. suffering ratio in the future? If so, how does this lead one to end up concluding it's worse (rather than better)? I'm not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.

so I'm not entirely sure whether parallels to (relatively) minor pleasures/pains are meaningful here. 

Ah. I suggested them because I figured that such '(relatively) minor' things are what people have experienced and thus are the obvious source for extrapolating out to theoretical maximum/s.


I don't know what's meant by 'reality' there. Your guess seems reasonable (and was more transparent than what you quoted).

I'm not sure how to guess the maximum ratio.


I'm not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.

Likewise. (A quadrillion seems like a lot - I'd need a detailed explanation to get why someone would choose that number.)

I think...it makes sense less as emotion, than as a utility function - but that's not what is being talked about.

Part of it is...when people are well off do they pursue the greatest pleasure? I think negative extremes prompt a focus on basics. In better conditions, people may pursue more complicated things. Overall, there's something about focus I guess:

'I don't want to die' versus 'I'm happy to be alive!'. Which sentiment is stronger? It's easy to pull up an extreme version for a thought experiment, but if people don't face that risk in their lives, then maybe the second thing, or the absence of the risk, doesn't have as much salience, because the risk isn't present? (Short version: a) it's hard to reason about scenarios outside of experience*, b) this might induce asymmetry in estimates or intuition.)

*I have experienced stuff and found 'wow, that was way more intense than I'd expected' - for stuff I had never experienced before.

Hey everyone. I'm new here. I've recently been kinda freaking out about AGI and its timelines... Especially after reading Eliezer's post about there being no fire alarm.

However, he also mentions that "one could never be sure of a new thing coming, except for the last one or two years" (something along those lines).

So, in your opinion, are we already at a stage where AGI could arrive anytime? Because due to things like GPT-3, Wu Dao 2.0 and AlphaCode, I've been getting really scared... Plus if there is something more advanced being developed in secret...

Or will there at least be a 1-2 year "last epistemic stage" which we can be sure we haven't reached yet? (as Soares also mentions)

Cause every day I've been looking out the window expecting the nano-swarms to come lol... But I'm just a layperson, so I'd like to hear some more expert opinions.

(If you're scared, in general it's good to do things that give you courage. Perhaps think through your strengths, the ways you can change the world, and make sure you have good relationships with friends/family when you need support.)

IMO we are already at a stage where AGI could arrive at any time in some sense, but the probability of it arriving in the next year or so is pretty small--some AI lab would need to have some major breakthrough between now and then, something that enables them to do with merely hundreds of millions of dollars of compute what seems like it should take trillions (with current algorithms). I think we probably have like eight years left or something like that.

Sober view as well, and much closer to mine. I definitely agree that compute will be the big bottleneck - GPT-3 and the scaling hypothesis scare the heck out of me.

8 years makes a lot of sense; after all, many predictions point to 2030.

A more paranoid me would like to ask, what number would you give to the probabilities of it arriving: a) next week, b) next year?

And also: are you also paranoid like me, looking out the window for the nano-swarms, or do you think that at least in the very, very near term it's still close to impossible?

I am not looking out my window for the nano-swarms; I think there's a less than 1% chance of that happening this year. We would need a completely new AI paradigm I think, which is not impossible (It's happened a bunch of times in the past, and there are a few ideas floating around that could be it) but unlikely and especially unlikely to happen all of a sudden without me hearing signs first. And then even with said new paradigm it would be surprising if takeoff was so fast that I saw nanobots before hearing any disturbing news through the grapevine.

So, <1% chance of nano-swarms surprising me this year, <<1% this week.

Maybe something like 2% chance of agentic AGI (or, APS-AI to use Carlsmith's term) happening this year?

Fair argument, thanks.

Another (very weird) counterpoint: you might not see the "swarm coming" because the annexing of our cosmic endowment might look way stranger than the best strategy human minds can come up with.

I remember a safety researcher once mentioned to me that they didn't necessarily expect us to be killed, just contained, while superintelligence takes over the universe. The argument being that it might want to preserve its history (ie. us) to study it, instead of permanently destroying it. This is basically as bad as also killing everyone, because we'd still be imprisoned away from our largest possible impact. Similar to the serious component in "global poverty is just a rounding error".

Now I think if you add that our "imprisonment" might be made barely comfortable (which is quite unlikely, but maybe plausible in some almost-aligned-but-ultimately-terrible uncanny value scenarios), then it's possible that there's never a discontinuous horror that we would see striking us; instead we will suddenly be blocked from our cosmic endowment without our knowledge. Things will seem to be going essentially on track. But we never quite seem to get to the future we've been waiting for.

It would be a steganographic takeoff.

Here's a (only slightly) more fleshed out argument:

If 

  • deception is something that "blocks induction on it" (eg. you can't run a million tests on deceptive optimizers and hope for the pattern on the tests to continue), and if
  • all our "deductions" are really just an assertion of induction at higher levels of abstraction (eg. asserting that Logic will continue to hold) 

then deception could look "steganographic" when it's done at really meta levels, exploiting our more basic metaphysical mistakes.

Interesting stuff. And I agree. Once you have a nanosystem or something of equivalent power, humans are no longer any threat. But we're yet to be sure whether such a thing is physically possible. I know many here think so, but I still have my doubts.

Maybe it's even more likely that some random narrow-AI failure will start big wars before anything fancier happens. Although with the scaling hypothesis in sight, AGI could indeed come suddenly.

"This is basically as bad as also killing everyone, because we'd still be imprisoned away from our largest possible impact."

Although I quite disagree with this. I'm not a huge supporter of our largest possible impact. I guess it's naive to attribute any net positive expectation to that when you look at history or at the present. In fact, such an outcome (things staying exactly the same forever) would probably be among the most positive ones in the advent of non-aligned AI. As long as we could still take care of Earth, like ending factory farming and dictatorships, it really wouldn't be that bad...

I am not an expert by any means, but here are my thoughts: While I find GPT-3 quite impressive, it's not even close to AGI. All the models you mentioned are still focused on performing specific tasks. This alone will (probably) not be enough to create AGI, even if you increase the size of the models even further. I believe AGI is at least decades away, perhaps even a hundred years away. Now, there is a possibility of stuff being developed in secret, which is impossible to account for, but I'd say the probability of these developments being significantly more advanced than the publicly available technologies is pretty low.

A sober opinion (even if quite different from mine). My biggest fear is scaling a transformer + completing it with other "parts", as in an agent (even if a dumb one), etc. Thanks

Has GPT-3 / large transformers actually led to anything with economic value? Not from what I can tell, although anecdotal reports on Twitter are that many SWEs are finding GitHub Copilot extremely useful (it's still in private beta though). I think transformers are going to start providing actual value soon, but the fact that they haven't so far, despite almost two years of breathless hype, is interesting to contemplate. I've learned to ignore hype, demos, cool cherry-picked sample outputs, and benchmark chasing, and to actually look at what is being deployed "in the real world" and bringing value to people. So many systems that looked amazing in academic papers have flopped when deployed - even from top firms - for instance Microsoft's Tay and Google Health's system for detecting diabetic retinopathy. Another example is Google's Duplex. And for how long have we heard about burger-flipping robots taking people's jobs?

There are reasons to be skeptical about a scaled-up GPT leading to AGI. I touched on some of those points here. There's also an argument that the hardware costs are going to balloon so quickly that they make the entire project economically unfeasible, but I'm pretty skeptical about that.

I'm more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon.

Long story short, is existentially dangerous AI imminent? Not as far as we can see right now, knowing what we know right now (we can't see that far into the future, since it depends on discoveries and scientific knowledge we don't have). Could that change quickly at any time? Yes. There is Knightian uncertainty here, I think (to use a concept that LessWrongers generally hate lol).

Economic value might not be a perfect measure. Nuclear fission didn't generate any economic value either, until 200,000 people in Japan were incinerated. My fear is that a mixture-of-experts approach could lead to extremely fast progress towards AGI. Perhaps even less would be needed - maybe all it takes is an agent AI that can code as well as humans to start a cascade of recursive self-improvement.

But indeed, Knightian uncertainty here would already put me at some ease. As long as you can be sure that it won't happen "just anytime" before some more barriers are crossed, at least you can still sleep at night and have the sanity to try to do something.

I don't know, I'm not a technical person, that's why I'm asking questions and hoping to learn more.

"I'm more worried about someone reverse engineering the wiring of cortical columns in the neocortex in the next few years and then replicating it in silicon."

Personally, that's what worries me the least. We can't even crack C. elegans! I don't doubt that in 100-200 years we'd get there, but I see many other, much faster routes.

In general, whenever Reason makes you feel paralyzed, remember that Reason has many things to say. Thousands of people in history have been convinced by trains of thought of the form 'X is unavoidable, everything is about X, you are screwed'. Many pairs of those trains of thought contradict each other. This pattern is all over the history of philosophy, religion, & politics. 

Future hazards deserve more research funding, yes, but remember that the future is not certain.

"Thousands of people in history have been convinced by trains of thought of the form 'X is unavoidable, everything is about X, you are screwed'."

Care to give a few examples? Because I'd venture to say that, except for religious and other superstitious beliefs, and except for crazy lunatics too like fascists and communists, they were mostly right.

"the future is not certain"

Depends on what you mean by that. If you mean that it's not extremely likely, like 90% plus, that we will develop some truly dangerous form of AI this century that will pose immense control challenges, then I'd say you're deeply misguided given the smoke signals that have been coming up since 2017.

I mean, it's like worrying about nuclear war. Is it certain that we'll ever get a big nuclear war? No. Is it extremely likely if things stay the same and if enough time passes (10, 50, 100, 200, 300 years)? Hell yes. I mean, just look at the current situation...

Though I don't care about nuclear war much because it is also extremely likely that it will come with a warning, so you can also run to the countryside, and even then if things go bad like you're starving to death or dying of radiation poisoning, you can always put an end to your own suffering. With AI you might not be so lucky. You might end in an unbreakable dictatorship a la With Folded Hands.

How can you not feel paralyzed when you see chaos pointed at your head and at the heads of other humans, coming in as little as 5 or 10 years, and you see absolutely no solution, or much less anything you can do yourself?

We can't even build a provably safe plane, how are we gonna build a provably safe TAI with the work of a few hundred people over 5-30 years, and with complete ignorance by most?

The world would have to wake up, and I don't think it will.

Really, the only ways we will not build dangerous and uncontrollable AI are if we either destroy ourselves in some other way first (or even just with narrow AI), or the miracle happens that someone cracks advanced nanotechnology/magic through narrow AI and becomes a benevolent and omnipotent world dictator. There's really no other way we won't end up doing it.

What's the art on the frontpage now?

It’s our celebration art for the Best of LessWrong 2020, ie the end of the review. It was generated by a neural net with the prompt:

The ancient hidden library of luminescent ethereal alchemy crystalline fractals books knowledge secrets, aquarelle by Ross Tran outrun #conceptart #pixelart #monochrome | green on white color palette

Is the neural net publicly available?

Yep, it's the one in the Eleuther AI Discord!

Is there any way for my non-American friend to fly domestically in the USA without his passport? He has a student ID but no other form of ID at the moment (no driver's license, etc.) The reason he doesn't have his passport is that he sent it in to get renewed & it probably won't arrive back in time.

He still has a few weeks before the flight, so he can e.g. go apply for a State ID if that's a possibility. Unfortunately it seems that you need a passport to get a State ID... He probably can't get a driver's license for similar reasons, and anyway he doesn't know how to drive.

This user has been banned for this post

Can someone recommend me a textbook on causal inference?

I read the first ~120 pages of Causality by Pearl, and I think it's not the right book for me. The two biggest problems were the lack of exercises and the long historical digressions. I also had difficulty wrapping my head around the big picture, even when understanding the low-level concepts (I don't think I could implement a full stack for causal inference, even with the book, although I could probably implement the IC* algorithm).

I am still looking for something that's formal, and will probably try chapter 16 of “All of Statistics” by Wasserman.
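
To give a sense of what I mean by "full stack": the part I think I could manage is roughly the skeleton-recovery step that IC/PC-style algorithms share, sketched below with a partial-correlation Fisher z-test standing in for a proper conditional-independence test. The test choice, the alpha threshold, and the function names are my own illustrative assumptions, not anything from the book:

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm

def partial_corr(data, i, j, S):
    """Partial correlation of columns i and j given the columns in S,
    read off the inverse of the correlation submatrix."""
    idx = [i, j] + list(S)
    prec = np.linalg.pinv(np.corrcoef(data[:, idx], rowvar=False))
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def independent(data, i, j, S, alpha=0.05):
    """Fisher z-test for zero partial correlation (a common stand-in CI test)."""
    n = data.shape[0]
    r = np.clip(partial_corr(data, i, j, S), -0.9999, 0.9999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(S) - 3)
    return abs(z) < norm.ppf(1 - alpha / 2)   # accept independence if z is small

def skeleton(data, alpha=0.05):
    """Edge-removal phase of PC/IC: start from the complete undirected graph and
    drop edge (i, j) whenever some subset of i's other neighbours separates them."""
    d = data.shape[1]
    adj = {i: set(range(d)) - {i} for i in range(d)}
    size = 0
    while any(len(adj[i]) - 1 >= size for i in range(d)):
        for i in range(d):
            for j in list(adj[i]):
                for S in combinations(adj[i] - {j}, size):
                    if independent(data, i, j, S, alpha):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        size += 1
    return adj
```

It's the edge orientation and the do-calculus side of things, end to end, that I'd like a book with exercises for.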

Did you check out his more recent primer? I think it's better on exercises, tho I don't think it's fully there yet. 

That does look more concise and to the point, thank you.

This comment says ‘user has been banned for this post’ which is super confusing to me

Sorry, I was making a meta joke about “Causality” and Pearl being super revered here, and rejecting the book being nearly heresy. Should I remove it?

I definitely didn’t get the joke, but if you have this meta commentary here now I think it’s fine though.

Nah, does indeed seem kinda funny.

Elements of Causal Inference by Peters is supposed to be good. I read Probabilistic Graphical Models by Koller and Friedman but didn't like it much; I did like Causality, though, so maybe we're reversed and it's your jam.

Is the font for comments different from the font for posts?

Should I expand this comment into a proper top-level post that goes into more detail in the concept and does longer/more reviews of methods?

https://www.lesswrong.com/posts/oqzasmQ9Lye45QDMZ/causality-transformative-ai-and-alignment-part-i?commentId=rJ7Ssb82bR677F2hq

1.

That does sound interesting.

2.

If you don't want to do one big post that's super long, but still want the benefits of that structure, you could try drafting it out, then doing parts of it as shorter posts.

(I don't know how long of a post you're thinking of doing, but I'm throwing that out there in case it helps.*)

3.

If you want to do a post about doing posts (or a series of posts) afterwards, there might be interest as well. Or it might just be helpful in a 'personal blogs are useful notes for their author' way.


*Maybe this already exists, but a link FAQ or something about making posts might make a good addition to the open thread text. Or a related tag or two, if it exists.

[Meta] The jump from Distinct Configurations to Collapse Postulates in the Quantum Physics and Many Worlds sequence is a bit much - I don't think the assertiveness of Collapse Postulates is justified without a full explanation of how many worlds explains things.  I'd recommend adding at least On Being Decoherent in between.

I was going to suggest this should go on a talk page, but it looks like tags have those but not Sequences.

Contest: making a one-page comic on artificial intelligence for amateur mathematicians by March 9. The text must be in French and the original drawing on paper must be sent to them. Details at https://images.math.cnrs.fr/Onzieme-edition-de-Bulles-au-carre-a-vos-crayons-jusqu-au-9-mars?lang=fr

I'm not related in any way to this contest but I figured there may be some people interested in popularizing Alignment. I can help translate text to French. The drawing quality does not need to be amazing, see some previous winners at https://images.math.cnrs.fr/Resultats-du-9e-concours-Bulles-au-carre.html?lang=fr