And why would anybody do that?
I think babysitting a baby is not very informative about whether you would enjoy having kids. Having a kid is first and foremost about having the deepest and most meaningful emotional connection of your life.
Take that away and you just don't have a sensible test run. It's like finding out whether you like hiking by going up and down the stairs of your apartment building all morning.
Having kids is like having parents, except the emotional connection is stronger in the other direction. Would you rather have grown up in an orphanage if that had meant more time for your hobbies and other goals?
I think the most important thing has not been mentioned yet:
How you dress and take care of yourself is the very first and often only impression of how much you have your shit together. Having your shit together - doing the things you need to do in time and doing them well - is the most important trait in a long-term partner.
If the one clearly fucked up receptor copy is sufficient for your "symptoms", it seems pretty likely that one of your parents should have them too. I think there is no reason to expect a de novo mutation to be particularly likely in your case (unlike in cases that lead to severe dysfunction). And of course you can check for that by sequencing your parents.
So my money would be on the second copy also being sufficiently messed up that you have basically no fully functioning oxytocin receptors. If you have siblings and you are the only odd one in the family, you could make a pretty strong case for both copies being messed up by showing that you are the only one with the combination of a frameshift in one copy and the particular SNPs in the other. (If you are not the only odd one, you can make an even stronger case.)
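To make the family comparison concrete, here is a toy version with entirely made-up genotype and phenotype calls (who carries the frameshift, who carries the candidate SNPs on the other copy); the names and values are hypothetical, just to show the pattern one would look for.

```python
# Entirely made-up genotype/phenotype calls for illustration -- not real data.
family = {
    # person: (carries_frameshift, carries_suspect_SNPs_on_other_copy, has_symptoms)
    "me":      (True,  True,  True),
    "mother":  (True,  False, False),
    "father":  (False, True,  False),
    "sibling": (True,  False, False),
}

for person, (frameshift, suspect_snps, symptoms) in family.items():
    both_copies_hit = frameshift and suspect_snps
    print(f"{person:8s} both copies affected: {both_copies_hit}  symptoms: {symptoms}")

# If the only person carrying both the frameshift and the suspect SNPs is also the only
# person with symptoms, that pattern supports the second copy being non-functional too.
```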
Seems a lot harder to write a post a day if one is not holed up in Lighthaven.
Heard that story many times from or about exchange students to the US.
What gives you the impression of low integrity?
There's an interestingly pernicious version of a selection effect that occurs in epistemology, where people can be led into false claims because when people try to engage with arguments, people will drop out at random steps, and past a few steps or so, the people who believe in all the arguments will have a secure-feeling position that the arguments are right, and that people who object to the arguments are (insane/ridiculous/obviously trolling), no matter whether the claim is true:
I find this difficult to parse: people, people, people, people, people.
These seem to be at least three different kinds of people: the evangelists, the unconvinced (who drop out), and the believers (who don't drop out). Not clearly distinguishing between these groups makes the whole post more confusing than necessary.
The function of the feedforward components in transformers is mostly to store knowledge and to enrich the token vectors with that knowledge. The wider you make the ff-network the more knowledge you can store. The network is trained to put the relevant knowledge from the wide hidden layer into the output (i.e. into the token stream).
I fail to see the problem in the fact that the hidden activation is not accessible to future tokens. The ff-nn is just a component to store and inject knowledge. It is wide because it has to store a lot of knowledge, not because the hidden activation has to be wide. The full content of the hidden activation in isolation just is not that relevant.
Case in point: nowadays the ff-nns actually look different from the ones in GPT-3. They have two parallel projections into the hidden dimension, with one acting as a gate on the other: the design has changed to make it possible to actively erase parts of the hidden state!
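To illustrate the contrast, here is a minimal sketch in PyTorch. The dimensions, module names, and the specific SwiGLU-style gating are my choices for illustration (biases, dropout, and exact hidden widths vary between models), not code taken from any particular architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassicFFN(nn.Module):
    """GPT-3-style block: widen, apply a nonlinearity, project back into the token stream."""
    def __init__(self, d_model=512, d_hidden=2048):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        h = F.gelu(self.up(x))   # wide hidden activation: purely internal
        return self.down(h)      # only this projection reaches later layers

class GatedFFN(nn.Module):
    """SwiGLU-style block: a second projection into the hidden dimension acts as a
    multiplicative gate, so parts of the wide hidden state can be attenuated or zeroed."""
    def __init__(self, d_model=512, d_hidden=2048):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.gate = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        h = F.silu(self.gate(x)) * self.up(x)   # gate erases or scales hidden units
        return self.down(h)

x = torch.randn(1, 8, 512)                          # (batch, tokens, d_model)
print(ClassicFFN()(x).shape, GatedFFN()(x).shape)   # both: torch.Size([1, 8, 512])
```

In both variants the wide hidden activation never leaves the block; only the down-projection is written back into the token stream.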
Also: this seems very different from what you are talking about in the post; it has nothing to do with "the next run". The hidden layer activations aren't even "accessible" in the same run! They are purely internal "gears" of a subcomponent.
It also seems to me like you have retreated from
with its intermediate states ("working memory") completely wiped.
to "intermediate activations of ff-components are not accessible in subsequent layers and because these are wider than the output not all information therein contained can make it into the output".
The way METR time horizons tie into AI 2027 is very narrow: as a trend not even necessarily on coding/software engineering skills in general, but on machine learning engineering specifically. I think that is hard to attack except by claiming that the trend will taper off. AI 2027 does not require unrealistic generalisation.
The reason why I think that time horizons are much more solid evidence of AI progress than earlier benchmarks is that the calculated time horizons explain the trends in AI-assisted coding over the last few years very well. For example, it's not by chance that "vibe coding" became a thing when it became a thing.
I have computed time horizon trends for more general software engineering tasks (i.e. with a bigger context) and my preliminary results point towards a logistic trend, i.e. the exponential is already tapering off. However, I am still pretty uncertain about that.
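To illustrate what I mean by the trend tapering off, here is a rough sketch of the kind of comparison involved, on entirely made-up numbers (not my actual data): fit both an exponential and a logistic curve to a series of time horizons and see which one describes the data better.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up 50%-success time horizons (minutes) at 3-month intervals -- illustrative only.
months = np.arange(0, 36, 3.0)
horizon = np.array([2., 3., 5., 8., 14., 22., 33., 45., 55., 62., 66., 68.])

def exponential(t, a, r):
    return a * np.exp(r * t)

def logistic(t, cap, r, t0):
    return cap / (1.0 + np.exp(-r * (t - t0)))

exp_params, _ = curve_fit(exponential, months, horizon, p0=(2.0, 0.1), maxfev=10000)
log_params, _ = curve_fit(logistic, months, horizon, p0=(70.0, 0.3, 18.0), maxfev=10000)

def sse(model, params):
    return float(np.sum((horizon - model(months, *params)) ** 2))

print("exponential SSE:", sse(exponential, exp_params))
print("logistic    SSE:", sse(logistic, log_params))
# On a saturating series like this one, the logistic fit has the much smaller error;
# whether the real data end up looking like this is exactly what I am still uncertain about.
```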