Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
Wow, that's good, right?
I think a perfect balance of power is very unlikely, so in practice only the most powerful AGI (most likely the first one created) will matter.
I don't think a measure of "coherence" which implies that an ant is more coherent than AlphaGo is valuable in this context.
However, I do think that pointing out the assumption about the relationship between intelligence and coherence is valuable.
I always thought "shoggoth" and "pile of masks" were the same thing, and that a "shoggoth with a mask" is just the case where one mask has become the default, so an inexperienced observer might think the whole entity is that mask.
Maybe you are preaching to the choir here.
You can't selectively breed Labradors if the first wolf kills you and everyone else.
and so it randomly self-modified to be more like the second one.
Did you mean "third one"?
It seems to me that the first two points should be reversed. If you still don't understand the "why" and someone is trying to explain the "how", you tend to get bored.
It is not clear whether this happened on its own or whether they deliberately trained the model not to make such mistakes.
Perhaps, in similar future studies, it would be worth keeping half of the discovered tasks secret in order to test future models on them.