
I wasn't out of school back then, but I can imagine the board meeting went something like this: Boss: "It's going to cost 10mm dollars to fix this Y2K bug? Can you verify our systems crash by running a simulation?" Engineer: "You mean manually changing the computer time and seeing if our code still works? If so, we already did that, and our code failed." Boss: "Thanks! You're approved."
As an example, remove all mentions of biology or physics and see if it can develop biology or physics from first principles.
Great post! This is a path that might allow us to build AI systems that can help our leaders, and us as voters, navigate future conflict. I am interested in pushing the 1913-LLM to predict General Relativity. If it can't, then we should redesign its value functions until it can. This is a path forward for developing something closer to AGI.
This would probably happen if you locked two uninteresting people into cells next to each other with nothing to do for a month. Maybe they tell each other their favorite stories. Maybe they try to convert each other for a bit. Eventually they would fall into attractors of different kinds.
I am going to be the devil's advocate. Stanislav Petrov is the Soviet officer who disregarded a nuclear missile false alarm and thus likely prevented a nuclear war. He figuratively said, "nothing ever happens." Onerous regulation in nuclear power, aviation, flying cars, and medical tech like Neuralink holds us back in ways we can't even imagine.