Comments

I disagree with you: I think there is a potentially large upside for Putin if he can make the West/NATO withdraw their almost unconditional support for Ukraine, and an even larger one if he can somehow drive a wedge into the alliance. It's a high-risk path for him to walk, but he could walk it if he is forced to: this is why most experts talk about "leaving him a way out" and "not forcing him into a corner". It's also the strategy the West is pursuing, as we haven't given Ukraine weapons that would enable it to strike deep into Russian territory.

I am also very concerned that nuclear game theory would break down during an actual conflict, as it would involve not just the US and Russia but many parties, each with its own government. Moreover, Article 5 commits NATO to respond to any attack on a member state, but it doesn't commit members to a nuclear response to a nuclear attack. I could see a situation where Russia threatens the territory of a non-nuclear NATO state with nukes unless the West backs down, and the US/France/UK commit only to a conventional strike in answer rather than a nuclear one, for fear of a nuclear strike on their own territory. In fact, it is under Putin himself that Russia's nuclear strategy apparently shifted to "escalate to de-escalate", which is exactly the situation we might end up in.

Fundamentally, Western leaders would have to play a game of chicken with an adversary who is not morally restrained and whose sanity they cannot fully assess.

From what I have read, and given how concerned nuclear experts are, I estimate the chance of Putin using a nuclear warhead in Ukraine over the course of the war at around 25%. Conditional on that happening, the probability of total nuclear war breaking out is probably less than 10%, as I see the West folding/de-escalating as much more likely.
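Those two numbers combine multiplicatively; a quick sketch of the arithmetic (both inputs are my own rough guesses, not established data):

```python
# Rough arithmetic for the estimates above (both numbers are guesses).
p_nuke = 0.25            # P(Putin uses a nuclear warhead in Ukraine)
p_war_given_nuke = 0.10  # upper bound on P(total nuclear war | nuke used)

# Unconditional probability of total nuclear war via this path:
p_total_war = p_nuke * p_war_given_nuke
print(f"P(total nuclear war via this path) < {p_total_war:.1%}")  # < 2.5%
```

So even under these pessimistic inputs, this particular path to total nuclear war comes out under a few percent.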

I am trying to improve my forecasting skills, and I was looking for a tool that would let me design a graph/network where I could place statements as nodes with attached probabilities (confidence levels) and then link the nodes so that the tool automatically computes joint or disjoint probabilities, etc.

It seems such a tool could be quite useful for a forecast with many inputs.

I am not sure if Bayesian networks or influence diagrams are what I am looking for, or whether they could be used for this purpose. Either way, I haven't found a particularly user-friendly tool for either of them.
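For what it's worth, the core computation such a tool would do is simple when linked statements are treated as independent; a minimal sketch (the `Node` class and its AND-style links are my own illustration, not an existing tool):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A statement with an attached probability (confidence level)."""
    statement: str
    prob: float
    # AND-links: this node's event is conditional on its parents.
    parents: list = field(default_factory=list)

    def joint(self) -> float:
        """Joint probability of this node together with all its ancestors,
        naively treating the linked conditional probabilities as given."""
        p = self.prob
        for parent in self.parents:
            p *= parent.joint()
        return p

# Example: a chained forecast.
a = Node("Russia uses a tactical nuke in Ukraine", 0.25)
b = Node("Total nuclear war breaks out", 0.10, parents=[a])
print(b.joint())  # 0.025
```

A real Bayesian-network tool would go further (full conditional probability tables, evidence propagation), but this chain-multiplication is the piece most multi-input forecasts need first.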

It is quite common to hear people expecting a big jump in GDP after we have developed transformative AI, but after reading this post we should be more precise: real GDP will likely go up, but nominal GDP could stall or even fall due to the impact of AI on employment and prices. Our societies and economic models are not built for such a world (think of falling government revenues, or real debt burdens increasing).
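A toy illustration of how real GDP can rise while nominal GDP stays flat (the numbers are purely made up):

```python
# Toy numbers (purely illustrative): AI doubles real output while
# halving the price level, so nominal GDP is unchanged.
real_gdp_before, price_level_before = 100.0, 1.0
real_gdp_after, price_level_after = 200.0, 0.5

nominal_before = real_gdp_before * price_level_before  # 100.0
nominal_after = real_gdp_after * price_level_after     # 100.0

# Real growth is +100%, nominal growth is 0%. Taxes are levied on
# nominal flows and debts are denominated in nominal terms, which is
# why this combination is so awkward for our current institutions.
print(nominal_before, nominal_after)
```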


We could study such a learning process, but I am afraid the lessons learned won't be very useful.

Even among human beings, there is huge variability in how much those emotions arise, or whether they arise at all, and in how much they affect behavior. Worse, humans tend to hack these feelings (amplifying or suppressing them) to achieve other goals: e.g., MDMA to increase love/empathy, or drugs given to soldiers to turn them into soulless killers.

An AGI would have a much easier time hacking such pro-social reward functions.

Could anyone who downvoted explain why? Was it too harsh, or do you disagree with the idea?

Human beings and other animals have parental instincts (and empathy in general) because these were evolutionarily advantageous for the populations that developed them.

AGI won't be subject to the same evolutionary pressures, so every alignment strategy relying on empathy or social reward functions is, in my opinion, hopelessly naive.

The dire part of alignment is that we know most human beings themselves are not internally aligned; they become aligned only because they benefit from living in communities. And in general, most organisms by themselves are "non-aligned", if you allow me to bend the term to mean anything that might consume or expand into its environment to maximize some internal reward function.

But all biological organisms are embodied and have strong physical limits, so most organisms become part of self-balancing ecosystems. 

AGI, being an unembodied agent, doesn't have strong physical limits on its capabilities, so it is hard to see how it (or they) could find cooperation advantageous, or how they could be forced to cooperate.

Very engaging account of the story; it was a pleasure to read. I have often thought about what drives some people to start such dangerous enterprises, and my hunch is that, as you said, they are at the tail of useful evolutionary traits: some hunters, or maybe even entire populations, had higher fitness because they took greater risks. From a utilitarian perspective it might be a waste of human potential for a climber to die, but for every extreme climber there is maybe an astronaut, a war doctor, a war journalist, a soldier, and so on.

Answer by sairjy, May 04, 2022

The Chinchilla paper implies that a 10T-parameter model would require 1.30e+28 FLOPs, or 150 million petaflop-days, to train. A state-of-the-art Nvidia DGX H100 draws 10 kW and theoretically produces 8 petaflop/s in FP16. With a training efficiency of 50% and a training time of 100 days, it would take about 375,000 DGX H100 systems to train such a model, for a total power draw of roughly 3.7 gigawatts. That's a factor of 100x larger than any supercomputer in production today. Also, orchestrating 3 million GPUs seems well beyond our engineering capabilities.
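The arithmetic can be checked in a few lines, using the common Chinchilla rules of thumb (~20 tokens per parameter, ~6 FLOPs per parameter per token) and the hardware figures quoted above:

```python
# Chinchilla-style compute estimate for a 10T-parameter model.
params = 10e12               # 10 trillion parameters
tokens = 20 * params         # ~2e14 tokens (Chinchilla heuristic)
flops = 6 * params * tokens  # ~1.2e28 FLOPs, near the 1.30e+28 figure

petaflop = 1e15
dgx_pflops = 8               # theoretical FP16 petaflop/s per DGX H100
efficiency = 0.5             # assumed training efficiency
training_days = 100

flops_per_dgx = dgx_pflops * petaflop * efficiency * training_days * 86400
n_dgx = flops / flops_per_dgx
print(f"{n_dgx:,.0f} DGX H100 systems")  # ~347,000, in the same ballpark
                                         # as the 375,000 above (which
                                         # started from 1.30e+28 FLOPs)
print(f"{n_dgx * 10 / 1e6:.1f} GW")      # at 10 kW per system
```

The small gap versus the 375,000 figure comes from the 6ND approximation (1.2e28) versus the paper's exact 1.30e+28; the conclusion is unchanged either way.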

It seems unlikely we will see 10T-parameter models trained according to the Chinchilla scaling laws any time in the next 10 to 15 years.

If 65% of AI improvements will come from compute alone, I find it quite surprising that the post author assigns only a 10% probability to AGI by 2035. By that time, we should have between 20x and 100x more compute per dollar. We can also easily forecast that AI training budgets will increase 1000x over that time, as a shot at AGI justifies the ROI. I think he is putting too much weight on the computational performance of the human brain.
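Multiplying the two factors together (both are my forecasts, not established figures) gives the range of effective training compute growth:

```python
# Back-of-envelope: hardware price-performance gain x budget growth.
compute_per_dollar = (20, 100)  # 20x to 100x by ~2035 (forecast)
budget_growth = 1000            # 1000x larger training budgets (forecast)

for gain in compute_per_dollar:
    print(f"{gain * budget_growth:,}x effective training compute")
# i.e. 20,000x to 100,000x over today's largest training runs
```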
