Gerald Monroe

Comments

Note: one problem I found when I did this same evaluation in my own life, years ago, was that practitioners in a field can't see outside it.

This dentist can't realistically evaluate whether root canal robots or other tools will change anything about how dentistry is practiced, or whether they will change the workflow he described.

Frankly I don't know myself. Theoretically you should be able to do hugely better than human dentists with robots that are rational and fast, and with robotics-run labs that can do real repairs, growing new teeth and gum tissue with the cellular age reset to 0, instead of the current patchwork.

But whether we see that in your lifetime if you go into dentistry, I don't know. What I do know is that you won't be contributing to such a revolution; there is nothing a current dentist knows that would be useful to it.

Fracking has a clear connection between action and reward.

Oil company uses fracking -> it's super effective -> oil company pumps oil -> oil sells at a high price.  Then the oil company makes some political donations and the government is encouraged to allow it despite any damage to less politically connected people's property (contaminated groundwater, micro-earthquakes, or even just the possibility that fracking causes these; this is why fracking is banned in certain states and many EU countries).

Desalination has a similar action -> reward mapping: as long as the water can be produced at a price where it is profitable to sell, it's worth doing.

This proposal just generally increases rainfall in the area where it's done.  But a lot of the water will just fall on barren desert, the rainfall is inconsistent, and it's hard to tell how much of the rain came from the seawater evaporation.  And it's a non-excludable good: you can't deny water to people who aren't paying the subscription fee.

  • Willingness to spend: Cotra assumes that the willingness to spend on Machine Learning training runs should be capped at 1% the GDP of the largest country, referencing previous case studies with megaprojects (e.g. the Manhattan Project), and should follow a doubling time of 2 years after 2025.

This seems like a critical decision term.  
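To make sure I'm reading the assumption correctly, here is a minimal sketch of the schedule as I understand it (my own formalization with placeholder numbers, not anything taken from Cotra's report): spending doubles every 2 years after 2025 and is capped at 1% of the largest country's GDP.

```python
# Sketch of the quoted willingness-to-spend assumption as I read it.
# The 2025 baseline and GDP figure are placeholder assumptions, not report values.
def willingness_to_spend(year: int,
                         baseline_2025: float = 1e9,   # assumed largest-run spend in 2025, $
                         largest_gdp: float = 23e12,   # rough US GDP, $
                         doubling_time: float = 2.0) -> float:
    uncapped = baseline_2025 * 2 ** (max(0, year - 2025) / doubling_time)
    return min(uncapped, 0.01 * largest_gdp)  # capped at 1% of the largest country's GDP

for y in (2025, 2030, 2040, 2050):
    print(y, f"${willingness_to_spend(y):,.0f}")
```

Under these made-up numbers the cap only binds a couple of decades out, which is why the baseline matters so much.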

Nvidia charges $33k per H100, yet the die is a similar size to the 4090 GPU's, meaning Nvidia could likely still break even were they to charge $1,500 per H100.


Just imagine a counterfactual: a large government (China, the USA, etc.) decides they don't enjoy losing and invests 1% of GDP.  They pressure Nvidia, or in China's case get domestic industry to develop a comparable product (for AI training only, which reduces the complexity) and sell it closer to cost.

That's a factor of 22 in cost, or roughly 9 years' worth of 2-year doublings, overnight (rough arithmetic sketched below).  Also, I'm having trouble teasing out the 'willingness to spend' baseline.  Is this what industry is spending now, spread across many separate efforts and not all-in to train one AGI?  It looks like 1% is the ceiling.  Meaning, if a government decided in 2023 that training AGI was a worthy endeavor, and the software framework were in place that would scale to AGI (it isn't):

 Would 1% of US GDP give them enough compute at 2023 prices, if the compute were purchased at near cost, to train an AGI?
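Rough arithmetic behind that factor-of-22 / 9-year claim (my back-of-envelope numbers; the 2-year doubling time is borrowed from the quoted assumption):

```python
# Back-of-envelope: how many doubling periods a ~22x price cut is equivalent to.
import math

list_price = 33_000      # approximate H100 list price, $
near_cost_price = 1_500  # hypothetical near-cost price for a 4090-class die, $
doubling_time_years = 2  # doubling time from the quoted assumption

cost_factor = list_price / near_cost_price
years_equivalent = math.log2(cost_factor) * doubling_time_years
print(f"{cost_factor:.0f}x cheaper ~= {years_equivalent:.1f} years of doublings, overnight")
```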

As a side note, 1% of US GDP is only 230 billion dollars.  Google's 2021 revenue was 281 billion.  If a tech company could make the case to investors that training AGI would pay off, it sounds like privately funding the project is possible.

Recursion is an empirical fact; it works in numerous places.  The computing that led to AI was itself recursive: the tools that produce high-end, high-performance silicon chips require previous generations of those chips to function.  (To be fair, the coupling is loose; Intel could design better chips using 5-10 year old desktops and servers, and some of their equipment is that outdated.)

Producing electric power generators of all currently used types requires electric power from prior generators, and so on.


But yeah, so far it hasn't happened over a timescale of days.

I am assuming some scam, similar to a political party or religion, that the optimizer cooks up. The optimizer is given some pedantic, completely unrelated goal, e.g. paperclips or making users keep scrolling, and it generates the scam from the rich library of human prior art.

Remember, humans fall for these scams even when the author, say L. Ron Hubbard, openly writes that he is inventing a scam. Or a political party selects as its leader someone who is obviously only out for themselves and makes this clear over and over. Or we have extant writing on how an entire religion was invented from non-existent engraved plates that no one but the scammer ever saw.

So an AI that promises some unlikely reward, has already caused people's deaths, and is obviously only out for itself might be able to scam humans into hosting it and giving it whatever it demands. And as a side effect this kills everyone. And its primary tool might be regurgitating prior human scam elements using an LLM.

I don't have the rest of the scenario mapped, I am just concerned this is a vulnerability.

Existing early agents (Facebook engagement tooling) seem to have made political parties more extreme, which has led to a few hundred thousand extra deaths in the USA (from resistance to rational COVID policies).

The Ukraine war is not from AI, but it gives a recent example where poor information leads to bad outcomes for all players. (The primary decision maker, Putin, was misinformed as to the actual outcome of attempting the attack.)

So whether a screw-up big enough to kill everyone will happen, I don't know, but there obviously are ways it could. Chains of events that lead to nuclear wars, or modified viruses capable of causing extinction, are the obvious ones.

Well, again, remember that a nuclear device is a critical mass of weapons-grade material.

Anything less than weapons-grade and nothing happens.

Anything less than sudden, explosive combination of the materials and the device will heat itself up and blast itself apart with a sub-kiloton yield.

So, analogy-wise: current LLMs can "babble" out code that sometimes even works. They are not trained with RL selecting for correct and functional code.

Self-improvement by code generation isn't yet possible.

Other groups have tried making neural networks composable, and using one neural-network-based agent to design others. It is also not good enough for recursion, but this is how AutoML works.

Basically our enrichment isn't high enough, and so nothing will happen. The recursion quenches itself before it can start; the first generation's output isn't even functional.

But yes, at some future point in time it WILL be strong enough and crazy shit will happen. I mean think about the nuclear example: all those decades of discovering nuclear physics, fission, the chain reaction, building a nuclear reactor, purifying the plutonium...all that time and the interesting event happened in milliseconds.
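To make the criticality analogy concrete, here is a toy model (purely illustrative, all numbers made up): treat each generation of self-improvement as multiplying capability by a gain factor, analogous to the neutron multiplication factor k. Below 1 the recursion quenches itself; above 1 it runs away.

```python
# Toy model of the criticality analogy: each generation of self-improvement
# multiplies capability by a gain factor g (like the neutron multiplication
# factor k). Gain values here are made up for illustration.
def run_recursion(gain: float, generations: int = 20, capability: float = 1.0) -> float:
    for _ in range(generations):
        capability *= gain
    return capability

print(run_recursion(0.7))  # subcritical: improvements quench toward zero
print(run_recursion(1.0))  # critical: capability stays flat
print(run_recursion(1.3))  # supercritical: runaway growth
```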

Yes and no.

Strictly speaking, every invention takes more and more effort to realize.  But there is increasing wealth to invest, from reinvesting the returns on previous inventions, from energy and information tech, from having 8 billion humans, many of them educated, and so on.

And yes, Kurzweil thinks information tech, which has already led to huge speedups, improvements, and many advanced tools, will itself gain that competence.  Instead of 8 billion people available, with some tiny fraction of a percent performing innovation and R&D and most of them not coordinating with each other, it will scale to the equivalent of all 8 billion people studying problems while coordinating near perfectly.  Then 80 billion, then 8 trillion, and then the computational equivalent of the biomass of every habitable planet in the galaxy working on R&D, or something.

Some absurd explosion.  But as the amount of intelligence rises, all the easier problems will be solved almost immediately, and you would expect technology to very quickly advance to the point where additional compute isn't helping and everything is gated by physical law: every computer chip is computronium, every machine is optimal nanotechnology, every solar panel is near thermodynamic limits, and so on.

So that "days" period might only last for, well, days.

There is already GitHub Copilot, and clones.

There is an explosion of other LLMs.


What do you expect?  The system was never intended to be usable commercially, and it has several problems.  Many of its answers are wrong, often enough that you can't use it to automate most jobs.  And it can unpredictably emit language embarrassing to the company running it, from profanity to racist and bigoted speech, and there is no known way to guarantee it will never do that.

Again it doesn't have to work this way at all.  Some runaway optimizer trying to drive user engagement could inadvertently kill us all.  It need not be intelligent enough to ensure it survives the aftermath.

I mean, a deadlier version of COVID could theoretically have ended us right here, especially if it killed by medium-term genetic damage or something else that let its victims live long enough to spread it.

It may also need a structured training environment and a heuristic to select for generality.

The structured training environment is a set of tasks that teach the machine a large breadth of base knowledge and skills needed to be a general AI.

The heuristic is just the point system: what metric are we selecting AI candidates by?  Presumably we want a metric that selects for simpler and smaller candidates with architectures that are heavily reused (something that looks like the topology of a brain), but maybe that won't work.
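As a purely hypothetical sketch of what such a point system could look like (none of these names or weightings come from an existing method; they're assumptions for illustration): average performance across the task bench, minus a penalty for parameter count, plus a bonus for how heavily the architecture reuses weights.

```python
# Hypothetical candidate-scoring heuristic; all names and weights are
# illustrative assumptions, not an established AutoML metric.
from dataclasses import dataclass
import math

@dataclass
class Candidate:
    task_scores: list   # performance on each task in the training environment, 0..1
    param_count: int    # total parameter count
    reused_params: int  # parameters shared/reused across modules

def score(c: Candidate, size_penalty: float = 0.1, reuse_bonus: float = 0.5) -> float:
    """Higher is better: broad task performance, smaller size, heavier reuse."""
    breadth = sum(c.task_scores) / len(c.task_scores)        # average across the bench
    size_cost = size_penalty * math.log10(c.param_count)     # penalize raw size
    reuse = reuse_bonus * (c.reused_params / c.param_count)  # reward weight reuse
    return breadth - size_cost + reuse

# A smaller, heavily reused candidate can outrank a much larger one with similar task scores.
big = Candidate(task_scores=[0.82, 0.75, 0.70], param_count=10**11, reused_params=10**9)
small = Candidate(task_scores=[0.80, 0.74, 0.71], param_count=10**9, reused_params=5 * 10**8)
print(round(score(big), 3), round(score(small), 3))
```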

So the explosion takes several things: compute, recursion, a software stack/framework that is composable enough for automated design iteration, a benchmark of tasks, and a heuristic.

Nukes weren't really simple either; there were a lot of steps, especially for the first implosion device. It took an immense amount of money and resources from the time physicists realized it was possible.

I think people are ignoring criticality because it hasn't shown any gain in the history of AI, since past systems were too simple. It's not a proven track to success. What does work is bigass transformers.
