Gerald Monroe

Right. I see this as a problem as well: asking the model if it's sure injects information if we only ask on wrong answers. If we always ask, it may disturb more right answers than it fixes wrong ones.

It's also accuracy-dependent: if the model is 99 percent accurate on a subtask, asking if it's sure may degrade accuracy, while it may improve accuracy on a subtask where it's only 50 percent accurate.
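
A toy model of that tradeoff, with made-up fix/flip probabilities, just to show why the effect depends on baseline accuracy:

```python
# Toy model (assumed numbers): asking "are you sure?" fixes a wrong answer with
# probability p_fix, but also flips a right answer to wrong with probability p_break.
def accuracy_after_prompt(base_accuracy: float, p_fix: float = 0.3, p_break: float = 0.05) -> float:
    return base_accuracy * (1 - p_break) + (1 - base_accuracy) * p_fix

for acc in (0.99, 0.50):
    print(f"{acc:.0%} -> {accuracy_after_prompt(acc):.1%}")
# e.g. 99% -> ~94% (asking hurts); 50% -> ~63% (asking helps)
```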

Or in other words, prompting it this way might improve its score on AP English but hurt it on the bar exam.

From his worldview it would be like a cancer patient getting a stage 4 diagnosis.

This is an 8-13x decrease in required memory, and proportionally in compute, unless they increased convolutions a lot.

It means 18 months to 6 years of AI compute progress overnight. (18 months because compute dedicated to AI is doubling every 6 months and 8x is three doublings; 6 years is roughly how long it takes to get 8 times the compute per dollar.)
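
A quick sketch of that arithmetic, taking the comment's 6-month spend-doubling figure and assuming compute per dollar doubles roughly every 2 years (which is what makes 8x come out to about 6 years):

```python
import math

memory_reduction = 8                       # low end of the claimed 8-13x decrease
doublings = math.log2(memory_reduction)    # 3 doublings for 8x

# Figure from the comment: compute dedicated to AI doubles every 6 months.
months_of_scaling_progress = doublings * 6   # = 18 months

# Assumption (not in the comment): compute per dollar doubles roughly every 2 years.
years_of_hardware_progress = doublings * 2   # = 6 years

print(f"{months_of_scaling_progress:.0f} months of compute-spend growth")
print(f"{years_of_hardware_progress:.0f} years of price-performance improvement")
```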

~~Maybe Meta did a sloppy job of benchmarking the model.~~

Update: from reading the paper, they did not. They appear to have replicated https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications and found that the scaling laws were even MORE favorable to more tokens than the LessWrong post suggested. They also made slight tweaks to the transformer architecture. What's notable is that it's a collaboration; the tweaks came from multiple other AI labs.
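
For reference, a rough sketch of the Chinchilla-style heuristic the linked post discusses, using the commonly cited "~20 training tokens per parameter" rule of thumb rather than the paper's exact fitted constants (the numbers are illustrative only):

```python
# Rough Chinchilla-style rule of thumb: compute-optimal training uses on the
# order of ~20 tokens per model parameter (illustrative, not the fitted value).
TOKENS_PER_PARAM = 20

def compute_optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal training tokens for a model with n_params parameters."""
    return TOKENS_PER_PARAM * n_params

# Example: a 13B-parameter model would "want" roughly 260B training tokens under
# this heuristic; LLaMA reportedly trained its smaller models on ~1T tokens,
# i.e. well past this point, trading extra training compute for a smaller,
# cheaper-to-run model.
print(f"{compute_optimal_tokens(13e9):.2e} tokens")
```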

> You, on the other hand, are proposing a novel training procedure, and one which (I take it) you believe holds more promise for AGI than LLM training.


It's not really novel. It just couples together three ideas:

(1) The idea of an AGI gym, which was implicit in the Gato paper and is currently being worked on: https://github.com/google/BIG-bench

(2) Noting that there are papers on neural architecture search (https://github.com/hibayesian/awesome-automl-papers) and activation function search (https://arxiv.org/abs/1710.05941), that SOTA systems use multiple neural networks in a cognitive architecture (https://github.com/werner-duvaud/muzero-general), and that an AGI design is some cognitive architecture of multiple models, where no living human yet knows which architecture will work (https://openreview.net/pdf?id=BZ5a1r-kVsf).

So we have layers of design choices here, and the layers look a lot like each other and can be captured in a framework:

- Activation functions, which are graphs of primitive math functions drawn from the set of "all primitive functions discovered by humans".

- Network layer architectures, which are graphs of (activation function, connectivity choice) pairs.

- Network architectures, which are graphs of layers. (You can also subdivide into functional modules of multiple layers, like a column; the choice of how you subdivide can also be represented as a graph choice.)

- Cognitive architectures, which are graphs of networks.

And we can represent all of this as a graph of graphs of graphs of graphs, and we want the ones that perform like an AGI. That's why I said the overall "choice" is just a coordinate in a search space, which is just a binary string.
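
A minimal sketch of that nesting, with hypothetical class names, just to show how a full design collapses to a single serializable object and hence a single coordinate (bit string) in the search space:

```python
from dataclasses import dataclass
import json

# Each level is "a graph of the level below": nodes plus edges.
# All names and fields here are hypothetical, purely to illustrate the nesting.

@dataclass
class ActivationGraph:
    primitives: list[str]              # e.g. ["x", "mul", "sigmoid"]
    edges: list[tuple[int, int]]

@dataclass
class LayerGraph:
    activations: list[ActivationGraph]
    connectivity: list[tuple[int, int]]

@dataclass
class NetworkGraph:
    layers: list[LayerGraph]
    wiring: list[tuple[int, int]]

@dataclass
class CognitiveArchitecture:
    networks: list[NetworkGraph]
    dataflow: list[tuple[int, int]]    # which network feeds which

def encode(design: CognitiveArchitecture) -> str:
    """Serialize the whole nested design into a bit string: one coordinate in the search space."""
    as_json = json.dumps(design, default=lambda o: o.__dict__, sort_keys=True)
    return "".join(f"{byte:08b}" for byte in as_json.encode())
```

Searching over candidate designs is then searching over these bit strings.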

You could make an OpenAI Gym-wrapped "AGI designer" task.

(3) Noting that LLMs seem to be perfectly capable of general tasks, as long as the tasks are simple. Which means we are very close to being able to RSI right now.

No lab right now has enough resources in one place to attempt the above, because it means training many instances of systems larger than the current maximum-size LLMs (you need multiple networks in a cognitive architecture) to find out what works.

They may allocate this soon enough, and there may be a more dollar-efficient way to accomplish the above that gets tried first, but you'd only need a few billion to try this...

That's exactly what I am talking about. One divergence in our views is that you haven't carefully examined current-gen AI "code" to understand what it does. (Note that some of my perspective comes from the fact that all AI models look similar at the layer I work at, on runtime platforms.)

https://github.com/EleutherAI/gpt-neox

If you examine the few thousand lines of Python source, especially the transformer model, you will realize that functionally, the pipeline I describe of "input, neural network, output, evaluation" is all that the above source does. You could in fact build a "general framework" that would allow you to define many AI models, almost all of which humans have never tested, without writing a single line of new code.
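
A minimal sketch of what such a framework could look like, assuming PyTorch; the registry and config format are hypothetical, and the point is only that a "new" model becomes new data rather than new code:

```python
import torch.nn as nn

# Hypothetical registry: each entry maps a config string to a layer constructor.
LAYER_REGISTRY = {
    "linear":    lambda cfg: nn.Linear(cfg["in"], cfg["out"]),
    "relu":      lambda cfg: nn.ReLU(),
    "gelu":      lambda cfg: nn.GELU(),
    "layernorm": lambda cfg: nn.LayerNorm(cfg["dim"]),
}

def build_model(config: list[dict]) -> nn.Module:
    """Build a model purely from a config: input -> network -> output, no new code per model."""
    return nn.Sequential(*[LAYER_REGISTRY[c["type"]](c) for c in config])

# A "new" model is just a new config:
model = build_model([
    {"type": "linear", "in": 512, "out": 2048},
    {"type": "gelu"},
    {"type": "linear", "in": 2048, "out": 512},
    {"type": "layernorm", "dim": 512},
])
```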

So the full process is:

[1] A benchmark of many tasks. Tasks must be autogradeable; human participants must be able to 'play' the tasks so we have a control-group score; tasks must push the edge of human cognitive ability (so the average human scores nowhere close to the max score, and top-1% humans do not max the bench either); and there must be many tasks with a rich permutation space (so it isn't possible for a model to memorize all permutations).

[2] A heuristic weighted score on this benchmark intended to measure how "AGI-like" a model is. It might be the RMSE across the benchmark, but with a lot of score weighting on zero-shot, cross-domain/multimodal tasks (a sketch of one way to compute this follows after [3]). That is, the kind of model that can use information from many different previous tasks on a complex exercise it has never seen before is closer to an AGI, or closer to replicating "Leonardo da Vinci", who had exceptional human performance presumably because of all this cross-domain knowledge.

[3] In the computer science task set, there are tasks to design an AGI for a bench like this. The model proposes a design, and if that design has already been tested, it immediately receives detailed feedback on how that design performed.
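
As a sketch of item [2] above, with made-up weights and RMS aggregation as just one possible choice:

```python
import math

# Hypothetical task result format: a normalized score in [0, 1] plus tags describing
# the setting. Zero-shot and cross-domain tasks get extra weight, per the heuristic.
def agi_score(results: list[dict],
              zero_shot_weight: float = 3.0,
              cross_domain_weight: float = 3.0) -> float:
    weighted_sq, total_weight = 0.0, 0.0
    for r in results:
        w = 1.0
        if r.get("zero_shot"):
            w *= zero_shot_weight
        if r.get("cross_domain"):
            w *= cross_domain_weight
        weighted_sq += w * r["score"] ** 2
        total_weight += w
    return math.sqrt(weighted_sq / total_weight)   # weighted RMS of normalized scores

print(agi_score([
    {"score": 0.9, "zero_shot": False, "cross_domain": False},
    {"score": 0.4, "zero_shot": True,  "cross_domain": True},
]))
```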

As I mentioned, the "design an AGI" subtask can be much simpler than "write all the boilerplate in Python", but these models will be able to do that if needed.  
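 
A minimal sketch of the "AGI designer" task (item [3]) as a Gym-style environment, using the gymnasium API; the design encoding is just a bit string and the benchmark call is a stub, both purely illustrative:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class AGIDesignerEnv(gym.Env):
    """Hypothetical sketch: the agent proposes a design (a bit string, i.e. a coordinate
    in the search space); any design already benchmarked gets its cached score back
    immediately as detailed feedback."""

    def __init__(self, design_bits: int = 64):
        super().__init__()
        self.action_space = spaces.MultiBinary(design_bits)
        self.observation_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
        self.tested: dict[bytes, float] = {}   # cache of already-evaluated designs

    def _benchmark(self, design: np.ndarray) -> float:
        # Stub: in reality this would train and evaluate the proposed architecture
        # on the full task suite, which is the expensive part of the loop.
        return float(design.sum()) / design.size

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(1, dtype=np.float32), {}

    def step(self, action: np.ndarray):
        key = action.tobytes()
        if key not in self.tested:
            self.tested[key] = self._benchmark(action)   # new design: pay the evaluation cost
        score = self.tested[key]                         # known design: instant feedback
        obs = np.array([score], dtype=np.float32)
        return obs, score, True, False, {"score": score}
```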

As task scores approach human level across a broad set of tasks, you have an AGI. You would expect it to almost immediately improve to a low superintelligence. As AGIs get used in the real world and fail to perform well at something, you add more tasks to the bench, and/or automate creating simulated scenarios that use robotics data.

I agree. Given the power that comes from developing AI systems (money/reputation/military/hype), I was wondering where the symmetry is for those who wish to stop AI from being developed. The 'doomer' faction won't be benefitting from AI over time, and thus their relative power would grow weaker and weaker. Assuming AI systems have utility value to humans at least initially, if the AI systems take a treacherous turn it will be later, after they have initially provided large benefits.

With this corporate battle right now, Altman is going to announce whatever advances they have made, raise billions of dollars, and in the next squabble he will have the support of people he has personally enriched. I heard a rumor it works out to 3.6 million per employee with the next funding round, with a 2-year cliff, so 1.8 million/year on the line. This would be why almost all OpenAI employees stated they would leave over it. So it's a direct example of the general symmetry issue mentioned above. (Leaving would also pay: I suspect Microsoft would have offered more liquid RSUs, probably matched the OpenAI base, and maybe added large signing bonuses. Not 1.8m, but good total compensation. This is just me speculating; I don't know the offer details, and it's likely Microsoft hadn't decided.)

The only 'obvious' way I could see doomers gaining power with time would be if early AI systems cause mass murder events or damage similar to nuclear meltdowns.  This would give them political support, and it would scale with AI system capability, as more powerful systems cause more and more deaths and more and more damage.

What would be your defense?  Agents successful at seeking power have more power.  

So your model is that people can make big LLMs, and the innovations from OpenAI and from open source will eventually all be in one large model, aka "GPT-4.1". But each LLM shop, while free of encumbrances and free to seek maximum profit, would not have the necessary concentration of money and talent in one place to develop AGI.

Instead they would simply keep making small deltas to their product, something a less talented and GPU-poorer crew could do, and capabilities would be stuck in a local optimum.

So you believe this would push back AGI either by several years (eventually the staff at these smaller shops would skill up from experience, and as compute gets cheaper they would eventually have what $100B of compute will buy in 2024), or possibly longer if there is no smooth path of small incremental steps from GPT-4.1 to AGI.

I will add one comment to this: it's not actually a threshold of "GPT-4.1 to AGI". Assuming you believe RSI will work, you need a good enough seed model plus sufficient compute to train and benchmark thousands of automatically generated AGI candidates.

GPT-4.1 plus a reinforcement learning element might be enough for the "seed AI".

As Zvi mentioned in one of the roundups, the conventional wisdom for entering a new monopolistic tech niche is to grow as fast as possible.

So it's likely that OpenAI loses money per user. GitHub Copilot allegedly costs $40 in compute per $20-a-month subscriber.

So yes, you are right, but no, it doesn't matter, because there are other variables. The cost of compute is driven up by outside investment. If somehow dynamiting OpenAI caused all the outside investors to go invest somewhere else - sort of like the hype cycles for NFTs or crypto - the cost of compute would drop.

For example, Nvidia is estimated to pay about $3,000 to build each H100. If Nvidia charged $5,000 a card and stopped charging a 20 percent software license fee, that would cut the compute cost by more than half*, making current AI models at current prices more than profitable.

Nvidia would do this in the hypothetical world of "investors get bored and another AI winter begins". This neglects Nvidia reducing its costs and developing cards that are cheaper to build per unit of LLM performance, which it is obviously doing.

*Quick and dirty sanity check: assuming 50 percent utilization (the GPU is bounded by memory I/O), it would use about $33,000 in electricity over 5 years and currently costs $50,000 at current prices ($25k list price, $25k license fee). Were Nvidia to simply charge a more modest margin, the all-in cost would drop from $83k to $38k. (Data center electricity is probably cheaper than 13 cents/kWh, but there are costs for backup power and other systems.)
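
Restating the footnote's arithmetic with the figures above (the $33k electricity estimate is taken as given; it depends on assumed power draw, overhead, and the 13 cents/kWh rate):

```python
# Figures from the comment above; the electricity estimate is taken as given and
# depends on assumed power draw, cooling overhead, and a ~$0.13/kWh rate.
build_cost      = 3_000    # estimated cost to build one H100
list_price      = 25_000
license_fee     = 25_000
electricity_5yr = 33_000   # 5-year electricity estimate at ~50% utilization

all_in_today      = list_price + license_fee + electricity_5yr   # ~$83k
all_in_low_margin = 5_000 + electricity_5yr                      # ~$38k at a $5k card price, no license fee

print(all_in_today, all_in_low_margin, all_in_low_margin / all_in_today)
```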

Conclusion: what's different now is that general AI is bringing in enough revenue to be a self-sustaining business. It's not an industry that can fold and go dormant, like failed tech startups whose products or services ceased to be available anywhere.

The time to blow up OpenAI was prior to the release of ChatGPT.

Were the AI startups high-risk and dependent on investor largesse to exist, then absolutely I would agree with your model.

But: https://sacra.com/c/openai/#:~:text=OpenAI has surpassed %241.3B,at the end of 2022.

Assume OpenAI was collecting a 10 percent profit margin (the other 90 percent paid for compute). Allegedly they burned about $540 million in 2022 when GPT-4 was developed. Call it $1 billion total cost for a GPT-4 (compute + staff compensation).

Then $130 million annual net revenue on a $1 billion investment is 13 percent ROI. In terms of "monetary free energy", that's net positive output. Large businesses exist that run on less margin.

I am not specced in economics or finance, but that looks like a sustainable business, and it's obviously self-amplifying. Assuming the nascent general AI* industry has easy short-term growth potential (as in, companies can rent access to a good model and collect many more billions), it's possibly self-sustaining. Even without outside investment, some of those profits would get reinvested into the next model, and so on.
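
Restating that arithmetic (the 10 percent margin and ~$1B development cost are the assumptions above, not reported financials):

```python
# Assumptions from the comment, not reported financials.
annual_revenue   = 1.3e9    # ~$1.3B annualized revenue (per the sacra.com link)
profit_margin    = 0.10     # assumed: 10% of revenue left after compute costs
total_investment = 1.0e9    # assumed all-in cost of developing GPT-4

net_revenue = annual_revenue * profit_margin   # ~$130M/year
roi = net_revenue / total_investment           # ~13% annual return

print(f"net revenue: ${net_revenue/1e6:.0f}M/year, ROI: {roi:.0%}")
```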

You can also think of a second form of "revenue" as investor hype. Fresh hype is created with each major improvement in the model that investors can perceive, and they bring more money each round.

While yes, the EMH is imperfect,* investors clearly see enormous short-term profit in general AI. This is the main thing that will drive the general AI industry to whatever is technically achievable: tens of billions in outside money. And yes, blowing up OpenAI slows this down... but the investors who were willing to give OpenAI tens of billions in a few weeks still have their money. Where's it going to go next?

*By general AI I mean a model that is general-purpose, with many customers, without needing expensive modifications. This is different from AGI, which also means human-level capabilities. General AI appears, from the data above, to be financially self-sustaining even if it is not human-level.

*As other LessWrong posts point out, downside risks like nuclear wars, or markets ceasing to exist because a rampant ASI ate everything, would not be "priced in".
