I own probably about 10^13 FLOP/s, mostly in my PC (currently in Amsterdam). Google owns far more than that, spread across various datacenters (e.g. one in The Dalles, Oregon).

I'd like to get a better picture of the distribution of FLOP/s in the world. Any high-level, broadly relevant information would be useful; the more the better.

E.g. useful information, if true, would be:

1. X% of FLOP/s is in personal computers.
2. Y% of FLOP/s is owned by the N largest companies.
3. Z% of FLOP/s is located in the N biggest cities / in the N largest supercomputers / in the US.

Etc.

2 Answers

Lone Pine


Due to exponential growth in the computing industry, we can assume that the majority of FLOP/s were shipped in the last few years. Nvidia and AMD shipped over 12 million graphics cards last year. (Note this analysis excludes laptop chips, but includes datacenters.) If we take an Nvidia GeForce RTX 2080 (20 TFLOP/s for half-precision floats) as a typical card, we can estimate that less than 12M × 20T × 5 = 1.2e21 FLOP/s of GPU compute was shipped in the last five years, and we can assume that this is the vast majority of GPU FLOP/s in active use today.
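As a sanity check, here is the same arithmetic as a minimal Python sketch. The shipment volume and per-card throughput are just the rough figures quoted above, not measured data:

```python
# Fermi estimate: GPU FLOP/s shipped over the last five years.
# All inputs are the rough figures from the paragraph above (assumptions, not measurements).

cards_per_year = 12e6      # Nvidia + AMD discrete graphics cards shipped per year
flops_per_card = 20e12     # RTX 2080-class card, half-precision FLOP/s
years = 5

gpu_flops = cards_per_year * flops_per_card * years
print(f"GPU FLOP/s shipped over {years} years: {gpu_flops:.1e}")  # ~1.2e21
```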

How are those GPUs utilized? Presumably, most are used in either personal computers or datacenters, with datacenters breaking down into ML (research and deployment), cryptomining, and other uses I'm not aware of. Cryptomining may or may not be a significant fraction of the total. Gamers are very pissed at miners for "hoarding all the cards," and all the industry players have reasons to obscure what is really going on, so we might not have accurate information.

I wanted to run a Fermi estimate on x86 CPUs for comparison, but I couldn't find the right information. (I didn't look very hard.) I did find that about 27 billion ARM chips were sold in 2020. ARM chips vary widely, with at least some delivering >1,000 GFLOP/s and others <10 GFLOP/s. Using a conservative 10 GFLOP/s per chip, 27B × 10 GFLOP/s × 5 = 1.35e21, suggesting that there are more FLOP/s in low-power embedded applications than in GPUs.
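The same style of sketch for the ARM figure, again with the per-chip throughput as an assumed round number rather than a measured one:

```python
# Fermi estimate: FLOP/s shipped as ARM chips over five years.
# 10 GFLOP/s per chip is an assumed round number; the comment below argues
# that 1 GFLOP/s (or less) is more realistic for embedded parts.

arm_chips_per_year = 27e9   # ARM chips sold in 2020
flops_per_chip = 10e9       # assumed average throughput per chip
years = 5

arm_flops = arm_chips_per_year * flops_per_chip * years
print(f"ARM FLOP/s shipped over {years} years: {arm_flops:.2e}")  # ~1.35e21
```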

AFAIK most low-power embedded ARM chips have more like 1 GFLOP/s than 10. At least a few years ago, most didn't even have a dedicated floating-point unit, and you probably couldn't even get 100 MFLOP/s out of them.

Also, this is somewhat beside the point, but if you tried to distribute any ML workload across such chips, you'd have enough latency problems etc. that they wouldn't fully count toward "compute capacity for mid-to-large AIs".

burmesetheater


One way to maybe shed some light on this is to sort the latest TOP500 results by country (maybe with extra work to get the specific locations inside each country, if required). There is a very long tail of compute below the list, but most of it should correlate with investment in top-end infrastructure. Of course, certain countries (US, China) might have undeclared computing assets of significant power (including various private datacenters), but this probably doesn't change the big picture much.
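A rough sketch of what that aggregation could look like, assuming you have exported the latest TOP500 list to a local CSV. The file name and column names ("Country", "Rmax [TFlop/s]") are assumptions and may need adjusting to match the actual export:

```python
# Sketch: aggregate TOP500 Rmax by country from a locally downloaded list.
import pandas as pd

# Assumed local export of the latest TOP500 list; adjust path and headers as needed.
df = pd.read_csv("top500_latest.csv")

by_country = (
    df.groupby("Country")["Rmax [TFlop/s]"]
      .sum()
      .sort_values(ascending=False)
)

total = by_country.sum()
# Top 10 countries by share of listed Rmax, in percent.
print((by_country / total * 100).round(1).head(10))
```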

4 comments

I was hoping to find a database of chips and their specs, which might help answer this question (the geographical question, or the related question of types of compute, i.e. GPU/CPU/embedded/datacenter/etc.). I just saw a website, chipdb, which has information about historical processors of interest to collectors. Unfortunately it has no information about modern chips, especially GPUs -- Nvidia isn't even a category! Maybe there is a more current online database that tracks these things.

I just had a very quick look at that site, and it seems to be a collection of various chip models with pictures of them? Is there actual information on quantities sold, etc? I couldn't find it immediately.

Nope. It's a site by and for collectors, and apparently what they care about is reference images of the faces of old chips. You'd think that ChipDB would be a database of chips, but this one is sorely lacking. I added this comment in hopes that someone knows of a more useful (to us) database.