
batterseapower
Karma: 9050

http://blog.omega-prime.co.uk

Comments

San Francisco ACX Meetup “First Saturday”
batterseapower · 2mo · 10

Thanks for organising - great to chat briefly with a few of you. Sorry I had to run off after 45 mins - hope to have a deeper convo next month. - Max

Hong Kong – ACX Meetups Everywhere Fall 2024
batterseapower · 1y · 10

Hope to see you all there! We normally get 12-15 attendees, and there's always spirited discussion :-)

Hong Kong – ACX Meetups Everywhere Spring 2024
batterseapower · 1y · 10

We have 8 RSVPs right now. More are welcome :-)

Hong Kong – ACX Meetups Everywhere Spring 2024
batterseapower · 1y · 10

Last time we had about 12 people - hope we can get similar numbers for this one :-) - Max

Are we in an AI overhang?
batterseapower · 5y · 100

Isn't GPT-3 already almost at the theoretical limit of the scaling law from the paper? This is what nostalgebraist argues in his blog and Colab notebook. You also get this result if you just compare the 3.14e23 FLOP (i.e. ~3.6k PFLOPS-days) cost of training GPT-3 from the Lambda Labs estimate to the ~10k PFLOPS-days limit from the paper.

(Of course, this doesn't imply that the post is wrong. I'm sure it's possible to train a radically larger GPT right now. It's just that the relevant bound is the availability of data, not of compute power.)
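A minimal sketch of the unit conversion behind that comparison, assuming the 3.14e23 FLOP Lambda Labs estimate and the ~10k PFLOPS-days limit quoted above (the variable names are just illustrative):

# Convert a training-compute figure in FLOP to PFLOPS-days and compare
# it to the ~10k PFLOPS-days limit (both numbers taken from the comment
# above; treat them as assumptions, not independently verified values).

FLOP_PER_PFLOPS_DAY = 1e15 * 86_400      # 1 PFLOPS sustained for one day

gpt3_training_flop = 3.14e23             # Lambda Labs estimate for GPT-3
gpt3_pflops_days = gpt3_training_flop / FLOP_PER_PFLOPS_DAY

scaling_law_limit_pflops_days = 10_000   # ~10k PFLOPS-days from the paper

print(f"GPT-3 training compute: {gpt3_pflops_days:,.0f} PFLOPS-days")
print(f"Fraction of the ~10k PFLOPS-days limit: "
      f"{gpt3_pflops_days / scaling_law_limit_pflops_days:.0%}")

Running this gives roughly 3,600 PFLOPS-days, i.e. a bit over a third of the quoted limit, which is where the "~3.6k vs ~10k" comparison in the comment comes from.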
