I'm a software engineering graduate with a passion for AI and futures studies, hoping to make a difference in the world.




The Colliding Exponentials of AI

Thank you for the excellent and extensive write-up :)

I hadn't encountered your perspective before; I'll definitely go through all your links to educate myself, and put less weight on algorithmic progress being a driving force, then.


The Colliding Exponentials of AI

You can achieve infinitely (literally) faster-than-AlexNet training time if you just take the weights of AlexNet.

You can also achieve much faster training if you rely on weight transfer and/or hyperparameter optimization based on looking at the behavior of an already-trained AlexNet. Or, mind you, of some other image-classification model based on it.

Once a given task is "solved", it becomes trivial to produce models that can train on said task exponentially faster, since you're already working down from a solution.
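To make the "working down from a solution" point concrete, here's a toy sketch (my own illustration, not from the parent comment): a linear model stands in for AlexNet, and we compare gradient-descent steps needed from scratch against steps needed when warm-starting from an already-trained model's weights. All names and numbers are hypothetical.

```python
# Toy illustration of weight transfer: training "from scratch" takes many
# gradient steps, while starting from an already-trained model's weights
# (the warm-start / weight-transfer case) is essentially free.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
true_w = rng.normal(size=10)
y = X @ true_w  # noiseless linear target, so the task is exactly solvable

def train(w, lr=0.1, tol=1e-6, max_steps=10_000):
    """Plain gradient descent on mean squared error.
    Mutates w in place and returns the steps needed to reach loss < tol."""
    for step in range(max_steps):
        err = X @ w - y
        if np.mean(err ** 2) < tol:
            return step
        w -= lr * 2 * (X.T @ err) / len(y)
    return max_steps

# Training from a zero initialization takes some number of steps...
steps_scratch = train(np.zeros(10))

# ...but if we start from the weights of an already-trained model
# (our stand-in for "just take the weights of AlexNet"),
# the benchmark is already solved before the first step.
donor_w = np.zeros(10)
train(donor_w)                   # pre-train the "donor" model
steps_transfer = train(donor_w.copy())

print(steps_scratch, steps_transfer)
```

The transfer run reports zero steps, which is exactly the "infinitely faster" case above: measured training-time speedups can reflect proximity to an existing solution rather than genuine algorithmic progress.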

Could you clarify: do you mean the primary cause of the efficiency increase wasn't algorithmic or architectural developments, but researchers just fine-tuning weight-transferred models?


However, if you want to look for exponential improvement you can always find it, and if you want to look for logarithmic improvement you always will.

Are you saying that the evidence for exponential algorithmic efficiency gains, not just in image processing, is entirely cherry-picked?


In regards to training text models "x times faster", look into the "how do we actually benchmark text models" section of the academia/internet flamewar library.

I googled that and there were no results, and I couldn't find an "academia/internet flamewar library" either.


Look, I don't know enough about ML yet to respond intelligently to your points. Could someone else more knowledgeable than me weigh in here, please?

The Colliding Exponentials of AI

Wow, GPT-3 shaved at least 10 years off the median prediction by the looks of it. I didn't realise Metaculus had a prediction history; thanks for letting me know.

The Colliding Exponentials of AI

My algorithmic estimates essentially only quantify your "first category" of improvements; I wouldn't know where to begin making estimates for qualitative "second category" AGI algorithmic progress.

My comparisons to human-level NLP (which I don't think would necessarily yield AGI) assumed scaling would hold for current (or near-future) techniques. So do you think that current techniques won't scale, and/or that the actual 100-1000x figure I gave was too high?

I'm not sure what the ratio is but my guess is it's 50/50 or so. I'd love to see someone tackle this question and come up with a number.

Yeah that would be great if someone did that.

Open & Welcome Thread – October 2020

I just had a question about post formatting: how do I turn a link into text, like this example? Thanks.

Open & Welcome Thread – October 2020

Hello LessWrong!

I'm a bit late to the party: I first signed up in February, but I haven't really been active until recently, so I wanted to make an introductory post.

My name's Nathan; I'm a 23-year-old software engineering graduate from Australia. The first time I properly came into contact with the community was last year, when a stray 4chan comment recommended Eliezer's short story Three Worlds Collide. That story changed my life: I had never read anything that impressed me that much before. It was the first time I had encountered a person who seemed to be more experienced/skilled/intelligent in the things that I cared about. I had always considered truth-seeking a central pillar of my identity, but I had never encountered, or even heard of, anyone else who cared about it as much as I did. So, as this author seemed to be of the same frame of mind, I decided to just read everything he had ever written to see if I could pick up a few things. Oh boy, who knows how many millions of words that turned out to be.

But I did read it all, and just about everything else the rationalist community and friends had written as well, which unearthed a coincidence: I had technically come into contact with the community before! In 2015 I had read Scott Alexander's story "…And I Show You How Deep The Rabbit Hole Goes" and had quite liked it, but had been confused by what Slate Star Codex was, and had never gone back. Lesson learned: if you like something, investigate it thoroughly.

Entering the rationalist community felt like coming back to a home I never knew I had. I wasn't alone!

I have decided to try and make a difference, and to devote my yet-to-exist career, if I can, to helping figure out what's happening with AGI and the control problem.

I have made a few comments so far, not always to a positive response, so I hope I can do better. If you see something of mine and think I could have done better, please point it out to me! I'll correct it and rewrite my work in response. I would like to contribute much more to the community in the future. I have even been working on a post for a while, polishing and rewriting, just trying to do it justice and make something you'll want to read, something that will meaningfully elucidate. I hope to post it soon, and that we can all become less wrong!




Brainstorming positive visions of AI

I think that if we get AGI/ASI right, the outcome could be fantastic not just from the changes made to the world, but from the changes made to us as conscious minds, and that an AGI/ASI figuring out mind design (and how to implement it) will be the most significant thing that happens from our perspective.

I think that the possible space of mind designs is a vast ocean, and the Homo sapiens mind design/state is a single drop within those possibilities. It is very unlikely that our current minds are what you would choose for yourself given knowledge of all the options. Given that happiness/pleasure (or at least that kind of thing) seems to be a terminal value for humans, our quality of experience could be improved a great deal.

One obvious thought: if you increase the size of a brain, or otherwise alter its design, could you increase its potential magnitude of pleasure? We tend to think of animals like insects, fish, mammals, etc. as lying on a curve of increasing consciousness, generally with humans at the top; if that is the case, humanity need not be an upper limit on the 'amount' of consciousness you can possess. And of course, within the mind-design ocean, more desirable states than pleasure may exist for all I know.

I think that imagining radical changes to our own conscious experience is unintuitive, and its importance as a goal is underappreciated, but I cannot imagine anything else AGI/ASI could do that would be more important or rewarding for us.
