VermillionStuka

I'm a software engineering graduate with a passion for AI and futures studies, hoping to make a difference in the world.

Comments

The Future of Biological Warfare

The future of biological warfare revolves around the use of infectious agents against civilian populations.

Future? That's been the go-to biowar tactic for 3000+ years.

Gauging the conscious experience of LessWrong

I had in mind a scale where 0 would be so non-vivid the experience didn't exist to any degree, and 100 would border on reality. (It doesn't map well to the memory question, though, and the control-over-your-mind question could be interpreted in more than one way.) Ultimately the precision of individual estimates isn't high; the real utility comes from finding trends across many responses.

Gauging the conscious experience of LessWrong

I'll go first: I am constantly hearing my own voice in my head narrating in first person. I can hear the voice vividly and clearly; while typing this sentence I think/hear each syllable at the speed of my typing. The voice doesn't narrate automatic actions, like where to click my mouse, but could if I wanted it to. The words for the running monologue seemingly get plucked out of a black box formed of pure concepts, to which I have limited access most of the time. I can also listen to music in my own head, hearing the vocals and instruments clearly, only a few steps down from reality in vividness.

When I picture imagery, it is almost totally conceptual and 'fake'; for example, I couldn't count the points on an imaginary star, which seems to be aphantasia. I also have ideasthesia (like synaesthesia, but with concepts evoking perception-like sensory experiences), which causes me to strongly associate concepts with places; for example, when reading the Game of Thrones series, I'm forced to constantly think about a particular spot in my old high school. Between 20% and 40% of concepts get linked to a place.

And I hesitate to mention it, but my psychedelic experiences have been visually extremely vivid and intense despite my lack of visual imagination. I have heard anecdotal reports that not everybody has vivid imagery on LSD.

The LessWrong 2018 Book is Available for Pre-order

You mention that it's being sold to Australia, but that isn't an option in the checkout :(

The Colliding Exponentials of AI

Thank you both for correcting me; I have removed that section from the post.

The Colliding Exponentials of AI

Thank you for the excellent and extensive write up :)

I hadn't encountered your perspective before. I'll definitely go through all your links to educate myself, and put less weight on algorithmic progress being a driving force, then.

Cheers

The Colliding Exponentials of AI

You can achieve infinitely (literally) faster training than AlexNet if you just take the weights of AlexNet.

You can also achieve much faster performance if you rely on weight transfer and/or hyperparameter optimization based on looking at the behavior of an already-trained AlexNet. Or, mind you, some other image-classification model based on it.

Once a given task is "solved", it becomes trivial to produce models that can train on that task exponentially faster, since you're already working down from a solution.
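
To make the quoted point concrete, here is a minimal PyTorch sketch of the "weight transfer" idea: load a pretrained AlexNet instead of training one from scratch, then fine-tune only the classifier head. The 10-class task, dummy batch, and hyperparameters below are hypothetical illustrations, not from the parent comment.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load AlexNet with ImageNet-pretrained weights: the original task is
# already "solved", so zero training steps are needed to match its accuracy.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor; its weights are transferred as-is.
for param in model.features.parameters():
    param.requires_grad = False

# Swap the final layer for a hypothetical new 10-class task; only this small
# head gets trained, which is far cheaper than a full training run.
model.classifier[6] = nn.Linear(4096, 10)
optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3)

# One fine-tuning step on a dummy batch (stand-in for a real dataloader).
x = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), labels)
loss.backward()
optimizer.step()
```

If this reading is right, reaching AlexNet-level accuracy this way costs a handful of gradient steps rather than a full training run, which is why efficiency comparisons need to control for it.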

Could you clarify: do you mean the primary cause of the efficiency increase wasn't algorithmic or architectural developments, but researchers just fine-tuning weight-transferred models?


However, if you want to look for exponential improvement you can always find it, and if you want to look for logarithmic improvement you always will.

Are you saying that the evidence for exponential algorithmic efficiency gains, not just in image processing, is entirely cherry-picked?


In regards to training text models "x times faster", look into the "how do we actually benchmark text models" section of the academia/internet flamewar library.

I googled that and there were no results, and I couldn't find an "academia/internet flamewar library" either.


Look, I don't know enough about ML yet to respond intelligently to your points. Could someone more knowledgeable than me weigh in here, please?

The Colliding Exponentials of AI

Wow, GPT-3 shaved at least 10 years off the median prediction, by the looks of it. I didn't realise Metaculus had prediction history; thanks for letting me know.

The Colliding Exponentials of AI

My algorithmic estimates essentially only quantify your "first category" type of improvements; I wouldn't know where to begin making estimates for qualitative "second category" AGI algorithmic progress.

My comparisons to human-level NLP (which I don't think would necessarily yield AGI) assumed scaling would hold for current (or near-future) techniques. So do you think that current techniques won't scale, and/or that the actual 100-1000x figure I gave was too high?

I'm not sure what the ratio is, but my guess is it's 50/50 or so. I'd love to see someone tackle this question and come up with a number.

Yeah, it would be great if someone did that.
