Dem_

One of the best replies I’ve seen, and it calmed many of my fears about AI. My pushback is this: the things you list below as reasons to justify advancing AGI are either already solvable with narrow AI, or they are not solution problems but implementation and alignment problems.
“dying from hunger, working in factories, air pollution and other climate change issues, people dying on roads in car accidents, and a lot of diseases that kill us, and most of us (80% worldwide) work in meaningless jobs just for survival.”
Developing an intelligence with 2-5x general human intelligence would need a much larger justification. Something like asteroids, an unstoppable virus, or sudden corrosion of... (read more)
Are the AI scientists you know pursuing AGI, or more powerful narrow AI systems?
As someone who is new to this space, I’m simply trying to wrap my head around the desire to create AGI, which could be intensely frightening and dangerous even to the developer of such a system.
I mean, not that many people are hell-bent on finding the next big virus or developing the next weapon, so I don’t see why AGI is as inevitable as you say it is. Thus I suppose developers of these systems must firmly believe there are very few dangers attached to developing a system with some 2-5x general human intelligence.
If you happen to be one of these developers, could you perhaps share with me the thesis behind why you feel this way, or at least the studies, papers, etc., that give you assurance that what you’re doing is safe and largely beneficial to society as a whole?
Thanks for writing this. I had in mind to express a similar view but wouldn’t have expressed it nearly as well.
In the past two months I’ve gone from over-the-moon excited about AI to deeply concerned.
This is largely because I misunderstood the sentiment around superintelligent AGI.
I thought we were on the same page about utilizing narrow LLMs to help us solve problems that plague society (e.g., protein folding). But what I saw cluttering my timeline and clogging the podcast airwaves was utter delight at how much closer we are to having an AGI some 6-10x human intelligence.
Wait, what? What did I miss? I thought that kind of rhetoric... (read 407 more words →)
I think it’s an amazing post, but it seems to suggest that AGI is inevitable, which it isn’t. Narrow AI can help humanity flourish in remarkable ways, and many are waking up to EY’s concerns and agreeing that AGI is a foolish goal.
This article promotes a steadfast pursuit of, or acceptance of, AGI, and suggests that it will likely be for the better.
Perhaps, though, you could join the growing number of people who are calling for a halt on new AGI systems well beyond ChatGPT?
This is a perfectly fine response, and one that will eliminate your fears if you succeed in the kind of coming together and regulation that... (read more)