Comments

Dem_ · 1y · 30

I think it’s an amazing post, but it seems to suggest that AGI is inevitable, which it isn’t. Narrow AI will help humanity flourish in remarkable ways, and many are waking up to the concerns of EY and agreeing that AGI is a foolish goal.

This article promotes a steadfast pursuit of, or acceptance of, AGI, and suggests it will likely be for the better.

Perhaps, though, you could join the growing number of people who are calling for a halt on new AGI systems well beyond ChatGPT?

This is a perfectly fine response, and one that would eliminate your fears if you succeed in the kind of coming together and regulation that would halt what could be a very dangerous technology.

This would be nothing new: Stanford and MIT aren’t allowed to work on bioweapons or radically larger nukes (and if they were, they could easily make humanity-threatening weapons in short order).

The difference is that the public and regulators are much less tuned into the dangers of AGI, but it’s logical to think that if they knew half of what we know, AGI would be seen in the same light as bioweapons.

Your intuitions are usually right. It’s an odd time to be working in science and tech, but you still have to do what is right.

Dem_ · 1y · 10

One of the best replies I’ve seen, and it calmed many of my fears about AI. My pushback is this: the things you list below as reasons to justify advancing AGI are either already solvable with narrow AI, or are not solution problems but implementation and alignment problems.

“dying from hunger, working in factories, air pollution and other climate change issues, people dying on roads in car accidents, and a lot of diseases that kill us, and most of us (80% worldwide) work in meaningless jobs just for survival.”

Developing an intelligence with 2–5x general human intelligence would need a much stronger justification. Something like an asteroid, an unstoppable virus, or a sudden corrosion of the atmosphere would justify bringing out an equally existential technology like superhuman AGI.

What I can’t seem to wrap my head around is why a majority has not emerged that sees the imminent dangers in creating programs that are 5x generally smarter than us at everything. If you don’t fear this, I would suggest anthropomorphizing more, not less.

Regardless, I think politics and regulation will crush the pursuit of AGI before GPT even launches its next iteration.

AGI enthusiasts have revealed their hand, and so far the public has made it loud and clear that no one actually wants this: no one wants displacement or the loss of their job, no matter how sucky it might be. These black boxes scare the crap out of people, and people don’t like what they don’t know.

Bad news and fear spread rapidly in today’s era; the world is run by Instagram moms more than anyone else. If it’s the will of the people, Google and Microsoft will find out just how much they are at the mercy of the “essential worker” class.

Dem_ · 1y · 10

Are the AI scientists you know pursuing AGI, or more powerful narrow AI systems?

As someone who is new to this space, I’m simply trying to wrap my head around the desire to create AGI, which could be intensely frightening and dangerous even to the developer of such a system.

I mean, not that many people are hell-bent on finding the next big virus or developing the next weapon, so I don’t see why AGI is as inevitable as you say it is. I suppose, then, that developers of these systems must firmly believe there are very few dangers attached to developing a system with some 2–5x general human intelligence.

If you happen to be one of these developers, could you perhaps share with me the thesis behind why you feel this way, or at least the studies, papers, etc. that give you assurance that what you’re doing is safe and largely beneficial to society as a whole?

Dem_ · 1y · 66

Thanks for writing this. I had been meaning to express a similar view but wouldn’t have expressed it nearly as well.

In the past two months I’ve gone from over-the-moon excitement about AI to deep concern.

This is largely because I misunderstood the sentiment around superintelligent AGI.

I thought we were on the same page about using narrow LLMs to help us solve problems that plague society (e.g., protein folding). But what I see cluttering my timeline and clogging the podcast airwaves is utter delight at how much closer we are to having an AGI with some 6–10x human intelligence.

Wait, what? What did I miss? I thought that kind of rhetoric was isolated to, at worst, the ungrounded-in-reality LCD user and, at best, the radical Kurzweil types. I mean, listen to us: do we really need to argue about what percentage risk there is that human life gets exterminated by AGI?

Let me step off my soapbox and address a concern that was illuminated in this piece, and one that the biggest AGI proponents should at least ponder.

The concern has to do with the risk of hurting innocent bystanders who won’t get to make the choice about integrating AGI into the equation. Make no mistake: AGI, aligned or not, will likely cause immense disruption for billions of people. At the low end, displaced jobs; at the high end, being killed by an unaligned AGI.

We all know about the consequences of the Industrial Revolution and job displacement, but we look back at historical technological advances with appreciation because they led us to where we are. Are you so sure that AGI is just the next step in that long ascension? To me it looks not to be. In fact, AGI isn’t at all what people want. What we are learning about happiness is that work is incredibly important.

You know who isn’t happy? The retired and the elderly, who find themselves with no role in society and an ever-narrowing circle of friends and acquaintances.

“They will be better off with AGI doing everything; trust me, technological progress always enhances our lives.”

Are you sure about that? There are so many philosophical directions I could go to disprove this (happiness is less choice, not more), but I will get to the point, which is:

You don’t get to decide. Not this time anyway.

It might be worth mentioning that the crypto decentralization movement is the exact opposite of AGI. If you are a decentralization enthusiast who wants to take power away from a centralized few, then you should be ashamed to support the AGI premise of a handful of people modifying billions of lives without their consent.

I will end with this. Your hand has been played. The AGI enthusiasts have revealed their intentions, and it won’t sit well with basically…everyone. Unless AGI can be attained in the next 1–2 years, it’s likely to see one of the biggest pushbacks our world has ever witnessed. Information spreads fast, and you’re already seeing the mainstream pick up on the absurdity of pursuing AGI; when this technology starts disrupting people’s lives, get ready for more than just regulation.

Let’s take a deep breath. Remember, AI is meant to solve problems and life’s tragedies, not create them.