Riccardo Varenna

I work together with a group of activists in Germany to make a difference in the world. You can find more details on our website: https://singularitygroup.net/

Since the start of 2023 and the release of new AI technologies like GPT-4, we have somewhat shifted our focus towards these developments, trying to raise awareness of the new tech's capabilities, mainly through livestreams that implement and combine the latest available APIs with entertainment to reach a larger audience. A bit more info on what we have worked on is here: https://customaisolutions.io/

We have tried many other projects in the years since I joined the group in 2015, starting with fundraising for charity, then focusing on spreading awareness, and later working on a mobile game.
The reason we decided to work on the game "Mobile Minigames" is that the mobile games industry is one of the biggest industries in the world in terms of profits and audience. We want to use our experience in the industry to build a platform we can use for good, as well as to make money we can put towards good causes.


Comments

Really enjoyed reading this; it's a refreshing approach to tackling the issue, giving practical examples of what risk scenarios would look like.

I initially saved this post to read thinking it would provide counterarguments to AI being an x-risk, which to some degree it did.

Pointing out that some of the mistakes that could lead to AI being an x-risk are "rather embarrassing" is really compelling. I wonder how likely (as a percentage of confidence) you consider those mistakes to be. Even though they might be really embarrassing, as you mention in the post, their likelihood depends on the setting and on who is in a position to make them.