Comments

Version 1 (adopted):

Thank you, shminux, for bringing up this important topic, and to all the other members of this forum for their contributions.

I hope that our discussions here will help raise awareness about the potential risks of AI and prevent any negative outcomes. It's crucial to recognize that the human brain's positivity bias may not always serve us well when it comes to handling powerful AI technologies.

Based on your comments, it seems like some AI projects could be perceived as potentially dangerous, similar to how snakes or spiders are instinctively seen as threats due to our primate nature. Perhaps, implementing warning systems or detection-behavior mechanisms in AI projects could be beneficial to ensure safety.

In addition to discussing risks, it's also important to focus on positive projects that can contribute to a better future for humanity. Are there any lesser-known projects, such as improved AI behavior systems or initiatives like ZeroGPT, that we should explore?

Furthermore, what can individuals do to increase the likelihood of positive outcomes for mankind? Should we consider creating closed island ecosystems with the best minds in AI, as Eliezer has suggested? If so, what would be the requirements and implications of such places, including the need for special legislation?

I'm eager to hear your thoughts and insights on these matters. Let's work together to strive for a future that benefits all of humanity. Thank you for your input!

Version 0:

Thank you, shminux, for this topic, and the other gentlemen for this forum!

I hope I will not die by AI in some lulz manner after this comment.) The human brain needs to stay positive; without that, it can't work well.

According to your text, it looks like the buttons of any OPEN AI project could be made to look like a SNAKE or a SPIDER, at least to warn the user at the gene level that there is something dangerous in it.

 

You already know many things about primate nature, so all you need is to use it to get what you want.

 

This is the last mind journey of humankind's brains: win a GOOD future or take the loss!

 

What other GOOD projects could we focus on?

What projects have already been done that no one knows about? Better AI behaviour-detection systems? ZeroGPT?

What should people do to raise the probability of good scenarios for mankind?

 

Should we make closed island ecosystems with the best minds in AI, as Eliezer said in the Bankless YouTube video, or not?

What are the requirements for such places? We would then need to create special legislation for such semi-independent places. It's possible, but talking with governments is hard work. Do you REALLY need it? Or are these just emotional words from Eliezer?

 

Thank you for your answers!

I guess we need to maximize the different possible good outcomes, each of them.

For example, to raise the probability that many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, humans could prohibit all autonomous AGI use.

Especially those that use uncontrolled clusters of graphics processors in autocracies, without international AI-safety supervisors like Eliezer Yudkowsky, Nick Bostrom, or their crews.

This, together with restrictions on weak API systems and the need to use human operators,

would make natural borders for AI scalability, so an AGI would find it more favourable to mimic and reach consensus with people and other AGIs: at least to use humans as operators working under the AGI's advice, or to make humanlike personas that are simpler for working with human culture and other people.

Detection systems often use categorisation principles,

so even if an AGI violates some rules, without scalability it could function longer without danger, because security systems (which are also a kind of tech officer with AI) couldn't find and destroy it.

This could create conditions that encourage the diversity and uniqueness of different AGIs,

so all neural beings, AGIs, and people with AI, could win some time to find new balances for using the atoms of the multiverse.

More borders mean more time to win a longer life for every human; even a gain of two seconds for each of 8 billion people is worth it.

More chances that the different factions will find some kind of balance: AGI, people with AGI, people under AGI, and other factions.

I remember how autonomous poker AIs destroyed weak ecosystems one by one, but now the industry is in sustainable growth with separate actors, each of them using AI but in very different manners.

The more separate systems there are, the more chances that, over the time it takes to destroy them one by one, an AGI will find a way to function without destroying its environment.

 

PS A separate way: send spaceships with a prohibition on AGI (maybe carrying only life, no apes) as far as possible, so that when AGI happens on Earth, it can't get all of them.)

We have many objective values that result from cultural history, such as mythology, concepts, and other "legacy" things built upon them. When we say these values are objective, we mean that we receive them as they are, and we cannot change them too much. In general, they are a kind of infinite mythology with many rules that "help" people do something right "like in the past" and achieve their goals "after all."

We also have some objectively programmed values: our biological nature, our genes that work for reproduction.

When something really scary happens, like bombings, wars, or other threats to survival, simple values (whether they are biological, religious, or national) take charge. These observations confirm a certain hierarchy of values and needs.

Many of the values we talk about reflect our altruistic cosmopolitan hopes for the future, and they are not real values for most people. That's a kind of philosophical illusion that people usually talk about only after succeeding at other values, such as biological, religious, or national ones. It's an illusion that every smart person can understand basic philosophical or ethical constructions. For many tech-savvy people, it's easier to wear a comfortable political and social point of view, and they don't have time to learn about complex concepts like "do not do to another what he does not want done to him" or "treat humanity, both in your own person and in the person of everyone else, always as an end and never merely as a means."

These concepts are too complex for most people, even tech-savvy ones with big egos. People from the outskirts of humanity who might also build AI may not understand such complex conceptions as philosophy, terminal values, axioms, epistemology, and other terms. For a basic utilitarian brain, these could be just words to explain why you think you should get his goods, or why he should betray the ideas of his nation for your own.

Many people live lives where violence, nepotism, and elitism are the basis of society's existence, and judging by the stability of these regimes, this is not without some basic foundation. People in highly competitive areas may not have time to learn the humanities, they may not have enough information, and they may have basic "ideology blocks." In other words, their views are like comfortable shoes that fit them well.

If you were to ask people, "Okay, you have a button to kill someone you don't know. Nobody will know it was you, and you will get one million dollars. Will you press it?", for many of them, from 10% to 50%, the answer will be yes, or maybe even "How many times can I press it?" Many AI creators could be blind to cosmopolitan needs and values. They may not understand the dilemma of creating such buttons if they only perform a small part of the button's creation, or only a part of the instruction to press it.

Maybe it is necessary to build moral and value monitoring into products, so that people are eager to use them without harming others (maybe even open source, and so advanced that AI constructors would have no reason to use other sources). Some defense against the opportunity to create such things for oneself could be made: if someone wanted to create a big graphics cluster or something like that, they would have to seek help from advanced AI developers who apply basic precautions against existential threats. Some kind of red map needs to be drawn up, so that the creators of an AI, or those who witness its creation, can accurately see the signs that something is going completely wrong.

Of course, we cannot know what to do about solving AGI, because we do not know what to expect, but maybe we could find something that will, with some probability, be good, and identify what is completely wrong. Could we at least have a red map? What could everyone do to be less wrong on it?

I have read this letter with pleasure. Pacifism in wartime is an extremely difficult position.

The survival of rationality and of humanity is extremely important!

It seems to me that the problem is revealed very clearly through compound percentages (compound interest).

If in a particular year the overall probability of a catastrophe (man-made, biological, cosmic, etc.) is 2%, then the probability of human survival over the next 100 years is 0.98^100 ≈ 0.132.

That is 13.2%; this figure depresses me.

The ideas of unity and security are the only ones inside the discourse of red systems. Therefore, the ideas of security may well fundamentally hold any parties together. I think the idea of human survival is a priority.

It is clear to everyone that the preservation of humanity and of rational beings is extremely important, regardless of one's specific picture of the world.

World peace!

If we take 1,000 or 10,000 years, then the result is unambiguous: survival tends to 0.
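This compound-risk arithmetic is easy to check. Here is a minimal Python sketch (the 2% annual catastrophe probability is the assumption from above, not an established figure):

```python
# Survival probability under a constant annual catastrophe risk p:
#   P(survive n years) = (1 - p) ** n

p = 0.02  # assumed overall annual probability of catastrophe (2%)

for years in (100, 1_000, 10_000):
    survival = (1 - p) ** years
    print(f"{years:>6} years: survival ≈ {survival:.4g}")

# Output:
#    100 years: survival ≈ 0.1326
#   1000 years: survival ≈ 1.683e-09
#  10000 years: survival ≈ 1.823e-88
```

The same compounding works in the other direction: even a modest reduction in the annual risk multiplies the long-horizon survival probability enormously, which is why the long horizons matter.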

Therefore, I would not like to miss the chances that humanity can get through Artificial Intelligence, or through Decentralized Blockchain Evolution, or quantum computing, or other positive black swans. We really need a qualitative breakthrough in the field of decentralized balancing of all systems.

Nevertheless, with an 86.8% chance of catastrophe over the century, this game is almost lost by humanity.

As we can see, the chances are small. Therefore, future generations of intelligent species will probably be happy if there are some convenient manuals for deciphering human knowledge.

What does the map of the arks look like? Can you imagine how happy a rational chimpanzee will be to hold your manual and flip through the pages of its distant ancestors?

And to be amazed at how, in such an aggressive subspecies, intelligence developed faster thanks to that very aggression, and they defeated themselves.

It is unlikely that they will have English. Language is a very flexible thing.

Probably the basis should be that basic development by Feynman and Carl Sagan; I'm talking about a satellite with the decoding of humanity, starting from "H". I think you can pick out points on Earth for such arks.

Due to the variety of risks, it seems to me that intelligent life will logically arise again under water, especially due to the fact that there are internal energy sources. Are there scientific arks for dolphins?

World peace! Respect for each other. We need a great leap toward another Integrity and Sustainability Ecosystem Equilibrium, and a common understanding that this is the last century in which mankind can overcome its natural aggression. And let's not forget about the heritage we leave to the species that follow.

Peace to you! I would be glad if you told me where I'm right and where I'm wrong. Kind regards!


 

I signed it.

Pacifism is really not in trend. Both sides of the conflict are convinced that they are absolutely right: paranoid Russia and defensive Ukraine.

Public pacifism is in the minority. Almost everyone has taken one side, or is silent and seeks safety. 

For an individual Ukrainian or Russian, it might be dangerous to sign this.

Like in the ancient Roman Empire: people are either for the Blue chariots or for the Green ones. No one is interested in the opinion that the death races are nonsense.

Anyway. It's irrational, but I signed.