Igor Ivanov


Comments

Can you elaborate on your comment? It seems intriguing to me, and I would love to learn more about why you think it's a bad strategy if our AGI timeline is 5 years or less.

Why do you think that it will not be competitive with other approaches?

For example, it took 10 years to sequence the first human genome. After nearly 7 years of work, a competitor started an alternative human genome project using a completely different technology, and both projects were finished at approximately the same time.

I think we are entering black swan territory, and it's hard to predict anything.

I absolutely agree with the conclusion. Everything is moving so fast.

I hope these advances will spark massive interest in the alignment problem from all sorts of actors. Even if OpenAI is talking about safety largely for PR reasons (and lately they have been talking about it quite often), it still means they believe society is concerned about the pace of progress, which is a good sign.

What are examples of "knowledge of building systems that are broadly beneficial and safe while operating in the human capabilities regime"?

I assume the systems mentioned are institutions like courts, governments, corporations, or universities.

Charlotte thinks that humans and advanced AIs are universal Turing machines, so predicting capabilities is not about whether a capability is present at all, but whether it is feasible in finite time with a low enough error rate.

I have a similar thought. If an AI has human-level capabilities, and part of its job is to write texts, but it writes long texts in seconds and can do so 24/7, is it still within the range of human capabilities?

Thanks for your view on doomerism and your thoughts on the framing of hope.

One thing that helps me preserve hope is that there are so many unknown variables about AGI and how humanity will respond to it that I don't think any current-day prediction is worth much.

Although I must admit that doomers like Connor Leahy and Eliezer Yudkowsky can be extremely persuasive, they also don't know many important things about the future, and they, too, are full of cognitive biases. All of this makes me tell myself the mantra "There is still hope that we might win."

I am not sure whether this is the best way to think about these risks, but I feel that if I give it up, it's a straightforward path to existential anxiety and misery, so I try not to question it too much.
 

[This comment is no longer endorsed by its author]

I agree. We have problems with emotional attachment to humans all the time, but humans are more or less predictable, not too powerful, and usually not that good at manipulation.

Thank you for your comment and everything you mentioned in it. I am a psychologist entering the field of AI policy-making, and I am starving for content like this.

It does, and it causes a lot of problems, so I would prefer to avoid such problems with AIs.

Also, I believe that an advanced AI will be much more capable of deception and manipulation than the average human.
