
Karl von Wendt

German writer of science-fiction novels and children's books (pen name Karl Olsberg). I blog and create videos about AI risks in German at www.ki-risiken.de and youtube.com/karlolsbergautor.

Comments
If Anyone Builds It, Everyone Dies: Advertisement design competition
Karl von Wendt · 5d

Thanks again! My drafts are of course just ideas, so they can easily be adapted. However, I still think it is a good idea to create a sense of urgency, both in the ad and in books about AI safety. If you want people to act, even if it's just buying a book, you need to create exactly that urgency. It's not enough to say "you should read this"; you need to say "you should read this now" and give a reason. In marketing, this is usually done with some kind of time constraint (20% off, only this week ...).

This is even more true if you want someone to take measures against something that, in most people's minds, is still "science fiction" or even "just hype". Of course, merely claiming that something will happen "soon" is not a strong argument, but it may at least raise a question ("Why do they say this?").

I'm not saying that you should give any specific timeline, and I fully agree with the MIRI view. However, if we want to prevent superintelligent AI and we don't know how much time we have left, we can't just sit around and wait until we know when it will arrive. For this reason, I have dedicated a whole chapter to timelines in my own German-language book about AI existential risk and also included the AI-2027 scenario as one possible path. The point I make in my book is not that it will happen soon, but that we can't know it won't happen soon, and that there are good reasons to believe we don't have much time. I use my own experience with AI since my Ph.D. on expert systems in 1988, and Yoshua Bengio's blog post on his change of mind, as examples of how fast and surprising progress has been even for people familiar with the field.

I see your point about how a weak claim can water down the whole story. But if I could choose between 100 people convinced that ASI would kill us all, but with no sense of urgency, and 50 or even 20 who believe both in the danger and that we must act immediately, I'd choose the latter.

If Anyone Builds It, Everyone Dies: Advertisement design competition
Karl von Wendt · 6d

Thanks! I don't have access to the book, so I didn't know about the stance it takes on timelines.

Still, I'm not an advertising professional, but hedging modals like "may" and "could" seem significantly weaker to me. As far as I know, they are rarely used in advertising. Of course, the ad shouldn't contain anything contrary to what the book says, but "close" seems sufficiently unspecific to me: for most laypeople who have never thought about the problem, "within the next 20 years" would probably seem pretty close.

A similar argument could be made about the second line, "it will kill everyone", where the book title says "would". But again, I feel "would" is weaker than "will" (some may interpret it to mean that additional prerequisites, like "consciousness", are necessary for an ASI to kill everyone). Of course, "will" can only be true if a superintelligence is actually built, but that goes without saying, and the fact that the ASI may not be built at all is also implicit in the third line, "we must stop this".

If Anyone Builds It, Everyone Dies: Advertisement design competition
Karl von Wendt · 7d

Thanks!

If Anyone Builds It, Everyone Dies: Advertisement design competition
Karl von Wendt · 7d

I'm not a professional designer and created these in PowerPoint, but here are my ideas anyway.

General idea: [image]

2:1 billboard version: [image]

1:1 Metro version: [image]

With yellow background: [image]

Too Soon
Karl von Wendt · 2mo

Thanks for your comment! If we talk about AGI and define it as "generally as intelligent as a human, but not significantly more intelligent", then by definition it wouldn't be significantly better than humans at figuring out the right questions. Maybe AGI could help by enhancing our capacity to search for the right questions, but that shouldn't make a fundamental difference, especially when we weigh the risk of losing control over AI against it. Superintelligent AI is different, but there the risks are even higher (and it's not easy to draw a clear line between AGI and ASI anyway).

All in all, I would agree that we lose some capability to shape our future if we don't develop AGI, but I believe that is the far better option until we understand how to keep AGI under control or safely and securely align it with our goals and values.

Too Soon
Karl von Wendt · 2mo

Could you explain why exactly AGI is "a necessity"? What can we do with AGI that we can't do with highly specialized tool AI and one or more skilled human researchers?

Too Soon
Karl von Wendt · 2mo

I'm sorry for your loss. I would just like to point out that proceeding cautiously with AGI development does not mean we'll reach longevity escape velocity much later. Actually, I think that if we don't develop AGI at all, the chances of anyone celebrating their 200th birthday are much greater.

To make the necessary breakthroughs in medicine, we don't need a general agent that can also write books or book a flight. Instead, we need highly specialized tool AI like AlphaFold, which in my view is the most valuable AI ever developed, and there's zero chance that it will seek power and become uncontrollable. Of course, tools like AlphaFold can be misused, but the probability of destroying humanity is much lower than with the current race towards AGI that no one knows how to control or align.

Can we ever ensure AI alignment if we can only test AI personas?
Karl von Wendt · 4mo

Very interesting point, thank you! Although my question is not purely about testing, I agree that testing alone is not enough to know whether we have solved alignment.

Can we ever ensure AI alignment if we can only test AI personas?
Karl von Wendt · 4mo

This is also a very interesting point, thank you!

Can we ever ensure AI alignment if we can only test AI personas?
Karl von Wendt · 4mo

Thank you! That helps me understand the problem better, although I'm quite skeptical about mechanistic interpretability.

Posts

- Can we ever ensure AI alignment if we can only test AI personas? [Question] (22 karma, 4mo, 8 comments)
- The benefits and risks of optimism (about AI safety) (-7 karma, 2y, 6 comments)
- The Game of Dominance (24 karma, 2y, 15 comments)
- Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell? (106 karma, 2y, 53 comments)
- A Friendly Face (Another Failure Story) (65 karma, 2y, 21 comments)
- Agentic Mess (A Failure Story) (46 karma, 2y, 5 comments)
- Coordination by common knowledge to prevent uncontrollable AI (10 karma, 2y, 2 comments)
- We don’t need AGI for an amazing future (19 karma, 2y, 32 comments)
- Paths to failure (29 karma, 2y, 1 comment)
- Prediction: any uncontrollable AI will turn earth into a giant computer (11 karma, 2y, 8 comments)