In the PDF version of the Dive into Deep Learning book, on page 27, we can read the following: > Frequently, questions about a coming AI apocalypse and the plausibility of a singularity have been raised in non-technical articles. The fear is that somehow machine learning systems will become sentient and...
I recently saw a post arguing that top AI labs should shut down. This made me wonder whether the AI Safety community thinks OpenAI is net negative for AI safety. I chose OpenAI because I consider it the most representative top AI lab (in the sense that, if we...
Why does this post exist? To better understand my own views on AI safety, I tried to write down one thought every day before going to bed. Of course, I failed to do this every day, which is why I have only thirty arguments since January....
Summary In the last blog post, I introduced my plan to build the safest cryptographic box in the world and make it widely available. This would, in theory, make it possible to safely run arbitrarily dangerous programs (including superintelligences). This cryptographic box was supposed to use a scheme that...
Summary Since September 2023, I have been learning a lot of math and programming skills in order to develop the safest cryptographic box in the world (and yes, I am aiming high). In these four months, I learned important things you may want to know: * Fully Homomorphic Encryption (FHE) schemes...