Darren McKee

Comments

Hahaha. With enough creativity, one never has to change their mind ;)

I don't have much more to share about the book at this stage as many parts are still in flux. I don't have much on hand to point you towards (like a personal website or anything). I had a blog years ago and do that podcast I mentioned. Perhaps if you have a specific question or two?

I have a couple of loose objectives: 1. to allow for synergies if others are doing something similar, 2. to possibly hear good arguments for why it shouldn't happen, 3. to see about getting help, and 4. other unknown possibilities (perhaps someone connects me to someone else who provides a useful insight or something).

None taken, it's a reasonable question to ask. It's part of the broader problem of knowing whether anything will be good or bad (unintended consequences and such). To clarify a bit, by "general audience" I don't mean everyone, because most people don't read many books, let alone non-fiction books, let alone non-fiction books that aren't memoirs/biographies or the like. So, my loose model is that (1) there is a group of people who would care about this issue if they knew more about it, (2) their concerns will lead to interest from those with more power, and (3) that interest will increase funding for AI safety and/or governance that might help.
Expanding on (1), it could also increase the number of people who want to work on the issue, across a wide range of domains beyond technical work.
It's also possible that the book ends up net-positive but still insufficient, yet worth trying.

Thanks for the comment. I agree and was already thinking along those lines. 
It is a very tricky, delicate issue: we need to put more work into figuring out what to do while communicating that the problem is urgent, but not so urgent that people act imprudently and make things worse.
Credibility is key, and providing reasons for beliefs, like timelines, is an important part of the project.