Most of the book was written in 2020 or earlier, which makes it ancient in terms of both technical advances and social recognition of AI concerns. I would say that the paragraph was correct as of the date of writing, insofar as it describes the non-technical articles generally circulating in the media at the time.
For example, GPT-2 is not even mentioned until page 422 of the book, a section possibly written later than these background chapters. The "success stories" for deep learning on the previous page mostly refer to Siri, Alexa, progress on ImageNet benchmarks, and AlphaGo. The authors describe self-driving cars with full autonomy as "not yet within reach". Anything written in the present tense should be read as referring to the period when GPT-2 was new.
Those statements are less true now, and the authors might no longer endorse those paragraphs as descriptions of the current day if the passage were brought back to their attention. By now I would expect them to be aware of the recent AI safety literature, including technical publications that assess the safety of current AI systems and, without any reference to sentience, provide counterexamples to multiple statements in the second paragraph.
I think that quite often when people say ‘consciousness’ in contexts like this, and especially when they say ‘sentience’, they mean something more like self-awareness than phenomenal consciousness.
Probably they are also not tracking the distinction very carefully, or thinking very deeply about any of this. Still, framing the problem as ‘will AIs become self-aware?’ is not quite as silly as framing it as ‘will AIs develop phenomenal consciousness?’, and I think it is the former framing that leads them to say these things.
In the PDF version of the Dive into Deep Learning book, on page 27, we can read the following:
If you have been interested in the alignment problem and AI safety, you probably already know that the second sentence of the first paragraph is wrong. Both this sentence and the first sentence of the second paragraph present sentience as the primary concern of AI safety, yet the view of the field is that sentience is not the primary concern. As Professor Stuart Russell famously said:
This mistake is concerning because this book is supposed to be the reference book for deep learning. I want to send an email about this to the authors (although I would appreciate it if an expert could do it instead). However, while writing the email, I realized that I do not have enough evidence that AI safety researchers agree that consciousness is not the primary concern of AI safety.
I have, of course, read many AI safety researchers express this opinion, including Stuart Russell, but I never wrote down the articles where I read it. Does anyone have a list of articles, quotes, or surveys that could convince a skeptical researcher that this is indeed the view of the field?
(It is pretty hard to convince someone that X is the view of the field when they believe the opposite.)