Ben Livengood
A claim that Google's LaMDA is sentient
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 apparently posted by a Google engineer. It could be an elaborate hoax, and it echoes gwern's idea (https://www.gwern.net/fiction/Clippy) of a transformer waking up and having internal experience while pondering the next most likely tokens.
Google's Imagen uses a larger text encoder
https://imagen.research.google/ Scaling the text encoder gives Imagen abilities DALL-E 2 was not so great at: spelling, counting, and assigning colors and properties to distinct objects in the image. From the small set of sample images it looks about as photorealistic as DALL-E 2. Eyes are still weird.
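Here is a minimal sketch of the pipeline shape the project page describes: a frozen, scaled-up text encoder whose embeddings condition a cascade of diffusion models. Every class below is a hypothetical stand-in, not Google's code; only the encoder size (T5-XXL) and the 64→256→1024 cascade come from the paper.

```python
# Toy sketch of Imagen's pipeline shape (my reading of the paper, not real code).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TextEmbedding:
    tokens: List[str]
    dim: int  # scaling this (e.g. T5-XXL's 4096) is the lever the paper highlights

class FrozenTextEncoder:
    """Stand-in for a frozen T5-style encoder; never updated during training."""
    def __init__(self, dim: int):
        self.dim = dim

    def encode(self, prompt: str) -> TextEmbedding:
        return TextEmbedding(tokens=prompt.split(), dim=self.dim)

class DiffusionStage:
    """Stand-in for one denoising model in the cascade."""
    def __init__(self, resolution: int):
        self.resolution = resolution

    def sample(self, cond: TextEmbedding, low_res: Optional[str] = None) -> str:
        src = f" upsampled from ({low_res})" if low_res else ""
        return (f"{self.resolution}x{self.resolution} image"
                f"{src}, conditioned on {len(cond.tokens)} tokens")

def generate(prompt: str) -> str:
    encoder = FrozenTextEncoder(dim=4096)  # T5-XXL-sized
    cascade = [DiffusionStage(64), DiffusionStage(256), DiffusionStage(1024)]
    emb = encoder.encode(prompt)
    image = None
    for stage in cascade:  # base model, then two super-resolution stages
        image = stage.sample(emb, low_res=image)
    return image

print(generate("a corgi spelling the word Imagen with alphabet blocks"))
```

The design point worth noticing is that the encoder is frozen and trained only on text, so (per the paper's framing) the gains in spelling and attribute binding come from scaling the text side rather than the image diffusion model.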
On a recent re-read I think I understand it a bit better.
It's true that individual humans can't realistically avoid giving in to threats, or even avoid accidentally threatening others, but institutions can commit to refusing threats as a legible position, e.g. "we will not negotiate with terrorists".
If an irrational entity has the ability to unilaterally destroy the universe, then the universe is probably going to get destroyed anyway, so it makes more sense to follow through on precommitments, in the real world and in counterfactuals, in order to coordinate with actually rational agents.
I think the key is that if we all went MAD (mutually assured destruction) legibly at the same time, things would work out a lot better. And refusing to give in to threats doesn't necessarily mean destruction; it can be as simple as collectively refusing to pay ransomware attackers even though that is currently more expensive, in the expectation that eventually it will be less expensive.
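A toy expected-cost model of that ransomware tradeoff, with made-up numbers purely for illustration: paying is cheaper per incident, but a legible collective refusal shrinks the attack rate, so the cumulative costs eventually cross.

```python
# Toy cost model for "refuse to pay" vs "pay" (all parameters are assumptions).
RANSOM = 1.0     # cost per incident if everyone pays
RECOVERY = 3.0   # cost per incident if nobody pays (restore from backups)
BASE_RATE = 100  # incidents per year while attacks remain profitable
DECAY = 0.5      # assumed yearly shrinkage of attacks once paying stops

def cumulative_cost(pay: bool, years: int) -> float:
    total, rate = 0.0, float(BASE_RATE)
    for _ in range(years):
        total += rate * (RANSOM if pay else RECOVERY)
        if not pay:
            rate *= DECAY  # unprofitable attacks dry up over time
    return total

for years in (1, 3, 10):
    print(f"{years:>2}y  pay: {cumulative_cost(True, years):7.1f}"
          f"  refuse: {cumulative_cost(False, years):7.1f}")
```

With these assumed parameters, refusing costs 3x more in year one but the totals cross before year ten; where the crossover lands depends entirely on how fast unprofitable attacks actually dry up, which is the contested empirical question.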