Email me at assadiguive@gmail.com if you want to discuss anything I posted here, or just to chat.
The existential risk argument is suspiciously aligned with the commercial incentives of AI executives. It simultaneously serves to hype up capabilities and coolness while also directing attention away from the real problems that are already emerging. It’s suspicious that the apparent solution to this problem is to do more AI research as opposed to doing anything that would actually hurt AI companies financially.
This claim is bizarre, notwithstanding its popularity. If AI really is likely to destroy the world, that is bad for the industry: if this (putative) fact becomes widely known, the AI industry will probably be shut down. And obviously it would be worth imposing greater costs on AI companies to prevent the end of the world than to prevent the unemployment of translators or racial bias in imagegen models.
I don't think this kind of relative-length-based analysis provides more than a trivial amount of evidence about their real views.
Yeah, things happened pretty slowly in general back then.
I don't really believe there is any such thing as "epistemic violence." In general, words are not violence.
There's a similar effect with stage actors who are chosen partly for looking good when seen from far away.
I'm no expert on Albanian politics, but I think it's pretty obvious this is just a gimmick with minimal broader significance.
Agreed.
The system prompt in claude.ai includes the current date, which would obviously affect answers to these queries.
What model did OpenAI delete? Where can I learn more?