The main difference, in my mind, is that a human can never be as powerful as a potential ASI and cannot dominate humanity without the support of sufficiently many cooperative humans. For a given power level, I agree that a human is likely scarier than an AI of that power level. The scary part about AI is that its power level isn't bounded by human biological constraints, and the capacity to do harm or good is correlated with power level. Thus AI is more likely than humans to produce extinction-level dangers as a tail risk, even if it's more likely to be aligned on average.
Related question: What is the least impressive game current LLMs struggle with?
I’ve heard they’re pretty bad at Tic Tac Toe.
I’m new to the term AIXI and went three links deep before I learned what it refers to. I’d recommend making this journey easier for future readers by linking to a definition or explanation near the beginning of the post.
I think your assessment may be largely correct but I do think it's worth considering how things are not always nicely compressible.
This review led me to find the following podcast version of Planecrash. I've listened to the first couple of episodes and the quality is quite good.
This concern sounds like someone walking down a straight road and then closing their eyes because they know where they want to go anyway.
This doesn't sound like a good analogy at all. A better analogy might be a stylized subway map compared to a geographically accurate one. Sometimes removing detail can make it easier to process.
I don't think it's necessarily GDPR-related, but the names Brian Hood and Jonathan Turley make sense from a legal-liability perspective. According to Ars Technica:
Why these names?
We first discovered that ChatGPT choked on the name "Brian Hood" in mid-2023 while writing about his defamation lawsuit. In that lawsuit, the Australian mayor threatened to sue OpenAI after discovering ChatGPT falsely claimed he had been imprisoned for bribery when, in fact, he was a whistleblower who had exposed corporate misconduct.
The case was ultimately resolved in April 2023 when OpenAI agreed to filter out the false statements within Hood's 28-day ultimatum. That is possibly when the first ChatGPT hard-coded name filter appeared.
As for Jonathan Turley, a George Washington University Law School professor and Fox News contributor, 404 Media notes that he wrote about ChatGPT's earlier mishandling of his name in April 2023. The model had fabricated false claims about him, including a non-existent sexual harassment scandal that cited a Washington Post article that never existed. Turley told 404 Media he has not filed lawsuits against OpenAI and said the company never contacted him about the issue.
Interestingly, Jonathan Zittrain is on record saying the Right to be Forgotten is a "bad solution to a real problem" because "the incentives are clearly lopsided [towards removal]".
User throwayian on Hacker News ponders an interesting abuse of this sort of censorship:
I wonder, if you changed your name to "April May" and submitted a CCPA/GDPR request, what the result would be..
It's not a classic glitch token. Glitch tokens did not produce the "I'm unable to produce a response" error that "David Mayer" currently does.
Did you mean to link to my specific comment for the first link?