I don't know, the obviously wrong things you see on the internet seem to differ a lot based on your recommendation algorithm. The strawmanny SJW takes you list are mostly absent from my algorithm. In contrast, I see LOTS of absurd right-wing takes in my feed.
The links to subsections in the table of contents seem to be broken.
It's a Galaxy A54.
I'm not sure how to share screenshots on mobile on LW 😅
The idea seems cool, but the feed doesn't work well on my phone. It cuts off the sides of the text, which makes things unreadable. (I have a Samsung.)
Now, the EU itself badly needs some reforms, namely, as the Draghi report suggests, relaxing regulation, but there seems to be no political will to do that. At least, the last time I checked, those annoying "accept cookies" banners were still alive and kicking.
This is not true; there is a lot of political will for deregulation and simplification (see e.g. here). Everyone is talking about it in Brussels.
I assume the point about "accept cookies" banners was a joke, but just in case it wasn't: it takes time for regulations to change, so the fact that we still see the "accept cookies" banners is no evidence that the EU isn't taking deregulation seriously. (A separate question is whether getting rid of those banners, or other GDPR rules, would actually boost competitiveness; I suspect it wouldn't.)
Also, IMO the most important reforms we need are not about regulation, but about harmonizing standards across the EU and creating a true single market.
I would expect higher competence in philosophy to reduce overconfidence, not increase it? The more you learn, the more you realize how much you don't know.
This LessWrong version of the post seems to cut off the final section of the original?
What exactly is worrying about AI developing a comprehensive picture of your life? (I can think of at least a couple problems, e.g. privacy, but I'm curious how you think about it)
I think there are several potential paths by which AGI could lead to authoritarianism.
For example, consider AGI in military contexts: people might be unwilling to let it make highly autonomous decisions, and on that basis military leaders could justify requiring these systems to be loyal to them, even in situations where it would be good for the AI to disobey orders.
Regarding your point about the requirement of building a group of AI researchers: these researchers could be AIs themselves, and they could be ordered to make future AI systems secretly loyal to the CEO. Consider e.g. this scenario (from Box 2 in Forethought's new paper):
In 2030, the US government launches Project Prometheus—centralising frontier AI development and compute under a single authority. The aim: develop superintelligence and use it to safeguard US national security interests. Dr. Nathan Reeves is appointed to lead the project and given very broad authority.
After developing an AI system capable of improving itself, Reeves gradually replaces human researchers with AI systems that answer only to him. Instead of working with dozens of human teams, Reeves now issues commands directly to an army of singularly loyal AI systems designing next-generation algorithms and neural architectures.
Approaching superintelligence, Reeves fears that Pentagon officials will weaponise his technology. His AI advisor, to which he has exclusive access, provides the solution: engineer all future systems to be secretly loyal to Reeves personally.
Reeves orders his AI workforce to embed this backdoor in all new systems, and each subsequent AI generation meticulously transfers it to its successors. Despite rigorous security testing, no outside organisation can detect these sophisticated backdoors—Project Prometheus' capabilities have eclipsed all competitors. Soon, the US military is deploying drones, tanks, and communication networks which are all secretly loyal to Reeves himself.
When the President attempts to escalate conflict with a foreign power, Reeves orders combat robots to surround the White House. Military leaders, unable to countermand the automated systems, watch helplessly as Reeves declares himself head of state, promising a "more rational governance structure" for the new era.
Relatedly, I'm curious what you think of that paper and the different scenarios it presents.
I see. Why do you have this impression that the default algorithms would do this? Genuinely asking, since I haven't seen convincing evidence of this.