This is perhaps obvious to many people, particularly if you've used GPT-4o or seen discussion of its recent 'glazing' alongside its update to memory, but I think one of the largest, most obvious issues with AI that we are sleepwalking into is this: half a billion people using an app, daily, that not only agrees with and encourages any behavior, healthy or not, but also develops a comprehensive picture of your life: your identity, your problems, your mind, your friends, your family. And we're making that AI smarter every month, every year.
Isn't this a clear and present danger?
It is a clear and present danger, dwarfed by the clear but not-yet-present danger that successors to this system literally take over the world.
And yes, this does sound concerning. Can you elaborate on how you think that information might be used?
It is an extension of the filter-bubble and polarisation issues of the social media era, but yes, it is coming into its own as a new and serious threat.
What exactly is worrying about AI developing a comprehensive picture of your life? (I can think of at least a couple problems, e.g. privacy, but I'm curious how you think about it)
We have reached the juncture in history at which two previously impossible things have become technologically feasible: the destruction of all life on Earth, or Infinite Slack for everyone forever. Hopefully, these are two different things; but it's never too early to start being pessimistic.