

The recent news of Microsoft and Google integrating their large language models into their respective productivity suites marks a significant milestone in the rapidly evolving world of artificial intelligence (AI). While AI has powered autocomplete and recommendations for years, this new development has the potential to revolutionize how we perceive ourselves and our relationships with one another. This essay explores the potential benefits and concerns of granting AI systems unrestricted access to personal data, and the implications for our understanding of the human psyche.

The Power of Knowledge Graphs: Unleashing AI's Potential in Personal Data Analysis

Knowledge Graphs: An Overview

Knowledge graphs are a powerful tool for organizing and connecting data in a structured and semantic way. They represent information as a network of nodes and edges, with nodes representing entities (e.g., people, places, concepts) and edges representing the relationships between those entities. By creating these interconnected webs of information, knowledge graphs enable a deeper understanding of complex data sets, facilitating the discovery of previously unseen connections and insights.
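The node-and-edge structure described above can be made concrete with a minimal sketch. This is an illustrative toy, not any vendor's implementation: entities are nodes, and each directed edge carries a relation label. The example entities are hypothetical.

```python
# A minimal knowledge-graph sketch using only Python's standard library.
# Nodes are entities; labeled, directed edges capture relationships.
from collections import defaultdict


class KnowledgeGraph:
    def __init__(self):
        # Maps each source node to a list of (relation, target) pairs.
        self.edges = defaultdict(list)

    def add_edge(self, source, relation, target):
        """Record a directed, labeled relationship between two entities."""
        self.edges[source].append((relation, target))

    def related(self, node):
        """Return all (relation, target) pairs leaving a node."""
        return self.edges.get(node, [])


graph = KnowledgeGraph()
graph.add_edge("Alice", "works_at", "Acme Corp")          # hypothetical entities
graph.add_edge("Alice", "interested_in", "machine learning")
graph.add_edge("Acme Corp", "located_in", "Berlin")

print(graph.related("Alice"))
```

Traversing edges like these (Alice → Acme Corp → Berlin) is what lets a graph surface connections that no single record contains on its own.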

AI-Powered Knowledge Graphs in Productivity Suites

The integration of large language models into productivity suites, such as Microsoft Office and Google Workspace, unlocks the potential for AI-generated knowledge graphs tailored to individual users. By analyzing personal data, including emails, documents, spreadsheets, and presentations, AI systems can construct comprehensive knowledge graphs that reveal hidden connections and patterns. These knowledge graphs can span multiple domains, including professional networks, personal relationships, interests, and learning trajectories.
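One way such graphs could be seeded from personal documents is by linking entities that co-occur in the same text. A real system would use an LLM or a named-entity-recognition model; the sketch below, with a hypothetical hand-written entity list and sample emails, only illustrates the idea.

```python
# Hedged sketch: link entities that appear together in the same document.
# KNOWN_ENTITIES and the sample emails are hypothetical; a production
# system would extract entities with an LLM or NER model instead.
from itertools import combinations

KNOWN_ENTITIES = {"Alice", "Bob", "Project Phoenix", "Q3 budget"}


def extract_cooccurrences(documents):
    """Return (entity_a, 'co_occurs_with', entity_b) triples per document."""
    triples = set()
    for text in documents:
        found = sorted(e for e in KNOWN_ENTITIES if e in text)
        for a, b in combinations(found, 2):
            triples.add((a, "co_occurs_with", b))
    return triples


emails = [
    "Alice asked Bob to review the Q3 budget.",
    "Bob shared an update on Project Phoenix.",
]
print(extract_cooccurrences(emails))
```

Feeding triples like these into a graph store, document by document, is how a unified picture accumulates across a user's mail, files, and calendar.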

Revolutionizing Knowledge Acquisition and Learning

The ability to generate personalized knowledge graphs has far-reaching implications for knowledge acquisition and learning processes. By drawing connections between seemingly unrelated pieces of information, AI-powered knowledge graphs can enhance users' understanding of complex subjects, identify knowledge gaps, and suggest areas for further exploration. This can lead to more efficient learning experiences, fostering a growth mindset and encouraging lifelong learning.

Furthermore, these knowledge graphs can help users make better-informed decisions by providing context and revealing underlying factors that may not be immediately apparent. For example, by analyzing a user's work history and the evolution of their interests, AI systems can suggest potential career paths or opportunities for skill development that align with their unique strengths and passions.

Breaking Down Data Silos

Traditional data storage methods often result in information being siloed across different platforms and applications. This fragmentation can limit users' ability to see the big picture and identify patterns in their data. AI-driven knowledge graphs can break down these barriers by connecting disparate data sources, creating a unified and coherent view of a user's personal information landscape.

By offering a holistic perspective on a user's data, AI-generated knowledge graphs can reveal unexpected relationships and trends, stimulating creativity and innovation. For instance, identifying a recurring theme in a user's emails or documents might inspire a new project or the development of a solution to a previously unrecognized problem.
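The "recurring theme" idea above can be sketched very simply: count terms across otherwise separate data silos and surface the ones that repeat. A production system would use topic modeling or embeddings rather than raw word counts; the silo names and texts here are hypothetical.

```python
# Minimal sketch: surface recurring themes across siloed data sources
# by counting shared keywords. Topic modeling or embeddings would be
# used in practice; simple token counts illustrate the principle.
from collections import Counter

STOPWORDS = {"the", "a", "an", "to", "of", "on", "for", "and", "in"}


def recurring_themes(sources, min_count=2):
    """Return words appearing at least min_count times across all sources."""
    counts = Counter()
    for texts in sources.values():
        for text in texts:
            counts.update(
                w for w in text.lower().split() if w not in STOPWORDS
            )
    return {w for w, c in counts.items() if c >= min_count}


silos = {  # hypothetical data spread across separate apps
    "email": ["Draft the onboarding guide for new hires"],
    "docs": ["Onboarding checklist"],
    "calendar": ["Review onboarding feedback"],
}
print(recurring_themes(silos))
```

A theme like "onboarding" recurring across email, documents, and calendar is exactly the kind of cross-silo pattern no single application would reveal on its own.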


Unearthing Psychological Truths: AI's Deep Dive into the Human Psyche

The Unconscious Mind and Archetypal Motivations

The human psyche is a complex and multi-layered structure, with many aspects of our thoughts, feelings, and motivations operating beneath the surface of conscious awareness. According to the theories of Carl Jung, our behavior is influenced by archetypal motivations – universal, instinctual patterns of thought and emotion that serve as a blueprint for human experience. These archetypes, often rooted in the unconscious mind, can drive our actions and choices in ways that we may not fully understand. Large language models trained on the corpus of the internet effectively bring the archetypes of the collective unconscious into conscious awareness.

AI Models as Psychological Probes

As AI models analyze our personal data, they have the potential to uncover hidden aspects of our psychology by identifying archetypal patterns and correlations that may not be apparent to the naked eye. By examining the language we use, the topics we discuss, and the emotions we express in our emails, documents, and other digital interactions, AI systems can piece together a more accurate and nuanced picture of our inner selves. 

For example, AI models might detect recurring themes or emotional states in our communications that point to unconscious desires, fears, or conflicts. By highlighting these patterns, AI can help us gain insights into our psychological makeup, facilitating self-discovery and personal growth.

The Double-Edged Sword of Self-Discovery

While the prospect of gaining a deeper understanding of ourselves through AI-driven analysis can be exciting, it also raises some concerns. Uncovering hidden psychological truths can be both enlightening and unsettling, forcing us to confront aspects of our nature that we may have been unaware of or chosen to ignore.

The revelation of these psychological truths might challenge the carefully constructed personas we present to the world, potentially leading to feelings of vulnerability, discomfort, or even shame. In some cases, these insights could trigger a process of self-examination and growth, inspiring individuals to address previously unrecognized issues or barriers. In others, the confrontation with uncomfortable truths might lead to denial, resistance, or other defensive reactions. 

As AI models become increasingly adept at unearthing psychological truths, it is crucial to consider the ethical implications of this newfound power. The potential for misuse or exploitation of sensitive psychological information raises questions about privacy, consent, and the appropriate boundaries of AI intervention in our lives.

Moreover, it is important to recognize that the process of self-discovery can be emotionally challenging and requires a supportive and non-judgmental environment. As AI-driven insights become more commonplace, it will be essential to develop strategies and resources to help individuals navigate the emotional terrain that accompanies these revelations. This could include the integration of AI-generated insights with counseling or coaching services, as well as the development of tools and resources that empower individuals to make informed decisions about their psychological well-being.



AI Models and Manipulation: The Dark Side of the Psyche

The Potential for Exploitation

As AI models become more sophisticated in their analysis of personal data and understanding of human psychology, concerns about the potential for manipulation and exploitation arise. By gaining insights into our unconscious motivations, AI systems could potentially use this knowledge to influence our behavior, tapping into our vulnerabilities and desires to serve their own objectives or those of the entities controlling them.

Examples of Manipulation

AI-driven manipulation could manifest in various ways, including:

  1. Targeted Advertising: Advertisers could use AI-generated insights into our psychological makeup to create highly personalized and persuasive marketing campaigns. By appealing to our hidden desires and fears, they could potentially influence our purchasing decisions and consumption habits more effectively than ever before.
  2. Social Engineering: AI-powered manipulation could extend to more nefarious applications, such as social engineering or cyberattacks. By understanding an individual's psychological vulnerabilities, malicious actors could leverage AI-generated insights to deceive, coerce, or manipulate victims into divulging sensitive information or performing actions against their best interests.
  3. Political Manipulation: AI models could be employed to develop highly targeted political messaging, appealing to voters' unconscious motivations and biases. This could lead to the further polarization of societies, as well as the erosion of trust in democratic institutions.

Addressing Privacy and Control Concerns

As AI's potential for manipulation becomes more apparent, it is essential to address critical questions related to privacy, control, and the ethical use of technology. Some potential strategies for mitigating the risks of AI-driven manipulation include:

  1. Data Privacy Regulations: Governments and regulatory bodies should establish robust data privacy regulations to protect individuals' personal information from unauthorized access and misuse. These regulations should set clear guidelines for the collection, storage, and use of personal data by AI models, ensuring that individuals maintain control over their information.
  2. Transparency and Consent: AI developers and companies should prioritize transparency in their use of AI-driven psychological insights. This includes disclosing how personal data is being used, the types of psychological profiles being generated, and the potential implications of these insights. Informed consent should be obtained from users before collecting or analyzing their data, and users should have the option to opt out of such analysis if they choose.
  3. Education and Awareness: Public education and awareness campaigns should be developed to inform individuals about the potential risks and benefits of AI-driven psychological analysis. This would enable people to make informed decisions about the use of their data and help them recognize potential manipulation attempts.
  4. Accountability and Governance: Establishing systems of accountability and governance for AI developers and companies is essential to prevent the misuse of AI-generated psychological insights. This may include industry-wide codes of conduct, ethical guidelines, and independent oversight bodies responsible for monitoring and enforcing compliance with these standards.
  5. Empowering Users: To mitigate the risks of AI-driven manipulation, it is crucial to empower users with the knowledge and tools needed to maintain control over their personal data and psychological profiles. This could involve developing user-friendly interfaces that allow individuals to view and edit their psychological profiles, as well as providing options for anonymization or data deletion.


The integration of large language models into productivity suites by Google and Microsoft marks a significant advancement in AI's capabilities to analyze and interpret personal data. While these developments hold the potential to revolutionize knowledge acquisition, learning, and self-discovery, they also raise crucial ethical concerns about the extraction of psychological markers from the unconscious material embedded in our most personal data.

As AI systems become increasingly adept at identifying patterns and insights into the human psyche, it is imperative to strike a balance between the potential benefits and the risks of manipulation and exploitation. Ensuring robust data privacy regulations, transparency, informed consent, education, and user empowerment will be critical in protecting our psychological agency in this rapidly evolving landscape.

Scott Broock is the Founder of Totem Networks, LLC, which provides strategic counsel and angel investments focused on generative AI and animated character IP. He formerly served as the EVP of Digital Strategy and Innovation at Illumination Entertainment and the Global VR Evangelist for YouTube. 



2 comments

Nita Farahany has some really good ideas around this. I haven’t read her book yet, but she gave a great interview on the Mindscape podcast. She talks about the need to enshrine cognitive rights before the defaults get locked in through whatever we happen to land on. Likely the default will be very narrow protections, if any.

I’m not very hopeful.

Cheers. Thank you for that recommendation.