Ten years ago, when you asked a question, whether about your research paper, an assignment, or a simple clarification, you would be thrown into large webs of knowledge systems: an old teacher, an unexpected hyperlink on a blog post, print-only magazines, information untouched by the synthetic analysis of large language models.
To truly understand something complex, you would have to penetrate walls of raw data, make original, nuanced connections across disciplines, and open windows into rich fields of inquiry. You would indulge in a “self-conversation” that tested your understanding of the world: a conversation that is asymmetrical, unpredictable, and disruptive, pitting articles, blogs, research papers, books, and video essays against one another, testing both their limits and your ability to generate original ideas.
Answers today, with generative AI, may look like conversations, but they are not dialogic. They are polished and complete, leaving little room for friction. In the age of fast answers, maybe there is something worth heralding about the lack of them.
But, before we delve into the importance of manual exploration and struggle in getting answers, we must first analyze the art of asking questions.
Why do we ask questions? Is there an intrinsic value in asking questions? Are some types of questions more valuable than others, and do they therefore demand that struggle more explicitly?
On questions
In the simplest view, questions are tools for collecting facts. When you enter a new domain, you ask questions to gain background knowledge. At this stage, the easier you can access base-level information, the better. Whether that information should come from a teacher or an LLM is a debate for another day, but, for now, I believe generative AI is genuinely helpful here.
As you become an expert in your field, you gain a sense of the existing gaps in the literature. At this point, asking questions becomes more radical: now that you know enough to see what is unknown, questions become an act of knowledge production in and of themselves.
The distance between your question and the answer contains the potential for discovery and innovation. This distance is not a void. It is filled with false starts, irrelevant detours, partial understandings, and accidental encounters. It allows questions to mutate, sharpen, or even collapse entirely. This is where the importance of exploration comes into play.
I argue that the ease and immediacy of AI-generated answers erode the distance and agency that make knowledge transformative.
Productivity
One of the key arguments in favor of using generative AI in research is that it can speed up real work. It can generate hypotheses, propose materials, and help researchers iterate faster. Why would you comb through hundreds of research papers when AI can summarize all that information for you in neat bullet points? It can share the sources with you too. It might leave some out, so you can use a blended approach to solve that problem. It may hallucinate some of them, but surely you can verify that. You are only augmenting your thinking, not outsourcing it. There is nothing wrong with that. It can get the work done. It makes you “productive.”
I think this is where we need to address the elephant in the room. The definition of productivity is not what it used to be. Productivity now is fear masked as efficiency: the fear of losing time, of being left behind, of not being fast enough. Fast knowledge is easy, accessible, and trendy. But it is also isolating and restricting. By optimizing for speed, we are losing the pathways that build intuition and agency.
Agency
The beauty of figuring out answers as you go is that it can send you to places you never even knew you needed to go, until you actually do. Old journals, op-eds, banned books, recorded interviews, references hidden inside footnotes, a conversation with an expert, and the endless ocean of information waiting for you to sail through, across libraries, bookstores, museums, the internet, and more. Your self-directed research becomes a mode of knowledge production where you are a sovereign intellect, and where meaning emerges through your embodied, social exploration and raw reasoning.
With generative AI, this relationship inverts. Algorithmic logic becomes the primary agent of thought, generating, interpreting, and circumscribing the intellectual terrain.
When researching alone, a footnote can send you sideways into an unfamiliar discipline, an out-of-print book, or a marginal argument that unsettles your assumptions. With AI, these detours are smoothed out. The answer arrives already curated, its edges trimmed, its contradictions resolved before you ever encounter them.
A conversation with even the most inferential AI has an unintended precision, a tone that is clipped and pragmatic. The conversations you have alone with your many sources of information are prose too, but more poetic, softer around the edges, oceanic spirals. They double back, hesitate, contradict themselves. Meaning emerges unexpectedly.
Your agency catches the plurality of every medium of information, processing the gestalt of thoughts and tensions, and reflecting them back to you as stars under which you voyage, your identity spelled out in a constellation of matter, energy, memories, and signals.
The struggle was not an inefficiency in accessing answers. It was a feedback loop. It signaled growth. It was authorship.