Since this is my first post on LessWrong, I'll begin with a brief introduction to establish my background. I am in the final year of an undergraduate degree in Computer Science. My primary concern, and the thesis of this post, is the potential for epistemic failure caused by outsourcing foundational cognitive tasks to Large Language Models (LLMs).
I was applying to colleges when ChatGPT first came out, and I remember the discussions about how this software would shape the future. I quickly began using LLMs for coursework: studying, writing and revising essays, drafting emails, and especially coding.
I felt like I had been given "rocket boosters" and cruised through my introductory classes.
When I started taking more difficult classes like operating systems, I was hit with the consequences: I had essentially never written a single line of code myself. My mistake was trading the ability to succeed consistently for the illusion of efficiency; I prioritized a local optimum over incrementally building my long-term cognitive base.
This experience forced me to quit cold turkey: no more LLMs for basic tasks or coding. My current belief, held with 75-80% confidence, is that unregulated LLM use creates a serious and measurable long-term deficit in reasoning and critical-thinking skills; this is consistent with the findings of the Stanford study I cite in arguments with my friends.
I would be open to updating downward if I observed reliable results from a student cohort that performed better on cognitive tasks when using LLMs than when not using them.
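To make that update rule concrete, here is a toy calculation in odds form. The likelihood ratio is a number I made up purely for illustration, not something from any study. Let $H$ be the hypothesis that unregulated LLM use degrades reasoning, with prior $P(H) = 0.8$, i.e. prior odds of $4:1$. Suppose a reliable study produced evidence $E$ that is four times likelier under $\neg H$ than under $H$:

$$O(H \mid E) = O(H) \cdot \frac{P(E \mid H)}{P(E \mid \neg H)} = \frac{4}{1} \cdot \frac{1}{4} = 1,$$

so my posterior would drop to $P(H \mid E) = 0.5$. A weaker study would move me less; the point is that the evidence described above is exactly the kind that should move me.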
I'm concerned about the future of education and the young adults who will one day hold the world up. I advocate for more LLM-proofing in classes. From firsthand experience, outsourcing my critical thinking, problem solving, and creativity to an LLM made me a shell of a person who only knew how to copy-paste text from a context window into a Google Doc.
I would appreciate suggestions for other LessWrong (or external) articles on this debate, or on the trade-offs between efficiency and skill-building. Specifically: what are the best (rational) ways to spread awareness about developing strong internal algorithms instead of relying on crutches?