I originally penned a portion of the essay below in 2024, at a time when American exceptionalism was perhaps the most prominent part of the public spectacle. The cultural phenomenon of that moment can best be described as an amalgamation of tech bros convinced they were going to assemble the next Manhattan Project after watching Oppenheimer (see the e/acc movement) and a rampant economy still reacting to the initial fervor brought on by public generative AI models.
I myself had sipped this proverbial Kool-Aid at the time, and had spent many a fortnight penning and debating thoughts on how the US government (and its constituents) should do everything in their power to ensure that this theoretical machine god, regardless of its ramifications, be created within its borders. While I have since realized that such thinking can lead to potentially disastrous outcomes, it is evident that those in the "in-circle" of AI development, membership in which is now restricted not by geography but by exposure, are significantly more aware of the potential long-term societal risks of creating an unrestricted general intelligence, yet are often unaware of how their fellow constituents perceive this technology.
The following essay is meant to be a modern-day analogue to A Modest Proposal, in which Jonathan Swift presents a rather grotesque solution to Irish poverty, meaning to highlight the relative apathy that the wealthy in Ireland had for the plight of their fellow countrymen. It is meant to highlight a relatively extreme point of view: that we should delegate a small portion of governance to our autonomous creations and revel in the increased efficiencies that they bring. While this may indeed sound preposterous to those who are fully attuned to the current destructive potential of unrestricted AI progress, it might not sound as catastrophic to an average member of the populace who dreads dealing with the DMV, among other government processes, and has neither the time nor the will to spend thinking about AI doomerism. As such, I have titled this piece A Rational Proposal, as despite its relatively extreme core proposition, it does attempt to put together a cohesive argument that a lot of Americans (and other participants in Western-style democracies) might agree with.
A Rational Proposal: Delegating Governance to our AI Counterparts
Can machines think? What was once a question reserved for science-fiction buffs, math nerds, and basement-dwelling gamers has now become a fundamental part of our day-to-day lives, government policy, and our internal musings on the future. AI is no longer limited to scientists and dystopian movies; it is now being used everywhere, from workplaces to college classrooms. Indeed, the increased dependence on tools such as ChatGPT and Claude in the classroom has created a paradigm shift in education, one that has occurred perhaps faster than any change before it. Over 50% of college students are using AI on their assignments, leading some to question whether AI is simply a tool, or whether it is becoming a replacement for independent thinking, acting as a simulacrum of cognition itself.
With initiatives such as the Department of Government Efficiency highlighting the inefficiencies resulting from governance by a bloated bureaucratic class of humans, one has to wonder if it makes sense to use generative AI tools to automate a small portion of governance. After all, while we may not trust our local chatbot to run the country (yet), most of us can agree that we would much rather see a relatively friendly AI model interacting with us when we go to the DMV or are trying to sort out our tax returns. Generative AI has been positioned as a tool for the routine completion of menial tasks, so that society can become more efficient and its denizens can be left to focus on creative work: a tool, according to OpenAI's Sam Altman, that will continue to surprise us with its capabilities. There is no better example of an archaic remnant than our own bureaucratic governance structure, which, despite being extremely bloated, has never seen (until now) any meaningful attempt at reform.
How GenAI is being used today
Although the question of a machine's cognitive ability may seem relatively modern (after all, ChatGPT only became public toward the end of 2022), it was actually first posed almost 75 years ago by Alan Turing, the mathematician who was part of the famous Bletchley Park team of cryptanalysts that cracked Enigma during World War II, and who, most famously, is the namesake of the Turing test, which measures a machine's ability to deceive an interrogator into thinking it is human through a multi-turn conversation on common topics. Turing posited that once we can no longer tell the difference between flesh and metal, between blood and electricity, the fundamental question of sentience has been answered, with a resounding yes.
By this measure, the flagship foundation models have already achieved cognition. Indeed, if you were presented with a chatbot instructed to converse in contemporary vernacular, it is highly likely you would not be able to tell the difference between its responses and those of a random human. By all accounts, the current models have passed the Turing test: any academically inclined individual living just two decades ago would have anointed these machines as being "alive" and cried out at the possibility of a Terminator-esque Skynet scenario descending upon us.
Yet, it does not feel that way. While college students and software engineers may trust AI to answer homework assignments and build rudimentary applications, leaders, whether they head small businesses, enterprises, or countries, do not. Despite the superior cognitive abilities of the foundation models leading the generative AI revolution (consider that the latest batch of reasoning models seem able to answer even graduate-level questions deemed sufficiently hard for subject-matter experts), the actual adoption of generative AI is lagging. The majority of contemporary use cases, as highlighted by foundation-model creator Anthropic, are centered around coding and technical writing, and mainly serve to augment human effort rather than to automate it. While these use cases certainly have the potential to reduce human labor on certain tasks, they have not resulted in significant societal change.
Civilization-altering means something that fundamentally transforms the human experience, to the point where human history can be marked as eras pre and post the technology or innovation in question. Historical examples include the printing press, radio, and cable television. More contemporary examples include the internet, the iPhone, and social networks. Generative AI, so far, has been used as an additional tool or replacement rather than as a change to the status quo. Instead of utilizing code from Stack Overflow or open-source GitHub repos, programmers are using ChatGPT or Claude Opus to write Python functions. Instead of using online homework tools or the internet, students are using chatbot tools for assignments. While these have resulted in some efficiency improvements, they have not produced that one civilization-altering moment, that one inkling of fundamental change that will lead the historians of the future to term the years after 2022 as Post-AI. And despite what you might think, it is not the technology that is lagging: it is rather our ability to adopt it and put it to use in something legitimate, something that requires, or rather, invites change.
The decay of our administrative institutions, and a proposal to fix them
It is no secret that our political and governance bodies are decaying institutions. Take any field, be it finance, education, or science, and separate it significantly from reliance on the various politically inclined branches of society. Witness, then, how innovation begins to permeate, and the same field goes from stagnation to advancement. The majority of technological, industrial, or even cultural progress comes not from government-funded institutions, but rather from independent corporations or industries. Contemporary America has been built on this notion: free-market economics and the avoidance of unnecessary regulations that may impede legitimate progress. Yet we have neglected to innovate on the one aspect of our lives that is simultaneously extremely important and outdated: the way in which we are governed. Despite exponential growth in resources (both financial and physical), the actual output we have seen from the public sector in the United States is minimal. A bloated budget has seen governmental agencies employ ever more workers and resources, without any meaningful progress in how they serve the very party (United States citizens) that pays for their sustenance.
In this article, I outline a simple yet radical proposition: replace the majority of low-ranking federal agencies and bureaucracies with automated counterparts, powered entirely by foundation models built in an open-source manner. This transformation, like most policies impacting the government, will start as pilots at the municipal or state level. A simple example could be the local DMV office: instead of needing to deal with numerous agents, call centers, and outdated recording systems, visitors will be greeted by a friendly language model, fine-tuned on that municipality's local records and regulations. The language model will be able to do everything from updating records and processing title transfers to issuing new documents. In order to achieve its goal, and to prevent it from going completely off the rails, it will have access to a limited set of tools, mostly concentrated around content validation, database management, and other functions you might expect an administrator within a DMV office to perform. In generative AI, these capabilities are often referred to as tool use: the language model is prompted with a description of tools and functions that it can call when presented with a question, and is then asked to solve a problem or complete some task by using those tools. Of course, until we see a corresponding advancement in the fields of robotics and computer vision, actual driving examinations will still need to be carried out by humans. Other departments that do not require manual human-to-human interaction (the now-defunct USAID organization being a prime example) could likely be entirely automated.
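To make the tool-use pattern concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the tool names (`lookup_record`, `transfer_title`), the record data, and the hard-coded "model output" stand in for what a real deployment would wire up to an actual language model and municipal database.

```python
import json

# Hypothetical DMV records; purely illustrative data.
RECORDS = {"ABC123": {"owner": "J. Doe", "status": "active"}}

def lookup_record(plate: str) -> dict:
    """Fetch a vehicle record from the (mock) municipal database."""
    return RECORDS.get(plate, {"error": "not found"})

def transfer_title(plate: str, new_owner: str) -> dict:
    """Reassign a vehicle title, mirroring what a DMV clerk would do."""
    record = RECORDS.get(plate)
    if record is None:
        return {"error": "not found"}
    record["owner"] = new_owner
    return record

# The tool registry: the model is prompted with a description of each
# function it may call, and may call nothing outside this set.
TOOLS = {"lookup_record": lookup_record, "transfer_title": transfer_title}

def run_tool_call(model_output: str) -> dict:
    """Dispatch a model-emitted tool call (here, a JSON blob) to real code."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]  # restrict execution to the allowed tools
    return fn(**call["arguments"])

# In production this JSON would come from the language model; we
# hard-code one call to keep the sketch self-contained.
fake_model_output = (
    '{"name": "transfer_title",'
    ' "arguments": {"plate": "ABC123", "new_owner": "A. Smith"}}'
)
print(run_tool_call(fake_model_output))  # → {'owner': 'A. Smith', 'status': 'active'}
```

The guardrail in this sketch is the registry itself: the model can only invoke the small, audited set of functions it was given, which is exactly the "limited set of tools" constraint described above.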
The most likely gripe with this proposition is ethical: while the majority of constituents may agree that a generative AI model, once equipped with the proper external scaffolding and tools, is more efficient than its human counterpart, it remains to be seen whether it can be more ethical, especially when interacting with a physical, rather than simulated, economy. After all, can we really trust language models, which have trouble counting the number of r's in the word "strawberry" and are frequently jailbroken into producing content outside their safety bounds through simple prompt engineering by seemingly ordinary individuals with no special resources, with the trillions of dollars managed by government agencies? The answer is multifold.
While the predominant view has been to simply point toward malpractice or a lack of morality on the part of department heads (a view that might be correct), such failures actually uncover a broader, more significant illness permeating our society: a lack of proper oversight and guardrails. Humans make mistakes: like any organism, we have lapses in judgment, likely as a result of our underlying biology. Growing the administrative state has produced the exact opposite of what might originally have been a well-intentioned effort to introduce additional oversight into government actions. The oversaturation of federal employees has resulted in inefficiency, and subsequent efforts to correct these inefficiencies have resulted in a few bureaucrats having unprecedented control over government spending, regulations, and federal mandates. This phenomenon has drawn apt comparisons to the late Roman Empire, which fell in part due to a bloated administrative state unable to adequately serve its own denizens.
A machine's hypothetical propensity to become corrupt
The question is not whether an administration of foundation models will be more efficient, or less costly, than the one currently managed by humans. Indeed, it is hard to see how it could get much worse: certainly, the AI models of today will make more logical and sound decisions when presented with a set budget, and will be more efficient (and likely friendlier) when handling administrative tasks. Obviously, initial mistakes made by these agents will be magnified, just as mistakes made by a self-driving car often elicit an overreaction, even if the frequency of those mistakes is orders of magnitude lower than that of a human driver. But as time goes on, and our government, and the lives of its denizens, sees a statistically meaningful improvement in quality through the proper allocation of capital, the concerns centered on pure performance will subside.
Rather, the fundamental concern here is rooted in the potential doomsday scenario, one in which machines, composed of silicon and electricity, have taken over our government, our country, and our lives, and have used the very powers we bestowed upon them to render us useless. The solution to this hypothetical doomsday scenario is rooted in the point made earlier: making the development (and capabilities) of these models open source. Our role (or lack thereof) in the development of open-source AI was brought under the spotlight with the release of R1 by DeepSeek, which temporarily claimed the hypothetical mandate of intelligence while keeping its underlying architecture open source. R1 not only cast doubt on the somewhat artificially fabricated reality in which US-based corporations control the AI market and the corresponding consumer mind share, but also showed that models developed in the open tend to elicit higher degrees of trust (political and socioeconomic concerns aside) from developers and users alike.
While an argument for or against the merits of open-source AI versus its corporation-owned counterpart would likely require a book that is the techno-centric, non-fiction equivalent of War and Peace in both length and internal drama among its main characters, the argument for open-sourcing an AI model that will play a significant role in the administration of our state, and that will have the theoretical ability to deploy capital on our behalf, is far more straightforward. Putting the development of our hypothetical governance-centric AI in the hands of a traditional for-profit corporation that keeps it closed source will, at best, make it impossible for us to understand why it makes mistakes; at worst, the model will become subject to the same biases as the current bloated bureaucracy, having been "raised" by a small group of disconnected individuals rather than the broader collective.
An open-source governance agent
Contemporary vernacular often conflates open weights with open-source development. Indeed, you might see pleas made to the developers of various generative AI technologies to "release the weights". In neural networks, and specifically in the transformer architectures underlying the majority of widely used LLMs today, weights are the learned parameters that dictate how the network maps inputs to outputs. Weights play a direct role in how much emphasis the model gives to certain words or phrases, typically referred to as tokens in the literature; a slight change in weights can lead the model to interpret the same sentence in an entirely different manner. For example, the sentence "The bank is crowded on Saturday" can be interpreted entirely differently based on a model's weights: a slight perturbation can lead to entirely different results. Open-sourcing weights not only allows scientists to reproduce the results claimed in model releases, but also enables developers and other organizations to fine-tune the model for specific use cases.
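The sensitivity to small weight changes can be sketched with a toy example. All numbers below are invented for illustration, and the softmax over per-token scores is a deliberately simplified stand-in for real transformer attention; the point is only that a tiny perturbation can flip which token the model emphasizes.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax: turns raw scores into a distribution."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Tokens of the ambiguous example sentence from the text.
tokens = ["The", "bank", "is", "crowded", "on", "Saturday"]

# Invented per-token scores, as a learned model might produce; "bank"
# and "crowded" are nearly tied.
scores = np.array([0.1, 2.00, 0.1, 1.99, 0.1, 0.5])

attention = softmax(scores)
print(tokens[int(attention.argmax())])  # "bank" receives the most emphasis

# A perturbation of just ±0.02 to two weights flips the emphasis.
perturbed = softmax(scores + np.array([0.0, -0.02, 0.0, 0.02, 0.0, 0.0]))
print(tokens[int(perturbed.argmax())])  # now "crowded" dominates
```

This is also why reproducibility matters: without the released weights, no outside scientist can verify which of these near-tied internal states a deployed model actually occupies.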
However, the development of our theoretical administrative AI model must go beyond just releasing the weights and the methodology used for its development. Instead, it must be developed entirely in the open. Such an initiative will likely be led by some sort of company or organization, perhaps operating under a government grant or through independent funding. The creators of the model must not only be held to the same standards of transparency and openness that we expect from our elected officials; they must be forced, by design, to adhere to them. From the data used during training to the final deployment architecture, the entirety of the model must be created and deployed in the open. It must also be subject to audits, not from traditional consulting firms, but from the broader public, who can review its implementation to ensure that it continues to act in the best interests of the constituents it is meant to serve.
The corporatization of AI is a relatively new development; in fact, it was a group of rebels and misfits, individuals on the external fringes of serious academia, who revitalized the field of neural networks in the late 20th century. It was not Google, or Amazon, or a billion-dollar lab, but rather an independent set of scientists going against the grain and doing much of their work openly, without restriction. They were ahead of their time; in fact, it was not until 2012, when Geoffrey Hinton, Ilya Sutskever (who would go on to be chief scientist at OpenAI), and Alex Krizhevsky published their seminal work on using a deep neural network to classify a dataset of images, that the corporate giants we are all too familiar with took notice. AI has its roots in open source and transparency: we need not be wholly dependent on a singular company, although its resources and validation can certainly be valuable when working with what we can assume to be a slightly distrustful set of government employees. In fact, developing our AI in a safety-first, transparent manner makes it more likely to be adopted in formal legislation: no longer will the threat of corporate bias, or the idea of an autonomous "god" in the hands of a small set of board members, loom over the adoption of AI in governance.
Why is such a radical change needed?
This proposal is not meant to be a technological essay in which the superiority and potential of American technology is sung in high praise. Instead, it is meant to serve as a radical repudiation of a system whose inefficiencies are exposing a broader rot within contemporary Western society. The stagnation of our government, of the very institutions we have chosen to lead us, is both an indictment and a symptom of our culture. Technology, science, and literature have become politicized; indeed, a simple survey of the reactions to any new scientific advancement, cultural artifact, or business endeavor will differ vastly depending on the respondent's political or groupthink affiliation. This politicization has resulted in the stagnation of Western civilizational advancement. As Peter Thiel of Facebook, Palantir, and Founders Fund fame has noted, over the past 20 to 30 years we have only seen substantial progress in software and computers, with all other fields slowing down. This argument can be extended beyond technology: the great American authors are all men of the past, long gone. The great artists have been gone for even longer. Fashion trends and popular culture have not seen any meaningful change; in fact, if you somehow managed to transport an American from 2005 to the present day, not much about them (beyond their inability to use a smartphone) would differ from the American of 2025.
This is not simply a techno-libertarian or new-right worldview; just last year, The New Republic penned an article noting how cultural artifacts, whether television shows, films, or social media platforms, have promoted an intellectually untaxing and stale aesthetic. While that piece is structured as a criticism of big tech and the role of its algorithms in the suppression of culture, it still recognizes the malady. Our culture, our society, save for its invasion by software, has been rendered immobile, and in large part it is our own doing: we have become too comfortable continuing to do things the way they have always been done.
The Renaissance, which revitalized a Europe that had long suffered from a period of little to no economic or cultural growth, was not just the result of the da Vincis and Michelangelos. These visionaries, while exceptional, flourished and innovated in large part because of the societal and cultural shifts that allowed them to do so. The bureaucracies and pro-regulatory administrative states that had characterized the majority of post-Roman Europe were displaced by autonomous city-states that utilized, for the time, advanced bookkeeping methods. Private wealth, from families such as the Medici, was spent on fostering innovation and art rather than on state-anointed initiatives. The Renaissance was a fundamental shift in human history, with many civilization-altering events within it. But it was synthesized from a shift in how people perceived and interacted with their government, with the very administrative bodies they trusted to govern them effectively.
Revitalizing American and Western Excellence
The central, utopian future promised by AI is one in which work is automated, one in which we are free to pursue creative endeavors, one in which we leverage omnipotent intelligence to accelerate. Generative AI can very well be the engine that powers our economy, leads us to Mars, and ushers in a new age of innovation. Our governments are certainly recognizing that this reality is much closer than previously anticipated. The Trump administration has appointed an internal AI czar and has committed an immense amount of funding toward accelerating AI progress; Stargate, the project and firm meant to lead that buildout, has been announced with hundreds of billions of dollars in planned investment. The European Union recently held a summit specifically centered on artificial intelligence and tech policy. Recent election results, despite public opinion, have not been the result of a reactionary shift toward the 2016 era of traditional conservative politics. Rather, they are an effort to revitalize the economy and reignite the spirit of innovation that characterized Western society in the mid-19th century.
Governance is the first step toward the broader adoption of AI, a step that will at once be universally understood (a requirement for fundamental change) and will accelerate its impact in other fields. Imagine an administrative state that is run not by a bureaucracy, but by a self-assembling intelligence that properly allocates capital, incentivizes innovation, and updates a somewhat archaic and manual system. Imagine a future in which records of our personal finances and information are no longer maintained in decades-old COBOL systems, a future in which patents and ideas for revolutionary medicines are approved near-instantaneously rather than requiring double-checking by numerous politically motivated individuals.
More often than not, it is regulation and a lack of opportunity, not a lack of human ingenuity, that curtails innovation. Just as the individuals of the 1400s were no less intellectually capable than their predecessors in the Roman Empire, we are no less intellectually capable than our peers of the past. Western, and specifically American, exceptionalism has historically been a byproduct of a culture and society that aligns socioeconomic incentives with progress.
If generative AI, which up until now has been little more than an assistant, is to become a true civilization-altering technology, then it must have its own civilization-altering moment. Our governance structures, and the way in which our government is run, are perhaps the best candidates for improvement. Small pilots, starting at the state level with open-source AI technologies, will culminate in a society that not only trusts AI, but has the capacity to allow it to reach its potential. In short, changing the way in which we are governed is how we usher in the future: a new era of American and Western excellence.
I originally penned a portion of the essay below in 2024, at a time when American exceptionalism was perhaps the most prominent part of the public spectacle. The cultural phenomenon of that time can be best described as being an amalgamation of tech bros thinking they were going to assemble the next Manathan Project after watching Oppenheimer (see the E/Acc movement) and a rampant economy, still reacting to the initial fervor brought by public generative AI models.
I myself had sipped this proverbial Kool-Aid at the time, and had spent many a fortnight penning and debating thoughts on how the US government (and its constituents) should do everything in their power to ensure that this theoretical machine god, regardless of its ramifications, be created within its borders. While I have now realized that such thinking can lead to potentially disastrous outcomes, it is evident that those in the "in-circle" of AI development, which is now restricted not by geography but by exposure, are significantly more aware of the potential long-term, societal risks of creating an unrestricted general intelligence, and are often unaware of how their fellow constituents perceive this technology.
The following essay is meant to be a modern day analogy to A Modest Proposal, in which Johnathan Swift presents a rather grotesque solution to the Irish famine, meaning to highlight the relative apathy that the wealthy in Ireland had for the plight of their fellow countrymen. It is meant to highlight a relatively extreme point of view, that we should delegate a small portion of governance to our autonomous creations and revel in the increased efficiencies that they bring. While this may indeed sound preposterous to those who are fully attuned with the current destructive potential of unrestricted AI progress, it might not sound as catastrophic to an average member of the populace who dreads dealing with the DMV, among other government processes, and does not have the time nor the will to spend thinking about AI doomerism. As such, I have titled this piece A Rational Proposal, as despite its relatively extreme core proposition, it does attempt to put together a cohesive argument that a lot of Americans (and other participants in Western-style democracies) might agree with.
A Rational Proposal: Delegating Governance to our AI Counterparts
Can Machines Think? What was once a question reserved solely for science-fiction buffs, math nerds, and basement dwelling gamers, has now become a fundamental part of both our day-to-day lives, government policy, and our internal musings on the future. AI is no longer limited to scientists and dystopian movies; it is now being used everywhere, from workplaces to college classrooms. Indeed, the increased dependence on tools such as ChatGPT and Claude in the classroom have created a paradigm shift in education, one that has occurred perhaps faster than any change before it. Over 50% of college students are using AI on their assignments, leading some to question if AI is simply a tool, or if it is becoming a replacement to independent thinking, acting as a simulacrum to cognition itself.
With initiatives such as the Department of Government Efficiency highlighting the inefficiencies resulting from governance by a bloated bureaucratic class of humans, one has to wonder if it makes sense to use generative AI tools to automate a small portion of governance. After all, while we may not trust our local chatbot to run the country (yet), most of us can agree that we would much rather see a relatively friendly AI model interacting with us when we go to the DMV or are trying to sort out our tax returns. Generative AI has been posed as a tool to allow for the routine completion of menial tasks so that society can become more efficient, so that the denizens of said society can be left to focus on creative tasks, a tool, according to OpenAI’s Sam Altman, that will continue to surprise us with its capabilities. There is no better example of an archaic remnant of the past than our own bureaucratic governance structure, which, despite being extremely bloated, has never seen (until now) any meaningful attempt at reform.
How GenAI is being used today
Although the question of a machine’s cognitive ability may seem relatively modern, after all, ChatGPT only became public toward the end of 2022, it was actually first posed by Alan Turing, the mathematician who was part of the famous Bletchley Park team of cryptographers that ended up cracking Enigma during World War II, and most famously, is the namesake behind the Turing test, which measures a theoretical machine’s ability to deceive one into thinking it is human through a multi-turn conversation on common topics, almost 75 years ago. Turing posited that once we no longer can tell the difference between flesh and metal, between blood and electricity, the fundamental question of its sentience has been answered, with a resounding Yes.
Through this measurement, the flagship foundational models have already achieved cognition. Indeed, if you were to be presented with a chatbot instructed to converse in contemporary vernacular, it is highly likely you will not be able to tell the difference between its responses and a random human. By all means, the current models have passed the Turing Test: any academically-inclined individual living just 2 decades ago would have anointed these machines as being “alive” and cried out at the possibility of a Terminator-esque Skynet scenario descending upon us.
Yet, it does not feel that way. While college students and software engineers may trust AI to give answers to homework assignments and build rudimentary applications, leaders, whether they be heading small businesses, enterprises, or countries, don’t. Despite the superior cognitive abilities of the foundation models leading the generative AI revolution (consider that the latest batch of reasoning models seem to be able to answer even graduate level questions that are deemed sufficiently hard for subject-matter experts), the actual adoption of generative AI is lagging. The majority of contemporary use-cases, as highlighted by foundational model creator Anthropic, are centered around coding and technical writing, and mainly serves to augment human effort rather than to automate it. While these use-cases certainly have the potential to reduce human labor on certain tasks, they have not resulted in significant societal change.
Civilization-altering means something that fundamentally transforms the human experience, to the point in which human history can be marked as eras pre and post the technology or innovation in question. Historical examples include the printing press, radio, and cable television. More contemporary examples include the internet, the IPhone, and social networks. Generative AI, so far, has been used as an additional tool/replacement rather than changing the status quo. Instead of utilizing code from StackOverFlow or open-source Github repos, programmers are using ChatGPT or Claude Opus to write Python functions. Instead of using online homework tools or the internet, students are using chatbot tools for assignments. While these have resulted in some efficiency improvements, they have not resulted in that one civilization altering moment, that one inkling of fundamental change that will result in the historians of the future terming the years post 2022 as Post-AI. And despite what you might think, it is not the technology that is lagging: it is rather our ability to adopt it and put it to use in something legitimate, something that requires, or rather, invites change.
The decay of our administrative institutions, and a proposal to fix them
It is no secret that our political and governance bodies are decaying institutions. Take any field, be it finance, education, or science, and separate it significantly from reliance on the various politically inclined branches of society. Witness, then, how innovation begins to permeate, and how the same field moves from stagnation to advancement. The majority of technological, industrial, and even cultural progress comes not from government-funded institutions, but rather from independent corporations and industries. Contemporary America has been built on this notion, the notion of free-market economics and the avoidance of unnecessary regulations that may impede legitimate progress. Yet, we have neglected to innovate on the one aspect of our lives that is simultaneously extremely important and outdated: the way in which we are governed. Despite exponential growth in the resources (both financial and physical) at its disposal, the actual output of the public sector in the United States has been minimal. A bloated budget has seen governmental agencies employ ever more workers and resources, without any meaningful progress in how they serve the very party (United States citizens) that pays for their sustenance.
In this article, I outline a simple, yet radical proposition: replace the majority of low-ranking federal agencies and bureaucracies with automated counterparts, powered entirely by foundation models built in an open-source manner. This transformation, like most policies impacting the government, will start as pilots at the municipal or state level. A simple example could be the local DMV office: instead of needing to deal with numerous agents, call centers, and outdated recording systems, visitors will be greeted by a friendly language model, fine-tuned on that municipality’s local records and regulations. The language model will be able to do everything from updating records and processing title transfers to issuing new documents. In order to achieve its goal, and to prevent it from going completely off the rails, it will have access to a limited set of tools, mostly concentrated around content validation, database management, and other tasks you might expect an administrator within a DMV office to perform. In generative AI, these capabilities are often referred to as tool-use: they involve prompting the language model with a description of tools and functions that it can call when presented with a question, and then asking it to solve a problem or complete some task by using those tools. Of course, until we see a corresponding advancement in the field of robotics and computer vision, actual driving examinations will still need to be carried out by humans. Other departments that do not require manual human-to-human interaction (the now-defunct USAID organization being a prime example) could likely be entirely automated.
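To make the tool-use pattern concrete, the following is a minimal sketch of how our hypothetical DMV assistant might be wired up. The tool names, record schema, and dispatcher are illustrative assumptions invented for this essay, not a real agency API; in a production system, the tool descriptions would be sent to the language model, which would emit the JSON tool call that the dispatcher executes.

```python
# A minimal sketch of tool-use for a hypothetical DMV assistant.
# All tool names and records here are hypothetical, invented for
# illustration; a real deployment would route TOOLS to a language
# model and execute whatever call the model returns.

import json

# Tool schema the model would be prompted with.
TOOLS = {
    "update_address": {
        "description": "Update the mailing address on a resident's record.",
        "parameters": ["record_id", "new_address"],
    },
    "process_title_transfer": {
        "description": "Transfer a vehicle title between two residents.",
        "parameters": ["vin", "seller_id", "buyer_id"],
    },
}

# Stand-in database for the municipality's records.
RECORDS = {"R-1001": {"name": "J. Doe", "address": "12 Elm St"}}

def update_address(record_id: str, new_address: str) -> dict:
    """Validate and apply an address change, as a human clerk would."""
    record = RECORDS.get(record_id)
    if record is None:
        return {"ok": False, "error": "unknown record"}
    record["address"] = new_address
    return {"ok": True, "record": record}

def dispatch(tool_call: str) -> dict:
    """Execute a JSON tool call of the form the model is asked to emit."""
    call = json.loads(tool_call)
    handlers = {"update_address": update_address}
    return handlers[call["name"]](**call["arguments"])

# In practice the JSON below would come from the model, not be hard-coded.
result = dispatch(
    '{"name": "update_address",'
    ' "arguments": {"record_id": "R-1001", "new_address": "34 Oak Ave"}}'
)
print(result)
```

The limited handler table is the point: the model can only act through the functions we expose, which is what keeps it from going "off the rails" in the sense described above.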
The most likely gripe with this proposition is ethical: while the majority of constituents may agree that a generative AI model, once equipped with the proper external scaffolding and tools, is more efficient than its human counterpart, it remains to be seen whether it can be more ethical, especially when interacting with a physical, rather than simulated, economy. After all, can we really trust language models that have trouble counting the number of r’s in the word “strawberry”, and that are frequently jailbroken into producing content outside their safety bounds through simple prompt engineering by seemingly ordinary individuals with no special resources, with the trillions of dollars managed by government agencies? The answer is manifold.
The problem with our institutions today
First, consider the current state of the government and the administrative state. DOGE, less than a month into President Trump’s second term, has found billions of dollars in taxpayer money being funneled toward what can best be summarized as wasteful initiatives. From transgender surgeries in Guatemala to a play based on DEI, USAID was funding organizations that seemed utterly at odds with improving the fundamental living conditions of the citizens of the allies it purported to help. Other, more familiar institutions suffer from similar problems: the Environmental Protection Agency recently uncovered over 20 billion dollars of waste, while FEMA was found to have spent approximately 7 billion dollars on housing illegal migrants.
While the predominant view has been to simply point toward malpractice or a lack of morality on the part of department heads (a view that might be correct), these findings actually uncover a broader, more significant illness permeating our society: a lack of proper oversight and guardrails. Humans make mistakes: like any organism, we have lapses in judgment, likely as a result of our underlying biology. Growing the administrative state has produced the exact opposite of what might originally have been a well-intentioned effort to introduce additional oversight into government actions. The oversaturation in the number of federal employees has resulted in inefficiency, and subsequent efforts to correct these inefficiencies have resulted in a few bureaucrats having unprecedented control over government spending, regulations, and federal mandates. This phenomenon within our government has drawn apt comparisons to the late Roman Empire, which fell due to a bloated administrative state unable to adequately serve its own denizens.
A machine’s hypothetical propensity to become corrupt
The question is not whether an administration of foundation models will be more efficient, or less costly, than the one currently managed by humans. Indeed, it is hard to see how it could get much worse: certainly, the AI models of today will make more logical and sound decisions when presented with a set budget, and will be more efficient (and likely more friendly) when handling administrative tasks. Of course, initial mistakes made by these agents will be magnified, just as mistakes made by a self-driving car often elicit an overreaction, even if the frequency of those mistakes is orders of magnitude lower than that of a human driver. But as time goes on, and our government, and the lives of its denizens, see a statistically meaningful improvement in quality through the proper allocation of capital, concerns centered around pure performance will subside.
Rather, the fundamental concern here is rooted in the potential doomsday scenario, one in which machines, composed of silicon and electricity, have taken over our government, our country, and our lives, and have used the very powers we bestowed upon them to render us useless. The solution to this hypothetical doomsday scenario is rooted in the point made earlier: making the development (and capabilities) of these models open source. Our role (or lack thereof) in the development of open-source AI was brought under the spotlight with the release of R1 by DeepSeek, which temporarily claimed the mantle of frontier intelligence while keeping its underlying architecture open source. R1 not only cast doubt on the somewhat artificially fabricated reality in which US-based corporations control the AI market and the corresponding consumer mind share, but also showed that models developed in an open-source manner tend to elicit higher degrees of trust (political and socioeconomic concerns aside) from developers and users alike.
An argument for or against the merits of open-source AI versus its corporation-owned counterpart would likely require a book that is the techno-centric, non-fiction equivalent of War and Peace in both length and internal drama among its main characters. The argument for why a model that will play a significant role in the administration of our state, and that will have the theoretical ability to deploy capital on our behalf, must be developed open source is significantly more straightforward: putting its development in the hands of a traditional for-profit corporation that keeps it closed source will, at best, make it impossible for us to understand why it makes mistakes, and at worst, subject it to the same biases as the current bloated bureaucracy, as a result of its being “raised” by a small group of disconnected individuals rather than the broader collective.
An open-source governance agent
Contemporary vernacular often conflates open weights with open-source development. Indeed, you might see pleas made to the developers of various generative AI technologies to “release the weights”. In neural networks, and specifically the transformer architectures underlying the majority of widely used LLMs today, weights are the learned parameters that determine how the network maps inputs to outputs; they are adjusted during training. Weights directly influence the emphasis the model gives to certain words or phrases, typically referred to as tokens in the literature; a slight change in weights can lead a model to interpret the same sentence in an entirely different manner. For example, the sentence “The bank is crowded on Saturday” can be read as describing either a financial institution or a riverbank, and a slight perturbation of the weights can tip the model from one reading to the other. Open-sourcing weights not only allows scientists to reproduce the results claimed in model releases, but also enables developers and other organizations to fine-tune the model for specific use-cases.
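The sensitivity of interpretation to weights can be illustrated with a deliberately simplified toy (a single linear layer, not a real transformer). The feature values and weight vectors below are made-up numbers chosen only to demonstrate the effect: because the “bank” sentence carries near-equal evidence for both senses, a small perturbation of the weights flips the model’s reading.

```python
# A toy illustration (not a real transformer) of how a small change in
# weights can flip a model's reading of an ambiguous sentence. The
# feature values and weight vectors are invented for illustration,
# not learned parameters.

def interpret(weights: list[float], features: list[float]) -> str:
    """Score the 'financial institution' sense against the 'riverbank'
    sense with one linear layer; the sign of the score picks a reading."""
    score = sum(w * f for w, f in zip(weights, features))
    return "financial institution" if score > 0 else "riverbank"

# Hypothetical feature vector for "The bank is crowded on Saturday":
# near-equal evidence for both senses, so the sentence is ambiguous.
features = [0.51, -0.50]

original_weights = [1.00, 1.00]   # score = +0.01 -> financial institution
perturbed_weights = [0.96, 1.00]  # score = -0.01 -> riverbank

print(interpret(original_weights, features))
print(interpret(perturbed_weights, features))
```

A 4% change in one weight flips the output, which is why reproducing a model’s behavior, and auditing it, requires access to the exact weights rather than just a description of the architecture.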
However, the development of our theoretical administrative AI model must go beyond just releasing the weights and the methodology used for its development. Instead, it must be developed entirely in the open. Such an initiative will likely be led by some sort of company or organization, perhaps operating under a government grant or through independent funding. The creators of the model must not only be held to the same standards of transparency and openness that we expect from our elected officials; they must, by design, be forced to adhere to them. From the data used during training to the final deployment architecture, the entirety of the model must be created and deployed in the open. It must also be subject to audits, not from traditional consulting firms, but from the broader public, who can review its implementation to ensure that it continues to act in the best interests of the constituents it is meant to serve.
The corporatization of AI is a relatively new phenomenon; in fact, it was a group of rebels and misfits, individuals on the outer fringes of serious academia, who revitalized the field of neural networks in the late 20th century. It was not Google, or Amazon, or a billion-dollar lab, but rather an independent set of scientists going against the grain, doing much of their work openly, without restriction. They were ahead of the curve; indeed, it was not until 2012, when Geoffrey Hinton, Ilya Sutskever (later the chief scientist at OpenAI), and Alex Krizhevsky published their seminal work on using a deep neural network to classify a dataset of images, that the corporate giants we are all too familiar with took notice. AI has its roots in openness and transparency: we need not be wholly dependent on a singular company, although its resources and validation can certainly be valuable when working with what we can assume to be a slightly distrustful set of government employees. In fact, developing our AI in a safety-first, transparent manner makes it more likely to be adopted in formal legislation: no longer will the threat of corporate bias, or the idea of an autonomous “god” in the hands of a small set of board members, loom over the adoption of AI in governance.
Why is such a radical change needed?
This proposal is not meant to be a technological essay in which the superiority and potential of American technology are sung in high praise. Instead, it is meant to serve as a radical repudiation of a system whose inefficiencies are exposing a broader rot within contemporary Western society. The stagnation of our government, of the very institutions we have chosen to lead us, is an indictment of, and a symptom of, our culture. Technology, science, and literature have become politicized; indeed, a simple survey of the reactions to any new scientific advancement, cultural artifact, or business endeavor will turn up vastly different responses depending on the respondent’s political or groupthink affiliation. This politicization has resulted in the stagnation of Western civilizational advancement. As Peter Thiel of Facebook, Palantir, and Founders Fund fame has noted, over the past 20 to 30 years we have seen substantial progress only in software and computers, with all other fields slowing down. This argument extends beyond technology: the great American authors are all men of the past, long gone. The great artists have been gone for even longer. Fashion trends and popular culture have not seen any meaningful change; in fact, if you somehow transported an American from 2005 to the present day, not much about them (beyond their inability to use a smartphone) would differ from the American of 2025.
This is not simply a techno-libertarian or new-right worldview; just last year, The New Republic penned an article noting how cultural artifacts, whether television shows, films, or social media platforms, have promoted an intellectually untaxing and stale aesthetic. While that piece is structured as a criticism of big tech and the role of its algorithms in the suppression of culture, it still recognizes the malady. Our culture, our society, save for its invasion by software, has been rendered immobile, and in large part it is our own doing: we have become too comfortable doing things the way they have always been done.
The Renaissance, which revitalized a Europe that had long suffered a period of little to no economic or cultural growth, was not just the result of the da Vincis and Newtons. These visionaries, while exceptional, flourished and innovated in large part because of the societal and cultural shifts that allowed them to do so. The bureaucracies and pro-regulatory administrative states that had characterized most of post-Rome Europe were replaced with autonomous city-states that employed, for the time, advanced bookkeeping methods. Private wealth, from families such as the Medici, was spent on fostering innovation and art rather than state-anointed initiatives. The Renaissance was a fundamental shift in human history, indeed one with many civilization-altering events within it. But it was synthesized from a shift in how the people perceived and interacted with their government, with the very administrative bodies they trusted to govern them effectively.
Revitalizing American and Western Excellence
The central, utopian future promised by AI is one in which work is automated, one in which we are free to pursue creative endeavors, one in which we leverage omnipotent intelligence to accelerate. Generative AI can very well be the engine that powers our economy, leads us to Mars, and ushers in a new age of innovation. Our governments are certainly recognizing that this reality is much closer than previously anticipated. The Trump administration has appointed an internal AI czar and has committed an immense amount of funding toward Stargate, a project and firm meant to accelerate AI progress. The European Union recently held a summit specifically centered around artificial intelligence and tech policy. Recent election results, despite public opinion, have not been the result of a reactionary shift toward the 2016 era of traditional conservative politics. Rather, they are an effort to revitalize the economy and reignite the spirit of innovation that characterized Western society in the mid 19th century.
Governance is the first step toward the broader-scale adoption of AI, a step that will at once be universally understood (a requirement for fundamental change) and accelerate its impact in other fields. Imagine an administrative state run not by a bureaucracy, but by a self-assembling intelligence that properly allocates capital, incentivizes innovation, and updates a somewhat archaic and manual system. Imagine a future in which records of our personal finances and information are no longer maintained in COBOL, a future in which patents and ideas for revolutionary medicines are approved instantaneously rather than requiring double-checking by numerous politically motivated individuals.
More often than not, it is regulation and a lack of opportunity, not a lack of human ingenuity, that curtails innovation. Just as the individuals of the 1400s were no less intellectually capable than their counterparts in the Roman Empire, we are no less intellectually capable than our peers of the past. Western, or specifically American, exceptionalism has historically been a byproduct of a culture and society that aligns socioeconomic incentives with progress.
If generative AI, which up until now has been little more than an assistant, is to become a true civilization-altering technology, then it must have its own civilization-altering moment. Our governance structures, and the way in which our government is run, are perhaps the best candidates for improvement. Small pilots, starting at the state level with open-source AI technologies, will culminate in a society that not only trusts AI, but has the capacity to allow it to reach its potential. In short, changing the way in which we are governed is how we usher in the future, a new era of American and Western excellence.