LLMs may enable direct democracy at scale
American democracy currently operates far below its theoretical ideal. An ideal democracy precisely captures the nuanced collective desires of its constituents, synthesizing diverse individual preferences into coherent, actionable policy. Today's system offers no direct path for citizens to express individual priorities. Instead, voters select candidates whose platforms only approximately match their views, guess at which governmental level—local, state, or federal—addresses their concerns, and ultimately rely on representatives who often imperfectly reflect voter intentions. As a result, issues affecting geographically dispersed groups—such as civil rights related to race, gender, or sexuality—are frequently overshadowed by localized interests. This distortion produces presidential candidates more closely aligned with each other's socioeconomic profiles than with the median voter.

Traditionally, aggregating individual preferences required simplifying complex desires into binary candidate selections, a concession to cognitive and communicative limitations. Large Language Models (LLMs), however, introduce a radical alternative: they can process detailed, nuanced expressions of individual views at unprecedented scale. Instead of forcing preferences into narrow candidate choices, citizens could freely articulate their concerns and proposed solutions in natural language, and an LLM could integrate these numerous, detailed responses into a clear and unified "Collective Views" document.

The change in throughput is dramatic. Synthesizing a hundred individual perspectives might previously have required five person-hours; specialized LLMs can now accomplish the task in minutes. Parallel implementations could aggregate millions of voices within an hour, transforming a previously unimaginable task into routine practice. Such rapidly generated collective statements create a powerful mechanism for accountability, making government responsiveness directly measurable against clearly articulated public preferences.
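The parallel aggregation described above can be sketched as a hierarchical map-reduce: batches of responses are summarized independently, and the resulting summaries are merged in further rounds until one document remains. This is a minimal illustration, not a full system; the `summarize` callable stands in for an actual LLM call, and the function and parameter names are hypothetical.

```python
from typing import Callable, List

def aggregate_views(
    responses: List[str],
    summarize: Callable[[List[str]], str],
    batch_size: int = 100,
) -> str:
    """Hierarchically merge free-text responses into one summary.

    `summarize` condenses a batch of texts into a single text. In a
    real deployment it would be an LLM call; here it is pluggable so
    the merging structure itself can be seen and tested.
    """
    layer = responses
    while len(layer) > 1:
        # Map step: summarize each batch independently. These calls
        # are embarrassingly parallel, which is what makes aggregating
        # millions of responses feasible in wall-clock terms.
        layer = [
            summarize(layer[i : i + batch_size])
            for i in range(0, len(layer), batch_size)
        ]
    return layer[0] if layer else ""
```

Because each round shrinks the input by a factor of `batch_size`, the number of sequential rounds grows only logarithmically: even millions of responses need just a handful of merge levels, with each level's batches running in parallel.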
randomness = illegibility, stuff u can't model
focus on what feels random and you'll expand what you can model
but don't attend to all randomness. pick good randomness, randomness which tickles you, whatever you feel that is.
and chase it. though "chase it" isn't right. you can chase a car or rabbit, something discrete and coherent. randomness is more like tv static, fuzzy and weird. "sit with it" would be closer. swim thru, stew in, rest w/in it. and then conjure more colors than you knew.