My blog auto-links to LessWrong, and yes, some posts are paywalled. I'd be open to putting the free version of paid posts here if someone on the website sets that up.
Ah, yeah, makes sense. I expect it may be a bit tricky to set up, since Substack probably doesn't provide a natural RSS feed for paid content.
Yeah, looks like it's hard to do this automatically, but I just copy-pasted the full essay here as a one-off. (I also reset the date of the post because I think it got buried for sort of spurious reasons.)
(@niplav, @Three-Monkey Mind, you might consider removing your downvotes, since a) the problem is fixed, and b) I think this was pretty unintentional on Sarah's part; I was the one who set up the RSS crosspost and didn't realize there would be edge cases like this.)
Happy to help! Not sure I understand though.
As in, you would be happy to put the post content itself up even for paid posts? I think we can arrange that!
Strong downvote for linking the free part of a paywalled article here.
(Edit: AFAICT the whole thing was posted, so downvote rescinded.)
Yes, it's somewhat ironic that the implication here is that the Web 2.0 idealism of "information wants to be free" is dead, considering that discussions about its replacement, one for which it's still "possible to be better", are happening behind paywalls. And sure, no viable alternative to paywalls seems to have emerged, and they may indeed be inevitable, but presumably anything worthy of being called the new "foundation" has to at least be free and accessible to all?
My post on Neutrality was kind of abstract, and I think it’s time to do a follow-up that’s more concrete.
Specifically, the place where I think there’s something to be done is in information technology, and the ways it shapes the perceived “world”.
One way of looking at this: we had the institutional media era in the 20th century (newspapers and magazines, book publishers, broadcast television and radio), then the “Web 2.0” era of blogs and social media as more and more of the world joined the Internet, and now we may be at the beginning of a new information paradigm that’s mediated by LLMs.
Some people are interested in making that new information paradigm “good” in some sense.
As I said earlier:
[most media & internet tech today] just isn’t prepared to do the job of being, y’know, good.
in the sense that a library staffed by idealistic librarians is trying to be good, trying to not take sides and serve all people without preaching or bossing or censoring, but is also sort of trying to be a role model for children and a good starting point for anyone’s education and a window into the “higher” things humanity is capable of.
Ideally we would like information/media that, on balance:
and yet, we also want to avoid overcorrecting towards censorship, bias, priggishness, or overly narrow conceptions of The Good.
Part of what I was talking about in the earlier post is what it even means to strike that balance of “neutrality” and what a realistic, possible version of it would be. It’s impossible to be “neutral” towards all things, but you can be relatively neutral in a way that effectively serves people coming from many perspectives.
This is a good moment to think about what we want the “new world” to look like.
LLMs are new enough that there’s still an early tinkering dynamic going on, with promising tools being built to suit thoughtful power users; but there’s also a big push towards sloppy, lowest-common-denominator stuff that can have worrisome effects (widespread cheating in school, sycophancy distorting people’s worldviews, declining literacy and attention spans).
The “new world” hasn’t been fully built yet, which means it’s not set in stone. It’s possible for it to be better than the Web 2.0 world.
But that’ll take some work. Going from platitudes to products is the hard part; articulating vague aspirational senses into more specific priorities, and then concretizing them further into policies, protocols, and code.
I imagine this “wants to be” a workshop/conference or even a series of them, in which people who are already roughly in this space come together to share & debate ideas and nucleate new projects.
Some examples and archetypes of the people and technologies that seem to belong in this overall “technology for thinking” space:
“AI for Epistemics”
Organizations like the Forethought Institute, the Future of Life Foundation, the AI Objectives Institute, Elicit, Mosaic Labs, Sage, and others are using the term “AI for epistemics” to refer to the project of developing AI tools to improve the quality of human thinking.
For instance: the AI for Epistemics Hackathon, sponsored by Manifund and Elicit; the AI for Human Reasoning fellowship at the Future of Life Foundation; FutureSearch, an AI forecasting tool; and Talk to the City, a collective discussion tool.
There are also projects that are clearly in this vein but don’t necessarily use the AI-for-epistemics tag, like ReviewerZero.ai, a tool for automatically detecting red flags for fraudulent research, and FutureHouse’s automated scientific literature review tools, which proved to be the best “deep research” tools out there the last time I did an informal spot check.
Projects like the Society Library (AI-automated debate mapping) are also “AI-for-epistemics” without the name.
AI Researchers at Top Labs
The big AI labs may not have perfectly aligned incentives with the general public, but if you’re looking for people who work directly on making LLMs give accurate, honest, unbiased, trustworthy answers, you’ll probably find a lot of them at the labs.
Amanda Askell at Anthropic is a prominent example of this archetype; to the extent that Claude seems to have pretty “good values” that are robust to tampering, my impression is that her work on giving the LLM an ethical philosophy deserves a lot of credit.
To an impressive degree, LLMs today usually do give the right answers on factual questions, and reasonable answers on ambiguous questions. Somebody made that happen, and they're continuing to work on making it better. AI researchers in industry (both inside and outside “alignment” teams) definitely belong in the conversation.
“AI Whisperers” and Independent Researchers/Developers
“AI whisperers” like @repligate are power users who can elicit creative LLM outputs through prompting, and who also build interfaces like Loom that give users a clearer sense of how LLMs actually work (e.g. branching paths of possible answers, not deterministic results).
These kinds of tools and practices are often helpful for escaping the “default use” attractors of LLMs, which can be boring and uncreative, and truly exploring the general-purpose potential of present-day AI.
While “AI whisperers” often think of LLMs as sentient beings rather than mere tools, I think there’s potential for common ground with tool-makers who would like to scaffold better, more beautiful, and more interesting LLM use cases.
Other independent researchers like Chris Pang likewise work on LLM-based tools that give agency to users, such as text analysis for generating knowledge graphs.
One way of looking at this space is LLM tools & techniques for power users; instead of hiding the complexity under a bunch of default settings, how can you give the user fine-grained control and understanding of what kind of responses the LLM gives under different settings, or even what the LLM is “picking up” from the user’s prompts?
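As a toy illustration of that kind of control, here is a minimal sketch of sampling several branches from the same prompt, the basic move behind Loom-style interfaces. It assumes the OpenAI Python client and a placeholder model name; any chat API that exposes `n` and `temperature` parameters would work the same way.

```python
# Minimal sketch: sample several continuations of the same prompt to surface the
# "branching paths" an LLM can take. Assumes the OpenAI Python client; the model
# name and prompt below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Write the opening sentence of an essay on neutrality."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    n=5,                  # ask for five independent branches
    temperature=1.0,      # nonzero temperature => non-deterministic sampling
)

for i, choice in enumerate(response.choices):
    print(f"--- branch {i} ---")
    print(choice.message.content)
```

Showing the user several branches side by side, rather than a single answer, makes the stochastic nature of the model visible instead of hiding it behind a default setting.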
AI Startups With an Ethics/Taste Focus
Sometimes, LLM-based startups are intentionally independent of the big labs (sometimes even bootstrapped and independent of investors) so that they can deliver something more “tasteful”, more in line with a vision of quality over slop.
Auren is an example of this philosophy for the chatbot-based therapist/guide/coach use case, and Midjourney is another in the world of image generation.
Cryptography for Attestation and Governance
Zero-knowledge proofs make it possible to prove that a statement about a dataset is true without revealing the dataset itself; the verifier never needs direct access to the underlying data.
They could be used to improve the trustworthiness of information by verifying provenance (the information really comes from the claimed source) or algorithmic integrity (the information was really generated by such-and-such unmodified piece of code).
This can be applied to LLMs, e.g. by providing a proof that a given output really came from a particular model.
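As a toy illustration of the provenance idea, a provider could sign each model output and anyone could check the signature against the provider's published public key. To be clear, this is ordinary digital signing rather than a zero-knowledge proof; it gives you "this output really came from this provider" without the privacy guarantees. The sketch assumes the `cryptography` package and deliberately simplified key handling.

```python
# Toy provenance sketch: a provider signs model outputs with Ed25519, and a
# third party verifies the signature against the provider's public key.
# This is ordinary signing, NOT a zero-knowledge proof; it only illustrates
# the "verify where an output came from" idea. Requires the `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Provider side: generate a keypair (in practice the public key would be published).
provider_key = Ed25519PrivateKey.generate()
public_key = provider_key.public_key()

output = b"Model answer: the Battle of Hastings was in 1066."
signature = provider_key.sign(output)

# Verifier side: check that the output really carries the provider's signature.
try:
    public_key.verify(signature, output)
    print("Output is attested by the provider's key.")
except InvalidSignature:
    print("Attestation failed: output or signature was tampered with.")
```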
More generally, cryptography can be used for governance, reputation tracking, and collaborative decision-making in ways that improve epistemics.
For instance (as one crypto founder described to me), you could imagine a “black box” full of anonymous, hidden comments about the quality of a scientific research result, which a third party could query to find a result like “40/50 scientists verified to work in this field think it’s not real” — a claim none of them may want to make publicly or even pseudonymously.
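Here is a minimal sketch of that "black box" pattern, with the important caveat that the hiding of individual votes below is just ordinary encapsulation in code; a real system would need cryptography (e.g. threshold encryption or secure aggregation) to make the hiding enforceable rather than a matter of trusting whoever runs the box. The class and reviewer IDs are hypothetical.

```python
# Toy sketch of the "black box" idea: reviewers submit private judgments, and
# queries only ever reveal an aggregate, never who said what. Here the hiding is
# ordinary encapsulation; a real system would enforce it cryptographically.
from dataclasses import dataclass, field


@dataclass
class ReviewBlackBox:
    _votes: dict = field(default_factory=dict)  # reviewer id -> "result looks real?"

    def submit(self, reviewer_id: str, looks_real: bool) -> None:
        """Record a reviewer's private judgment (assume reviewer_id was verified off-band)."""
        self._votes[reviewer_id] = looks_real

    def query(self) -> str:
        """Reveal only the aggregate, never individual votes."""
        total = len(self._votes)
        positive = sum(self._votes.values())
        return f"{positive}/{total} verified reviewers think the result is real."


box = ReviewBlackBox()
box.submit("reviewer-001", False)
box.submit("reviewer-002", False)
box.submit("reviewer-003", True)
print(box.query())  # -> "1/3 verified reviewers think the result is real."
```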
Tools for Thought
People like Andy Matuschak, Michael Nielsen, Bret Victor, Bill Seitz, and others have been working on technological tools that enhance thinking rather than replacing it.
A lot of the tools that have come out of this culture are note-taking apps (my own favorite is Roam), digital gardens, and spaced repetition tools (like Anki). They're more personal, DIY, and learning-focused than any kind of media (social or traditional), and arguably healthier, though still practiced by a fairly small community.
Relatedly, the world of human-computer interaction research probably has a lot to teach us about how different tech affordances shape our behavior and thought, and which technological patterns are “healthier” or promote better default habits.
Forecasting and Prediction Markets
Prediction markets (like Manifold, Polymarket, and Kalshi) and forecasting organizations (like Metaculus, Sentinel, and the Rand Forecasting Initiative) care a lot about incentivizing “getting the right answer”, and the tools and techniques that help people predict the future well.
Interestingly, the largest experiment in futarchy today seems to be MetaDAO, a crypto investment fund that decides how to allocate resources according to a prediction market.
A lot of proposals for more “truth-promoting” media platforms involve some kind of forecasting or prediction-market element, though to date I don’t know of any that have been implemented.
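One concrete piece of machinery behind "incentivizing the right answer" is the proper scoring rule: forecasters are scored so that, in expectation, honestly reporting their true probability is the best strategy. Here is a minimal sketch of the Brier score, one standard choice; nothing about it is specific to any of the platforms named above.

```python
# Minimal sketch of a proper scoring rule (the Brier score). Lower is better,
# and a forecaster minimizes their expected score by reporting their honest
# probability, which is the incentive property forecasting platforms rely on.
def brier_score(forecast: float, outcome: bool) -> float:
    """Squared error between the forecast probability and what actually happened."""
    return (forecast - float(outcome)) ** 2

# Example: a 90% forecast that turns out right is scored much better (lower)
# than a 90% forecast that turns out wrong.
print(brier_score(0.9, True))   # ~0.01
print(brier_score(0.9, False))  # ~0.81

# Averaging over many resolved questions gives a track record that rewards
# calibration rather than confident bluster.
forecasts = [(0.7, True), (0.2, False), (0.6, False)]
print(sum(brier_score(p, o) for p, o in forecasts) / len(forecasts))
```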
(Some) AI Ethics
There’s a lot of critique of AI/LLMs that comes down to “here’s how they’re bad,” some of which is valid, but which doesn’t really engage with the question of how to create a good world in which AI plays a larger and larger role.
I find myself more sympathetic to projects like the Cosmos Institute, which argues for building technologies in ways that promote values like “freedom,” “truth”, and “human flourishing”.
The people who build technologies are not slaves to their incentives; they actually have a lot of agency about what policies and defaults they build into tools.
If we want to make ethically and epistemically “good” information technologies, we need both nontechnical (e.g. philosophy, policy, writing) and technical (evals and experiments) work on defining what makes technologies “good” and to what extent current technologies are succeeding or failing. And to be informative, those independent assessments need to avoid reflexively pro-tech or anti-tech stances.
(Some) Social Media Moderation
Twitter/X’s Community Notes feature is a remarkable example of a moderation tool that has earned trust across the political spectrum, and they seem to have done it with very simple algorithms (showing notes that are agreed upon by users who usually disagree). This is old-school Web 2.0 stuff done right, by a team that actually cares about avoiding censorship and earning trust rather than demanding it by fiat.
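As a toy version of that criterion (the real Community Notes algorithm learns rater viewpoints via matrix factorization over rating data; this sketch just hard-codes "agreed upon by users who usually disagree" with made-up ratings):

```python
# Toy sketch of the "bridging" idea behind Community Notes: only surface a note
# if it is rated helpful by raters who usually disagree with each other.
# The real algorithm learns rater viewpoints via matrix factorization; here we
# just assume we already know each rater's historical agreement with the others.
from itertools import combinations

# rater -> set of notes they rated helpful (made-up data)
helpful_ratings = {
    "alice": {"note_a", "note_b", "note_c"},
    "carol": {"note_a", "note_b", "note_c"},  # alice and carol usually agree
    "bob":   {"note_c", "note_d"},            # bob usually disagrees with both
}

def agreement(a: str, b: str) -> float:
    """Fraction of overlap in two raters' past helpfulness ratings (Jaccard)."""
    ra, rb = helpful_ratings[a], helpful_ratings[b]
    return len(ra & rb) / len(ra | rb) if ra | rb else 0.0

def should_show(note: str, disagreement_threshold: float = 0.5) -> bool:
    """Show a note only if some pair of raters who usually DISAGREE both found it helpful."""
    supporters = [r for r, notes in helpful_ratings.items() if note in notes]
    return any(agreement(a, b) < disagreement_threshold
               for a, b in combinations(supporters, 2))

for note in ["note_a", "note_c", "note_d"]:
    print(note, should_show(note))
# note_a False  (only liked by like-minded raters)
# note_c True   (liked across the usual divide)
# note_d False  (only one supporter, so no cross-divide agreement)
```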
Other teams and technologists who have thought deeply about encouraging “healthy” discourse environments and avoiding the more toxic features of social media, without heavy-handed policing of free expression, would be valuable contributors to this nascent “community”.
(Some) Academic Political Science
A friend with a political science background explained to me that the vast majority of “political science” research today is not abstract discourse on Locke and Machiavelli, but data-driven empirical research, often very similar to economics.
If you want to know things like “how social media affects people’s political views”, you may want to ask a political scientist. They have data on that.
Obviously the worlds of social science academia and the tech industry are very different, but there may be room for some fruitful cross-pollination there.
(Some) Librarians, Archivists, Investigative Journalists
While their work isn’t necessarily tech-related, there’s a lot to be learned from people who focus on providing and preserving information as a “public good.”
In fact, when I started talking to people about what it would take to make a better information environment, one of the responses I got was “I used to work on that, at a nonprofit that funded local journalism.” It makes a certain amount of sense; we can’t be well-informed citizens if nobody covers what’s happening at City Hall.
Likewise, if we want LLMs to avoid “presentism” and actually include information from older sources, we need old books and papers and letters to be digitized and used in training data.
Etc.
This is obviously not an exhaustive list. If you have additional thoughts on who’d be a good contributor to the broader conversation on “how computers can help us think real good”, especially if it’s you, I’d love to hear!