Overview: Recent developments in AI will change the world in all sorts of ways. They are likely to revolutionize academic research. This is one proposal for how AI might be used to improve the way we communicate arguments (such as in philosophy). I stand behind the thought that there are problems with present publication formats that technology could solve, but this particular solution is highly speculative and not super well thought through. Main ideas are bolded.

Problems with the linear paradigm

Current serious argumentative work (such as in philosophy) primarily takes the form of written papers and books, and less commonly lectures and blog posts. Theorists might explain their views interactively in personal contexts, such as at a departmental tea time, but what they circulate more widely, the form in which most people encounter their work, is linear.

Long-form linear argumentative writing is essentially an ancient Greek technology. It hasn't been much improved since. Authors present ideas in sequence, attempting to walk the reader through them in an order that is conducive to understanding. Readers mostly read articles and books straight through. They may occasionally skip around, especially if they are trying to extract details about the argument. However, linear writing isn't particularly suited for this.

Linear presentations have a few issues:

  • The amount of content in a written work is limited by space and the reader's patience. Articles generally run to at most a few dozen pages. Books may be much longer, but still aren't long enough to capture all of the author's views. Often, complex assumptions and arguments must be glossed over or left out. Authors sometimes publish the same ideas multiple times to serve the needs of different audiences.

  • In a single work, authors can present their ideas at only a few levels of detail (e.g. the abstract, an introductory summary, and the full argument). Readers cannot easily get access to all and only the level of detail they want. They can skim the parts they aren't interested in, but they can't get more information about arbitrary sections.

  • Authors have to decide how much background material to include and what level of sophistication to pitch it at. Many books and articles are written for a professional audience and are inaccessible to those without much experience, or are written for a general audience and gloss over the intricacies.

  • Authors are responsible for anticipating the potential misunderstandings of the reader and figuring out what to say to avoid them without wasting too much time or space.

These issues are particularly noticeable when reading old philosophical literature. There are significant debates about how to correctly interpret Aristotle or Kant (or even Carnap) that could probably be resolved by a simple conversation with the author.

An AI-assisted conversational alternative

Large language models are good at predicting what human beings will say, including human experts. They can be trained on specific corpora to get good at predicting how specific individuals would talk about specific issues. The results produced in the five years since the invention of the transformer architecture have been quite impressive, and even without additional computing power or further architectural advances, these models are sure to improve substantially over time.

In the future, instead of writing a paper and presenting their ideas linearly on the page, an author might train an AI to predict what they would say about a topic. Call a chatbot trained to present and answer questions about a specific argument in the same manner as its originator a 'dialectical avatar'. People could learn from an avatar the way they would learn from a dedicated tutor who had conducted a detailed interview with the author and read everything else they had written.

The training process for creating a new avatar might involve the author explaining their ideas to the avatar and answering its questions. The avatar might be shown a body of the author's existing work and their influences, or perhaps even given access to other avatars, to flesh out their intellectual inclinations more broadly. The avatar might prompt the author for the details and nuance it would need in order to explain the view accurately. AI might also be used to figure out which questions would best help the avatar understand the nuances of the view. The author could verify that the avatar gives accurate answers to disparate questions before it is published.
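
To make this pipeline a little more concrete, here is a minimal sketch of what such a training loop might look like in code. It is purely illustrative and assumes nothing about any particular model or API: the Avatar class, its methods, and the helper functions are invented stand-ins for the interview, fine-tuning, and verification steps described above.

```python
# Hypothetical sketch of an avatar-training loop: the avatar interviews the
# author, updates itself on each exchange, and is checked before publication.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Avatar:
    # Everything the avatar has absorbed: papers, notes, influences, interview transcript.
    corpus: list[str] = field(default_factory=list)

    def next_question(self, topic: str) -> str:
        # A real system would generate whichever question it most needs answered.
        return f"Can you say more about your argument concerning {topic}?"

    def learn(self, question: str, answer: str) -> None:
        # Placeholder for fine-tuning or retrieval indexing on the new exchange.
        self.corpus.append(f"Q: {question}\nA: {answer}")

    def answer(self, question: str) -> str:
        # Placeholder for inference conditioned on everything learned so far.
        return f"(the avatar's reconstruction of the author's view on: {question})"


def train_avatar(avatar: Avatar, topics: list[str],
                 author_reply: Callable[[str], str]) -> Avatar:
    """Interview loop: the avatar asks, the author answers, the avatar updates."""
    for topic in topics:
        question = avatar.next_question(topic)
        avatar.learn(question, author_reply(question))
    return avatar


def ready_to_publish(avatar: Avatar, test_questions: list[str],
                     author_approves: Callable[[str, str], bool]) -> bool:
    """Publication gate: the author checks the avatar's answers to disparate questions."""
    return all(author_approves(q, avatar.answer(q)) for q in test_questions)
```

In a real system, learn would presumably be actual fine-tuning or retrieval over the author's corpus, and the verification step would be iterative rather than a single pass.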

Some advantages:

  • An avatar could provide insight into any aspect of an argument to its audience. It could go into far more depth on different arguments than is possible in an ordinary paper or book.

  • An avatar could tailor its answers to the particular needs and interests of its audience. By responding to particular questions, it could focus only on the content that the audience cares about. In the same way that humans use context to decide how to frame things, the avatar could make judgments on the fly about what its questioner is interested in.

  • An avatar could tailor its answers to the capacities and background knowledge of its audience. It could avoid technical lingo, or be prepared to explain any use of terminology in the specific context in which it occurs. If the audience knows little about neuroscience but lots about statistics, it could briefly gloss the statistical side while going in depth on the neuroscience. (A rough sketch of how this kind of tailoring might work appears after this list.)

  • The process of training an avatar would prompt authors to think more carefully about various aspects of their view rather than diverting their attention to how best to present that view.
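
Here is the promised sketch of how this kind of audience tailoring might work, assuming the avatar is simply a language model conditioned on a reader profile alongside the question. The ReaderProfile fields and the prompt format are invented purely for illustration.

```python
# Hypothetical sketch: fold a reader profile into the avatar's instructions,
# so the same underlying model answers differently for different audiences.
from dataclasses import dataclass


@dataclass
class ReaderProfile:
    interests: list[str]        # topics the reader wants depth on
    expertise: dict[str, str]   # e.g. {"statistics": "expert", "neuroscience": "novice"}


def build_prompt(question: str, profile: ReaderProfile) -> str:
    """Compose the instructions the avatar would be conditioned on for this reader."""
    expertise = ", ".join(f"{area}: {level}" for area, level in profile.expertise.items())
    return (
        "Answer as the author of the argument.\n"
        f"Reader expertise: {expertise}.\n"
        f"Go into depth on: {', '.join(profile.interests)}; gloss everything else briefly.\n"
        "Avoid unexplained jargon outside the reader's areas of expertise.\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    reader = ReaderProfile(interests=["the neuroscience"],
                           expertise={"statistics": "expert", "neuroscience": "novice"})
    print(build_prompt("Why does the argument need its second premise?", reader))
```

The same question paired with a different profile would yield a differently pitched answer, which is all the tailoring amounts to at this level of abstraction.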

Viability

Large language models can already come reasonably close to enabling this functionality. An LLM trained to respond like Daniel Dennett has proven difficult to distinguish from the real person on the basis of brief responses to philosophical questions. I suspect that no major new advances would be needed to make a dialectical avatar work reasonably well given sufficient training. That said, existing models aren't designed for this purpose and wouldn't necessarily perform well at all without substantial tweaking.

LLMs aren't yet particularly good at following or producing long and complex arguments. Even so, it is not hard to imagine that a system for generating competent avatars could be possible in as little as five years, if anyone wanted to spend the effort and resources to build one.

It would probably take much longer for this publication format to become culturally feasible in academia. If dialectical avatars could be trained today, my gut is that they would not be widely used even if they had clear advantages. Dialectical avatars may not have a chance to become established before they are superseded by other cultural and technological changes, such as the transfer of most intellectual labor to AIs.

Comments

I agree. AI is the new medium.

I posted this on another forum a week ago:

I recently had the idea that AI is the ultimate medium that will finally enable effective many-to-many communication that the Internet promised but hasn't quite delivered.

The best existing example is machine translation between languages with voice recognition and speech synthesis. While you could have a human translator, that doesn't scale. However, this is mostly just one-to-one communication.

In the last couple of years, we've seen high-quality AI image generation that allows people without much artistic talent to express themselves visually. We should also see high-quality text-to-video, text-to-3D, and music generation before too long. ChatGPT allows poor writers to write clearly, quickly, error-free, and in a defined tone (e.g. formal). You could have the AI take your facial expressions or body language into account, or, in the more distant future, directly read brain waves.

But the next revolution will be personalized AI assistants that understand each person, their interests, and how they express themselves better than they do themselves (some people already had this reaction to TikTok's algorithm). With these assistants, anybody can create a customized message that would be most effective.

The third part is the AI routing algorithm that enables each person to reach as many other interested parties as possible, which is currently at a very primitive stage (e.g. following somebody on Twitter or joining Facebook interest groups). Among other things, there should be preferences for urgency, reach, importance, and mood.

If we put these together, any person should be able to express themselves and have their ideas reach the right people in the most effective format. It would be fun to create an experimental social network that works towards this goal, but as usual, there would be the chicken-and-egg problem of getting enough users.

One consequence of all this is a hit to consensus reality.

As you say, an author can modify a text to communicate based on a particular value function (i.e. "a customized message that would be most effective").

But the recipient of a message can also modify that message depending on their own value function (or rather that of their personalised LLM).

Generative content and generative interpretation.

Eventually this won't be just text but essentially all media, particularly through the use of AR goggles.

Interesting times?!

Some interesting ideas - a few comments:

My sense is that you are writing this as someone without a lot of experience in writing and publishing scientific articles (correct me if I am wrong).

A question for me is whether you can predict what someone would say on a topic instead of having them write about it. I would argue that the act of linearly presenting ideas on paper - "writing" - is a form of extended creative cognition that is difficult to replicate. It wouldn't be replicated by an avatar just talking to a human to understand their views. People don't write simply to convert what is already in their heads into something communicable - instead, writing creates thinking.

My other comment is that most of the advantages can be gained by AI interpretation and re-imagining of a text, e.g. you can ask ChatGPT to take a paper and explain it in more detail by expanding points, or make it simpler. So points 2 and 3 of your advantages can be achieved today, after the writing is done.

Point 4 of the advantages ("positive spin") is an incentive issue, so not really about effective communication.

Point 1 could also be achieved by the AI reading a text. Of course, the AI can only offer interpretations - but that would be true with or without an AI interrogating an author (e.g. an AI could read all of that author's works to get a better sense of what they might say).

So in sum, I can see avatars/agents as a means of assisting humans in reading texts. This is already possible in principle today. For example, I am already asking ChatGPT to explain parts of texts to me and to summarise papers - it will just get better. But I don't see the avatar being a publication format in the near term - rather an interface to publications.

The interesting question for me, though, is what might be the optimal publication format to allow LLMs to progress science - where LLMs are able to write papers themselves, e.g. review articles. Would it be much different from what we already have? Would we need to provide results in a more machine-readable way? (Probably.)

Thanks for your comments!

My sense is that you are writing this as someone without a lot of experience in writing and publishing scientific articles (correct me if I am wrong).

You're correct in that I haven't published any scientific articles -- my publication experience is entirely in academic philosophy, and my suggestions are based on my frustrations there. This may be a much more reasonable proposal for academic philosophy than for other disciplines, since philosophy deals more with conceptually nebulous issues and has fewer objective standards.

linearly presenting ideas on paper - "writing" - is a form of extended creative cognition that is difficult to replicate

I agree that writing is a useful exercise for thinking. I'm not so sure that it is difficult to replicate, or that the forms of writing used for publication are the best ways of thinking. Getting feedback on your work is also very important, and that would be easier and faster when working with an avatar. So part of the process of training an avatar might be sketching an argument in a rough written form and then answering a lot of questions about it. That isn't obviously a worse way to think through issues than writing linearly for publication.

My other comment is that most of the advantages can be gained by AI interpretation and re-imagining of a text, e.g. you can ask ChatGPT to take a paper and explain it in more detail by expanding points, or make it simpler.

This could probably get a lot of the same advantages. Maybe the ideal is to have people write extremely long papers that LLMs condense for different readers. My thought was that at least as papers are currently written, some important details are generally left out. This means that arguments require some creative interpretation on the part of a serious reader.

The interesting question for me, though, is what might be the optimal publication format to allow LLMs to progress science

I've been thinking about these issues partly in connection with how to use LLMs to make progress in philosophy. This seems less clear cut than in science, where there are at least processes for verifying which results are correct. You can train AIs to prove mathematical theorems. You might be able to train an AI to design physics experiments and interpret the data from them. Philosophy, in contrast, comes down more to formulating ideas and considerations that people find compelling; it is possible that LLMs could write pretty convincing articles with all manner of conclusions. It is harder to know how to pick out the ones that are correct.

One recent advancement in science writing (originating in psychology and spreading from there) has been the pre-registered format and pre-registration.

Pre-registration often takes the form of a structured form - which is effectively a dialogue - where you have to answer a set of questions about your design. This forces a kind of thinking that otherwise might not happen before you run a study, which has positive effects on the clarity and openness of the thought processes that go into designing it.

One consequence is that it can highlight how often we are very unclear about how we might actually properly test a theory. In the standard paper format one can get away with this more easily - for example through HARKing, or a review process where it is not found out.

This is relevant to philosophy, but in psychology/science the format for running and reporting on an experiment is very standard.

I was thinking of a test of a good methods and results section - it should be of sufficient clarity and detail that an LLM could take your data and description and run your analysis. Of course, one should provide the code anyway, but it is a good test even so.
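
A crude version of that test might look something like the sketch below. It is purely illustrative: the run_llm argument stands in for whatever model one happens to use, and a real check would sandbox the generated code rather than running it directly.

```python
# Purely illustrative: is a methods/results section clear enough that a
# language model could regenerate the analysis and match the reported numbers?
import subprocess
import sys
from typing import Callable


def reproducibility_check(methods_text: str, data_path: str,
                          reported: dict[str, float],
                          run_llm: Callable[[str], str],
                          tolerance: float = 1e-3) -> bool:
    """Ask a model for an analysis script, run it, and compare to reported results."""
    prompt = (
        "Write a complete Python script that loads the data file at "
        f"{data_path!r} and performs exactly the analysis described below.\n"
        "Print each headline statistic as 'name: value', one per line.\n\n"
        f"{methods_text}"
    )
    script = run_llm(prompt)  # stand-in for whatever model one uses

    # Running model-written code blindly is unsafe; a real check would sandbox it.
    result = subprocess.run([sys.executable, "-c", script],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return False  # the description wasn't clear enough to yield working code

    produced: dict[str, float] = {}
    for line in result.stdout.splitlines():
        name, _, value = line.partition(":")
        try:
            produced[name.strip()] = float(value)
        except ValueError:
            continue

    return all(abs(produced.get(name, float("inf")) - value) <= tolerance
               for name, value in reported.items())
```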

So for the methods and results, an avatar does not seem particularly helpful, unless it is effectively a more advanced version of a form.

For the introduction and discussion, a different type of thinking occurs. The trend over time has been for shorter introduction and discussion sections, even though page limits have ceased to be a limiting factor. There are a few reasons for this. But I don't see this trend getting reversed.

Now, it is interesting that you say you can use an avatar to get feedback on your work and so on. You don't explicitly raise the fact that scientists are already using LLMs to help them write papers. So instead of framing it as an avatar helping clarify the author's thinking, what will inevitably happen in many cases is that LLMs will fill in thinking, and create novel thinking - in other words, a paper will have an LLM as a co-author. In terms of your argument, then, I think one could create a custom LLM with an avatar interface designed to help authors write papers - which would do the things you suggest - give feedback, suggest ideas, and fix problems. And the best avatar interfaces will be personalised to the author, e.g. discipline-specific, and with some knowledge of the author (such as having learned from all their past text to predict them better).

And so yes, I think you are right that authors will use avatars to help write text, similar to what you suggest, and then readers will use avatars to help them read text. I suppose in the medium term I still see the journal article as a publication format that is going to be resistant to change. But LLMs/avatars will be interfaces for the production and consumption of them.

Also, to add something of interest: "Creating a large language model of a philosopher"

http://faculty.ucr.edu/~eschwitz/SchwitzAbs/GPT3Dennett.htm

One interesting quote: "Therefore, we conclude that GPT-3 is not simply “plagiarizing” Dennett, and rather is generating conceptually novel (even if stylistically similar) content."