Answerers can also split out the breakdown/tacit linked premises for the questioner, like you do in this post, if the questioner has the patience for it because the question is somewhat important to them. See also: Aristotle treating questions as fully answered only when they separately address four different types of whys.
- Answerers should generally try to figure out enlightened questions and answer those. This approach is often the one that best serves the asker's utility.
This takeaway makes sense to me, and I would suggest separating questions into different categories (contexts, characters, etc.). In a large classroom, people often need clarifications more than solutions, since spending more than a minute thinking about a question can be counterproductive; the professor would answer directly and briefly, especially when the questions are simple. In problem-solving or thesis writing, people get stuck, and enlightened-question-answering would help them a lot; they need different perspectives as well as solutions. In the case of daily-life questions, like the dentist appointment, the answerer would respond directly, or notify them in the morning, since the answer is predictable, as you said above.
Across these three cases, enlightened-question-answering benefits complex and advanced situations like problem-solving the most. This suggests that your model is more about offering different viewpoints than about conveying knowledge.
You also raised the concern that people might ask this answerer fewer questions, partly because its answers, delivered in full force, can be off-topic. Yeah, we don't need diverse viewpoints every time; it can be frustrating. But...
Correspondingly, I imagine that as AGI gets close, people might ask fewer and fewer questions; instead, relevant information will be better pushed to them. A really powerful oracle wouldn't stay an oracle for long; it would quickly get turned into an information feed of some kind.
To me, this is already happening. YouTube comes to mind first; LessWrong.com second. What the two have in common is that I rarely use the search bar, since content is already displayed for me, whether by an AI recommender or by a human administrator. Surely these are the places where people want diverse viewpoints more and more.
On the other hand, I don't use the search bar because I don't arrive with questions. Only when I have new keywords do I use the search bar, say to play Baba Yetu or to watch more clips of Thomasin McKenzie. (On the third hand, YouTube uses cookies and reflects my recent interest in Sid Meier's Civilization. It is becoming "the good enough"!) Comparing your answering model to YouTube's recommendations isn't exact, but it may show the changing paradigm of questioning.
Solid points, thank you.
On the latter (around information feeds), I very much agree. Those examples are good ones. I just released a follow-up post that goes into a bit more detail on some of this, here:
https://www.lesswrong.com/posts/kfY2JegjuzLewWyZd/oracles-informers-and-controllers
Epistemic & Scholarly Status: Fairly quickly written. I’m sure there’s better writing out there on the topic somewhere, but I haven’t found it so far. I have some confidence in the main point, but the terminology around it makes it difficult to be concrete.
TLDR
The very asking of a question presupposes multiple assumptions that break down when the answerer is capable enough. Questions stop making sense once a questioner has sufficient trust in the answerer. After some threshold, the answerer will instead be trusted to reach out directly whenever appropriate. I think this insight can help shed light on a few dilemmas.
I've been doing some reflection on what it means to answer a question well.
Questions are often poorly specified or poorly chosen. A keen answerer should not only give an answer, but often provide a better question. But how far can this go? If the answerer could be more useful by ignoring the question altogether, should they? Perhaps there is some fundamental reason why we should desire answerers to act as oracles instead of more general information feeders.
My impression is that situations where we have incredibly intelligent agents doing nothing but answering questions are artificial and contrived. Below I attempt to clarify this.
Let's define some terminology:
Asker: The agent asking the question.
Answerer: The agent answering the question. It could be the same as the asker, but probably later in time. Agent here just means “entity”, not an agent in the agent-vs.-tool sense.
Asked question: The original question that the asker asks.
Enlightened question: The question that the asker should have asked, if they were to have had more information and insight. This obviously changes depending on exactly how much more information and insight they have.
Ideal answer: The best attempt to directly answer a question. This could either be the asked question or an enlightened question. Answer quality is evaluated for how well it answers the question, not how well it helps the asker.
Ideal response: The best response the answerer could provide to the asker. This is not the same as the ideal answer. Response quality is evaluated for how much it helps the asker, not how well it answers the question.
Utility: A representation of one's preferences. Utility function, not utilitarianism.
Examples
Question: What's the best way to arrive at my dentist appointment today?
The answer to the stated question could be,
The answer to an enlightened question could be,
A good response, knowing the question, but not answering it, might be,
A good response, ignoring the question (or correctly not updating based on it), and optimizing for utility, might be,
The puzzle with the latter answers is that they seem like poor answers, although they are helpful responses. The obvious solution here is to flag that this is a very artificial scenario. In a more realistic case, the last response would have been given before the question was asked. The asker would learn to trust that the answerer would tell them everything useful before they even realized they needed to know it. They would likely either stop asking questions, or ask very different sorts of questions.
The act of asking a question implies (it almost presupposes) an information asymmetry. The asker assumes that the answerer doesn't have or hasn't drawn attention to some information. If the answerer actually does have this information (i.e. they can intuit what is valuable to the asker and when), then it wouldn't make sense to ask the question. This is an instance of the maxim of relevance.
So, questions make sense only until the answerers get good enough. This is a really high bar. Being "good enough" would likely require a tremendous amount of prediction power and deep human understanding. The answerer would have to be much more intelligent in the given area than the asker for this to work.
Breakdown
If we were to imagine a breakdown of information conveyed in the above question, we could then identify a more precise and empathetic response from a very smart being.
Students and Professors
Another analogy is that of students and professors. Many students don't ask their professors any questions, particularly in large classes. They expect that the professors will lead them through all of the important information. They expect that the professors are more informed about which information is important.
In many situations the asker is the one trying to be useful to the answerer, instead of it being the other way around. For example, the professor could ask the students questions to narrow in on what information might be most useful to them. I imagine that as the hypothetical empathetic professor improves along a few particular axes, they will be asked fewer questions, and ask more questions. In this latter case, the questions are mainly a form of elicitation to learn about the answerer.
Corrigibility
There could well be situations where answerers assume that they could respond better with a non-answer, but the askers would prefer otherwise. This becomes an issue of corrigibility, and here there could be a clear conflict between the two. I imagine these issues will represent a minority of the future uses of such systems, but those instances could be particularly important. This is a big rabbit hole that has been discussed in depth in the corrigibility posts and similar work, so I'll leave it out of this post.
Takeaways
I think that:
Correspondingly, I imagine that as AGI gets close, people might ask fewer and fewer questions; instead, relevant information will be better pushed to them. A really powerful oracle wouldn't stay an oracle for long; it would quickly get turned into an information feed of some kind.
Thanks to Rohin Shah for discussion and comments on this piece.