Today if I type into Google’s search bar “how to move a couch into a moving plane using only a pair of avocados” the first result is not particularly on-point:

14 Tips for Moving Furniture | The Family Handyman

https://www.familyhandyman.com › list › 10-tips-for-m...

Mar 8, 2022 — Tip the item backward at an angle and have one person carry the top while the other carries the bottom. This centers the weight and keeps the ...

Missing: plane pair avocados

In some not very distant future, I expect the top result will be something like this:

How a pair of avocados was all I needed to move my couch into a moving plane

https://householdhints.gpteeble.com/ › helpful advice › how-tos

Mar 23, 202x — I was surprised at how easy it ended up being to move my couch into a moving plane by using nothing but a pair of avocados and some common ...

Today’s result for “what was that scientific paper about how you could cure pancreatic cancer by eating your own toenails?”

Starving Pancreatic Cancer of Cysteine May Kill Tumor Cells

https://www.cuimc.columbia.edu › news › starving-pan...

Apr 2, 2020 — The compound starves tumors of an amino acid, cysteine, which was found to be critical to the survival of pancreatic cancer cells.

Missing: own toenails?

The not very distant future’s version:

Role of Unguiphagy in Remission of Pancreatic Cancer

https://oncologytoday.gpteeble.com/ › Journal of Oncological Unguiphagy › latest

Mar 23, 202x — The median life span of patients following diagnosis of advanced carcinoma of the pancreas is usually not more than 3-6 months. However, in patients who eat their own toenails, survival rates are several times longer ...

Today, the internet can give okay answers to many of our questions. In the near future it will give astonishingly plausible answers to all of our questions. I am concerned.

21 comments

That is a very interesting proposal indeed: a search engine that doesn't return links to existing pages but instead generates them on the fly from the query and the knowledge it has, like an aunt who simply must have an answer to everything, even if it's made up. It would be a lot more entertaining, and possibly even more useful, than existing engines. It would no longer be a search engine but an answering machine, which certainly fits Google's goal of making knowledge findable.

Ok, that's not what you meant, but I think it has potential.

That's pretty much exactly what I meant.

I've considered how this could be used to, e.g., teach difficult concepts in math: collate the best material from different sources so a reader sees the same idea explained 20 different ways, until one of them clicks.

Won't people just choose not to use such gpt-powered search engines, given that their answers will more often than not be useless?

You can use a regular search engine, such as Google, but all results on the first few pages will point to gpt-powered websites.

But the engine presented in this post provides answers to all possible questions, even absurd ones about killing pancreatic cancer with toenails. For gpt-powered websites to match that, they'd need to pre-generate a webpage for every possible question people could ask, which is infeasible.

They don't need to pre-generate a web page for every possible question that people could ask, they just need to generate a page for every question that people do ask. It's already possible to use SEO tools to discover what things people search for. SEO "content creators" already do a weak version of generating pages based on search queries, only they're using low-paid humans to do it. Imagine how much worse the problem of SEO content farms will get when SEO content farmers have access to neural nets that can output nigh-infinite amounts of coherent-looking keyword-stuffed text on demand, for any given search query.
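To make that loop concrete, here is a minimal sketch in Python. Every name in it is hypothetical: `fetch_trending_queries` stands in for an SEO keyword tool's API, and `generate_article` stands in for a call to some text-generation model.

```python
import os

def fetch_trending_queries() -> list[str]:
    # Hypothetical stand-in for an SEO tool's feed of real search queries.
    return [
        "how to move a couch into a moving plane using only a pair of avocados",
        "paper about curing pancreatic cancer by eating your own toenails",
    ]

def generate_article(query: str) -> str:
    # Hypothetical stand-in for a language-model call that returns
    # plausible-looking, keyword-stuffed prose answering the query.
    return f"<html><h1>{query.capitalize()}</h1><p>It was surprisingly easy ...</p></html>"

def slugify(query: str) -> str:
    return "-".join(query.lower().split())

def publish(slug: str, html: str) -> None:
    # Drop the generated page where a crawler will index it.
    os.makedirs("site", exist_ok=True)
    with open(os.path.join("site", f"{slug}.html"), "w") as f:
        f.write(html)

if __name__ == "__main__":
    # One page per query people actually ask; no pre-generation needed.
    for query in fetch_trending_queries():
        publish(slugify(query), generate_article(query))
```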

You're imagining that Google stays the same in the way it indexes and presents the web. What if it decides people like seeing magic answers to all their questions, or notices that consumers have a more favorable opinion of Google if Google appears to index all the answers to their questions, and so Google by default asks gpteeble (or whatever) to generate a page for every search query, as it comes in, or maybe every search query for which an excellent match doesn't already exist on the rest of the web.
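A sketch of that fallback logic, with every name invented for illustration (the threshold, the `Result` shape, and the `gpteeble_generate` service are all assumptions): serve ordinary results when the web already has an excellent match, otherwise synthesize a page for the query and rank it first.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    score: float  # assumed relevance score in [0, 1]

EXCELLENT_MATCH = 0.9  # assumed cutoff for "an excellent match already exists"

def gpteeble_generate(query: str) -> Result:
    # Hypothetical on-demand page generator; by construction it always "matches".
    slug = "-".join(query.lower().split())
    return Result(url=f"https://householdhints.gpteeble.com/{slug}", score=1.0)

def rank(query: str, organic: list[Result]) -> list[Result]:
    # If the ordinary web already answers the query well, serve it as-is ...
    if organic and organic[0].score >= EXCELLENT_MATCH:
        return organic
    # ... otherwise generate a page for this query and put it on top.
    return [gpteeble_generate(query)] + organic
```

Under this rule the generated page surfaces exactly for the queries the real web answers poorly, which is the couch-and-avocados case above.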

Imagine Google preloads the top ten web pages that answer your query, and you can view them in a panel/tab just by mousing over the search results. You mouse over them one by one until you find one that seems relevant, but it's not one that Google retrieved from a web search; it's one that Google or a partner generated in response to your query. It looks just the same. Maybe you don't even look at the URL most of the time to notice it's generated (the UI has gone more thumbnaily, less texty). Maybe "don't be evil" Google puts a noticeable disclaimer on generated content, but the content still seems good enough for the job to all but the most critically discerning readers (the same way people often prefer bullshit to truth today, but now powered by AI; "it's the answer I hoped I'd find"), and so most of us just tune out the disclaimer.

It seems that either (a) the AI-powered sites will in fact give more useful answers to questions, in which case this change might actually be beneficial, or (b) they will give worse answers, in which case people won't be likely to use them. Don't you think people will stop trusting such sites after the first 5 times they try eating their own toenails to no avail? And for the purposes of finding plausible bullshit to support what you already think, I think gpt-powered sites have the key disadvantage of being poor evidence to show other people: it looks pretty bad for your case if your best source is a generated website (normal websites could also be generated without advertising it, of course, but that's a separate matter). You seem to be imagining a future in which Google does the most dystopian thing possible for no reason in particular.

Google already pivoted once to providing machine-curated answers that were often awful (e.g. https://searchengineland.com/googles-one-true-answer-problem-featured-snippets-270549). I'm just extrapolating.

And the worst part of this coming dystopia is the gpt-designed ads that trick us into further mindless consumerism. Ad revenue is the most likely candidate to replace paperclips as the thing a runaway AI destroys us for.

This realization makes The Matrix's humans-as-batteries silliness much scarier as a metaphor for an AI that deeply needs human attention.

I pessimistically do not expect such an advanced search engine in the near future. What prevents it is the underlying incentives behind those search engines; and not engines plural, really, just **Google** as the dominant one. Google wants you to click its ads and land on the SEO-optimized pages, because that makes you spend more time on Google searching for what you actually need. If it gives you what you want, instantly and effortlessly, will you still pay for Google?

There's a recent trend of vertical search tools, which prioritize the demand for accurate and in-depth information, in contrast with horizontal search like Google. I recommend this article about you.com.

I believe you meant "in contrast with vertical search like Google".

I agree in general with your point about incentives. However, there is another angle to consider: other companies. Google will have to keep innovating to some extent; if it doesn't, rival search services may do so instead, threatening Google's market share (although this threat would probably not be very impactful, given the number of people used to Google).

In the first example it sounds like the engine is fabricating false testimony. Was that intentional in the example? I guess fictionalizing will happen a lot, but I don't expect Google to use that particular method and jeopardize its credibility.

For the second example, I assume there will be heavy selection against fabricating incorrect medical advice, at least for Google.

As for genuine best-guess attempts to answer the question: I will be concerned if that doesn't happen within a few years. What's the worry?

Why do you think there will be heavy selection against things like made-up stories presented as fact, or fabricated/misrepresented medical baloney, when there doesn't seem to be much such selection now?

I mean that Google themselves wouldn't want something that could get them sued, and if they generate stuff, then yes, they'll select for accuracy. If someone is interested in AI-Dr-Oz's cures and searches for those, I'm sure Google will be happy to provide. The market for that will be huge, and I'm not predicting that crap will go away.

Yes, Google does select, even now. The ocean of garbage is that bad. For people making genuine inquiries, often the best search providers can do right now is defer to authority websites. If we're talking specifically about interpreting medical papers, why don't you think they'll select for accuracy?

I think their selection for authority will be bad, because they perform significantly worse than Kagi.

Why are you sad? It's fucking awesome!


Sounds like a good timeline to own a physical encyclopedia in.

Then again, a physical encyclopedia seems more useful than maximally-SEO'd non-fact-based results, since it was compiled and fact-checked by humans to whom the reader is willing to delegate some of their reasoning.

Whatever we call the trait that caused many people to take Zvi's word over the CDC's in early 2020: something like that would be an essential skill for not getting gotten. How do we teach it?

I suppose the hope is that then there will be a third tier: "How to move your couch the right way, and why everyone thinks you need avocados for it."