Sometimes you have a lot of data, and one approach to supporting quick searches is pre-processing it to build an index, so that each search only needs to look at a small fraction of the total data. The threshold at which it's worth switching to indexing, though, might be higher than you'd guess. Here are some cases I've worked on where full scans were the better engineering choice:

  • Ten years ago I wrote an interoffice messaging application for a small billing service. Messages were stored in MySQL and I was going to add indexing if full-text searches got slow or we had load issues, but even with ten years' worth of messages to search it stayed responsive. (This was before MySQL supported FULLTEXT indexes with InnoDB, added in v5.6, and at the time indexing would have meant installing and maintaining Sphinx.)

  • I recently came across someone maintaining a 0.5GB full-text index to support searching their shell history of 100k commands. I use grep on a flat file, and testing now, a query across my 180k history entries takes 200ms.

  • My contra dance search tool ranks each dance in response to your query, with no geospatial indexing, because there are just ~350 dances.

  • The viral counts explorer I've been playing with for work searches the taxonomic tree of human viruses in real time, scanning ~15k names with JS's "includes" method about as fast as you can type (see the sketch after this list).

  • When I worked in ads I would often need to debug issues using production logs, and would use Dremel (Melnik 2010, Melnik 2020) to run a distributed scan of very large amounts of data at interactive speeds. Because queries were relatively rare, an index would have been far more expensive to maintain.
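
To give a sense of how little code a full scan takes, here's a minimal TypeScript sketch of the kind of substring search the viral counts explorer does. VIRUS_NAMES and searchViruses are illustrative stand-ins, not the tool's actual code:

    // Stand-in for the ~15k taxonomic names the real tool scans.
    const VIRUS_NAMES: string[] = [
      "Human coronavirus 229E",
      "Human adenovirus 5",
      "Hepatitis B virus",
      // ...in the real tool, ~15k entries
    ];

    // Case-insensitive substring match via a full linear scan. Over ~15k
    // short strings this finishes in well under a millisecond, far faster
    // than anyone can type.
    function searchViruses(query: string): string[] {
      const q = query.toLowerCase();
      return VIRUS_NAMES.filter((name) => name.toLowerCase().includes(q));
    }

    console.log(searchViruses("adeno")); // ["Human adenovirus 5"]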

Unless you know from the start that you'll be searching hundreds of millions of records, consider starting with simple scans and adding indexing only if you can't get acceptable performance. And even then, if queries are rare and highly varied, you may still be better off doing the work at query time instead of ingestion time.

Comments:

I've done SO MANY design reviews where my first questions were "how big is the data, what are the query rates and patterns, and what is the worst acceptable performance", immediately followed by a reschedule to review a much simpler design.

It's worth mentioning that indexing is work done at insertion time (or at least in advance, outside of the query path), and it's VERY often worth it to save resources at query time.  This can be true EVEN IF it's a bit more resources overall.

At Manifold I find myself adding indexes for smaller collections. Like, in our postgres db we have a table for mana transactions called txns that has only 1 million rows, and I recently added an index so that a particular query would take 0.6ms instead of 400ms. This is a query that has to run any time a user loads their feed, and even if we never 100x the size of txns we shouldn't delay the feed by almost half a second.
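
For readers who haven't done this: an index like that is a single statement to add. Below is a hypothetical node-postgres sketch; the column to_id and the index name are invented for illustration, since the comment doesn't say what the query filters on:

    import { Client } from "pg";

    // Hypothetical sketch: add an index to the txns table without blocking
    // writes. "to_id" is an invented example column, not Manifold's schema.
    async function addTxnsIndex(): Promise<void> {
      const client = new Client(); // connection config from PG* env vars
      await client.connect();
      await client.query(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS txns_to_id_idx ON txns (to_id)"
      );
      await client.end();
    }

    addTxnsIndex().catch(console.error);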

(I am generally for rapid, hacky prototyping)

There seems to be an assumption that the burden of proof is on showing that the index adds enough value to be worth it. My understanding is that the cost of adding an index is quite small though - it's easy, battle-tested, doesn't lead to bugs, and doesn't have much impact on performance at insertion time - so it at least seems worth discussing the question of burden of proof instead of assuming an answer to it.

If you are already using a database and think you might want a simple index (ex: on an ID) then sure, just add it. But if feeling like you should have an index pushes you to start using a database, or if you want to support something complicated like full-text search, then I don't think it's so clear.

(This post is not anti-index, it is anti-"you should never be doing full table scans in production")

I see. That makes sense.

Maybe this supports or complements your argument: platforms like Azure SQL will tell you when you need an index and even automatically create it for you if you wish. (And automatically undo it if the index doesn’t help or hurts performance.) So, other than a primary key index, you could do nothing and let the database decide what’s best.

How about foreign key indexes? :)