The Road to CouchDB 3.0: Shard Splitting

This is a post in a series about the Apache CouchDB 3.0 release. Check out the other posts in this series.

One of the features introduced in CouchDB 2.x was sharding: splitting a single logical database into multiple physical files. In CouchDB 1.x, there was a strict 1:1 mapping between databases and database files.

Why sharding? For large databases, storing all data in a single file makes that file unwieldy. Just thinking about backing up a single multi-gigabyte, or even multi-terabyte, file should make you uncomfortable.

Sharding also allows for placing each shard of a database on a separate server. This allows you to store more data in a single database than there is capacity on any one node.

There is another reason, however: CouchDB’s file format maintains a single serial queue for write requests to each file, mainly for data-consistency reasons. With multiple files, write-heavy databases can write truly in parallel, greatly improving write throughput on sharded databases.

And what’s good for the writer is good for the reader: with sharding, more concurrent read requests can be handled with ease.

The downside of sharding is that requests that need to gather data from every shard must do more internal cluster work to collect that data. So it is advisable to use the sharding level you need for the reasons above, but no higher. See also the post on partitioned queries.

Sharding in 2.x has one further drawback: you must set the sharding level (q) at database creation time. That is, you must anticipate how big a database will eventually get, and what shard level will make sense for you at that future time.

The shard level could not be changed after a database was created. Even though you could change the default q level for newly created databases, any existing database kept the q level it was created with.
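To make this concrete, here is a minimal sketch of how q is chosen at creation time: it is passed as a query parameter on the PUT request that creates the database. The server URL and database name below are placeholders, and the snippet only builds the URL; an actual request would need an HTTP client.

```python
# Sketch: q is fixed at database creation time by passing it on the
# creation request. "orders" and the local server URL are placeholders.

def create_db_url(base_url: str, db_name: str, q: int) -> str:
    """Build the URL for creating a database with q shards."""
    return f"{base_url}/{db_name}?q={q}"

url = create_db_url("http://localhost:5984", "orders", q=2)
# The actual request would be: PUT http://localhost:5984/orders?q=2
print(url)
```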

Some external tools exist to help with resharding databases, but they require at least a little downtime while the newly sharded database is built up again, or a perfectly timed, or cleverly scripted, load-balancer switchover.

CouchDB 3.0 introduces live shard splitting: you can split a database’s shards while the database server is running and while the database remains fully available to your application. This allows you to choose a low q level when creating a database and increase it as your application grows, letting you make the best use of your computing resources at any one time.
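Shard splitting is driven through CouchDB 3.0’s resharding HTTP API: you create a split job by POSTing to /_reshard/jobs. The following sketch builds such a request body; the database name and shard range are placeholder values, and the snippet only constructs the JSON rather than sending it.

```python
import json

# Sketch of a request body for CouchDB 3.0's resharding API
# (POST /_reshard/jobs). "orders" and the range are placeholders.

job = {
    "type": "split",               # split is the supported job type in 3.0
    "db": "orders",                # database whose shard(s) to split
    "range": "00000000-7fffffff",  # optional: limit the split to one shard range
}

body = json.dumps(job)
# The actual request would be:
#   POST http://localhost:5984/_reshard/jobs
#   Content-Type: application/json
print(body)
```

Progress of running jobs can then be monitored via GET requests to the same /_reshard endpoints.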

Note that shard merging is not supported at this point; you still have to rely on those external tools if you need this functionality.
Check out the documentation for more details.

The Road to CouchDB 3.0: Introducing Partitions to CouchDB for More Efficient Querying

This is a post in a series about the Apache CouchDB 3.0 release. Check out the other posts in this series.

Apache CouchDB 3.0 comes equipped with a new partitioned database feature, offering more performant, scalable, and efficient querying of secondary indexes.

Users decide, at database creation time, whether or not to create the database with partitions. All documents in a partitioned database require a partition key, and all documents within a partition are grouped together. Common partition keys could include usernames, IoT device IDs, or locations. Partitioned database documents also have an _id field, but the _id field has two parts, the partition key and the document key, separated by a “:” character.
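The two-part _id format can be sketched as follows. The partition key is everything before the first “:”; the rest is the document key. The concrete keys below are illustrative placeholders.

```python
# Sketch: building and parsing the two-part _id of a partitioned database.
# "sensor-reading" and the document key below are placeholder values.

def make_id(partition_key: str, doc_key: str) -> str:
    """Join a partition key and document key into a partitioned _id."""
    if ":" in partition_key:
        raise ValueError("partition key must not contain ':'")
    return f"{partition_key}:{doc_key}"

def split_id(doc_id: str) -> tuple[str, str]:
    """Split a partitioned _id back into (partition key, document key)."""
    partition_key, _, doc_key = doc_id.partition(":")
    return partition_key, doc_key

doc_id = make_id("sensor-reading", "ca33c748-2d2c")
print(split_id(doc_id))
```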

Querying a secondary index within a partition scans only that partition’s range within a single shard. Compared with a “global” query, which requires collating the results of reading a copy of every shard, partition queries require far fewer resources and respond faster at scale. To visualize the difference in database operations, compare a global query on the left with a partition query on the right:

Querying a partitioned database with a partition key can be done against the primary index (_all_docs), as well as against a MapReduce view, Search (now available in Apache CouchDB 3.0), or a Mango index. Users can combine partitioned and global indexes within the same database to meet their querying requirements.
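These partition-scoped queries all share a common URL shape built around /_partition/{partition}. A minimal sketch, where the server, database, partition, design document, and view names are all placeholders and only the path shapes come from the CouchDB API:

```python
# Sketch of partition-scoped query endpoints. "mydb", "user123", "app",
# and "by_date" are placeholder names.

base = "http://localhost:5984/mydb/_partition/user123"

all_docs_url = f"{base}/_all_docs"              # primary index
view_url = f"{base}/_design/app/_view/by_date"  # MapReduce view
find_url = f"{base}/_find"                      # Mango (POST with a selector)

print(all_docs_url)
```

A global query, by contrast, would use the same endpoints without the /_partition/{partition} path segment.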

For more information on leveraging Partitioned Databases in Apache CouchDB, check out the following resources: