The Apache CouchDB development community is proud to announce the immediate availability of version 2.1.
Version 2.1 incorporates 10 months of improvements to the already successful 2.0 release.
For users of CouchDB versions before 2.0, the main improvements of the 2.0 release also apply to 2.1:
- 99% API compatibility
- Native clustering for increased performance, data redundancy, and the ability to scale
- Easy querying with Mango
- New admin interface
- Major performance improvements around compaction and replication
Most importantly, CouchDB 2.0 finally fulfils CouchDB’s original vision of a distributed and clustered document database.
The 2017 Annual CouchDB User Survey has revealed that, after only 7 months, 68% of our user base has adopted CouchDB 2.0.
CouchDB 2.1 addresses most of the issues reported against the initial 2.0 release. Aside from a few new features, there has been a major focus on release tooling, putting the project in a position to make more regular and more stable releases going forward. This means faster bug fixes and faster delivery of new features for all CouchDB users.
CouchDB 2.1’s flagship feature is what we call the Replication Scheduler. It is a complete overhaul of how replications are managed in CouchDB. Replication is CouchDB’s defining feature, and improvements there benefit the majority of CouchDB users.
A CouchDB replication seamlessly synchronises two databases. There are one-off replications and continuous replications, the latter for when you want a hot-spare copy of your database on another server or cluster, for example. Before 2.1, CouchDB ran all continuous replications in parallel in an always-on fashion: even when there were no changes in the source database, CouchDB maintained a replication management process, HTTP sockets, file descriptors, and everything else required to replicate as soon as an update occurred.
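To make the distinction concrete, here is a sketch of the two kinds of replication request as they would be posted to CouchDB’s `_replicate` endpoint. The URLs and database names are placeholders; the only difference between the two documents is the `continuous` flag.

```python
import json

# One-off replication: runs until the target has caught up with the
# source, then stops. Source/target URLs below are placeholders.
one_off = {
    "source": "http://localhost:5984/mydb",
    "target": "http://localhost:5984/mydb-backup",
}

# Continuous replication: identical, except CouchDB keeps listening for
# new changes on the source and pushes them to the target as they arrive.
continuous = dict(one_off, continuous=True)

print(json.dumps(continuous, sort_keys=True, indent=2))
```

Either document can be posted to `/_replicate` for an ad-hoc job, or stored in the `_replicator` database to make the replication persistent across server restarts.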
That is generally not a problem until you have many replications. A common CouchDB pattern for separating data accessed by different users is to keep one database per user. Keeping a hot copy of thousands or tens of thousands (or even more) of databases becomes a major undertaking, and CouchDB’s rather brute-force, always-on way of managing this wasted a lot of resources.
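A sketch of why this adds up: with a database-per-user layout, keeping a hot spare means one continuous replication document per user database. The user names, database naming scheme, and host names below are invented for illustration.

```python
# Hypothetical database-per-user setup: each user database on the
# primary is mirrored to a spare via its own continuous replication.
users = ["alice", "bob", "carol"]  # in practice: thousands or more

replications = [
    {
        "_id": f"hot-spare-userdb-{name}",
        "source": f"http://primary:5984/userdb-{name}",
        "target": f"http://spare:5984/userdb-{name}",
        "continuous": True,
    }
    for name in users
]

# Before 2.1, every one of these documents in the _replicator database
# kept its own management process and sockets alive, even when idle.
print(len(replications))
```

Multiply this by tens of thousands of users and the always-on overhead of pre-2.1 replication management becomes clear.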
In addition, replication connection pooling and performance tuning were configured per replication instead of per server, so a number of improvements, including socket re-use, could not be taken advantage of.
With the Replication Scheduler in 2.1, all of that changes. Replications are now managed per server, and connections to other servers are kept in a pool that different replications can share. Depending on how many resources they want to spend on replication, CouchDB users can limit the number of concurrent replications. The scheduler cycles through all replications in turn, making sure every replication gets a fair share of resources without exceeding server- or cluster-wide limits.
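The scheduler’s limits live in the server configuration. A sketch of the relevant `[replicator]` settings in `local.ini` follows; the values shown are illustrative, so check the 2.1 documentation for the actual defaults on your installation.

```ini
[replicator]
; Maximum number of replication jobs running at the same time.
max_jobs = 100
; How often (in milliseconds) the scheduler rescans its job list
; and rotates jobs in and out of the running set.
interval = 60000
; Maximum number of jobs to start or stop during each rescan.
max_churn = 20
```

Replications beyond `max_jobs` are not rejected; they wait their turn and are rotated in on a later scheduler pass.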
The Fauxton admin UI has been updated to support these changes as well, including:
- New replicator section
- Support for the new replication API
- Monitoring of _replicator databases and _replicate replications
- Easier creation of replications
There is also a new conflict editor that makes document conflicts visible and easy to manage.
One of the biggest pieces of feedback we’ve received about Fauxton since its release with CouchDB 2.0 was that the information density could be improved. In 2.1, Fauxton sports a brand-new document listing section with alternating JSON and metadata views and an improved table view.
Other improvements include:
- Fauxton is now fully built on React
- Fixes to the Cluster setup page
- Improved pagination for _all_docs
- Fixed database encoding issues
- Lots of bug fixes and styling improvements
The least visible improvement in 2.1 is the one that will have the most impact going forward. The team spent the first half of 2017 massively improving the project’s test and build infrastructure. This includes:
- making sure the various test suites we have for CouchDB run reliably in all CI contexts
- adding a whole new CI context in the ASF’s Jenkins setup so we can automatically test multiple operating system versions
- automating the creation of binaries, specifically .rpm/.deb files for releases and development versions
In detail, today’s Jenkins setup tests 4 variants of Linux, with 2 FreeBSD versions and macOS in the works, and automatically produces native package-manager binaries for all platforms that are being tested.
Binary packages for major releases make it easier for people to upgrade to the latest version. Packages for in-development versions make it easier for bug reporters to verify that their issues have been fixed.
All of this will allow the project to make more stable releases, more frequently.
We Need Your Help: Donate Hardware
We have big plans for our Jenkins pipeline, but for now we are intentionally limiting its use so as not to exceed our fair share of resources at the ASF. As a result, we are asking for hardware donations for our CI pipeline.
Specifically we are looking for:
- A server in a colocation facility, ideally with root access; alternatively, VMs on such a server
- sudo/root access
- 8+ GB RAM
- 200+ GB storage, SSD preferred
- Must support Docker
- The current focus is on different Linux variants, but we are also interested in *BSD and other Unix flavours, as well as Windows and Mac machines
We’re happy to list sponsors of CI hardware in future release blog posts, as well as on the main CouchDB website.
New On the Blog
We’ve added two new categories of posts to the CouchDB blog