How CouchDB is creating safer workspaces with Lupin Systems

CouchDB is such a versatile project that there’s no limit to the types of industries it can assist. In this case, we chatted about the world of occupational health and safety with Mike Wolman from Lupin Systems. Lupin Systems offers web-based solutions for regulatory compliance management of buildings, chemical products and the environment.

They have two main applications:

HaRMs: used to manage and generate Safety Data Sheets, Labels and other documents for materials, in line with GHS legislation requirements.

HazMat: used to manage and generate Hazardous Substance Reports and Asbestos Registers for buildings and properties. It is used by auditing and consulting firms, as well as property owners and managers, to help meet their legal obligation to provide this documentation.

In our interview, Mike talked about the excellence of Lupin Systems’ developers and provided some insight into how he first discovered CouchDB and his experience working with it.

How did you hear about CouchDB, and why did you choose to use it?

I’d had previous experience with a number of SQL databases, including MySQL and Firebird a fair bit, but my primary database of choice was PostgreSQL. I think I have been using Postgres since around 2000.

I don’t really remember where I first heard about CouchDB, but I kept an eye on it when NoSQL first appeared. When discussing the architecture of our application with developers, the choice really came down to CouchDB and Mongo, and our dataset suited NoSQL. We knew we wanted to run a multi-master setup for redundancy. We could have gotten away with master/replica, but with servers in different geographic locations that would have made the setup and the app more complicated.

Initially, we thought we would also take advantage of CouchDB’s sync engine to sync to mobile directly, but due to application requirements we are now doing this differently. Using CouchDB allowed us to store attachments natively within the database, with the added benefit that CouchDB dealt with keeping them all in sync. This was a great alternative to using a clustered filesystem or some other mechanism to achieve the same thing.

Did you have a specific problem that CouchDB solved?

Yes, CouchDB solved the problems of geographic master/master replication, redundancy and attachment syncing, with no need for a clustered filesystem.

For the folks who are unsure of how they could use CouchDB (because there are a lot of databases out there), could you explain the use case?

We use CouchDB as our primary data store for all data and files. Our data consists of a lot of loosely structured information. We also store attachments within CouchDB. It’s great because we have not had any issues with this, and we are able to store files from a few KB up to about 10 MB.
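CouchDB’s HTTP API accepts attachments inline in the document body under the `_attachments` key, base64-encoded, and the server stores and replicates them with the document. A minimal sketch of building such a document in Python (the document ID, field names and file contents here are made-up examples, not Lupin Systems’ actual schema):

```python
import base64
import json

def doc_with_attachment(doc_id, fields, filename, data, content_type):
    """Build a CouchDB document body with an inline (base64) attachment.

    CouchDB stores anything under "_attachments" alongside the document
    and keeps it in sync through normal replication.
    """
    doc = {"_id": doc_id, **fields}
    doc["_attachments"] = {
        filename: {
            "content_type": content_type,
            "data": base64.b64encode(data).decode("ascii"),
        }
    }
    return doc

# A hypothetical safety-data-sheet record with a PDF attached.
sds = doc_with_attachment(
    "sds:acetone",
    {"product": "Acetone", "ghs_class": "Flam. Liq. 2"},
    "sds.pdf",
    b"%PDF-1.4 ...",
    "application/pdf",
)
print(json.dumps(sds, indent=2))
```

The resulting JSON can be PUT to the database as-is; attachments can also be uploaded separately per file, but the inline form is convenient for small documents like these.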

We have over 100 CouchDB databases being synced between three different data centers, ranging from a few thousand documents to over 2 million per database.

Over the course of seven years, I think we have had only one issue, which was caused by a corrupt view. The dependability is fantastic compared to other databases: with MySQL in the past I had to run mysqlcheck more times than I can remember (though I haven’t used MySQL in about 10 years, so take that with a pinch of salt). As for Postgres, I have a vague memory of an issue a long time ago, but I cannot remember the specifics or whether it was hardware or Postgres itself.

What would you say is the top benefit of using CouchDB?

I would say the top benefit of using CouchDB is its simplicity.

It is simple to set up master/master replication, simple to migrate or upgrade to new servers, and simple to back up, monitor and manage over long periods of time.

Additionally, I like that it sort of provides a clustered filesystem without any setup. I also find it very useful to be able to view previous versions of a document.
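That master/master simplicity comes down to writing one continuous replication document per direction into CouchDB’s `_replicator` database. A minimal sketch of the pair of job documents (the hostnames and database name below are placeholders, not Lupin Systems’ real infrastructure):

```python
import json

def replication_doc(source, target):
    """One direction of a continuous replication between two databases.

    Saved into the _replicator database, CouchDB picks the document up
    and keeps the replication running, surviving restarts.
    """
    return {
        "source": source,
        "target": target,
        "continuous": True,  # keep replicating as changes arrive
    }

# Placeholder endpoints for two data centers.
dc1 = "https://dc1.example.com:6984/harms"
dc2 = "https://dc2.example.com:6984/harms"

# Two documents, one per direction, make the pair master/master.
jobs = [replication_doc(dc1, dc2), replication_doc(dc2, dc1)]
print(json.dumps(jobs, indent=2))
```

Each job document is simply PUT into `_replicator` like any other document; deleting it stops the replication.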

What other tools are you using in your infrastructure? Have you discovered anything that pairs well with CouchDB?

For our main app we use Ruby on Rails.

For indexing/search:

  • We initially used Solr (Elasticsearch had not been released yet).
  • We then moved to Elasticsearch, but hit issues with memory use.
  • We moved again to Postgres when the json/jsonb datatypes were added. It’s much more memory-friendly than ES and provides far more flexibility query-wise, including joins and anything else you might want from SQL. Plus, the Postgres ODBC driver allows Excel or Access to query the NoSQL data and produce reports.
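To illustrate the querying flexibility mentioned above, here is a sketch of the kind of jsonb query this setup enables, assuming a hypothetical mirror table `docs(id text, body jsonb)` that holds copies of the CouchDB documents (the table, field names and document shape are all assumptions for illustration):

```python
# jsonb operators give indexable access into the document structure:
#   ->   extract a JSON field (as jsonb)
#   ->>  extract a JSON field as text
#   @>   "contains" test, usable with a GIN index
# The query below pulls a text field and the first element of a JSON
# array, filtered by document type -- plain SQL, joins included if needed.
query = """
SELECT body->>'product'        AS product,
       body->'hazards'->>0     AS first_hazard
FROM   docs
WHERE  body @> '{"type": "sds"}'
ORDER  BY body->>'product';
"""
print(query)
```

With a GIN index on the `body` column, the `@>` containment filter stays fast even as the mirror grows, which is one way a setup like this could stay memory-friendly compared to a dedicated search cluster.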

On large projects, we’ve also paired CouchDB with:

  • Nginx + Passenger
  • Puma
  • DocX4J
  • Redis (for sidekiq and other short term bits)
  • FreeBSD
  • Nagios

What are your future plans with your project?

We are working on a mobile version of one of the apps. It’s backed by SQLite but syncs to CouchDB. It will be interesting to see it come together.


For more about CouchDB visit couchdb.org or follow us on Twitter at @couchdb

Have a suggestion on what you’d like to hear about next on the CouchDB blog? Email us!
