Couchbase sponsored this post.
Not so long ago, applications were built almost exclusively as a single and indivisible unit.
This monolithic style was a legacy of a time when data capacity was limited, databases were designed as a single unit and applications were accessed from a single kind of device.
Building applications with a linear, sequential waterfall approach worked well … until it didn't. In large, complex applications with tight coupling, making changes was hard. Components could not be scaled independently, and because a code change could affect the whole system, every change had to be carefully coordinated. This often lengthened the overall development process and made adopting a new technology difficult, since it could require rewriting the entire application.
The Need for a New Paradigm
As more and more applications were built for the web, first in browsers and eventually on mobile phones, exponentially more users were using apps more often and from more places. It was becoming clear that a new paradigm was needed for storing and accessing data. What came next was increased demand from these users for a richer experience with their applications.
In response, companies wanted to deliver better experiences both digitally and in real life. More applications were built to engage with users and customers. Simultaneously, storage and processing power became much more affordable. This resulted in a data explosion.
Development teams had to adapt fast, and new approaches to application development started emerging and growing in popularity. Methodologies and concepts such as agile, scrum, kanban and the minimum viable product entered our vocabulary.
This led to a macro trend of what we now call microservices development, which breaks an application down into a collection of smaller, independent units. Each unit carries out one application process as a separate service, so every service has its own specific function and logic and, in many cases, its own database.
Consequently, development teams can now build applications more quickly. Services can be deployed and updated independently, giving developers more flexibility. A bug discovered in one microservice affects only that service, not the entire application. It is also much easier to add new features to a microservices application than to a monolithic one.
By separating an application into smaller, simpler components, microservices are easier to understand and manage. Each component can also be scaled independently, which often makes scaling more cost-effective and efficient than scaling the entire monolithic application.
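To make the shape of this concrete, here is a minimal sketch of two independent services, each owning its own data store. All names here are hypothetical, and plain method calls stand in for the HTTP or RPC boundary that would separate real microservices.

```python
class UserService:
    """Owns user data; no other service touches this store directly."""

    def __init__(self):
        self._db = {}  # this service's private database

    def create_user(self, user_id, name):
        self._db[user_id] = {"id": user_id, "name": name}
        return self._db[user_id]

    def get_user(self, user_id):
        return self._db.get(user_id)


class OrderService:
    """Owns order data; asks UserService for user details via its API."""

    def __init__(self, user_service):
        self._db = {}                # separate private database
        self._users = user_service   # stand-in for a network client

    def place_order(self, order_id, user_id, item):
        # Cross-service call instead of reaching into another database
        if self._users.get_user(user_id) is None:
            raise ValueError("unknown user")
        self._db[order_id] = {"id": order_id, "user": user_id, "item": item}
        return self._db[order_id]


users = UserService()
orders = OrderService(users)
users.create_user("u1", "Ada")
order = orders.place_order("o1", "u1", "keyboard")
```

Because each service hides its database behind its own interface, either one could be redeployed, rewritten or scaled out without touching the other, which is precisely the flexibility described above.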
The Challenges of Data Sprawl
Microservices are not perfect, however, and bring their own challenges. Connections between multiple modules and databases create cross-cutting concerns around logging, metrics and observability, and testing and troubleshooting become harder across service boundaries. Most importantly, this type of architecture can lead to big challenges with data sprawl.
Database sprawl can result in problems with data movement, duplicate data, security, data integration, latency, inconsistent information and increased cost. Teams need domain knowledge and multilingual programming skills. Different licenses must be secured, each with its own model and compliance terms that complicate compatibility. Supporting more types of databases causes technical and operational challenges that slow down development.
The Promise of Multimodel
The way multiple databases were being handled was becoming untenable. At this point, some database companies decided to consolidate multiple data-access methods and other integrated services into their databases to reduce the negative effects of data sprawl. Enter, stage left: the multimodel database, a database management system designed to support multiple data models within a single, integrated backend.
This system provides unified data management, access and governance, among other key features. Note: Hacks such as bolting on one database to another are not true multimodel.
Multimodel brings all the benefits of polyglot persistence without its disadvantages. Essentially, it does this by combining a document store (JSON documents), a key/value store and other data storage models into one database engine with a common query language and a single API. This lets you, for example, use a single query where multiple queries were needed before, improving efficiency and memory performance.
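The idea can be illustrated with a toy sketch (this is an illustration of the concept, not any vendor's actual API): one in-memory engine exposes both a key/value lookup and a document-style query over the same stored JSON-like documents, where polyglot persistence would have split those access patterns across two separate databases.

```python
class MultiModelStore:
    """Toy engine: one backend, two data-access models."""

    def __init__(self):
        self._docs = {}  # single store shared by both access models

    # --- key/value model: direct fetch by key ---
    def upsert(self, key, doc):
        self._docs[key] = doc

    def get(self, key):
        return self._docs.get(key)

    # --- document model: query over document contents ---
    def query(self, predicate):
        return [doc for doc in self._docs.values() if predicate(doc)]


store = MultiModelStore()
store.upsert("user::1", {"type": "user", "name": "Ada", "city": "London"})
store.upsert("user::2", {"type": "user", "name": "Lin", "city": "Paris"})

profile = store.get("user::1")                            # key/value access
londoners = store.query(lambda d: d["city"] == "London")  # query access
```

Because both access paths read the same documents, there is no data movement or duplication between a key/value system and a document database, which is the consolidation a real multimodel engine provides at scale.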
Are multimodel capabilities needed in every application? No, of course not, but they apply in many cases, and having them in place helps future-proof an application. Organizations can now get the best of the monolithic and microservices approaches, supported by a single database.
Feature image via Pixabay.