
Temenos Benchmarks Show MongoDB Fit for Banking Work

Jul 26th, 2022 1:56pm

A benchmarking test by a financial services software provider has found that the MongoDB database can provide the power and speed needed for banking transactions, suggesting new possibilities of introducing the NoSQL database in the fast-paced financial sector.

Temenos set up a production-grade benchmark for a single MongoDB instance and found it was easily able to execute 74,000 transactions per second (TPS), all with consistent sub-millisecond response times, reported Tony Coleman, the chief technology officer of Temenos, in a session at MongoDB World last month in New York. Moreover, it achieved this stellar performance using far fewer servers than the traditional cluster-based systems offered by Oracle.

This benchmark could be an important milepost for banking, which has been evolving from being a very conservative, slow-moving industry to a fiercely competitive one. “Under pressure from demanding consumers and nimble new competitors, development cycles measured in years are no longer sufficient,” said Boris Bialek, MongoDB global head of industry solutions. “Temenos is at the forefront of making this happen.”

More Transactions, Fewer Servers

It was at a 2018 Google event, just after MongoDB announced support for ACID transactions in MongoDB 4.0, that Coleman walked up to the MongoDB stall and indicated this was a feature the banking software company could use.


The Geneva-based Temenos serves over 3,000 financial institutions in more than 125 countries across the world. The company offers software for conducting financial transactions, either as a service or as a software package.

The company had found their customer base changing with the times, Coleman said, speaking in a follow-up interview with The New Stack.

Twenty years ago, banks were set on using Oracle or IBM WebSphere, but more recently, financial institutions have been looking to Temenos for advice on building stacks that could make them more competitive.

At the heart of all banking are ACID transactions, which guarantee that a transaction either completes in full and is durably committed to the database, or has no effect at all. So the news of ACID support introduced MongoDB to the world of high finance.
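The all-or-nothing guarantee matters because a bank transfer is two writes that must succeed or fail together. A minimal, in-memory sketch of that atomicity property (plain Python dictionaries standing in for accounts; a real deployment would use MongoDB sessions and transactions rather than this hand-rolled rollback):

```python
# Sketch of ACID atomicity: a transfer is two writes that commit together
# or not at all. The snapshot/rollback here stands in for what a real
# transactional database does internally.

def transfer(accounts: dict, src: str, dst: str, amount: int) -> bool:
    """Debit src and credit dst as a single unit of work."""
    snapshot = dict(accounts)          # state before the transaction
    try:
        if accounts[src] < amount:
            raise ValueError("insufficient funds")
        accounts[src] -= amount
        accounts[dst] += amount
        return True                    # commit
    except (KeyError, ValueError):
        accounts.clear()
        accounts.update(snapshot)      # roll back: no partial debit survives
        return False

accounts = {"alice": 100, "bob": 50}
assert transfer(accounts, "alice", "bob", 30)
assert accounts == {"alice": 70, "bob": 80}
assert not transfer(accounts, "alice", "bob", 500)   # fails atomically
assert accounts == {"alice": 70, "bob": 80}          # state unchanged
```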

Coleman knew that MongoDB had other advantages that would appeal to this community, including the ability to run across multiple clouds, to be able to run on-premises if need be, or to be able to run in an active-active configuration across multiple zones.

So Temenos put Mongo to the test.

Engineering Byte
One competitive advantage Temenos enjoys, in the view of CTO Tony Coleman, has been a declarative approach to developing new features. Rather than forking the code for each client, and managing the subsequent sprawl of code, company developers add the new features with the ability to turn them on or off through configuration toggles.
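The toggle approach Coleman describes can be sketched in a few lines: one shared code path for every client, with behavior switched by configuration rather than by forking the code. The names here (`FEATURES`, `instant_payments`) are illustrative, not Temenos' actual configuration:

```python
# Hypothetical sketch of configuration-driven feature toggles: a single
# codebase whose per-client behavior is switched on or off by config.

FEATURES = {
    "bank_a": {"instant_payments": True},
    "bank_b": {"instant_payments": False},
}

def enabled(client: str, feature: str) -> bool:
    return FEATURES.get(client, {}).get(feature, False)

def settle(client: str) -> str:
    # Same code path for every client; the toggle picks the behavior.
    if enabled(client, "instant_payments"):
        return "settled instantly"
    return "queued for batch settlement"

assert settle("bank_a") == "settled instantly"
assert settle("bank_b") == "queued for batch settlement"
```

The payoff is exactly the one the sidebar names: no per-client forks to maintain, just one codebase plus configuration.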

In a mock scenario, Temenos’ research team created 100 million customers with 200 million accounts, and pushed through 24,000 transactions and 74,000 MongoDB queries a second. Even with that considerable workload, the MongoDB database, running on an Amazon Web Services’ M80 instance, consistently kept response times under a millisecond, which is “exceptionally consistent,” Coleman said. (This translated into an overall response time for the end user’s app of around 18 milliseconds, effectively imperceptible to the user.)

Coleman compared this test with an earlier one the company did in 2015, using all Oracle gear. He admits this is not a fair comparison, given the older generation of hardware. Still, the comparison is eye-opening. In that setup, an Oracle 32-core cluster was able to push out 7,200 transactions per second.

In other words, a single MongoDB instance was able to do the work of 10 Oracle 32-core clusters, using much less power.

Come Correct with the Data Model

One key aspect in achieving such a blazing throughput is getting the correct data model. “It’s like security. You can’t afford to get it wrong,” Coleman said.

“Implementing a good data model is a great start. Implementing a great database technology that uses that data model correctly, is vital.”

This is one area in which standard SQL database systems are at a disadvantage to NoSQL varieties such as MongoDB.

“If you put indexes on a relational table, it goes slower when you insert because it has to do more work. I mean, that’s, that’s just physics,” Coleman said. “So if you then want to have a very flexible query model, and you have to have 10 indexes, then every insert, you are then paying the overhead for your queries.”

In contrast, “if you target high-value queries — the ones that are customer-facing, the ones that get done all the time — it’s really worth investing in a specific query-optimized model because it’s massively more efficient,” Coleman said.
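Coleman's trade-off can be made concrete: every secondary index is an extra structure that each insert must also update, so write cost grows with the index count. A toy illustration, with Python dicts standing in for the index structures a database would maintain:

```python
# Toy model of index write overhead: each insert updates the row store
# plus one entry per secondary index, so more indexes mean more writes.

from collections import defaultdict

class Table:
    def __init__(self, indexed_fields):
        self.rows = []
        self.indexes = {f: defaultdict(list) for f in indexed_fields}

    def insert(self, row: dict) -> int:
        """Insert a row and return how many structures were written."""
        self.rows.append(row)
        writes = 1                          # the row itself
        for field, idx in self.indexes.items():
            idx[row[field]].append(row)     # one extra write per index
            writes += 1
        return writes

narrow = Table(indexed_fields=["id"])
wide = Table(indexed_fields=["id", "name", "branch", "currency"])
row = {"id": 1, "name": "alice", "branch": "geneva", "currency": "CHF"}
assert narrow.insert(dict(row)) == 2   # row + 1 index
assert wide.insert(dict(row)) == 5     # row + 4 indexes
```

A flexible query model with ten indexes pays this tax on every insert, which is why Coleman argues for optimizing only the high-value queries.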

MongoDB’s document-based database system “is a really great fit” for banking, Coleman said. A SQL database may capture a series of transactions as a set of rows, one per transaction. In contrast, a document-oriented model offered by the likes of MongoDB could capture all the transactions, say for a single day, as an array, which is easier and quicker to query.

“If I’ve got 30 days’ worth of entries in my cash flow data, when it comes to the end of the month, if I want to see my cash flow, I’m either querying 30 rows and putting them together or I’m reading one,” Coleman said.
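The two shapes Coleman contrasts can be sketched side by side: a row-per-transaction layout that must fetch and combine 30 records, versus one document that embeds the whole month as an array and travels in a single read. Field names here are illustrative, not Temenos' actual schema:

```python
# Relational shape: 30 separate rows to fetch and combine at month end.
rows = [{"account": "acct-1", "day": d, "amount": 10} for d in range(1, 31)]
monthly_from_rows = sum(r["amount"] for r in rows if r["account"] == "acct-1")

# Document shape: the whole month is embedded in one document, so the
# month-end cash flow comes back in a single read.
doc = {
    "account": "acct-1",
    "entries": [{"day": d, "amount": 10} for d in range(1, 31)],
}
monthly_from_doc = sum(e["amount"] for e in doc["entries"])

# Same answer either way; the document model just reads one record
# instead of thirty.
assert monthly_from_rows == monthly_from_doc == 300
```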

In Beta

MongoDB currently supports the Temenos Infinity product, while support for the Temenos Transact platform, underpinned by MongoDB Atlas, will be available in the near future.

Currently, Temenos is on the cusp of using MongoDB in production, now that the testing has been largely completed. “The goal is to take MongoDB into the banking cloud,” Coleman said. The company has found that MongoDB can replicate standard SQL workloads with 22% fewer resources.

For now, Temenos is storing at least some financial data in a PostgreSQL database as Binary JSON (BSON), where it can be easily migrated over to MongoDB.

The company is moving towards an event-driven, loosely-coupled architecture, with data flowing through event streams, and an event store to keep the canonical versions of all the data. The company does not use event sourcing, which is still too complex for developers; instead, the state changes of the objects are stored, along with the events themselves, Coleman added.
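The middle ground Coleman describes — events and resulting state persisted side by side, rather than replaying an event log as full event sourcing would — might look like this in miniature (structure and names are illustrative, not Temenos' design):

```python
# Sketch of storing each event together with the object's resulting
# state, so readers get the canonical state directly instead of
# replaying the event history (as full event sourcing would require).

event_store = []

def apply_event(state: dict, event: dict) -> dict:
    """Apply an event and persist both the event and the new state."""
    new_state = {**state, **event["change"]}
    event_store.append({"event": event, "state": new_state})
    return new_state

account = {"balance": 100}
account = apply_event(account, {"type": "deposit", "change": {"balance": 150}})
account = apply_event(account, {"type": "withdraw", "change": {"balance": 120}})

# The latest canonical state is read directly; no replay needed.
assert event_store[-1]["state"]["balance"] == 120
assert len(event_store) == 2   # full history still available downstream
```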

“It makes so much sense on so many levels to move to this event-driven architecture,” Coleman said.
