
Does Your Database Really Need to Move to the Cloud?

When it comes to application modernization, sometimes caching on-prem beats throwing cash into the cloud.
Oct 24th, 2022 10:39am
Image by engin akyurt on Unsplash.

Is the phrase “application modernization” simply shorthand for application refactoring and re-platforming to the cloud and dumping whatever legacy database you’re currently using?

It’s often presented as the only way for companies with legacy databases — which is, arguably, most companies — to meet users’ demands for real-time, or near real-time, experiences and cope with the massive amounts of data this assumes. Those experiences can be the instant provision of an Uber, an immediate credit decision, or a recommendation that takes the pain out of choosing a partner’s present.

The same dynamics apply when it comes to internal data and applications, with users expecting the real-time experiences they enjoy as consumers when it comes to checking inventory levels or getting an executive-level overview of key company performance metrics.

There’s a simple equation underlying this, Sanjeev Mohan, founder and principal at data and analytics research firm SanjMo, told The New Stack. “The faster the performance of a database, the higher the customer loyalty and engagement,” Mohan said. “It’s not just the performance of queries. It’s also how fast can you write” — or load data into the database, or insert and update.

But as legacy databases are hit with ever more data, he added, “your tables become very large — then performance may start degrading.”

Tough Times, Tough Choices

Few organizations are immune to the pressures of keeping up with real-time demands. But not everyone can meet them by executing a seamless transition to a new cloud native database.

This may simply be because an organization’s data is not going anywhere; for example, regulatory obligations could mean data simply has to stay on-premises for the foreseeable future.

It may be that an organization has indeed looked at moving to the cloud but balked at the logistics of migration. Or it’s taken a hard look at the numbers and realized that they don’t quite add up, particularly if operations need to scale up considerably.

This is always a challenge, but the deluge of economic bad news is undoubtedly leaking into the cloud world, meaning costs are rising after years when prices per unit of performance only seemed to go down.

“I think the world will stay in this sort of hybrid state for a lot longer than people believed it would two or three years ago, when money was incredibly cheap,” Ryan Powers, head of product marketing for Redis Enterprise Cloud, told The New Stack.

So for many IT leaders, said Powers, the real question is, “How to modernize with as little refactoring and re-architecting as possible?”

After all, he pointed out, while “an increasingly significant portion of your applications needs to be in real time, the reality is a good portion of them do not.”

When That Legacy Database Is Still a Keeper

This is where a caching layer, deployed alongside the legacy database, can come into play. As Powers put it, “You need something that’s a flexible data layer that can be used as a buffer alongside whatever databases you’re using.”

Redis Enterprise, for example, can be used for cache pre-fetching, where the application reads data held in memory, rather than reading directly from disk, speeding up queries, particularly on read-heavy workloads. It also offers write-behind caching, so that data processed by the application is written to the cache layer in real time, with the core system of record database updated asynchronously, after the fact.
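The read path described here is essentially the cache-aside pattern. A minimal sketch of the idea in Python, using plain dicts as stand-ins for the in-memory cache and the disk-backed system of record (the names and TTL here are illustrative, not Redis Enterprise APIs):

```python
import time

# Stand-ins for the real stores: in a deployment these would be a Redis
# client and a connection to the legacy database (MySQL, Oracle, etc.).
cache = {}                                        # in-memory layer: fast reads
system_of_record = {"user:42": {"name": "Ada"}}   # disk-backed database

CACHE_TTL = 60  # seconds a cached entry stays valid

def read_through(key):
    """Cache-aside read: serve from memory, fall back to the database."""
    entry = cache.get(key)
    if entry and entry["expires"] > time.time():
        return entry["value"]                 # cache hit: no disk access
    value = system_of_record.get(key)         # cache miss: slow path
    if value is not None:
        cache[key] = {"value": value, "expires": time.time() + CACHE_TTL}
    return value

print(read_through("user:42"))  # first call hits the database
print(read_through("user:42"))  # second call is served from memory
```

Pre-fetching simply moves the slow path off the user's request: the cache is warmed ahead of time so the first read is already a hit.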

Both approaches hold the promise of massively reducing latency and improving user experiences. Caching can also be applied to generate secondary indexes that speed up queries on secondary keys, something that is often time-consuming and complex with legacy databases such as MySQL or Oracle.
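The secondary-index idea can be sketched simply: keep a reverse mapping in the cache layer from the secondary key to the primary keys, so a lookup avoids scanning every row. A hedged illustration with plain dicts standing in for the cache (in Redis this might be a set per secondary-key value; the data below is invented):

```python
from collections import defaultdict

# Primary store keyed by order ID; querying by customer would otherwise
# require a full scan -- the "secondary key" problem described above.
orders = {
    "order:1": {"customer": "acme", "total": 120},
    "order:2": {"customer": "acme", "total": 75},
    "order:3": {"customer": "globex", "total": 310},
}

# Secondary index held in the cache layer: customer -> set of order IDs.
customer_index = defaultdict(set)
for order_id, order in orders.items():
    customer_index[order["customer"]].add(order_id)

def orders_for(customer):
    """Look up by secondary key via the index instead of scanning."""
    return [orders[oid] for oid in sorted(customer_index.get(customer, ()))]

print(orders_for("acme"))
```

The index must be maintained on every write, which is exactly the kind of bookkeeping a cache layer can absorb instead of the legacy database.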

When it comes to global-scale, multicloud and hybrid use cases, it’s important to consider how you ensure data remains consistent across regions while keeping applications running as quickly as possible, Powers added. Redis Enterprise offers Active-Active Geo Distribution, allowing local-speed reads and writes while replicating consistent data across regions with less than a millisecond of latency.
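Active-Active replication of this kind resolves concurrent writes with conflict-free replicated data types (CRDTs). One of the simplest is a grow-only counter: each region increments only its own slot, and merging takes the per-region maximum, so replicas converge regardless of the order in which updates arrive. A toy sketch of the principle (not Redis Enterprise internals):

```python
# G-Counter CRDT: each region increments its own slot; merge takes the
# per-region maximum, so merging is commutative, associative and idempotent.

def increment(counter, region, amount=1):
    counter[region] = counter.get(region, 0) + amount

def merge(a, b):
    """Combine two replica states without losing either side's writes."""
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in set(a) | set(b)}

def value(counter):
    return sum(counter.values())

# Two regions take writes locally, then exchange state.
us, eu = {}, {}
increment(us, "us-east")
increment(us, "us-east")
increment(eu, "eu-west")

converged = merge(us, eu)
print(value(converged))  # 3: both regions' writes survive the merge
```

Because merge order doesn't matter, each region can serve local reads and writes and reconcile in the background, which is what makes the sub-millisecond local latency possible.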

So, even if the long-term goal is full application modernization, Powers said, “There are places where you can still use Oracle or MySQL, and patch us alongside, to fix it in the interim, while you’re making these transitions.”

In these cases, he argued, “The modernization is around speed, it’s around scale, it’s around total cost of ownership.”

So, the question of how to modernize your database becomes far more nuanced than whether you can afford the time and money to embark on a complete refactoring and re-platforming project.

That said, there is a financial aspect to this approach, beyond the raw cost of the caching layer, Mohan pointed out: “I can maintain my investment in my legacy system, use that as a system of record, but then I can have much faster retrieval.”

Raising the Limit, Without Going Anywhere

Maintaining a legacy database alongside a caching layer helps keep a lid on future license and infrastructure costs. Once you’ve reached the limits of your current installation, said Mohan, you face the pain of buying more licenses from your legacy vendor, and beefing up your hardware to match. “But with cache, I can offload some of the workload to an in-memory database.”

So when do you know you’re ready for this modernization approach? First, you should consider whether there are use cases you’re struggling to support with existing relational databases that are just not fast enough, said John Noonan, senior product marketing manager at Redis.

You’ll know that’s the case, Noonan told The New Stack, “if you have users that are waiting, whether they’re internal or they’re customers.”

He cited the example of a financial sector customer that was having trouble implementing real-time payment processing, because of the number of Oracle tables that needed to be updated when processing transactions. Redis Enterprise was inserted into the payment process to speed the transactions through, with the data subsequently being piped into the system of record, after the transaction was complete.
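The payment flow described here is the write-behind pattern from earlier: the fast in-memory layer accepts and acknowledges the transaction, and a background process later drains the queued writes into the system of record. A minimal sketch, with dicts and a deque standing in for Redis and the Oracle tables (transaction IDs and fields are invented):

```python
from collections import deque

# Hot path writes land in memory; a background flush drains the queue
# into the slow system of record after the fact.
cache = {}
write_queue = deque()
system_of_record = {}  # the legacy database, updated asynchronously

def process_payment(txn_id, amount):
    """Hot path: record the transaction in memory and queue the flush."""
    cache[txn_id] = {"amount": amount, "status": "accepted"}
    write_queue.append(txn_id)
    return cache[txn_id]          # caller gets an immediate acknowledgement

def flush_writes():
    """Background path: drain queued writes into the system of record."""
    while write_queue:
        txn_id = write_queue.popleft()
        system_of_record[txn_id] = cache[txn_id]

process_payment("txn:1001", 49.99)
process_payment("txn:1002", 12.50)
flush_writes()  # later: the record database catches up
print(system_of_record["txn:1001"]["amount"])
```

The trade-off is a window in which the system of record lags the cache, which is acceptable here precisely because the user-facing acknowledgement no longer waits on those Oracle table updates.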

But it’s also a question of scale. Noonan recalled another customer example, of a company that operated a fantasy sports platform in India, which would see huge usage spikes when the team lineups for cricket matches were posted 30 minutes before a match. This led to massive slowdowns, as its SQL-based system just couldn’t scale in response.

So it implemented the lineup posting function in memory with Redis Enterprise to break that bottleneck, with the data translated back into its relational database once each match had ended.

These were precisely the sorts of problems New Zealand e-commerce company Blackpepper saw with what its CEO Alain Russell described as its “incredibly write-heavy setup” using RDS, Elasticsearch, Redis and Dynamo through Amazon Web Services (AWS).

“We were running into scaling issues under heavy loads and hitting incredibly high costs to scale the RDS instances to handle loads,” Russell told The New Stack. The firm also ran into problems keeping data in sync.

Testing showed that using Redis Enterprise as a primary data store could deliver speed improvements of 20 to 30 times on some of the firm’s common jobs. It also solved the data syncing problem. “Moving this to Redis Enterprise has simplified our architecture, simplified the way we debug and given us a single data store to look at,” Russell said.

Ultimately, many organizations may want to get to the cloud, and even ditch their legacy data infrastructure completely, said Mohan. But re-architecting and modernizing will always be easier said than done. Caching gives companies an opportunity to stretch their legacy platform, at least in the medium term.

“So your modernization strategy is: step one, lift and shift to the cloud. Step two, implement a caching solution and become a little bit more cloud-friendly,” said Mohan. “Step three could be, replatform your legacy environment to a cloud native offering.”

The beauty of this approach is, perhaps, that you can simply take step two and still benefit.
