
The New Stack Context: In-Memory Computing Meets Cloud Native Computing

Hazelcast's Senior Solution Architect Mike Yawn talks about the potential of in-memory computing to supercharge microservices and cloud native workloads.
Aug 28th, 2020 3:00pm

Welcome to The New Stack Context, a podcast where we discuss the latest news and perspectives in the world of cloud native computing. For this week’s episode, we spoke with Mike Yawn, a senior solution architect at Hazelcast, about the potential of in-memory computing to supercharge microservices and cloud native workloads.

The New Stack editorial and marketing director Libby Clark hosted this episode, alongside TNS senior editor Richard MacManus, and TNS managing editor Joab Jackson.


Episode 131: In-Memory Computing Meets Cloud Native Computing

Yawn recently contributed a post to TNS explaining how in-memory technologies could make microservices run more smoothly. Hazelcast offers an in-memory data grid, Hazelcast IMDG, along with stream processing software Hazelcast Jet. We wanted to know more about how in-memory computing could be used with microservices. While an in-memory data grid offers caching much like key-value databases such as Redis, it also offers additional computing capacity that can process that data on the fly, Yawn explained.
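The distinction Yawn draws — a cache that can also run computation where the data lives — can be sketched with a toy "entry processor" pattern. The class and method names below are illustrative stand-ins, not Hazelcast's actual API:

```python
# Toy sketch of the "compute on the data" idea behind in-memory data grids.
# GridMap and execute_on_key are hypothetical names, not Hazelcast's API.

class GridMap:
    """A single-node stand-in for a distributed in-memory map."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def execute_on_key(self, key, processor):
        # Instead of fetching the value, mutating it client-side, and
        # writing it back (two network hops), ship the function to the
        # node that owns the entry and apply it in place.
        new_value = processor(self._store.get(key))
        self._store[key] = new_value
        return new_value


prices = GridMap()
prices.put("widget", 100)
# Apply a 10% discount where the data lives; no read-modify-write round trip.
prices.execute_on_key("widget", lambda v: v * 0.9)
print(prices.get("widget"))  # 90.0
```

A plain key-value cache stops at `put` and `get`; the grid's extra compute capacity is what makes something like `execute_on_key` possible at scale.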

“The standard definition of a memory data grid is that you can pool together the RAM of a bunch of different systems so you can cluster multiple nodes together. If you have four systems with 8GB [each], then you can effectively have a 32GB system and you can go up to dozens or hundreds of nodes, depending on your memory usage,” he said.

“It’s not just about pooling the RAM. You get to use the additional processors of those systems and the additional network bandwidth of the different interface cards. So it really does let you scale out by adding lots of inexpensive systems rather than scale up to very expensive supercomputers,” said Yawn.
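The pooling Yawn describes can be sketched as key partitioning across nodes: each key is deterministically mapped to one node, so adding nodes adds both memory and the processors and network cards behind it. This is a minimal illustration with hypothetical names, not Hazelcast's implementation (real grids use partition tables and rebalancing rather than a bare hash):

```python
# Minimal sketch of pooling RAM across nodes by partitioning keys.
# Cluster is a hypothetical name; not Hazelcast's API.
import hashlib


class Cluster:
    def __init__(self, node_count, ram_gb_per_node):
        # Each dict stands in for one node's local RAM.
        self.nodes = [dict() for _ in range(node_count)]
        self.ram_gb_per_node = ram_gb_per_node

    def _owner(self, key):
        # Deterministic key -> node mapping.
        digest = hashlib.md5(key.encode()).hexdigest()
        return int(digest, 16) % len(self.nodes)

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

    def total_ram_gb(self):
        # The pooled capacity Yawn describes: nodes x RAM per node.
        return len(self.nodes) * self.ram_gb_per_node


cluster = Cluster(node_count=4, ram_gb_per_node=8)
print(cluster.total_ram_gb())  # 32, matching the four-systems-of-8GB example
cluster.put("account:42", {"balance": 100})
print(cluster.get("account:42"))  # {'balance': 100}
```

Because each key has exactly one owning node, reads and writes for different keys naturally spread across all the machines' CPUs and network interfaces, which is the scale-out effect Yawn points to.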

He also spoke about the growing Kubernetes deployments among the company’s users, who tend to be larger enterprises, such as banks.

“We support Kubernetes because our users tell us it’s important to them,” Yawn told us.

Later in the podcast, we discussed some of the other posts of the day.

TNS owner Insight Partners is an investor in The New Stack.