
Google Touts Web-Based Machine Learning with TensorFlow.js

TensorFlow.js, a JavaScript library for machine learning, is rapidly growing in popularity, Google's Jason Mayes told us in an interview.
Feb 7th, 2023 8:20am

TensorFlow.js is a JavaScript library for training and deploying Machine Learning (ML) models in the browser and on Node.js. It was launched by Google nearly five years ago, but its popularity has increased in recent years as ML works its way into everyday programming — part of the generative AI trend currently sweeping the technology industry.
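To give a sense of what "ML in the browser" looks like in practice, here is a minimal sketch that loads the pretrained MobileNet image classifier via the TensorFlow.js CDN builds and classifies an image entirely on-device. The element id and image file are illustrative assumptions, not from the article:

```html
<img id="photo" src="cat.jpg" />

<!-- Load TensorFlow.js and the pretrained MobileNet model from a CDN -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet"></script>
<script>
  // Classification runs entirely in the browser; no image data
  // leaves the user's device.
  mobilenet.load().then(async (model) => {
    const predictions = await model.classify(document.getElementById('photo'));
    console.log(predictions); // array of {className, probability} objects
  });
</script>
```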

To find out more about TensorFlow.js and how web developers use it in their projects, I spoke to Jason Mayes, who leads the Developer Relations team for Web ML at Google. “Web ML” is a broader term that basically means using ML inside the browser (or on Node.js). But the main part of Mayes’ remit is the TensorFlow.js team, so I began by asking him about the main use cases for ML on the web.

Why Do ML over the Web?

First off, he mentioned privacy. One common use case is for processing sensor data in ML workloads — such as data from a webcam or microphone. Using TensorFlow.js, Mayes said, “none of that data goes to the cloud […] it all happens on-device, in-browser, in JavaScript.” For this reason, TensorFlow.js is being used by companies doing remote healthcare, he said.

Another privacy use case is human-computer interaction. “With some of our models, we can do body pose estimation, or body segmentation, face keypoint estimation, all that kind of stuff,” Mayes said.

Lower latency is another reason to do ML in the browser, according to Mayes. “Some of these models can run over 120 frames per second in the browser, on an NVIDIA 1070 let’s say,” he said. “So that’s kind of [an] old generation graphics card and [yet it’s] still pushing some decent performance there.”

Cost was his third reason, “because you’re not having to hire and run expensive GPUs and CPUs in the cloud and keep them running 24/7 to provide a service.”

“The fourth reason people come to us is that it’s in JavaScript,” he said, noting the obvious popularity of that language. “Previously, TensorFlow was aimed at academics and researchers…this kind of stuff over in Python-land. Which is great, nothing wrong with that! But I think embracing the JS side of things could open machine learning up to much more people than ever before — and a lot more creatives, artists and musicians are starting to use us in JS-land.”

How JS ML Compares to Python ML

However, this raises the question: how does TensorFlow.js compare to using TensorFlow in its more familiar Python environment?

“All the benefits I just gave you are pretty much impossible to achieve server-side,” Mayes replied. “And even if you don’t want to go on client-side, we can run in Node.js on the backend, on the server-side. And the reason you choose Node.js over Python is that even though Node and Python are just wrappers around the original TensorFlow that’s C++, [the] pre- and post-processing parts of the model execution can be accelerated by the just-in-time compiler of JavaScript.”
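To make the pre- and post-processing point concrete, here is a small self-contained sketch (the function name and scaling choice are illustrative, not from TensorFlow.js) of the kind of tight numeric loop that a JavaScript JIT such as V8's can compile to fast machine code in Node.js:

```javascript
// Illustrative pre-processing step: scale raw 0-255 pixel values to
// floats in [0, 1] before handing them to a model. Hot loops like this
// are where a JIT-compiled runtime can outpace interpreted Python.
function normalizePixels(pixels) {
  const out = new Float32Array(pixels.length);
  for (let i = 0; i < pixels.length; i++) {
    out[i] = pixels[i] / 255;
  }
  return out;
}
```

In Python, a loop like this would typically be pushed into NumPy to avoid interpreter overhead; in Node.js, the JIT handles it directly.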

He mentioned that HuggingFace, one of the leading NLP service companies, runs its ML workloads in TensorFlow.js for the speed benefits.

“Python is actually not very efficient at running,” he continued. “It’s good for academia, for trying things out — it’s got a lot of libraries to use out of the box. But I think we’ll have the same thing replicated in our own community going forward, and you’ll see the performance benefits.”

As another example of this in action, he pointed to LinkedIn. “If you go to a LinkedIn web page on your mobile phone, that’s actually delivered by a TensorFlow.js model on the backend running in Node,” he said. This resulted in a “15% performance gain over their Python equivalent model, which means they save millions of dollars a month by just doing that.”

More Speed: WebGL and WebAssembly

As with many other leading web applications, TensorFlow.js is making use of the latest hardware acceleration technologies. WebGL and WebAssembly are both in production, while WebGPU is in testing.

“We’ve got WebGL to do graphics card acceleration,” he explained. “Essentially, using textures and shaders to do mathematical operations — which is a bit of a hack, but it works. And then we’ve also got WebAssembly to go faster on the CPU. We’ve also got the new emerging WebGPU standard, which is currently behind a flag in Chrome Canary and other browsers, but eventually, it will become the thing in browsers to use. And I think we’re seeing around 2x-plus performance [gain] in WebGPU — bear in mind, with WebGL right now we’re getting hundreds of frames per second already.”
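These backends are selectable at runtime. A minimal configuration sketch, assuming the `@tensorflow/tfjs` and `@tensorflow/tfjs-backend-wasm` npm packages are installed:

```javascript
import * as tf from '@tensorflow/tfjs';
// Registers the 'wasm' backend so it can be selected below.
import '@tensorflow/tfjs-backend-wasm';

async function pickBackend() {
  // Prefer GPU acceleration via WebGL; fall back to WebAssembly on CPU.
  const ok = await tf.setBackend('webgl');
  if (!ok) {
    await tf.setBackend('wasm');
  }
  await tf.ready();
  console.log('TensorFlow.js backend:', tf.getBackend());
}
```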

More about Web ML

TensorFlow.js is clearly the main web-based ML tool at Google, but I asked Mayes what else comes under the umbrella term “Web ML.”

“So Google’s obviously heavily invested in ML and from my perspective, working on this Web ML side, it offers a unique selling point — if you will — to our [ML] ecosystem… currently, there’s no PyTorch.js, for example,” he said, referencing Meta’s ML platform.

Google offers what Mayes calls “a path to the web” for machine learning, “which researchers and others can embrace, to get those benefits that we spoke about before.”

He also works with “other teams [at Google] that might touch on web-based deployments of machine learning, like the MediaPipe models that are also able to run in the web browser.” He’s referencing an open source project called MediaPipe, for using ML in “live and streaming media.”

Future Growth

TensorFlow.js has been growing “3x year-on-year,” according to Mayes. It’s only going to get bigger as ML and AI apps continue to ramp up in popularity. Indeed, just this week Google itself released a ChatGPT competitor called Bard. I asked Mayes how big he thinks web-based ML tools like TensorFlow.js will get.

“I think Web ML is the real Web3,” he said, echoing a catchphrase he has been using on social media. “I’m not saying crypto is bad or anything, but I think […] it’s like a teenager trying to find itself right now. And I think Web ML can have an impact on industries [and] companies right now.”

“I believe that if we continue on this 3x path of growth, we could be the most widely used form of ML in the future within the TensorFlow ecosystem. That’s my personal belief. But if we continue this growth upward, I don’t see why not — because there are a lot more JS developers out there.”

To keep that momentum going, Mayes said that Google is “always looking for interesting models we can port from Google research to the web, to make it easier to use.”

If you’re a developer interested in learning more about web-based ML, check out this series of tutorials featuring Jason Mayes.
