
Follow the AI/ML Leaders: De-Risk Your Projects

In the world of AI and machine learning, the choice of a database can significantly affect the success of your project.
Jun 23rd, 2023 7:02am by
AI-generated feature image.

The recent artificial intelligence tsunami has created a lot of pressure to move fast just to keep up. Some might be inclined to sacrifice stability and quality to get rolling quickly with the most cutting-edge tools. Happily, it doesn’t have to be that way.

In the world of AI and machine learning (AI/ML), the choice of a database can significantly affect the success of your project. One of the key factors to consider is the risk associated with the scalability and reliability of the database system. Apache Cassandra, a highly scalable and high-performance distributed database, has proven to be an industry leader in this regard. It offers features that significantly lower the risk associated with AI/ML projects, making it a preferred choice for many organizations.

Large-scale users of Cassandra, like Uber and Apple, exemplify how this database system can effectively lower the risk in AI/ML projects. Uber uses Cassandra for real-time data processing and holds its feature store directly in Cassandra for serving predictions. The ability to start small and scale up as needed, coupled with high reliability, enables Uber to manage vast amounts of data without the risk of system failure or performance degradation. Many newer systems built for AI workloads are still trying to bolt scalability onto a particular feature, but organizations doing AI at scale have relied on Cassandra for years.

Scalability and Performance

AI/ML applications often deal with vast amounts of data and require high-speed processing. Capacity planning is a difficult task. The best plan? Avoid needing one. Instead, go with a database that can scale quickly when demand arrives and never leaves you paying for overprovisioned capacity.

Cassandra’s core ability to scale horizontally still sets it apart from many other databases. As your data grows, you can add more nodes to the Cassandra cluster to handle increased traffic and data. It’s just that simple. This feature is particularly crucial for AI/ML applications, which deal with growing data sets.
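
The mechanics behind "just add more nodes" can be illustrated with a toy token ring. This is a heavy simplification of Cassandra's actual partitioner (which uses Murmur3 hashing and virtual nodes); the node names, the MD5 stand-in hash and the 32-bit ring size are all invented for illustration:

```python
import bisect
import hashlib

def token(key: str) -> int:
    """Hash a partition key onto a 0..2^32 token ring (toy stand-in for Murmur3)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

class TokenRing:
    def __init__(self, nodes):
        # Each node owns the ring segment that ends at its token.
        self.ring = sorted((token(n), n) for n in nodes)

    def add_node(self, node):
        # Scaling out: inserting a node only takes ownership of part of
        # one neighboring segment; everything else stays where it is.
        bisect.insort(self.ring, (token(node), node))

    def owner(self, key: str) -> str:
        tokens = [t for t, _ in self.ring]
        i = bisect.bisect(tokens, token(key)) % len(self.ring)
        return self.ring[i][1]

ring = TokenRing(["node-a", "node-b", "node-c"])
before = {k: ring.owner(k) for k in ("user:1", "user:2", "user:3")}
ring.add_node("node-d")  # scale out
after = {k: ring.owner(k) for k in before}
```

The property worth noticing: after `add_node`, any key that changes owner moves only to the new node. That locality of data movement is what makes incremental horizontal scaling cheap.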

Uber is a hyperscaler, and each new product it introduces pushes its scale requirements further. As one of the largest users of Cassandra, Uber leverages this scalability to handle its ever-increasing and changing data needs. Cassandra’s high write and read throughput makes it an excellent choice for the real-time data processing required in Uber’s AI and ML applications.

Real-Time Processing

Real-time data processing is a critical requirement for any modern application. Milliseconds count when users are looking for the best experience. AI/ML applications often need to analyze and respond to data as it arrives, whether it’s for real-time recommendations, predictive analytics or dynamic pricing models. Cassandra, with its high write and read throughput, is well-suited for such real-time processing requirements. Cassandra’s architecture enables it to handle high volumes of data across many commodity servers, providing high availability with no single point of failure. This means that data can be written to and read from the database almost instantly, making it an excellent choice for applications that require real-time responses.
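
The source of that write speed is Cassandra's log-structured write path: a sequential commit-log append for durability, an in-memory memtable update, and a later flush to immutable sorted files (SSTables). A toy sketch of the idea, with real-world concerns like compaction, bloom filters and tombstones omitted and all names invented for illustration:

```python
class ToyWritePath:
    """Minimal model of a log-structured write path:
    append to a commit log, update a memtable, flush to immutable SSTables."""

    def __init__(self, flush_threshold=3):
        self.commit_log = []       # sequential append: no disk seeks on write
        self.memtable = {}         # in-memory, mutable
        self.sstables = []         # immutable, sorted snapshots
        self.flush_threshold = flush_threshold

    def write(self, key, value):
        self.commit_log.append((key, value))  # durability first
        self.memtable[key] = value            # then memory; the write is done
        if len(self.memtable) >= self.flush_threshold:
            self.sstables.append(sorted(self.memtable.items()))
            self.memtable = {}

    def read(self, key):
        # Check the memtable first, then SSTables from newest to oldest.
        if key in self.memtable:
            return self.memtable[key]
        for table in reversed(self.sstables):
            for k, v in table:
                if k == key:
                    return v
        return None

db = ToyWritePath()
for i in range(4):
    db.write(f"order:{i}", f"payload-{i}")
```

Because a write is just an append plus a dictionary update, write latency stays flat regardless of how much data already exists, which is the property real-time pipelines depend on.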

Uber Eats is a practical example. The application needs to process data in real time to provide you with food recommendations and estimated delivery times. This real-time processing is made possible by Cassandra’s high performance. Not only that, default replication makes infrastructure failures transparent to end users, which keeps them happy and using the application. The constant influx of changing data and wild swings in usage are where Cassandra shines. Organizations that use Cassandra spend more time worrying about the right application features and far less about the database that supports them.

Going Global with Data

With Cassandra, data is automatically replicated to multiple nodes, and these replicas provide redundancy. If one node fails, the data can still be accessed from the replicas. This feature ensures that your AI/ML applications remain up and running, even in the face of hardware failures or network issues.
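
That failover behavior can be sketched in a few lines. The placement rule below mimics a SimpleStrategy-style layout, where a key's primary node and the next nodes clockwise each hold a copy; the node names and replication factor are hypothetical, and real Cassandra is additionally rack- and datacenter-aware:

```python
import hashlib

NODES = ["n1", "n2", "n3", "n4", "n5"]  # hypothetical five-node cluster
RF = 3                                  # replication factor: three copies

def replicas(key: str, nodes=NODES, rf=RF):
    """Toy replica placement: the primary node plus the next rf-1
    nodes clockwise around the cluster each hold a copy."""
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(rf)]

def read(key: str, down=frozenset()):
    """A read succeeds as long as at least one replica is still up."""
    live = [n for n in replicas(key) if n not in down]
    return live[0] if live else None

owners = replicas("user:42")
# One replica failing is transparent: the read is simply served elsewhere.
served_by = read("user:42", down={owners[0]})
```

With three copies of every key, losing a node (or even two) degrades nothing from the application's point of view; only losing all replicas at once makes data unreachable.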

But Cassandra’s distributed architecture not only contributes to its high fault tolerance, it also helps you stay close to your users. Some users almost take its default global data replication for granted.

Companies like Apple and Netflix talk about their globe-spanning active-active architectures so often that it no longer even seems unusual. Besides fault tolerance, the user-centric benefit of this capability is data locality. If you have users in North America, Asia and Europe, centralizing data in one location will inflict agonizing latencies on some subset of them. The solution is to replicate data into each region, giving everyone a short round trip to their data.
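
The latency arithmetic is easy to sketch. Every region, datacenter name and millisecond figure below is made up purely for illustration; the point is only the shape of the comparison between one central datacenter and a replica in each region:

```python
# Hypothetical round-trip latencies (ms) from each user region to each datacenter.
LATENCY_MS = {
    ("na", "us-east"): 20,  ("na", "eu-west"): 90,  ("na", "ap-south"): 180,
    ("eu", "us-east"): 90,  ("eu", "eu-west"): 15,  ("eu", "ap-south"): 140,
    ("ap", "us-east"): 180, ("ap", "eu-west"): 140, ("ap", "ap-south"): 25,
}
DATACENTERS = ["us-east", "eu-west", "ap-south"]
REGIONS = ["na", "eu", "ap"]

def nearest_dc(region):
    """Route each region to whichever datacenter answers fastest."""
    return min(DATACENTERS, key=lambda dc: LATENCY_MS[(region, dc)])

# Centralized: every region reads from a single datacenter.
centralized = {r: LATENCY_MS[(r, "us-east")] for r in REGIONS}
# Replicated: every region reads its local copy.
replicated = {r: LATENCY_MS[(r, nearest_dc(r))] for r in REGIONS}
```

Whatever the real numbers are, the replicated column can never be worse than the centralized one for any region, and for the regions farthest from the central datacenter the difference is the gap between a snappy app and an unusable one.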

De-Risking Your Project

Choosing the right technology stack is a significant part of de-risking any project. With Cassandra, you can start small and scale up as needed, providing a cost-effective solution for your project. Cassandra has proven its reliability over time, with some companies running their Cassandra clusters for over 10 years without turning them off. Newer technology with features developed specifically for AI is being added, but some of the heaviest AI/ML workloads have been managed quietly and consistently with Cassandra for quite some time. That said, it’s becoming an even more relevant choice for AI/ML workloads today.

Cassandra’s scalability, performance, real-time processing capabilities and longevity have made it an excellent choice for AI/ML applications. As AI applications continue to evolve and become more integral to business operations, the need for robust, reliable and efficient databases like Cassandra will only grow. By choosing Cassandra, you’re not just selecting a database; you’re future-proofing your AI/ML applications.

TNS owner Insight Partners is an investor in: Pragma, Real.