Timescale’s Dynamic PostgreSQL Makes Data Provisioning Elastic
After the push to the cloud came with some breathtakingly large bills, the FinOps movement is gaining momentum as companies try to optimize their cloud costs while still delivering business value. That pressure is increasingly trickling down to developers as well as tech leaders and bean counters.
“We’ve moved from a model where you allocate a fixed compute and fixed disk allocation, like I want to rent 500 gigabytes of storage from Amazon, to something where we can build infrastructure where you buy the base, but rent the peak,” said Mike Freedman, Timescale co-founder and CTO.
It’s cost-effective and scalable, and it frees users from worrying about either under- or over-provisioning resources for their cloud workloads, Freedman said.
“[You might say], ‘I know I’m going to have a certain amount of load, but I want it to almost instantaneously scale as my load might go up or down, either because my load is a little bit bursty or because it has natural, different traffic patterns throughout the days or throughout the week.’”
This dynamic provisioning with its managed service allows businesses to pay only for the compute and storage they use, rather than forcing them to pay at a static peak level. By choosing a minimum and maximum CPU range, customers get the flexibility and scalability of serverless without the big bills and latency concerns.
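The "buy the base, rent the peak" idea can be made concrete with a little arithmetic. The sketch below compares static peak provisioning against a min/max compute range; the hourly rates and usage pattern are made up for illustration and are not Timescale's actual pricing or billing logic.

```python
# Hypothetical "buy the base, rent the peak" billing sketch.
# Rates and usage figures are illustrative, not Timescale's real pricing.

def fixed_cost(peak_cpus: float, hours: int, rate: float) -> float:
    """Static provisioning: pay for peak capacity every hour."""
    return peak_cpus * hours * rate

def dynamic_cost(usage_by_hour: list[float], base_cpus: float,
                 max_cpus: float, rate: float) -> float:
    """Dynamic provisioning: always pay at least the base,
    then metered usage up to a defined max."""
    total = 0.0
    for used in usage_by_hour:
        billed = min(max(used, base_cpus), max_cpus)
        total += billed * rate
    return total

# A bursty day: 2 vCPUs for most hours, an 8-vCPU spike for 4 hours.
usage = [2.0] * 20 + [8.0] * 4
static = fixed_cost(peak_cpus=8.0, hours=24, rate=0.10)  # sized for the spike
dynamic = dynamic_cost(usage, base_cpus=2.0, max_cpus=8.0, rate=0.10)
```

Here the statically provisioned instance bills all 24 hours at the 8-vCPU peak, while the dynamic instance bills the spike only for the hours it actually happens.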
Pay for What You Use
“In the cloud, you are moving from traditional CapEx to an on-demand OpEx expenditure model, where cost management and budget are no longer one-time activities. So, the business must craft its cost optimization and governance strategy for reducing its overall cloud spending by identifying mismanaged resources, eliminating waste, reserving capacity for higher discounts and right-sizing the services at scale.”
StormForge’s Reid Vandewiele has also urged companies to free developers from setting and monitoring CPU and memory requests in Kubernetes, arguing that this should be automated in the platform.
Timescale’s dynamic provisioning sounds a lot like serverless architecture, though Freedman points to differences.
Serverless, he explains, is designed for stateless workloads, not databases. He maintains that serverless is better suited to intermittent workloads that don’t rely on in-memory data caching than to more continuous workloads that might have spikes, such as for Black Friday, or to variable workloads like a fitness app with peaks at certain times of day.
This Dynamic PostgreSQL infrastructure consistently supports your baseline, then scales for peaks up to a defined max. Timescale is offering a 30-day free trial on AWS.
The company also recently added pay-as-you-go storage. Rather than having to figure out the best disk size for your use case, users are billed according to the amount of data they use. It allows users to start small and grow according to their needs. If they delete data, the bill shrinks, as it does if they apply compression or tier data to object storage.
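The effect of usage-based storage on the bill is simple to model. This sketch uses a made-up per-gigabyte rate and a hypothetical compression ratio; it is not Timescale's actual pricing, only an illustration of how the bill shrinks as stored data shrinks.

```python
# Illustrative usage-based storage bill; the per-GB rate is hypothetical.
RATE_PER_GB_MONTH = 0.04  # assumed $/GB-month, not a real Timescale rate

def monthly_storage_bill(gb_stored: float) -> float:
    """Bill scales with data actually stored, not a pre-sized disk."""
    return gb_stored * RATE_PER_GB_MONTH

bill_before = monthly_storage_bill(500.0)        # 500 GB of raw data
bill_after = monthly_storage_bill(500.0 * 0.1)   # e.g. 10x compression
```

Deleting data, compressing it, or tiering it to cheaper object storage all reduce `gb_stored`, and the bill follows.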
Betting on Familiar Postgres
The soaring interest in generative AI has spurred niche databases like Pinecone, Weaviate, Qdrant and Zilliz that specialize in storing vector embeddings. But betting that customers would rather rely on time-tested and familiar Postgres than add yet another niche database for vector data, Timescale has added vector capabilities through the community-built extension pgvector.
On top of pgvector, Timescale has added specialized indexes, including an approximate nearest neighbor (ANN) index inspired by the DiskANN algorithm, alongside pgvector’s own hierarchical navigable small world (HNSW) and inverted file (IVFFlat) indexing algorithms. The company maintains these indexes push performance well beyond standard Postgres.
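To make the index options concrete, here is the DDL a client might run, shown as Python string constants. The table and column names are hypothetical; the `hnsw` and `ivfflat` index methods ship with pgvector itself, while the `diskann` method assumes Timescale's separate pgvectorscale extension is installed.

```python
# Hypothetical pgvector DDL; table/column names are illustrative.

CREATE_TABLE = """
CREATE TABLE documents (
    id bigserial PRIMARY KEY,
    content text,
    embedding vector(1536)
);
"""

# pgvector's two built-in ANN index types:
HNSW_INDEX = ("CREATE INDEX ON documents "
              "USING hnsw (embedding vector_cosine_ops);")
IVFFLAT_INDEX = ("CREATE INDEX ON documents "
                 "USING ivfflat (embedding vector_cosine_ops) "
                 "WITH (lists = 100);")

# Timescale's DiskANN-inspired index (assumes the pgvectorscale extension):
DISKANN_INDEX = "CREATE INDEX ON documents USING diskann (embedding);"
```

HNSW builds a graph-based index that is slower to construct but fast and accurate to query; IVFFlat clusters vectors into lists and probes a subset at query time.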
Timescale’s hypertables enable it to efficiently find recent embeddings, constrain vector search by a time range or document age, and store and retrieve LLM responses and chat history.
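A time-constrained similarity search might look like the parameterized query below, again shown as a Python string. The `documents` table, its `created_at` column, and the seven-day window are assumptions for illustration; the `<=>` cosine-distance operator is pgvector's.

```python
# Hypothetical query: vector similarity limited to recent rows, so a
# hypertable can prune old chunks before the ANN search runs.
RECENT_SIMILAR = """
SELECT content,
       embedding <=> %(query_vec)s AS distance
FROM documents
WHERE created_at > now() - interval '7 days'
ORDER BY distance
LIMIT 5;
"""
```

The time predicate lets the planner skip hypertable chunks outside the window, which is what makes "search only recent embeddings" cheap.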