How to Fight Latency: The Killer of Apps

May 31st, 2016 2:52pm by Dave Ginsburg

Dave Ginsburg is Teridion's chief marketing officer, bringing to Teridion 25+ years of experience spanning corporate and product marketing, product management, digital marketing, and marketing automation. Previously, Ginsburg worked at Pluribus, Extreme, Riverstone, Nortel, and Cisco.

For any cloud-based application provider, time is money, and for the user, added time generates frustration, degrading the quality of experience. Across the global internet, an increasing number of services are delivered remotely over the public cloud, and the latency of the interaction can make the difference between financial success and failure.

It is commonly understood that TCP throughput drops as latency increases, and any packet loss, however small, compounds the effect. For example, just 0.1 percent packet loss can reduce throughput from about 6 Mbps at 50ms of round-trip time to only 2 Mbps at 250ms. And even with no loss, web page abandonment grows substantially once load time exceeds 4 seconds.
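
To see where figures of that order come from, a back-of-the-envelope sketch using the Mathis et al. approximation for steady-state TCP throughput (MSS divided by RTT times the square root of the loss rate) is shown below. The model and the 1460-byte MSS are assumptions for illustration, not necessarily what is behind the numbers above, but they reproduce the same trend.

# Rough sketch: Mathis et al. approximation, throughput ~ MSS / (RTT * sqrt(loss)).
# The 1460-byte MSS is an assumed value; real connections vary.
from math import sqrt

def tcp_throughput_mbps(rtt_s: float, loss: float, mss_bytes: int = 1460) -> float:
    """Approximate steady-state TCP throughput in Mbps for a single connection."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

for rtt_ms in (50, 250):
    mbps = tcp_throughput_mbps(rtt_ms / 1000, 0.001)   # 0.1 percent packet loss
    print(f"RTT {rtt_ms:3d} ms, 0.1% loss: ~{mbps:.1f} Mbps")
# RTT  50 ms, 0.1% loss: ~7.4 Mbps
# RTT 250 ms, 0.1% loss: ~1.5 Mbps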

Given that minimizing latency, and by extension finding a path across the global Internet with minimal packet loss, remains critical, how can we accomplish this? Using the $20 billion real-time ad-serving market as an example, we'll first look at the different components and then introduce an approach to mitigate latency.

Lower latency equates to faster rendering, higher conversion rates, and ultimately more revenue. But the same solutions apply to any vertical that shares these characteristics, including finance, gaming, real-time communications, and even social networks.

It is no surprise that ad serving occupies more and more of the typical web page, and today it is a cat-and-mouse game between those serving the ads and those attempting to block them. A look at the composition of some typical websites shows why.

[Figure: Teridion-example, the composition of typical websites (source: Teridion)]

Separate from the content (video, image, etc.) served, just the underlying mechanisms of ad serving generate upwards of half a petabyte of request data daily. This consists of a complex set of interactions between the advertiser, the demand-side platform (DSP), the sell-side platform (SSP), the publisher, and ultimately the consumer. Each step introduces potential latency and, over the WAN, packet loss. The actual servers, of course, introduce delay as well.
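
To make the compounding concrete, here is a purely illustrative sketch; every hop name, delay, and loss figure in it is hypothetical rather than measured, but it shows how per-hop latency adds up and how even small per-hop loss multiplies across the chain.

# Illustrative only: hypothetical round-trip times for each hop in the chain.
hops = {
    "consumer -> publisher": 0.120,   # seconds
    "publisher -> SSP":      0.040,
    "SSP -> DSP":            0.060,
    "DSP -> advertiser":     0.050,
}
per_hop_loss = 0.001                  # assumed 0.1 percent loss on each WAN hop

total_rtt = sum(hops.values())
clean_probability = (1 - per_hop_loss) ** len(hops)   # no drop anywhere in the chain

print(f"End-to-end round trip: {total_rtt * 1000:.0f} ms")
print(f"Chance a request crosses every hop without a retransmission: {clean_probability:.3%}")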

Looking only at the customer-to-sell-side flow as part of deep linking, consider an end user in India accessing an ad publisher in California. At first glance one could attempt to duplicate the footprint across multiple geographies, but in many cases, due to cost, application architecture, or other constraints, this is not practical. Users across the globe are homed to a single location. Over the public Internet, this exchange takes 1.6 seconds, consisting of the following components:

[Figure: Teridion-Timeline-01, timing breakdown of the exchange over the public Internet]

In the chart above (a rough way to measure these phases yourself is sketched after the list):

DNS (ms): The time it took Catchpoint's synthetic node to resolve DNS for the base URL.
Connect (ms): The time it took to establish a connection with the server for the base URL.
Wait (ms): The time between the connection with the server being established and the first byte of the response being received for the base URL.
Load (ms): The time it took to download the entire response for the base URL. Also referred to as receive time.
Response (ms): The time from the request being issued to receiving the last byte for the base URL.
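
The sketch below shows roughly how the same phases can be timed by hand. It uses only Python's standard library against a placeholder plain-HTTP host and is not Catchpoint's methodology; it is here only to make the phase boundaries concrete.

# Minimal sketch of timing the DNS, connect, wait, and load phases of one
# plain-HTTP request. The target host is a placeholder.
import socket
import time

host, port, path = "example.com", 80, "/"

t0 = time.monotonic()
family, stype, proto, _, sockaddr = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)[0]
t_dns = time.monotonic()

sock = socket.socket(family, stype, proto)
sock.connect(sockaddr)
t_connect = time.monotonic()

sock.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
sock.recv(4096)                       # first chunk of the response arrives; "wait" ends here
t_first_byte = time.monotonic()

while sock.recv(65536):               # drain the rest of the response
    pass
t_last_byte = time.monotonic()
sock.close()

ms = lambda start, end: (end - start) * 1000
print(f"DNS:      {ms(t0, t_dns):7.1f} ms")
print(f"Connect:  {ms(t_dns, t_connect):7.1f} ms")
print(f"Wait:     {ms(t_connect, t_first_byte):7.1f} ms")
print(f"Load:     {ms(t_first_byte, t_last_byte):7.1f} ms")
print(f"Response: {ms(t_connect, t_last_byte):7.1f} ms")   # request issued to last byte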

Other than the DNS query, which can be optimized in the first example as well, the connect, wait, and load phases are highly sensitive to end-to-end latency and can be optimized. Over an optimized path, the same exchange now takes under 400ms:

[Figure: Teridion-Timeline-02, timing breakdown of the exchange over an optimized path]

So how do we accomplish this? To be truly effective, a solution requires both monitoring and the ability to take action. Although multiple RUM (real user monitoring) platforms are available, in almost all cases they either only report on the problem or merely suggest to the administrator what action to take. One without the other is only half of the solution.

Ideally, the solution will actively monitor all possible paths from the user to the server, selecting the optimal path in real-time.
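
As a toy illustration of that idea (not Teridion's actual system; the candidate paths and the probe function below are hypothetical stand-ins), continuously measuring a set of paths and routing over the current best might look like this:

# Toy sketch: keep a rolling RTT estimate per candidate path and route over the best one.
# Candidate paths and probe results are hypothetical stand-ins for real measurements.
import random
import statistics
from collections import defaultdict, deque

candidate_paths = ["direct", "relay-us-west", "relay-eu-central", "relay-ap-south"]
samples = defaultdict(lambda: deque(maxlen=20))       # recent RTT samples per path

def probe(path: str) -> float:
    """Stand-in for a real RTT probe over the given path (seconds)."""
    return random.uniform(0.05, 0.40)

def best_path() -> str:
    """Pick the path with the lowest median RTT among those already measured."""
    measured = {p: statistics.median(s) for p, s in samples.items() if s}
    return min(measured, key=measured.get) if measured else candidate_paths[0]

# One monitoring cycle: probe every candidate, then route the next request over the best path.
for path in candidate_paths:
    samples[path].append(probe(path))

print("Routing next request via:", best_path())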

Teridion is a sponsor of The New Stack.

Feature image via Pixabay.
