
Improving Cloud Native DevEx: The API Gateway Perspective

The API gateway is the front door to your applications and can also support your move to the cloud
Mar 8th, 2023 10:40am

Cloud native development is all the rage because, if done right, it leads to reduced development time, faster feedback loops for developers and better quality features and applications for end users. The cloud native paradigm opens the door to more flexibility because of its distributed nature and allows for scalability in increasingly dynamic environments.

Many development teams already take advantage of this flexibility in production and see immediate value. However, this doesn’t mean that cloud native doesn’t have its implementation roadblocks. The cloud native approach adds new and different complexities, which aren’t insurmountable, but do require a new way of thinking.

Reducing complexity to deliver on the productivity promise of the cloud means that other parts of the development organization cannot be left behind — in fact, they will be integral to cloud native adoption. Developers must understand container and cloud technologies to interact with the underlying infrastructure. But the infrastructure — which also includes API gateways, service meshes and some parts of the virtualized networking stack — is not typically the developer’s responsibility.

Instead, architects and platform engineers will need to build pathways for developers to continue shipping software with speed and safety in mind. QA specialists also must be central to this effort. The focus will be on enablement teams, in Team Topologies speak. Easing the cloud native journey for developers is a team effort.

If the Cloud Is So Complex, Why Move There?

If it’s difficult to address cloud native complexities, why do so many development teams and organizations want to move to the cloud? From a business and developer perspective, the ability to scale and evolve quickly means faster time to results (or to market), and it has significant benefits, from greater organizational efficiency to more revenue opportunities. The trick, of course, is to get over the initial speed bumps (complexity).

This requires teams to work together to identify the processes, workflows and tools that will help them achieve their specific cloud native goals: Where are they trying to go? What value does the team expect to derive?

In fact, cloud technologies like Kubernetes and microservices introduce complexity that makes cloud native unrealistic for many organizations, particularly those operating in traditional and risk-averse industries.

In part, this is why the industry has made a wide swing toward developer platforms and platform engineering as a fix-all, promoting the idea that a productized approach to developer experience, what consultant and author Sam Newman calls "slapping a new label on old practices," will suffice.

It’s a fundamental misinterpretation of the cloud native landscape to imagine that a single platform would solve complexity, especially given the points of departure Newman highlights: Kubernetes was never meant to be developer-friendly, and the landscape of tools in the Cloud Native Computing Foundation (CNCF) ecosystem contains “a bewildering array of options” for anyone looking to build a solution on top. It’s easy to get lost.

Solve for Developer Enablement and Experience: Reduce Complexity

But it’s not really about the platform: It always comes down to the developer experience, supporting developer team productivity and delivering on the promise of cloud native. That is, what combination of processes and tools will get applications out into the world safely and faster, with the least amount of friction and cognitive load for developers? And once the developer experience is addressed (which is an ongoing activity), what are the goals for cloud native development?

Setting Measurable Goals for Reaching the Cloud

Cloud experimentation can be useful for figuring out the possibilities, but bringing the cloud into production means getting serious about what it must do for a development team (and overall organization) that more traditional development patterns cannot. Why is the cloud right for your organization and team? And can the team and organization cope with the changes and complexities the cloud introduces?

Once you understand what you want to achieve and know how to go about measuring it, your path will become clearer.

Let’s use some typical DevOps-focused metrics to look at performance measurements as an example. Most organizations use DORA or Accelerate metrics to understand their performance in key areas, such as deployment frequency (DF), lead time for changes (LT), mean time to recovery (MTTR) and change failure rate (CFR). All are critical to understand if you’ve gone into cloud native development in search of faster, safer software shipping.
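To make these metrics concrete, here is a minimal sketch in Python of how a team might compute them from its own delivery data. The deployment and incident records are hypothetical; in practice they would come from your CI/CD system and incident tracker.

```python
from datetime import datetime, timedelta

# Hypothetical delivery records: (commit time, deploy time, caused an incident?)
deployments = [
    (datetime(2023, 3, 1, 9, 0), datetime(2023, 3, 1, 15, 0), False),
    (datetime(2023, 3, 2, 10, 0), datetime(2023, 3, 3, 11, 0), True),
    (datetime(2023, 3, 6, 8, 0), datetime(2023, 3, 6, 12, 0), False),
]
# Hypothetical incidents: (start, resolved)
incidents = [
    (datetime(2023, 3, 3, 12, 0), datetime(2023, 3, 3, 14, 30)),
]

days_observed = 7

# Deployment frequency (DF): deployments per day over the observation window.
deployment_frequency = len(deployments) / days_observed

# Lead time for changes (LT): average time from commit to running in production.
lead_time = sum(
    ((deployed - committed) for committed, deployed, _ in deployments), timedelta()
) / len(deployments)

# Change failure rate (CFR): share of deployments that led to an incident.
change_failure_rate = sum(1 for *_, failed in deployments if failed) / len(deployments)

# Mean time to recovery (MTTR): average time to restore service after an incident.
mttr = sum(((end - start) for start, end in incidents), timedelta()) / len(incidents)

print(deployment_frequency, lead_time, change_failure_rate, mttr)
```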

Connecting developers to these metrics is a way to build insight into their work without forcing them to run the measurements themselves. (DevOps teams would usually still do this.)

Cloud Native Responsibilities

Where do the different code-run-ship responsibilities reside in this new cloud native paradigm? In recent years, debates about responsibility have shifted, with some arguing that developers should own the full code, ship and run life cycle and others arguing that developers should continue to focus solely on coding.

What we’re seeing is an in-between area in which developers need to understand what is going on throughout the full software life-cycle but don’t need to actually worry about the infrastructure making it all work behind the scenes.

This is where platform discussions have frequently arisen as a way to empower developers with more responsibility. The goal isn’t always to make the developer do more outside of their own focus area — it should be to pave a path to faster and easier completion of the responsibilities that do fall within their purview.

A practical example of an area where a developer needs insight and access, but not 100% responsibility, is the API gateway. An API gateway is a necessity for most modern app development. It is the front door to your applications and systems: the point where every single user request is received, routed and secured. A developer wants to code and release services quickly while being able to configure API endpoints dynamically. They don’t need the extra burden (usually outside their core capabilities) of setting up the platform for incident minimization and stronger security, both of which are critical to API gateway operations.
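As an illustration of that split, here is a minimal sketch, assuming an Emissary-Ingress-style gateway and the official Python kubernetes client, of the small self-service piece a developer might own: registering a route (a Mapping custom resource) that sends requests under a prefix to their service, while the platform team owns the gateway deployment, TLS and hardening behind it. The service name, prefix and namespace are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a cluster

# Hypothetical route: expose the "quote" service under the /quote/ prefix.
mapping = {
    "apiVersion": "getambassador.io/v3alpha1",
    "kind": "Mapping",
    "metadata": {"name": "quote-backend", "namespace": "default"},
    "spec": {
        "hostname": "*",
        "prefix": "/quote/",
        "service": "quote:80",
    },
}

# Create the Mapping; the gateway picks it up and starts routing dynamically.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="getambassador.io",
    version="v3alpha1",
    namespace="default",
    plural="mappings",
    body=mapping,
)
```

The developer only touches routing configuration for their own service; everything else about the gateway stays with the platform team.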

Cloud Native API Gateways

A cloud native/Kubernetes native API gateway simplifies the delivery of secure, high-performance microservices traffic management at scale.

What other considerations inform decisions around selecting a cloud native API gateway?

Service Discovery: Monoliths, Microservices and Meshes

One of the biggest questions in adopting a cloud native approach to service connectivity and communication is: “Which technology — API gateway or service mesh — should I use to manage how microservice-based applications interact with each other?” The answer isn’t completely cut and dried. These technologies differ in the way they work and should be considered from the end user’s experience — how to achieve a successful API call within a specific environment. Prospective users must understand the differences and similarities between the two technologies to determine when one should be used instead of the other, or both.

One of the benefits of moving to a service-based approach is the ability to release fast and without requiring “big bang” approaches to deployment. To keep the lead time for changes low and the deployment frequency high, API gateways and service meshes must be easily configurable and available for self-service use by developers.

Balancing the Load: The Curse of Configuration Changes

Load balancing distributes network traffic among multiple backend services as efficiently as possible to ensure scalability and availability. In Kubernetes, there are various choices for load-balancing external traffic to pods, each with different tradeoffs. Understanding the various load-balancing strategies and implementations is required to make the right choice and get started.
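To make the tradeoffs concrete, here is a minimal sketch in Python (with hypothetical backend pods) contrasting two common strategies: round robin hands out backends in a fixed rotation regardless of load, while least connections favors the backend currently handling the fewest in-flight requests.

```python
import itertools

backends = ["pod-a", "pod-b", "pod-c"]  # hypothetical backend pods

# Round robin: rotate through backends, ignoring current load.
round_robin = itertools.cycle(backends)

def pick_round_robin():
    return next(round_robin)

# Least connections: track in-flight requests and pick the least busy backend.
in_flight = {b: 0 for b in backends}

def pick_least_connections():
    backend = min(in_flight, key=in_flight.get)
    in_flight[backend] += 1  # the caller decrements when the request completes
    return backend

for _ in range(4):
    print(pick_round_robin(), pick_least_connections())
```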

Teams also need to be clear about the impact of misconfiguration for this type of infrastructure. Without appropriate thought given to the increased dynamism in a cloud native environment, it’s all too easy to impact change failure rate and MTTR.

Easing Adoption: AWS EKS and API Gateways

At Ambassador Labs, we’ve helped thousands of developers get their Kubernetes ingress controllers up and running across different cloud providers. If you are using Amazon EKS Anywhere, the recommended ingress and API gateway is Emissary-Ingress. Overall, AWS provides a powerful, customizable platform on which to run Kubernetes. However, the multitude of options for customization often leads to confusion among new users and makes it difficult for them to know when and where to optimize for their particular use case.

You can see in the EKS Anywhere docs that the Amazon team has invested heavily in developer experience, with the eksctl command-line tool reducing the complexity (and cognitive load) of packaging and deploying supporting infrastructure and applications.

Moving to the Cloud: Focus on Speed, Safety and Self-Service

Going cloud native is a big decision, and not necessarily the right decision for every organization. Many technical and organizational considerations underpin the pros and cons. Once an organization reaches a level at which they are ready to implement developer self-service facilitated by platform and ops teams, cloud native is a viable choice for speed and safety.

It’s then that other considerations enter the frame. What technologies support the adoption of cloud native practices and technologies? As mentioned earlier, the API gateway is the front door to your applications and can also support the move to the cloud. In the end, it’s highly likely that no part of your technology stack will remain untouched when moving to the cloud. Be sure to frame any changes around your goals, such as increasing speed and safety; your metrics, such as improving your DORA measurements; and the developer experience, such as improving the ability for developers to self-serve based on paved paths defined by platform and enablement teams.

TNS owner Insight Partners is an investor in: Pragma, Ambassador Labs.