
Use Service Virtualization to Trim SaaS Testing Costs

With service virtualization, you’ll actually be able to curb costs while also getting more testing coverage around your application. SmartBear's Daniel Giordano explains.
Sep 14th, 2018 6:00am by

Daniel Giordano
At SmartBear Software, Daniel Giordano is Product Marketing Manager for ReadyAPI, the integrated suite of applications for API testing that helps deliver accurate, fast, safe, and on-time web services. Dan drives the go-to-market strategy, product messaging, and sales enablement for the ReadyAPI platform. He has also served as Marketing Manager for CrossBrowserTesting and Digital Marketing Manager at SmartBear, as well as Digital Marketing Manager for Todays Growth Consultant. Dan has a bachelor’s degree from Wesleyan University. He is a frequent speaker for SmartBear’s webinar series on testing.

Here was the scenario: FakeUser+20456@gmail.com, and then a few lines later, FakeUser+20455, and then you guessed it — FakeUser+20454. This was an odd pattern, so the VP walked over to the development and product team and asked about it. They mentioned that they had been using that email domain for the new automated testing initiative on the login forms whenever they push new code. They used some “stubs” for this sort of thing locally, but once it went to Test and Staging, the teams would use some of these fake emails on their UI tests.

The VP’s mind was already racing, though she wasn’t sure the team could see the calculations going on in her head.

“Does our test environment have live connections to all of our SaaS subscriptions and APIs like Twilio, Intercom, or Mixpanel?” she asked.

He answered eagerly, “Yes, of course — our test environments are as close to our production environments as possible.”

And then she looked at the bill… Huge spikes in charges coinciding with the automated testing initiative.

This scenario is all too real in a world where hooking up a new SaaS tool to your entire business and software pipeline can take just minutes, and per-transaction costs are small enough that they mount insidiously. That’s not to say these products don’t offer value for their services; they absolutely do. But in a world where test automation is required for continuously delivering a quality experience, it’s really difficult to absorb these exorbitant costs for testing and development in agile environments.

How Virtualization Can Help

To combat these challenges, we can actually use a practice that has been around for decades but apply it to our new service-oriented world: service virtualization. We use service virtualization to decrease the costs of third-party integrations and even internal services that may be resource-heavy in our development and testing pipeline. Service virtualization works by emulating a service, complete with customizable response behavior and data. It helps increase system availability for QA and engineering organizations, but it also solves a critical challenge for organizations and SaaS providers — taming the SaaS sprawl.

That means any web service integration or connection with an outside vendor that you are currently paying for could theoretically be replaced in all stages before production — meaning you will only be paying for the service when you are also earning from the service. This can be huge for agile teams that need to watch infrastructure and resource budgets closely.
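To make the idea concrete, here is a minimal sketch of a virtual service: a stand-in for a hypothetical paid SMS API (the endpoint, payload fields, and `SM_fake_123` identifier are all invented for illustration) built with Python’s standard library. Every request gets a canned response instead of reaching, and being billed by, the real vendor.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical stand-in for a paid SMS API: every request returns a
# canned "message queued" response instead of hitting the real vendor.
class VirtualSmsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sid": "SM_fake_123", "status": "queued"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), VirtualSmsHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/v1/messages") as resp:
    reply = json.loads(resp.read())
server.shutdown()
print(reply["status"])  # -> queued
```

A dedicated tool adds recording, data injection, and shared deployment on top of this basic pattern, but the core trick is exactly this: something that speaks HTTP the way the real service does, without the per-call meter running.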

Where to Start

Let’s start with what to virtualize first — any service that costs a lot of money on a per-call basis, either because of third-party pricing or expensive internal resources and processing power. It’s best to start with a modern REST API, and even better, one with an OpenAPI (Swagger) Specification. This is because the OpenAPI Specification (OAS) allows a great deal of flexibility when virtualizing an asset.
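Per-call charges look negligible until you multiply them by an automated suite running on every push. This back-of-the-envelope sketch uses entirely hypothetical figures; substitute your own vendor’s pricing and CI cadence.

```python
# Back-of-the-envelope cost of letting an automated suite hit a paid API.
# All figures are hypothetical; substitute your vendor's real pricing.
price_per_call = 0.0075          # e.g. an SMS-style per-message charge, USD
calls_per_test_run = 120         # API calls one full suite makes
runs_per_day = 20                # CI runs on every push
monthly_cost = price_per_call * calls_per_test_run * runs_per_day * 30
print(f"${monthly_cost:,.2f}/month")  # -> $540.00/month
```

A fraction of a cent per call quietly becomes hundreds of dollars a month for a single suite, which is why high per-call cost is the first sorting criterion.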

Once we’ve determined what service we’d like to virtualize, we can start virtualizing it. There are usually three ways to go about virtualizing an API with a service virtualization tool, like ServiceV Pro: recording, importing, and manual creation.

Creating a Virtualized API

Recording a Live Service

This is great for APIs that are currently in use in your application. You’ll be able to record traffic and responses from APIs as you use them in real-world settings. Imagine testing four or five of your APIs with different parameters — and then being able to replay all of those responses at any time — even when the “real” API is no longer available.
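The record/replay idea can be sketched in a few lines. This is a toy illustration, not how any particular tool implements it: `fetch_live` is a stand-in for a real HTTP call, and the "recording" is just a dictionary keyed by request path.

```python
# A toy record/replay cache: the first call goes to the "live" service,
# later calls are served from the recording, even if the service is gone.
recordings = {}
live_available = True

def fetch_live(path):
    # Stand-in for a real HTTP call to the upstream API.
    if not live_available:
        raise ConnectionError("real API is down")
    return {"path": path, "data": "fresh"}

def fetch(path):
    if path in recordings:
        return recordings[path]           # replay
    response = fetch_live(path)           # record
    recordings[path] = response
    return response

first = fetch("/v1/users?page=1")         # recorded from the live API
live_available = False                    # "real" API is no longer reachable
replayed = fetch("/v1/users?page=1")      # served from the recording
print(replayed == first)  # -> True
```

Real recorders sit as a proxy between your application and the service and key on method, headers, and parameters as well, but the principle is the same: capture once, replay forever.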

Importing an API Definition

This method is a great fit for teams that use definition-driven development to enhance their API lifecycle with schema contracts like OAS/Swagger. You can import your API definition, and the virtualization solution will stand it up complete with your example requests and responses.
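The spec-to-stub step is mechanical enough to sketch. Below, a minimal (and deliberately tiny) OAS 3.0 fragment with an inline example response is flattened into a route table; the `/pets/{id}` endpoint and its example body are invented for illustration.

```python
import json

# A minimal OAS 3.0 fragment with an inline example response.
# Real specs are larger; this shows only the parts a stub needs.
spec = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/pets/{id}": {
      "get": {
        "responses": {
          "200": {
            "content": {
              "application/json": {
                "example": {"id": 1, "name": "Rex"}
              }
            }
          }
        }
      }
    }
  }
}
""")

def routes_from_spec(spec):
    """Flatten an OAS document into {(METHOD, path): example_body}."""
    table = {}
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            ok = op["responses"]["200"]
            table[(method.upper(), path)] = ok["content"]["application/json"]["example"]
    return table

routes = routes_from_spec(spec)
print(routes[("GET", "/pets/{id}")])  # -> {'id': 1, 'name': 'Rex'}
```

This is why a spec-first workflow pays off twice: the same contract that documents the API also seeds its virtual double.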

Starting from Scratch

You can also start building your API from scratch in a virtualization tool. RESTful architecture actually makes it pretty simple to start building GET, PUT, DELETE, and POST endpoints and adding parameters and queries on the fly.
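Why is REST so easy to build up from scratch? Because a resource reduces to a handful of uniform verbs over a key. This hypothetical in-memory dispatcher shows how little is needed to sketch a full CRUD endpoint before any real backend exists.

```python
# Building a virtual REST resource by hand: an in-memory store plus a
# tiny dispatcher covering the four verbs named in the text.
store = {}

def dispatch(method, resource_id, body=None):
    if method == "POST":
        store[resource_id] = body
        return 201, body
    if method == "GET":
        return (200, store[resource_id]) if resource_id in store else (404, None)
    if method == "PUT":
        store[resource_id] = body
        return 200, body
    if method == "DELETE":
        store.pop(resource_id, None)
        return 204, None
    return 405, None

dispatch("POST", "42", {"name": "widget"})
status, body = dispatch("GET", "42")
print(status, body)   # -> 200 {'name': 'widget'}
dispatch("DELETE", "42")
status, _ = dispatch("GET", "42")
print(status)         # -> 404
```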

Configuring

One of the great things about service virtualization is that you’ll be able to actually breathe life into the asset with realistic data and conditions. You can import data via a file or database to build out virtual APIs that have really dynamic data behind them. That means you’ll be able to do more edge-case testing and not just have a few different queries tested — test them all!
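Here is what data injection looks like in miniature. The CSV is inlined via `io.StringIO` to keep the sketch self-contained, and the account fields are hypothetical; in practice you would point the tool at a real file or database.

```python
import csv
import io

# Injecting a data file into a virtual API: each CSV row becomes the
# canned response for one lookup key, so edge cases come from data,
# not hand-written stubs. (Hypothetical fields, inlined for the demo.)
data_file = io.StringIO(
    "user_id,plan,balance\n"
    "1,free,0\n"
    "2,pro,149.99\n"
    "3,enterprise,-20.00\n"   # edge case: negative balance
)
by_user = {row["user_id"]: row for row in csv.DictReader(data_file)}

def virtual_get_account(user_id):
    row = by_user.get(user_id)
    return (200, row) if row else (404, None)

status, row = virtual_get_account("3")
print(status, row["balance"])  # -> 200 -20.00
```

Adding a new edge case is now a one-line change to the data file rather than a new stub, which is what makes "test them all" realistic.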

Once you have your data injected into the API, you can take the realism to the next level. You’ll actually be able to edit and configure network options — like server capacity or latency. This is huge for both performance testing and functional testing. If you are depending on chained API calls to execute something, you’d better make sure they fire in order without a delay.
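The latency knob is simple to picture: the stub just waits before answering. The delay value below is illustrative; a real tool would let you vary it per endpoint or randomize it.

```python
import time

# Simulating network conditions on a virtual endpoint: a configurable
# artificial latency lets the same stub drive both functional and
# performance tests. The value is illustrative.
LATENCY_SECONDS = 0.05

def virtual_call(path):
    time.sleep(LATENCY_SECONDS)       # pretend the network is slow
    return {"path": path, "ok": True}

start = time.perf_counter()
reply = virtual_call("/v1/ping")
elapsed = time.perf_counter() - start
print(reply["ok"], elapsed >= LATENCY_SECONDS)  # -> True True
```

Dialing this value up is a cheap way to check that chained calls still fire in order when a dependency slows down.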

Deploying

Now that we have our virtualized API ready to go, we can deploy it. Of course, like most other mocking and stubbing solutions, you can deploy it locally in just a few seconds to your own machine for use. But where virtualization really shines is its ability to be shared throughout the organization and the delivery process. You can deploy the virtualized APIs to other on-premises servers or even a cloud, like AWS or Azure.

That means you’ll be able to reuse the assets across several teams and create a reliable catalog of virtualized APIs for your organization. Flexibly deploying the virtualized API also lets you stand up the APIs on Test or Staging servers throughout your CI/CD process. Yes, that means no more enormous Twilio bills just because your engineering team is doing due diligence in the testing phase.

With service virtualization, you’ll actually be able to curb costs while also getting more testing coverage around your application. For agile teams that have automated most of their testing, this can mean thousands of dollars a year in savings and an increase in customer satisfaction and revenue.

Feature image via Pixabay.
