Here was the scenario: [email protected], and then a few lines later, FakeUser+20455, and then you guessed it — FakeUser+20454. This was an odd pattern, so the VP walked over to the development and product teams and asked about it. They mentioned that they had been using that email domain for the new automated testing initiative on the login forms whenever they pushed new code. They used some “stubs” for this sort of thing locally, but once code went to Test and Staging, the teams would use some of these fake emails in their UI tests.
The VP’s mind was already racing; she wondered whether they could see the calculations going on in her head.
“Does our test environment have live connections to all of our SaaS subscriptions and APIs like Twilio, Intercom, or Mixpanel?” she asked.
He answered eagerly, “Yes, of course — our test environments are as close to our production environments as possible.”
And then she looked at the bill… Huge spikes in charges coinciding with the automated testing initiative.
This scenario is all too real in a world where hooking up a new SaaS tool to your entire business and software pipeline can take just minutes, and per-transaction costs are small enough that they accumulate insidiously. That’s not to say these products don’t offer value for their services; they absolutely do. But in a world where test automation is required for continuously delivering a quality experience, it’s really difficult to absorb these exorbitant costs for testing and development in agile environments.
How Virtualization Can Help
To combat these challenges, we can actually use a practice that has been around for decades but apply it to our new service-oriented world: service virtualization. We use service virtualization to decrease the costs of third-party integrations and even internal services that may be resource-heavy in our development and testing pipeline. Service virtualization works by emulating a service, complete with customizable response behavior and data. It helps increase system availability for QA and engineering organizations, but it also solves a critical challenge for organizations and SaaS providers — taming the SaaS sprawl.
That means any web service integration or connection with an outside vendor that you are currently paying for could theoretically be replaced in all stages before production — meaning you will only be paying for the service when you are also earning from the service. This can be huge for agile teams that need to watch infrastructure and resource budgets closely.
Where to Start
Let’s start with what to virtualize — any service that costs a lot of money on a per-call basis, either because of third-party pricing or because of expensive internal resources and processing power. It’s best to start with a modern REST API, and even better — one with an OpenAPI, or Swagger, Specification. This is because the OpenAPI Specification (OAS) allows a special amount of flexibility when virtualizing an asset.
Once we’ve determined what service we’d like to virtualize, we can start virtualizing it. There are usually three ways to go about virtualizing an API with a service virtualization tool, like ServiceV Pro: recording, importing, and manual creation.
Creating a Virtualized API
Recording a Live Service
This is great for APIs that are currently in use in your application. You’ll be able to record traffic and responses from APIs as you use them in real-world settings. Imagine testing four or five of your APIs with different parameters — and then being able to replay all of those responses at any time — even when the “real” API is no longer available.
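The record-then-replay idea can be sketched in a few lines of plain Python. This is an illustrative toy, not ServiceV Pro’s actual API: while the live service is reachable, responses are captured; afterwards, the same requests are answered from the recording, even when the “real” API is gone.

```python
# Minimal record-and-replay sketch (hypothetical, not any vendor's API).
class RecordingVirtualService:
    def __init__(self, live_fetch=None):
        self.live_fetch = live_fetch  # callable that hits the real API, or None
        self._recordings = {}         # (method, path) -> recorded response body

    def request(self, method, path):
        key = (method.upper(), path)
        if key in self._recordings:
            return self._recordings[key]       # replay a recorded response
        if self.live_fetch is None:
            raise LookupError(f"no recording for {method} {path}")
        body = self.live_fetch(method, path)   # record mode: pass through and save
        self._recordings[key] = body
        return body

# Record while the "real" API is available...
svc = RecordingVirtualService(live_fetch=lambda m, p: f'{{"echo": "{p}"}}')
first = svc.request("GET", "/users/42")

# ...then replay even after the live connection is gone.
svc.live_fetch = None
replayed = svc.request("GET", "/users/42")
assert replayed == first
```

The same pattern scales to recording several APIs with different parameters: each unique request becomes a key in the recording, replayable on demand.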
Importing an API Definition
This method is a great fit for teams that use definition-driven development to enhance their API lifecycle with schema contracts like OAS/Swagger. You can import your API definition, and the virtualization solution will stand it up complete with your example requests and responses.
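To see why a definition makes this easy, here is a sketch of pulling an example response straight out of an abbreviated OpenAPI 3 document. The path and example payload are hypothetical; a real tool would walk the whole spec, but the mechanics are the same.

```python
import json

# An abbreviated, hypothetical OpenAPI 3 definition with an example response.
OAS_SNIPPET = json.loads("""
{
  "paths": {
    "/users/1": {
      "get": {
        "responses": {
          "200": {
            "content": {
              "application/json": {
                "example": {"id": 1, "email": "[email protected]"}
              }
            }
          }
        }
      }
    }
  }
}
""")

def stub_response(spec, method, path):
    """Return the definition's own example payload for a given operation."""
    operation = spec["paths"][path][method]
    return operation["responses"]["200"]["content"]["application/json"]["example"]

assert stub_response(OAS_SNIPPET, "get", "/users/1")["id"] == 1
```

Because the definition already carries schemas and examples, the virtual service can answer with realistic payloads without anyone hand-writing stub data.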
Starting from Scratch
You can also generally start building your API in a virtualization tool. RESTful architecture actually makes it pretty simple to start building GET, PUT, DELETE, and POST endpoints and adding parameters and queries on the fly.
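For a feel of what “from scratch” means, here is a minimal virtual GET endpoint built with nothing but the Python standard library. The canned user data and port are illustrative assumptions; a virtualization tool gives you the same thing through a UI, plus the other verbs.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical canned data backing the virtual service.
CANNED_USERS = {"1": {"id": "1", "email": "[email protected]"}}

class VirtualAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /users/1 -> canned JSON, as the real service would return
        user_id = self.path.rstrip("/").split("/")[-1]
        user = CANNED_USERS.get(user_id)
        status, body = (200, user) if user else (404, {"error": "not found"})
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the console quiet

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VirtualAPIHandler).serve_forever()
```

PUT, DELETE, and POST handlers follow the same shape (`do_PUT`, `do_DELETE`, `do_POST`), which is why REST’s uniform interface makes hand-building endpoints so quick.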
One of the great things about service virtualization is that you’ll be able to actually breathe life into the asset with realistic data and conditions. You can import data via a file or database to build out virtual APIs that have really dynamic data behind them. That means you’ll be able to do more edge-case testing and not just have a few different queries tested — test them all!
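A sketch of the data-injection idea: load rows from a file (here, an inline CSV with made-up users) and let the virtual API answer per-record queries from it, so every row — not just one hard-coded example — is testable.

```python
import csv
import io

# Hypothetical test data; in practice this would come from a file or database.
TEST_DATA = """id,email,plan
1,[email protected],free
2,[email protected],pro
"""

def load_users(csv_text):
    """Index the injected rows by id for quick lookup."""
    return {row["id"]: row for row in csv.DictReader(io.StringIO(csv_text))}

def virtual_get_user(users, user_id):
    """Mimics GET /users/{id}: a found row, or a 404-style payload."""
    return users.get(user_id, {"error": "not found"})

users = load_users(TEST_DATA)
assert virtual_get_user(users, "2")["plan"] == "pro"
assert "error" in virtual_get_user(users, "999")
```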
Once you have all your data injected into your API, you can take the realism to the next level. You’ll actually be able to edit and configure network options — like server capacity or latency. This is huge for both performance testing and functional testing. If you are depending on chained API calls to execute something, you’d better make sure they fire in order without a delay.
Now that we have our virtualized API ready to go, we can deploy it. Of course, like most other mocking and stubbing solutions, you can deploy it locally to your own machine in just a few seconds. But where virtualization really shines is its ability to be shared throughout the organization and the delivery process. You can deploy the virtualized APIs to other on-premises servers or even to a cloud provider like AWS or Azure.
That means that you’ll be able to reuse the assets with several teams and create a reliable catalog of virtualized APIs for your organization. Flexibly deploying the virtualized API also allows you to deploy the APIs on Test or Staging servers throughout your CI/CD process. Yes, that means no more enormous Twilio bills, just from your engineering team doing due diligence in the testing phase.
With service virtualization, you’ll actually be able to curb costs while also getting more testing coverage around your application. For agile teams that have automated most of their testing, this can mean thousands of dollars a year in savings and an increase in customer satisfaction and revenue.
Feature image via Pixabay.