
A Step-by-Step Guide to Modernizing IBM i Applications

Moving IBM i workloads to the cloud with no downtime is challenging, but not impossible. Here is one method for doing it.
Apr 28th, 2022 10:00am by Matthew Romero

Matthew Romero
Matthew Romero is the technical product evangelist at Skytap, a cloud service to run IBM Power and x86 workloads natively in the public cloud. Matthew has extensive expertise supporting and creating technical content for cloud technologies, Microsoft Azure in particular. He spent nine years at 3Sharp and Indigo Slate managing corporate IT services and building technical demos, and before that, he spent four years at Microsoft as a program and lab manager in the server and tools business unit.

Many applications running on IBM i systems (the platform formerly known as iSeries and, before that, AS/400) are business-critical systems that have been operating for decades, such as airline flight reservation systems.

Not only are these applications highly complex (and have typically been customized heavily over time), but they often cannot be shut down, even briefly.

When organizations decide to move them to the cloud — whether to modernize them or for improved security, disaster recovery, scalability, flexibility, etc. — the initial migration often must be accomplished without any downtime.

Many organizations are understandably reluctant to even attempt to migrate these applications for this very reason, but sometimes circumstances beyond IT’s control — like an expensive hardware refresh or a service provider discontinuing support for the Power platform — force their hand. 

Moving these types of workloads to the cloud with no downtime is challenging but not impossible.

Here is one method for doing it, based on a recent customer deployment that my team and I worked on.

The goal was a live migration of on-premises IBM i workloads into Azure with zero downtime. This was accomplished using the Azure Data Box physical storage device, backup and restore technology from Commvault, and the Mimix high availability (HA) and disaster recovery (DR) tools for IBM i. Here’s the role each of these tools served in the overall migration.

Bulk Upload of Large Files

Customers with many terabytes of data need a way to move that data to the cloud efficiently.

In our recent engagement, my team used an Azure Data Box physical storage device for the initial bulk data import of large IBM i logical partitions (an LPAR, in IBM Power Systems terminology, is analogous to a virtual machine).

With datasets larger than 40TB, this approach helps avoid overloading the user’s network connection. Microsoft ships the box to the user, who loads it with their data and sends it back to be uploaded to the relevant Azure data center.
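
To see why the appliance route wins at this scale, here is a back-of-the-envelope Python calculation. The link speeds and the 70% effective-throughput factor are illustrative assumptions, not measurements from the engagement:

    # Back-of-the-envelope: how long does pushing 40 TB over a network take?
    # Link speeds and the 70% effective-throughput factor are illustrative.
    dataset_bits = 40 * 1e12 * 8  # 40 TB expressed in bits

    for gbps in (0.1, 1, 10):
        effective_bps = gbps * 1e9 * 0.7  # assume ~70% effective throughput
        days = dataset_bits / effective_bps / 86400
        print(f"{gbps:>4} Gbps link: ~{days:.1f} days of sustained transfer")

    # Approximate output:
    #  0.1 Gbps link: ~52.9 days of sustained transfer
    #     1 Gbps link: ~5.3 days of sustained transfer
    #    10 Gbps link: ~0.5 days of sustained transfer

Even a fully saturated 1 Gbps link would be tied up for days on end, which is exactly the kind of overload the Data Box avoids.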

Streamline the Restore Process

After the bulk upload of files is completed, the team will need to migrate smaller LPARs and restore any files that inevitably get missed during the initial upload.

Commvault backup and DR technology works well for IBM i applications. My team used Commvault to create a bootable IBM i ISO image of the source LPARs being backed up to the Azure Data Box.

This approach streamlines the process of restoring the data into Azure and deduplicates and compresses the datasets, which in turn reduces the overall transfer times. Commvault can also be used with a direct connection to migrate smaller LPARs into Azure; this is the fallback option for anything not included in the initial bulk upload.
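
The impact of deduplication and compression is easy to see with the same kind of rough arithmetic. The ratios below are hypothetical; actual savings depend entirely on the dataset:

    # Illustrative effect of dedup + compression on what actually moves.
    # Both ratios are hypothetical; real savings depend on the dataset.
    raw_tb = 40
    after_dedup = raw_tb * 0.6          # assume ~40% of blocks are duplicates
    after_compress = after_dedup * 0.5  # assume ~2:1 compression on the rest

    print(f"Written to the Data Box: ~{after_compress:.0f} TB of {raw_tb} TB raw")
    # -> Written to the Data Box: ~12 TB of 40 TB raw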

Final Syncing and Disaster Recovery

To switch from the old on-premises workloads to the new ones in the cloud without any downtime, the team will need a system to sync any final changes between the two in real time.

It’s good practice to set up disaster recovery for the new workloads before they go live to guard against any unforeseen issues. My team used Mimix for both functions in our recent deployment.

Once the data was fully hydrated into the cloud, Mimix was used to sync any final changes before the cutover. We also set up a Mimix replication relationship between the preproduction workload in the cloud and a disaster recovery workload located in a different Azure region. The final cutover was not performed until full cloud-based DR was in place.
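
Conceptually, the cutover gate looks like the sketch below. Every helper is a hypothetical stand-in (Mimix is driven through its own tooling, not a Python API); the point is the ordering: DR replication must be healthy and the final sync lag must reach zero before production moves.

    import time

    # Conceptual sketch of the cutover gate. Every helper is a hypothetical
    # stand-in, not a real Mimix API; what matters is the ordering: DR
    # replication healthy first, final-sync lag at zero second, cutover last.

    def dr_replication_healthy() -> bool:
        return True  # stand-in: would query the preprod -> DR relationship

    def replication_lag_seconds() -> int:
        return 0  # stand-in: would query source -> preprod sync lag

    def redirect_traffic_to_cloud() -> None:
        print("Cutover executed: preprod LPARs are now production.")

    def cut_over() -> None:
        while not dr_replication_healthy():   # DR must be in place first
            time.sleep(60)
        while replication_lag_seconds() > 0:  # drain the final sync to zero
            time.sleep(10)
        redirect_traffic_to_cloud()           # only now does production move

    cut_over()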

Here is a detailed list of steps for a migration process using these three pieces of technology.

Lay the Groundwork

1. Validate the existing network and establish connectivity between the source LPARs on premises and the landing zone within the cloud. In our recent deployment, we used Global Reach-enabled ExpressRoute with transitive routing. Using FTP over a fast private network connection is also a viable option. (A reachability probe like the one sketched after this list helps validate the path.)

2. Set up the landing zone in the cloud. In cases like this one where IBM i applications are being lifted and shifted to the cloud without refactoring, some solution that recreates the IBM POWER environment in the cloud must be used. The exact options and details vary depending on the cloud provider of choice. 

3. Set up Commvault, including the Commvault Media Agent server (Linux) in the on-premises iSeries environment, plus the Commvault CommServe server (Windows) and the Commvault Media Agent server next to the landing zone in the cloud.

4. Configure and order the Azure Data Box in the Azure Portal. Note that the Azure Blob storage account must be configured in advance; it cannot be changed once the Data Box has shipped. Once the order is complete, Microsoft ships the box to the physical address of the user’s data center.
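
As mentioned in step 1, a quick reachability probe from the on-premises side toward the landing zone catches routing and firewall problems early. Here is a minimal sketch using only the Python standard library; the hosts and ports are placeholders for whatever the landing zone actually exposes:

    import socket
    import time

    # Minimal reachability/latency probe from the on-prem side toward the
    # cloud landing zone. Hosts and ports below are placeholders.
    TARGETS = [
        ("10.1.0.10", 22),    # hypothetical: landing-zone jump host (SSH)
        ("10.1.0.20", 8400),  # hypothetical: Commvault media agent port
    ]

    for host, port in TARGETS:
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=5):
                ms = (time.monotonic() - start) * 1000
                print(f"{host}:{port} reachable, ~{ms:.0f} ms to connect")
        except OSError as err:
            print(f"{host}:{port} NOT reachable: {err}")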

Prepare the Source Data for Migration

5. Once the Data Box arrives, connect it to the user’s network and enable NFS. The media agent server will talk to the IBM i system and mount the NFS share exported by the Data Box. (The mount check sketched after this list is a useful safeguard before the backup starts.)

6. Take a flash copy of the source production LPARs. This allows the flash copies to be put into a fully restricted state, which avoids undue load on production and mitigates the risk of locked files being missed during the backup.

7. Install the IBM i Commvault agent on the flash copies. This allows the flash copy LPARs to talk directly to the Commvault IBM i Media Agent servers.

8. Initiate the Commvault backup process, which will create a deduplicated and compressed backup to the Data Box repository. This reduces the amount of data that needs to be stored and uploaded. 
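
Before kicking off the backup in step 8, it is worth confirming that the NFS share from step 5 is actually mounted and writable on the media agent server. A minimal sketch, assuming a hypothetical mount path:

    import os
    import shutil
    import tempfile

    # Sanity check before the backup: is the Data Box NFS share mounted
    # and writable, and how much space is free? The path is a placeholder.
    MOUNT_POINT = "/mnt/databox"  # hypothetical mount path

    if not os.path.ismount(MOUNT_POINT):
        raise SystemExit(f"{MOUNT_POINT} is not a mounted filesystem")

    usage = shutil.disk_usage(MOUNT_POINT)
    print(f"Free: {usage.free / 1e12:.1f} TB of {usage.total / 1e12:.1f} TB")

    # Confirm the share is writable by round-tripping a small file.
    with tempfile.NamedTemporaryFile(dir=MOUNT_POINT, suffix=".probe") as probe:
        probe.write(b"databox write probe")
        probe.flush()
    print("Share is writable; safe to start the Commvault backup.")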

Move into the Cloud

9. Ship the Azure Data Box back to Microsoft. When it arrives, its contents are automatically loaded into the Azure Blob storage container created in step #4, which places the IBM i backup data in Blob storage. The data has reached the cloud! (The sketch after this list shows one way to verify the import.)

10. Restore the IBM i backup data into the cloud landing zone using Commvault.

11. Configure Mimix replication between the on-premises (source) production LPARs and the preproduction target LPARs in the cloud.
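
One way to confirm that the Data Box import from step 9 landed as expected, before starting the restore in step 10, is to tally the blobs in the container with the azure-storage-blob Python SDK. The connection string and container name below are placeholders:

    # pip install azure-storage-blob
    from azure.storage.blob import BlobServiceClient

    # Verify the Data Box import: count the blobs and their total size in
    # the container from step 4. Both values below are placeholders.
    CONN_STR = "<storage-account-connection-string>"
    CONTAINER = "ibmi-backups"  # hypothetical container name

    service = BlobServiceClient.from_connection_string(CONN_STR)
    container = service.get_container_client(CONTAINER)

    count, total_bytes = 0, 0
    for blob in container.list_blobs():
        count += 1
        total_bytes += blob.size

    print(f"{count} blobs, {total_bytes / 1e12:.2f} TB in '{CONTAINER}'")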

Establish DR in the Cloud and Execute Cutover

12. If cloud-based DR needs to be up and running for the cloud workloads before cutting over, then clone the running preproduction IBM i instances and copy them to the target DR site. In Azure, this can be done over the Azure backbone. Then set up a new DR sync relationship using Mimix between the running preproduction LPARs and the new DR LPARs.

13. Once the cloud-based DR and the Mimix relationship between the source and preproduction LPARs are in a steady state, cut over from the on-premises source LPARs to the preprod IBM i instances in the cloud. These are the new production workloads.

14. Celebrate a successful transition to the cloud!

Not only does this method accomplish the migration with zero downtime and DR redundancy in place, but it also allows movement of very high volumes of data to the cloud in an efficient way. Many customers with older IBM Power hardware in their data centers also get an immediate performance boost when moving to the cloud.

Many of our customers find that this process is smoother than they expected it to be, and some have even said they wish they had done it sooner. While this type of migration is a complex task that shouldn’t be taken lightly, recent advances both in the cloud providers and from third-party vendors make it less scary than many IT teams and CIOs assume.   

TNS owner Insight Partners is an investor in: Skytap.