How to Migrate Your Mainframe to the Cloud
Mainframes are big, powerful and reliable computers that enterprises use for large-scale operations, such as handling a credit card company's transactions or managing an insurance company's database. Their biggest advantage is that they can handle a huge number of users and several input/output devices at the same time. A mainframe acts as the central data repository or hub, connected to other terminals and workstations. Mainframes were hugely popular for decades, especially in the 1970s and 1980s, but as technology advanced they began to lose ground to more sophisticated competitors.
Are you still stuck on a legacy mainframe system? It could stifle innovation and keep you out of the competition in the long run. Perhaps you think upgrading your mainframe systems would bring you back into the loop; after all, you have made significant investments in those expensive machines. But does an upgrade really get you there? Does it provide the agility, reliability and scalability that today's requirements demand?
Additionally, the people who are technically adept at running mainframes are approaching retirement, and it will be difficult to find replacements because most new talent gravitates towards newer, more popular technologies. Eventually you may not be able to find people who know how the systems are used, where the source code of the applications lives, or even how those applications are run. At that point, even small changes to the mainframe can lead to problems that IT teams find hard to troubleshoot or fix.
The current situation for many large-scale companies:
For decades, many large enterprises have been running their software applications on mainframes, some of them acquired through mergers and acquisitions, others simply inherited. Maintaining these antiquated applications leads to cost issues and other challenges. Failing to modernize can turn the mainframe into a liability over time: it ends up costing more while leaving you unable to take advantage of newer technologies. So not migrating can itself be a risky decision.
Cloud computing has attracted a lot of attention because it can address the challenges mainframes pose.
By migrating the mainframe to the cloud, you move the workload to an open systems environment. The advantage is that the existing legacy business logic does not have to change. You can effectively leverage critical data, enjoy a flexible, transparent and modern environment, and save money on mainframe contracts.
You have three main options to weigh when choosing a migration strategy: rewriting, upgrading or open frame rehosting.
Rewriting - You move the mainframe applications to the open cloud environment through a complete code conversion. This can look good for a while, and many organizations have gone ahead and rewritten a business application this way. It requires correctly interpreting the existing business logic and then reimplementing it, which in practice poses many challenges you may be unprepared for. It is not a lasting solution.
Upgrading - In this option, you upgrade your hardware and move to a higher-capacity mainframe machine. This improves capacity and performance, but it also increases maintenance costs, not to mention license renewals and payments. And the old problems of inflexibility will still crop up.
Open Frame Rehosting - With open frame rehosting, you get the benefits of rewriting without the issues. It costs substantially less money and time, with very little risk of business logic errors. The open frame approach comes with a set of sophisticated tools that migrate data and recompile mainframe applications. The applications work as they always have, with no change in user functionality and no change in business logic. The data is fully modernized, and the application remains responsive to user demands, evolving with their changing requirements.
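To make the data migration part of rehosting concrete, here is a minimal sketch (not any particular vendor's toolchain) of converting fixed-width EBCDIC records into a UTF-8 CSV file. The record layout, lengths and file names are assumptions made up for illustration.

```python
import csv

# Hypothetical fixed-width record layout, in the spirit of a COBOL copybook:
# (field name, offset, length). Offsets and lengths are illustrative only.
LAYOUT = [
    ("account_id", 0, 10),
    ("customer_name", 10, 30),
    ("balance_cents", 40, 12),
]
RECORD_LENGTH = 52  # bytes per fixed-width record


def convert(ebcdic_path: str, csv_path: str) -> None:
    """Decode EBCDIC (code page 037) fixed-width records into a UTF-8 CSV file."""
    with open(ebcdic_path, "rb") as src, open(csv_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow([name for name, _, _ in LAYOUT])
        while True:
            record = src.read(RECORD_LENGTH)
            if len(record) < RECORD_LENGTH:
                break  # end of file (or a truncated trailing record)
            text = record.decode("cp037")  # EBCDIC -> Unicode
            writer.writerow([text[off:off + length].strip() for _, off, length in LAYOUT])


if __name__ == "__main__":
    convert("accounts.ebcdic", "accounts.csv")  # placeholder file names
```

Real rehosting toolchains typically also handle packed decimal fields, variable-length records and much more, but the principle of layout-driven conversion is the same.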
Because the amount of data involved is so large, you need a data migration strategy that is well planned and smart, and the chosen method should not disrupt the organization's workflow. Here are three main strategies you can consider to ensure a successful outcome.
The Big Bang Approach - As the name suggests, in the big bang approach the migration is done in one go, perhaps over a weekend. When the new system is ready, the old one goes completely offline; the old and new systems never run simultaneously. There is an interval between shutting down the legacy system and bringing up the new cloud system, so no production or operations take place during that time.
Pros: You don't have to synchronize the old and new systems later.
Cons: Big enterprises will not benefit from this approach, especially if they have to be online 24/7. And since the window is so short, you will run into trouble if glitches occur that you cannot solve quickly.
The Parallel Approach - The new system is readied while the old system is kept running, and updates are sent to both systems until the migration is complete (see the sketch after the pros and cons below). Once everything is migrated and validated, the legacy system can be switched off for good.
Pros: If and when any glitch occurs, you can switch back to the legacy system.
Cons: Few; it is the best of the three approaches, though running two systems in parallel does add cost and effort while both are live.
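One simple way to picture the parallel approach is a dual-write layer that applies every update to both systems and spot-checks that they agree before cutover. The interfaces below are hypothetical, purely to illustrate the idea.

```python
from typing import Protocol


class Backend(Protocol):
    """Hypothetical interface over either the legacy system or the new cloud system."""

    def apply(self, update: dict) -> None: ...
    def read(self, key: str) -> dict: ...


class DualWriter:
    """Send every update to both systems and compare reads during the parallel run."""

    def __init__(self, legacy: Backend, cloud: Backend) -> None:
        self.legacy = legacy
        self.cloud = cloud

    def apply(self, update: dict) -> None:
        # The legacy system stays the system of record until cutover,
        # so it is written first; the cloud copy is kept in step with it.
        self.legacy.apply(update)
        self.cloud.apply(update)

    def verify(self, key: str) -> bool:
        # Spot-check that both systems agree before the legacy side is retired.
        return self.legacy.read(key) == self.cloud.read(key)
```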
The Phased Approach - The migration is done in small increments, at the enterprise's convenience, on a per-module, per-volume or per-subsystem basis. The planning must be immaculate, because an error can cost you dearly. Dependencies between modules have to be mapped accurately so that modules do not get stranded in the legacy or target system (a planning sketch follows the pros and cons below).
Pros: It is less risky than the big bang approach, and bugs can be worked out more easily because the transfer happens in increments.
Cons: Modules can end up orphaned if their dependencies are not properly mapped.
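Here is a minimal sketch of how an accurate dependency map can drive phased planning: modules are grouped into migration waves so that nothing moves before the modules it depends on. The module names and dependencies are made up for illustration.

```python
from graphlib import TopologicalSorter

# Hypothetical module dependency map: module -> the modules it depends on.
DEPENDENCIES = {
    "customer_db": set(),
    "billing": {"customer_db"},
    "web_portal": {"billing"},
    "reporting": {"billing", "customer_db"},
}


def migration_waves(deps: dict) -> list:
    """Group modules into waves; each wave depends only on earlier waves."""
    sorter = TopologicalSorter(deps)
    sorter.prepare()
    waves = []
    while sorter.is_active():
        ready = sorted(sorter.get_ready())
        waves.append(ready)
        sorter.done(*ready)
    return waves


if __name__ == "__main__":
    for number, wave in enumerate(migration_waves(DEPENDENCIES), start=1):
        print(f"Wave {number}: {', '.join(wave)}")
```

Each wave can then be scheduled, migrated and validated on its own before the next one starts.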
Cloud migration is no longer just an IT strategy; it also represents the core of an organization's business strategy. Mainframe applications affect critical business processes across the enterprise. For that reason, it is particularly important that the core migration project team not be limited to IT operations professionals and application developers. It should include executives and business stakeholders who understand the business processes that the mainframe applications support, and architects and application support staff who have the "tribal knowledge" about how the current environment works.
One of the crucial things to get right during planning is defining a foundational architecture for the target platform in the cloud. To do this successfully, you have to match the platform's features and tools to the needs of the enterprise, covering performance, user account management, scalability, high availability, security, compliance, networking, operations, archiving, process automation, access control and disaster recovery.
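A simple way to make that matching explicit is to record the enterprise's requirements and a candidate platform's stated capabilities side by side and list the gaps. The entries below are placeholders, not a real assessment.

```python
# Requirement levels the enterprise needs vs. what a candidate platform offers.
# All entries are placeholders, not a real assessment.
REQUIRED = {
    "high_availability": "multi-region",
    "compliance": "PCI-DSS",
    "disaster_recovery_rto_minutes": 30,
    "process_automation": True,
}

CANDIDATE_PLATFORM = {
    "high_availability": "multi-region",
    "compliance": "PCI-DSS",
    "disaster_recovery_rto_minutes": 60,
    "process_automation": True,
}


def gaps(required: dict, offered: dict) -> list:
    """Return the requirements the candidate platform does not meet as stated."""
    return [
        f"{key}: need {value!r}, platform offers {offered.get(key)!r}"
        for key, value in required.items()
        if offered.get(key) != value
    ]


if __name__ == "__main__":
    for gap in gaps(REQUIRED, CANDIDATE_PLATFORM):
        print("GAP -", gap)
```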
While planning the migration process, you have to ensure that the tools are right for translating source code and replicating parts of the mainframe environment onto the cloud. You might have to invest time and effort to evaluate and select the best tools for this.
Licensing options are another important factor, and they must be considered early in the planning stage, before the engineering and code conversion work begins. Licenses can significantly affect the cost savings, so negotiate with vendors up front and make sure your application and tool vendors have adapted their licensing schemes to work with cloud platforms.
It is also important to have a detailed roadmap before you move on to execution. That includes capturing all the dependencies the mainframe migration has across the organization; being aware of them helps prioritize what needs to be done and mitigates implementation risk.
Most large-scale IT organizations follow a broadly similar execution strategy. Here's how they go about it.
Before starting the actual migration, the IT team prepares the foundational infrastructure on the target platform. This involves setting up tools to simplify the migration, putting network and access controls in place, leveraging infrastructure as code, and standing up internal processes to provide operational support for security, logging and networking, among other key functions.
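As a small illustration of that preparation step, the sketch below checks that key endpoints on the target platform are reachable on the expected ports before migration work begins. The hostnames and ports are placeholders, and a real readiness check would also cover access controls, logging and security tooling.

```python
import socket

# Placeholder endpoints on the target platform.
ENDPOINTS = [
    ("db.target.example.com", 5432),
    ("app.target.example.com", 443),
    ("logs.target.example.com", 514),
]


def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for host, port in ENDPOINTS:
        status = "OK" if reachable(host, port) else "UNREACHABLE"
        print(f"{host}:{port} -> {status}")
```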
IT teams can use automated tools to port code wherever possible. The cloud platform will also offer native tools to help with batch process configuration, performance elasticity, backup and recovery, transaction processing and so on.
Needless to say, testing must be done regularly and at multiple levels. You can automate testing of application behaviour and performance to ensure all aspects are covered. The IT team can build test automation for data assets, components, application functionality, integrations and data obfuscation so that worst-case scenarios are covered, including security issues, network failures and system malfunctions.
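As one concrete example of test automation for data assets, the sketch below reconciles a legacy extract against the migrated copy by comparing keys and per-record checksums. The file names, format and key column are assumptions for illustration.

```python
import csv
import hashlib


def fingerprint(csv_path: str, key_field: str) -> dict:
    """Map each record's key to a checksum of the whole row."""
    fingerprints = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            digest = hashlib.sha256(
                "|".join(f"{k}={v}" for k, v in sorted(row.items())).encode()
            ).hexdigest()
            fingerprints[row[key_field]] = digest
    return fingerprints


def reconcile(legacy_path: str, migrated_path: str, key_field: str) -> list:
    """Report missing, unexpected and altered records after the migration."""
    legacy = fingerprint(legacy_path, key_field)
    migrated = fingerprint(migrated_path, key_field)
    issues = [f"missing in target: {key}" for key in legacy.keys() - migrated.keys()]
    issues += [f"unexpected in target: {key}" for key in migrated.keys() - legacy.keys()]
    issues += [
        f"mismatch for {key}"
        for key in legacy.keys() & migrated.keys()
        if legacy[key] != migrated[key]
    ]
    return issues


if __name__ == "__main__":
    # File names and the key column are placeholders for illustration.
    for issue in reconcile("legacy_accounts.csv", "migrated_accounts.csv", "account_id"):
        print(issue)
```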
End users will encounter new application usage patterns. At this stage of the migration, arrangements must be made to train them so they can make full use of the cloud platform and tools.
We explained the different rollout approaches earlier: big bang, parallel and phased. Now you can roll out the migrated application; if you want the mainframe decommissioned soon, the big bang approach is the way to go.
At the end of the day, you get an environment where the migrated applications run on a modern, virtualized platform with the same or higher levels of reliability, capability and performance as the legacy mainframe applications.
The target platforms are much more standardized and hence offer better migration tools for a seamless move. The users just need to apply the right strategy and best practices to make it work.
Third-party migration providers are often the best choice because they already have the tools, documentation and in-depth knowledge to settle you into the new environment, and most enterprises do not have regeneration tools in-house. Third-party providers also bring a kind of transition-disruption "insurance": they know how to deal with crises if and when they occur.
Migrating from the mainframe to the cloud keeps you competitive. Here are the advantages of migration at a glance.
Scalability - Keeps up with your enterprise's periodic requirements; you can easily allocate resources up or down as needed.
Cost savings - Keeps both capital and operating expenses in check, helping you stay competitive.
Automated business applications - Keeps cloud applications updated in the backend.
Operational flexibility - Greater flexibility when testing and deploying applications.
Highly secure data storage - Strong security at a fraction of the cost companies incur on-premises.
Extensive mobility - Employees can access applications on the move.
While your decision to migrate is commendable, there are some risks that you should be aware of. Here are the most common ones:
Each application's dependencies need to be mapped in detail. These applications have lived on the mainframe for a long time, and their workloads have grown complex over the years. You will have to use Application Discovery Software (ADS), both agent-based and agentless, and automate the process; such tools are commonly found as part of NPM and APM suites.
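Whatever discovery tool you use, its output usually boils down to "who talks to whom". The sketch below turns a list of observed connections (made-up data standing in for a discovery export) into a per-application dependency map.

```python
from collections import defaultdict

# Made-up connection records standing in for a discovery tool's export:
# (source application, target application, port).
OBSERVED_CONNECTIONS = [
    ("web_portal", "billing", 8443),
    ("billing", "customer_db", 1433),
    ("reporting", "customer_db", 1433),
    ("reporting", "billing", 8443),
]


def dependency_map(connections):
    """Build upstream (depends-on) and downstream (used-by) maps per application."""
    depends_on = defaultdict(set)
    used_by = defaultdict(set)
    for source, target, _port in connections:
        depends_on[source].add(target)
        used_by[target].add(source)
    return depends_on, used_by


if __name__ == "__main__":
    depends_on, used_by = dependency_map(OBSERVED_CONNECTIONS)
    for app in sorted(set(depends_on) | set(used_by)):
        print(f"{app}: depends on {sorted(depends_on[app])}, used by {sorted(used_by[app])}")
```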
Migrated applications have their own problems, and sometimes you need to refactor or re-architect at the code level. How much depends on the developers' skill, the kind of changes needed and the language. Developer skill can itself be a problem: developers whose expertise lies only in legacy mainframes may lack experience with newer refactoring and coding approaches. To mitigate this challenge, businesses can turn to third-party mainframe migration providers with a solid track record, experienced in major mainframe migration projects and equipped with proven tools and processes.
Enterprises and IT leaders must be able to convince all stakeholders of the benefits of migration; without stakeholder buy-in, the project becomes difficult to carry through because of its impact on resources, cost structures and operations.
There are other risks you should be aware of during the migration process:
⁃ Making poor migration process choices
⁃ Poor project phasing and lack of proper tools for potential phase reversal
⁃ Difficulty in understanding server modification history due to attrition-based mainframe knowledge drain
⁃ Unexpected challenges that call for end-state platform expertise (which you may not possess)
If you are still thinking about continuing with the mainframe, then consider these questions:
⁃ What does it cost to maintain the older mainframe?
⁃ Is there any software in the mainframe that was authored by companies that are no longer in existence?
⁃ Can you meet the needs of mobile users?
⁃ Are the mainframe systems documented well?
⁃ Will there be personnel with sufficient knowledge and skills to manage the mainframe a few years from now?
Mainframe systems are unique, with different data stores, versions, subsystems and specialized languages. Migrating to the cloud can significantly reduce operational costs and capital expenditure, and it increases business flexibility and responsiveness to changing requirements. But you need to plan well, understand the risks and have a strong partner for the migration to go smoothly.
We can help you seamlessly migrate your mainframe to the cloud.