ABIS Infor - 2012-10

Migration of a mainframe environment

Gie Indesteege (ABIS) - 30 August 2012

Abstract

You can imagine plenty of reasons why people or companies move. ABIS decided to change mainframes, among other things for economic reasons. To be clear: we are talking about a migration from one mainframe to another! But which aspects are involved in such a migration of a mainframe environment?

What is a mainframe environment?

Admittedly, the term mainframe sounds old-fashioned; the expression 'enterprise server' is used more often nowadays. But that term emphasizes the fact that the multitude of individual servers - application servers, database servers, mail servers, security servers, web servers - is now being consolidated on that one single 'enterprise server'.

This implies that, when speaking about a mainframe environment, we think not (only) of the hardware, but also of the network and communication infrastructure, and especially of the software: the operating system, the subsystems and the applications. And finally, do not forget the (business) data.

How do we tackle the migration?

The process of choosing another mainframe, based on our requirements, is not covered in this article. Rest assured, though, that nothing was left to chance there.

To start with, a good inventory of the current mainframe situation is required:

  • which hardware configuration: how much CPU capacity (MIPS), real storage (MB), I/O channels and disk storage are we using now?
  • which version of z/OS?
  • which (versions of) subsystems - DB2, CICS, IMS, ...? which configurations and customisations were implemented?
  • which (versions of) tools and utilities like QMF, SAS, print facilities, RDz, ... and their configurations?
  • how is the development environment for TSO/ISPF and RDz set up?
  • which applications did we create in COBOL, PL/1, REXX, CLIST, ... ? what about the batch processing (JCL and scheduling)?
  • which (and how much) data did we gather for our training business, in databases, SDS and PDS, VSAM, HFS, or other file systems?
  • what about the integration of the mainframe with the distributed applications via DDF, FTP, print, mail, 3270, ...? and hence the set up of the network (internet, intranet, extranet)?
  • and last but not least, what about the security in RACF and in the network environment?
Enough material for half an encyclopedia, because, yes, documentation is important.

Next we look at the (new) features and differences in the new environment:

  • what are the capabilities of DB2 V10? and of z/OS V1.11?
  • how can we improve the security setup in RACF?
  • can we continue using the current naming conventions?
  • what do we have to keep or change in existing implementations, for instance the TSO logon procedure? (a sketch follows this list)
  • which constraints require a different solution, printing for instance?
  • which customisations and configurations have to be done? or can we copy/reuse the ones from the current environment?
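
To give an idea of what such a logon procedure involves, here is a minimal, purely illustrative sketch; all library names (STEPLIB, CLIST and ISPF libraries) are invented and will differ per installation:

//ABISLOGP PROC
//* Hypothetical TSO/ISPF logon procedure - dataset names are examples
//TSO      EXEC PGM=IKJEFT01,DYNAMNBR=50
//STEPLIB  DD DISP=SHR,DSN=DSNA10.SDSNLOAD     DB2 libraries (example)
//SYSPROC  DD DISP=SHR,DSN=ABIS.CLIST          installation CLISTs
//         DD DISP=SHR,DSN=ISP.SISPCLIB
//ISPPLIB  DD DISP=SHR,DSN=ISP.SISPPENU        ISPF panels
//ISPMLIB  DD DISP=SHR,DSN=ISP.SISPMENU        ISPF messages
//ISPSLIB  DD DISP=SHR,DSN=ISP.SISPSENU        ISPF skeletons
//ISPTLIB  DD DISP=SHR,DSN=ISP.SISPTENU        ISPF tables
//SYSHELP  DD DISP=SHR,DSN=SYS1.HELP           TSO help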

All this is a tougher nut to crack, because the new environment is still unknown.

And then we come to planning and realisation.

The ABIS method

Because the new mainframe environment is already installed and has an active z/OS environment at its disposal, as well as pre-configured subsystems like CICS and DB2, we can focus on the transfer of data and applications. However, the communication setup and the correct implementation of the security definitions are major focus points too.

Data transfer:

For the migration of the physical MVS datasets we use ADRDSSU to create dump datasets, logically grouped. Next, these dump datasets are packed with a TSO XMIT for subsequent file transfer. The FTP transfer goes via an intermediate PC, because of (network) security constraints. On the other mainframe, the files are converted back with a TSO RECEIVE, and finally restored to disk with ADRDSSU.
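
To make this concrete, here is a minimal, hypothetical sketch of the dump-and-pack job on the sending side; the dataset names, the node NODEB and the userid ABISUSR are invented for the example:

//* Step 1: dump a logical group of datasets with ADRDSSU
//DUMP     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUTDD    DD DSN=ABIS.MIGR.COURSES.DUMP,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(ABIS.COURSES.**)) -
       OUTDDNAME(OUTDD) TOL(ENQF)
/*
//* Step 2: pack the dump in XMIT format (fixed-block 80) so it
//* survives a binary FTP transfer via the intermediate PC
//XMIT     EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  XMIT NODEB.ABISUSR DSN('ABIS.MIGR.COURSES.DUMP') +
       OUTDSN('ABIS.MIGR.COURSES.XMIT')
/*

On the receiving side the reverse happens: a TSO RECEIVE INDATASET(...) on the transferred file, followed by an ADRDSSU RESTORE.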

And yes, we encounter all kinds of trouble: security restrictions, maximum file sizes, network time-outs, ...

For the transfer of the (logical) DB2 data we use the DB2 UNLOAD/LOAD utilities instead of ADRDSSU, but the remaining part of the procedure is similar. Of course we first have to set up the necessary objects in the DB2 catalog. To this end, we generate the required DDL out of the original catalog by means of the database tooling in Rational Developer for System z (RDz). Running this DDL against the new database environment yields (almost) the correct environment, except for some indexes, triggers and stored procedures.
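
As an illustration, a minimal sketch of such an unload job, assuming the standard DSNUPROC procedure; the subsystem id, database, tablespace and table names are invented:

//* Hypothetical DB2 UNLOAD job - names are examples only
//UNLOAD   EXEC DSNUPROC,SYSTEM=DB2P,UID='MIGR.UNLOAD'
//SYSREC   DD DSN=ABIS.MIGR.COURSES.UNLOAD,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
//SYSPUNCH DD DSN=ABIS.MIGR.COURSES.LOADCTL,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE)
//SYSIN    DD *
  UNLOAD TABLESPACE ABISDB.COURSETS
         PUNCHDDN SYSPUNCH UNLDDN SYSREC
         FROM TABLE ABIS.COURSES
/*

The SYSPUNCH dataset contains the matching LOAD statement, to be run with the LOAD utility on the target system once the generated DDL has been applied there.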

Again we are confronted with security issues, missing objects, and ... differences in DB2 version and in configuration, like for instance the storage groups.

Applications:

Because the ABIS business relies primarily on the central DB2 for z/OS, all COBOL programs are recompiled, relinked and rebound. We provide a clean separation between the test and production environments. The distributed applications - based on Java, C, PHP and Perl - and the communication via DRDA are brought in line as well.
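
A minimal sketch of the rebind step, using the DSN command processor under batch TSO; the subsystem id, load and DBRM libraries, collection, package and plan names are invented:

//* Hypothetical bind job after recompiling - names are examples only
//BIND     EXEC PGM=IKJEFT01
//STEPLIB  DD DISP=SHR,DSN=DSNA10.SDSNLOAD
//DBRMLIB  DD DISP=SHR,DSN=ABIS.DBRMLIB
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DB2P)
  BIND PACKAGE(ABISCOLL) MEMBER(COURSPGM) +
       QUALIFIER(ABIS) ACTION(REPLACE) VALIDATE(BIND)
  BIND PLAN(ABISPLAN) PKLIST(ABISCOLL.*) ACTION(REPLACE)
  END
/*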

The course environment, including samples and demo applications, is adjusted where appropriate.

Network:

For ABIS, the change to a new mainframe also implied the setup of a new network environment, this time managed by ourselves: configuration of firewalls and routers, a change of network provider, setup of a DMZ, updates to the internet definitions, configuration and setup of a gateway for DRDA, ... We got it done, although it sometimes left us in a cold sweat. But that is the subject of another article.

Security:

Migration of the RACF definitions can be done partially by taking an extract of the existing RACF database (userids and groups, dataset profiles). After that, however, you have to consider adjustments and close the remaining 'holes'. And that is a continuous process that has to be monitored.
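
By way of illustration, a minimal sketch of such a recreation/adjustment step as a batch TSO job; the group, userid, logon procedure and profile names are purely fictitious:

//* Hypothetical RACF definitions on the new system - names invented
//RACFDEF  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  ADDGROUP (ABIS) OWNER(SYS1) SUPGROUP(SYS1)
  ADDUSER  (TRAINEE1) DFLTGRP(ABIS) NAME('ABIS TRAINEE') +
           TSO(PROC(ABISLOGP) ACCTNUM(ABIS01))
  ADDSD    'ABIS.COURSES.**' UACC(NONE) OWNER(ABIS)
  PERMIT   'ABIS.COURSES.**' ID(ABIS) ACCESS(UPDATE)
/*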

Customisation

And then some nice little snakes in the grass appear on the way to the new environment. Our new mainframe is located in another time zone, speaks another language (code page), has a different WLM setup, behaves differently regarding SMS, and there are differences in subsystem versions. Some more doctoring is needed.

Hence we take a closer look at configurations like, for instance:

  • DSNZPARM for DB2 and a dedicated subsystem
  • the CSD for CICS, and a dedicated address space, with connections to WebSphere MQ and DB2 (see the sketch after this list)
  • proclibs and procedures for the batch/JCL environment
  • compile parameters for COBOL and PL/1
  • TSO logon and ISPF customisation
  • scheduling of backup procedures and replications
  • SAS configuration and connection to DB2
  • distributed applications: DRDA (DDF), RDz, FTP, remote print, web services
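
As an example of the kind of work involved, a minimal sketch of a DFHCSDUP job that defines the CICS-DB2 and CICS-MQ connections in the CSD; all names (libraries, CSD, group, list, subsystem and queue manager ids) are invented:

//* Hypothetical CSD update with DFHCSDUP - names are examples only
//CSDUP    EXEC PGM=DFHCSDUP
//STEPLIB  DD DISP=SHR,DSN=CICSTS42.CICS.SDFHLOAD
//DFHCSD   DD DISP=SHR,DSN=ABIS.CICS.DFHCSD
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE DB2CONN(ABISDB2) GROUP(ABISGRP) DB2ID(DB2P)
  DEFINE MQCONN(ABISMQ) GROUP(ABISGRP) MQNAME(MQ1P)
  ADD GROUP(ABISGRP) LIST(ABISLIST)
/*

Similar small jobs or ISPF dialogs cover the other items in the list, such as the DSNZPARM assembly, the proclibs and the compile options.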

Conclusion

The adventure of switching from the old, well-known mainframe environment to a new, (still) unfamiliar infrastructure has been a very instructive process. All possible aspects were addressed and largely solved, thanks to the multidisciplinary knowledge of the ABIS instructors. Of course, the support of the 'old' and 'new' mainframe system teams has been indispensable.

But ABIS is once again ready to offer its mainframe (and its mainframe knowledge) to all those who have not yet written off the mainframe.

And for those who are facing the same adventure: ABIS is ready to share its experiences and to offer advice and training where required.