Explain Catalog Management On The Mainframe Information Technology Essay

Batch jobs have a history dating back to 1890, when punched cards were first used by the U.S. Census Bureau after being introduced by Herman Hollerith. The term refers to work made up of many individual tasks that run without end-user interaction; as Ebbers et al. put it, ‘jobs that can run without end user interaction, or can be scheduled to run as resources permit, are called batch jobs.’ Hence, an application program that processes large files and generates results is considered a batch job (Ebbers, Kettner, O’Brien, & Ogden, 2009).

Batch processing programs do not require even the slightest human assistance to execute their tasks. Although they are less prominent on UNIX systems and personal computers, they can be correlated to tasks run by the cron command in UNIX or the AT command. The z/OS mainframe’s batch processing runs on a schedule, as needed, to execute multiple tasks. Besides, the z/OS mainframe’s batch processing can be compared to print spooling on a personal computer operating system: a number of different print jobs are submitted by various users, queued for processing, and each is selected on a priority basis to be worked on.

However, such executions are not possible without z/OS professionals, who apply Job Control Language (JCL) to tell z/OS which programs need to be run as well as the particular files they will require during execution (Pomerantz, Weele, Nelson & Hahn, 2008).

To make this possible, the job control language specifies certain characteristics of the batch job to the z/OS system. These include the identity of the person submitting the job, the program to run, the location of input and output, and when to run the job. Batch processing is facilitated by a component called the Job Entry Subsystem (JES), which aids z/OS in receiving jobs, scheduling them for processing, and managing their output. The Job Entry Subsystem can be defined as a constituent of the operating system that provides supplementary job, task, and data management functions, for instance job setup, job flow, as well as ‘the reading and writing of input and output streams on auxiliary storage devices, concurrently with job execution (a process called spooling).’

The Job Entry Subsystem comes in two versions, JES2 and JES3, of which JES2 is the more commonly used. Its essential roles include accepting jobs; queuing jobs awaiting execution; passing queued jobs to an initiator; accepting output from jobs while they are still running and queuing it; and optionally sending output to a printer and/or saving it ‘on spool for PSF, Info Print, or another output manager to retrieve.’ In batch processing, a job goes through several phases before it is fully processed: ‘input, conversion, processing, output, print/punch (hard copy), and purge’ (Ebbers, Kettner, O’Brien, & Ogden, 2009).
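As a minimal sketch of the characteristics just listed, a batch job description in JCL might look like the following; the job name, account code, and data set names are invented for the illustration, and a real job would use installation-specific values.

//MYJOB    JOB (ACCT123),'J DOE',CLASS=A,MSGCLASS=X
//* One batch step: copy an existing input data set to a new one
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=USER.INPUT.DATA,DISP=SHR
//SYSUT2   DD DSN=USER.OUTPUT.DATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(5,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//SYSIN    DD DUMMY

Here the JOB statement identifies the submitter, the EXEC statement names the program to run (the standard IEBGENER copy utility), and the DD statements locate the input and output, which are exactly the characteristics the JCL is said to specify above.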

At the input phase, the Job Entry Subsystem accepts jobs from an input device and/or from other job entry networks. It assigns an identifier to each job and stores the job’s JCL statements on the spool, from which jobs are later selected for processing. In the conversion phase, JES uses a converter program to analyze the JCL statements and merges them with JCL from procedure libraries. In the third phase, JES2 picks waiting jobs from the job queue and passes them to an initiator. In the fourth phase, JES2 controls the system output (including to-print messages and data sets) produced in processing a job. In the fifth phase, hardcopy, JES2 chooses job output to be processed from among the output queues by certain criteria: ‘output class, route code, priority, and other criteria.’ In the final phase, the purge phase, JES2 frees the spool space allocated to a job, leaving room for subsequent jobs to be assigned, and lastly it signals the system administrator about the fate of the job (Ibid, 2009).
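To make the hardcopy criteria concrete, the output class and route code that JES2 uses when selecting output are set on ordinary JCL statements. A brief sketch, in which the program name MYPROG is a placeholder and the class and destination are chosen arbitrarily:

//MYJOB    JOB (ACCT123),'J DOE',MSGCLASS=X
//* MSGCLASS sets the output class for system messages; SYSOUT sets
//* the class for an output data set, and DEST supplies a route code
//STEP1    EXEC PGM=MYPROG
//REPORT   DD SYSOUT=A,DEST=LOCAL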

Another common type of processing is referred to as online processing. It has come into focus nowadays with the emergence of online businesses that demand instant data processing. This is made possible by creating customer databases that hold personal and other essential details. Unlike in batch processing, information is accessed on a real-time basis in online processing, and updating the system is easier; hence online processing is more efficient for such work. Online services have become most apparent in the hotel, automobile, and airline industries.

Online data processing is transactional in nature because updates to the system are immediately visible. Nevertheless, to access online services, the computer systems involved must be interconnected, usually through the Internet; this is a requirement that batch processing does not share.

In mainframe computing systems, online work is achieved through transaction processing, using transaction management systems such as CICS and IMS for z/OS. A transaction refers to ‘an exchange, usually a request and response, that occurs as a routine event in running the day-to-day operations of an organization’ and may be characterized by small amounts of data per transaction, large numbers of users, and execution in large numbers (Ibid, 2009).

CICS relies upon the ‘multitasking and multithreading capabilities of z/OS to allow more than one task to be processed at the same time, with each task saving its specific variable data and keeping track of the instructions each user is executing.’ Multitasking is advantageous in that quite a number of users are able to access the same information at the same time. Multithreading, on the other hand, allows a single copy of an application program to be processed by several transactions concurrently, and it requires all transactional application programs to be serially reusable between entry and exit points. Online transactional systems are characterized by having many users, being repetitive, sharing data, involving brief interactions, and having low costs per transaction. The major attribute of online transactions, however, is that the interaction between the user and the data system is very brief, lasting only a few seconds.

Because online data processing happens interactively, it requires real-time response, ‘continuous availability of the transaction interface to the end user’, and data integrity and security, among other things (Ibid, 2009). Typical examples include ATM services (withdrawals, deposits, and inquiries), debit/credit card payments, and online shopping. Compared to an operating system, a transaction system borrows some of its attributes: organizing and dispatching work, limiting users’ access to system resources, managing memory use, and controlling concurrent access to data by users. Online processing also has to comply with the ACID attributes: atomicity (tasks are done wholly or not at all), consistency, isolation, and durability.

Customer Information Control System (CICS) is a subsystem in z/OS responsible for managing resource sharing and data integrity and for prioritizing execution on a real-time basis. It passes requests to the responsible database managers and manages its own real storage. An Information Management System (IMS), on the other hand, is both a transaction manager and a database manager used in online processing, and it also consists of a set of system services. As a transaction manager, it provides users, whether real persons or specific programs, with access to programs running under IMS. As a database manager, it provides access to data processed by IMS applications through hierarchical database models, and it also provides security of the data, including backup and/or recovery. In the z/OS system, IMS offers many advantages: it multitasks within each address space, it uses z/OS cross-memory services to interlink its various address spaces, it automatically registers itself with the z/OS subsystem interface, enabling detection of faults, and it makes use of other z/OS facilities (Ibid, 2009).


Turning to the specific question of which processes at ABC University would be batch and which would be online, they include the following.

Online processes: registering students on opening days; offering online fee payments, whereby the students’ database reflects fees paid and the balance outstanding; creating and updating the student, teaching staff, and subordinate staff databases; computerizing services offered in the library, so that books issued and returned are known instantly; and computerizing students’ academic records so that results can be accessed online (IMS Application Programming: Database Manager, SC18-7809).

Batch processes: the best example is the marking and recording of examination papers before the data is fed into the online system. Other examples include taking the roll-call of subordinate staff and compiling those records for analysis before they are transmitted to the online database.

Question No.2. What is partitioning on the mainframe and how can it be achieved? Partitioning is the function of forming a number of distinct configurations or compartments from a single one. Current CPC designs have been made more complex by features such as I/O connectivity and configuration, I/O operation, and most notably partitioning of the whole system. The specific work of partitions is to create distinct and separate logical machines (servers). The z/OS mainframe system is capable of being partitioned into distinct logical computing systems, whereby system resources such as processors, memory, and I/O channels are shared among many autonomous logical partitions. This partitioning is controlled by the logical partition hypervisor, which normally comes with the standard Processor Resource/Systems Manager (PR/SM) feature on all mainframes. The logical partition hypervisor is a software layer used to manage a number of ‘operating systems running in a single central processing complex’ (Ebbers, Kettner, O’Brien, & Ogden, 2009). In most cases, the mainframe utilizes a Type-1 hypervisor, software that runs directly on the hardware and thus acts more or less as its control program; a Type-2 hypervisor, by contrast, runs inside the environment of an operating system. Each logical partition supports its own operating system, which is loaded by a separate initial program load (IPL) operation.

For quite a long period of time, the number of logical partitions in a mainframe was limited to 15; today’s mainframes can be partitioned into as many as 60 logical partitions. In practice the number used is often lower, constrained by factors such as memory size, I/O availability, and available processing power. Every logical partition is regarded as a separate and discrete server that supports an instance of an operating system, which may be of any version provided the hardware supports it. Precisely, a single mainframe is capable of supporting the operation of several different operating system environments (Ibid, 2009).

To realize partitioning, system administrators allocate one or more dedicated processors to a logical partition, and each partition uses symmetric multiprocessing, with the hypervisor dispatching the partition’s logical processors onto the physical processors. To facilitate this, there must be communication channels between the mainframe and external devices such as disk drives; the logical partitions therefore have channel path identifiers, allocated to each partition or shared among them, which act as links between the channel subsystem and the control units. It is also possible for a mainframe with a single central processor to serve quite a number of logical partitions, and this is made possible by the hypervisor: it assigns portions of processing time to each logical partition in much the same way an operating system allots portions of processing time to each thread, task, or procedure. A logical partition can thus be assigned one or more of these processor shares.

The specifications for partitioning control are contained partly in the input/output control data set (IOCDS) and partly in the system’s profile. Both the input/output control data set and the profile are located on a notebook computer inside the mainframe referred to as the Support Element (SE). The Support Element is used to monitor and configure both the hardware and the operating system, and it is bound to the central processor complex within a system. In addition, a second Support Element serves as a backup in case the need arises.

Nevertheless, Hardware Management Consoles (HMCs) can be connected to the Support Element. These are ‘desktop personal computers used to monitor and control hardware such as the mainframe microprocessors.’ The advantages they render over the Support Element are convenience and the ability to control multiple mainframes from one place. The Hardware Management Console functions by corresponding with each Central Processor Complex via its Support Element: when undertakings are executed on the Hardware Management Console, commands are conveyed to one or more Support Elements, which then transmit them to their respective Central Processor Complexes. The advantage of the Hardware Management Console is thus that it is capable of supporting more than one Central Processor Complex (Pomerantz, Weele, Nelson & Hahn, 2008).

An operator using the Hardware Management Console sets up the mainframe by choosing and loading a profile as well as an input/output control data set. This results in the creation of the logical partitions and the configuration of channels with device numbers, multipath information, and logical partition assignments, among other things, a process referred to as power-on reset (POR). The operator of the mainframe can change the number and design of logical partitions, and the appearance of the I/O configuration, only by loading a different profile and input/output control data set; because this is done infrequently, the operator seldom interrupts the running of application programs and operating systems.

The logical partitions have the following attributes (inclusive of benefits):

Logical partitions are mostly comparable to separate mainframes, as each independently runs its own operating system.

System administrators can assign one or more system processors for the exclusive use of a logical partition. Alternatively, they can permit all the processors to be used on some or all of the logical partitions.

The operating system in each logical partition ‘is IPLed separately.’ This means that each logical partition has its own copy of the operating system and its own console access through the Hardware Management Console. Therefore, if the system in one logical partition stops working and/or is taken down for maintenance, it cannot interfere with the other logical partitions.

The operating system consoles for two or more logical partitions can be located in completely different places. This matters, for instance, where the Linux operating system is being used, since it does not have an operator console of its own in the way z/OS does.

There is usually no practical difference between, for instance, a given number of separate mainframes running z/OS and the same number of logical partitions on a single mainframe doing the same work; the difference can be noticed neither by the operator, nor by the z/OS systems, nor by the applications.

The one minor difference arises because z/OS is capable of obtaining performance and utilization information across the whole mainframe and of dynamically shifting resources among the partitions.

Some of the above benefits would accrue to any organization, for instance ABC University. A large organization with a lot of complex data, heterogeneous and enormous, does not need to buy separate systems for its personnel; it can interlink all of them to common data storage accessible by all without interference, and hence improve efficiency. This also goes a long way toward reducing the money spent procuring many small machines. Another advantage cited by Ebbers, Kettner, O’Brien & Ogden (2009) is that ‘mainframes require fewer staff when supporting hundreds of applications. Since centralized computing is a major theme using the mainframe, many of the configuration and support tasks are implemented by writing rules or creating a policy that manages the infrastructure automatically. This is a tremendous savings in time, resources, and cost.’


Question No.3. Mainframe Concepts

a) What is virtual storage? Besides the physical storage, central and auxiliary, z/OS provides another kind of storage referred to as virtual storage. Each user accesses this storage instead of directly accessing the physical storage. As a result, z/OS offers each of its users what appears to be a private storage, so that all can work concurrently without any kind of interference while processing a number of tasks. Ebbers, Kettner, O’Brien & Ogden (2009) define virtual storage to mean that each application running on z/OS can address the full range of main storage stipulated by the architecture’s addressing scheme, subject only to the number of bits available in an address.

Virtual storage therefore benefits complex applications that must hold their code and data in main storage. In other words, virtual storage can be thought of as a combination of the real storage and the auxiliary storage.

b) What is an address space? It refers to the range of virtual addresses that the operating system assigns to a user or an independently running program. It is the area of contiguous virtual addresses available for executing instructions and storing data. The virtual addresses range from zero to the highest address permitted by the operating system architecture. For a mainframe user, an address space can be described as the space in which that user’s programs and data are readily accessible. Every user in z/OS is accorded a distinct address space, and each user’s programs and data are maintained separately. The user is capable of multitasking by means of Task Control Blocks (TCBs), which allow various programs to run simultaneously. In other words, a z/OS address space is comparable to a UNIX process, and the address space identifier is comparable to the process ID (PID). Moreover, Task Control Blocks are much the same as UNIX threads, in that each operating system can multitask when processing large workloads (Ibid, 2009).

Advantages of using multiple virtual address spaces in z/OS

A virtual address space allows a range of addressing that is greater than the capacity of central storage.

It gives full virtual addressing capability to each task in the system by assigning each task its own unique virtual address space.

A large number of address spaces gives the system a very large total capacity for virtual addressing.

System reliability and error recovery are simplified because errors are confined within a particular address space, except in the commonly addressable storage areas.

Programs in distinct address spaces are protected from one another by the isolation of data in their own address spaces: a particular user’s address space is separated from other users’ address spaces, which enhances the security of the operating system. These private areas come in two types, the 24-bit and the 31-bit addressing schemes. However, each private area adjoins a common area accessible to users in other address spaces; hence the importance of offering protection to both the users and the operating system (Ibid, 2009).

In z/OS, each continually running task is allocated an address space of its own. Besides this, every user who logs in through telnet, TSO, rlogin and/or FTP is also allocated an address space. For various functions of the operating system there are, in addition, a number of address spaces, for instance for automation, operator communication, security, and networking, among others (Ibid, 2009).

c) What is dynamic address translation? Dynamic address translation, or DAT, is the process of translating a virtual address during a storage reference into the corresponding real address. If the virtual address is already in central storage, the translation can be accelerated by the use of the translation lookaside buffer (TLB). If the virtual address is not in central storage, a page fault occurs, and z/OS responds by retrieving the page from auxiliary storage. This process makes it possible for a mainframe to present far more virtual storage than actually exists in central storage. z/OS handles any translation fault, whether at the region, segment, or page level, at the specific point in the dynamic address translation process where it is detected (Ebbers, Kettner, O’Brien & Ogden, 2009).

Dynamic address translation is implemented by both hardware and software, making use of page, segment, and region tables as well as translation lookaside buffers. It also facilitates the sharing of read-only programs and/or data by different address spaces, because virtual addresses in different address spaces can be translated to the same frame of central storage. This eliminates the need to replicate programs and data in a separate copy for each address space.

d) Explain the format of a virtual address. A virtual address is divided into four major fields: the region index (RX), bits 0-32; the segment index (SX), bits 33-43; the page index (PX), bits 44-51; and the byte index (BX), bits 52-63. Ebbers, Kettner, O’Brien & Ogden (2009) state that the ‘virtual address space can be a 2-gigabyte space consisting of one region’, or a 16-exabyte space. For a 2-GB address space the region index must be all zeros; if it is not, an exception is recognized. The region index is further divided into three fields: the region first index (RFX), bits 0-10; the region second index (RSX), bits 11-21; and the region third index (RTX), bits 22-32. Three cases are notable: (i) if the region third index is the leftmost significant part of a virtual address, it can address 4 terabytes; (ii) if the region second index is the leftmost significant part, it can address 8 petabytes; and (iii) if the region first index is the leftmost significant part, it can address 16 exabytes.
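These capacities follow directly from the widths of the fields; a short worked calculation, using the bit counts given above:

2^12 bytes = 4 KB, one page (byte index, 12 bits)
2^8 pages x 4 KB = 1 MB, one segment (page index, 8 bits)
2^11 segments x 1 MB = 2 GB, one region (segment index, 11 bits)
2^11 regions x 2 GB = 4 TB (region third index, 11 bits)
2^11 x 4 TB = 8 PB (region second index, 11 bits)
2^11 x 8 PB = 16 EB (region first index, 11 bits)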

e) What is paging in z/OS? z/OS uses a series of tables to determine the specific location of a page, whether in auxiliary or real storage; it checks the tables for the virtual address of the page in question instead of searching through all of physical storage. It then moves that page into central storage, or out to auxiliary storage, as need be. Hence ‘the movement of pages between the auxiliary storage slots and central storage frames’ is what is referred to as paging, and it precisely expresses the purpose of virtual storage in z/OS. When work is executed, only the required pieces of the application are paged in to central storage, and those pages remain there until their frames are needed for other pages. z/OS applies a ‘least used’ algorithm in selecting pages to be moved out to auxiliary storage, on the reasoning that pages which have not been used for some time are likely not going to be used in the near future (Ibid, 2009).

Question No. 4. Explain catalog management on the mainframe.

a) VTOC. It stands for volume table of contents. It is used together with a catalog on each direct access storage device (DASD) to manage the storage and placement of data sets. The volume table of contents is a structure that holds the data set labels, together with their location and size information, in the particular format the z/OS system requires for disks. ICKDSF is the standard utility program used to build the volume label and the volume table of contents. When a disk volume is initialized with this utility, the owner has the chance to specify the location and size of the volume table of contents. The size is quite variable, ranging from 1 to 100 tracks, depending on the anticipated use of the volume: the more data sets expected on the volume, the more space is demanded in the volume table of contents. The VTOC also has entries for all the free space on the volume (Ibid, 2009).


Apportioning ‘space for a data set causes system routines to examine the free space records, update them, and create a new’ volume table of contents entry. Data sets occupy an integral number of tracks or cylinders, starting at the beginning of a track or cylinder. The volume table of contents can also be created with an index, a data set named SYS1.VTOCIX.volser whose entries are organized alphabetically by data set name, with pointers to the VTOC entries. The index also includes free space bitmaps of the volume, and it allows the system to find a data set much faster.
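As a sketch of the initialization step just described, an ICKDSF job might look like the following; the unit address, volume serial, and the VTOC and index extents are illustrative values only, and a real job would use figures suited to the volume.

//INITVOL  JOB (ACCT123),'SYSPROG',CLASS=A,MSGCLASS=X
//* Initialize a volume, placing the VTOC and its index at given extents
//STEP1    EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  INIT UNITADDRESS(0A47) NOVERIFY VOLID(WRK002) -
       VTOC(0,1,30) INDEX(2,0,15)
/*

The VTOC parameter gives the starting cylinder, starting head, and number of tracks, which is how the owner identifies the location and size of the volume table of contents.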

b) Master catalog. A catalog is used alongside the volume table of contents on each direct access storage device (DASD) to manage the storage and placement of data sets. It describes data set attributes and indicates on which volumes a data set is located. Once a data set is cataloged, it can be referred to by name, without the user stating where the data set resides. The cataloging of data sets is, however, optional: they can be cataloged, re-cataloged, and un-cataloged. Data sets that are system-managed are cataloged automatically (Ibid, 2009).

Cataloging data sets on magnetic tape is not a necessity, but it likewise simplifies users’ jobs. The master catalog, alongside the user catalogs, keeps the locations of data sets, and data sets on both disk and tape can be cataloged. It is important to note that z/OS needs three basic pieces of information to locate a data set: the data set name, the volume name, and the volume device type. The volume device type and the volume name are in most cases of no interest to an end user or application program, so the system catalog is used to store and retrieve the volume and unit location of a data set. In essence, the catalog provides the unit device type and volume name for any cataloged data set. A z/OS system always has at least one master catalog; if it has exactly one catalog, that catalog is the master catalog, and the location entries for all data sets are stored there.

A typical z/OS system, however, uses a master catalog together with user catalogs, in preference to a single catalog, to improve efficiency and flexibility. In this arrangement the master catalog stores the high-level qualifier (HLQ) of a data set name, referred to as an alias, along with the name of the user catalog that holds the data sets beginning with that HLQ. The master catalog also stores, in full, the names and locations of system data sets such as SYS1.A1. In the example given by Ebbers et al., there are two alias (HLQ) entries in the master catalog, IBMUSER and USER: the IBMUSER entry includes ‘the data set name of the user catalog containing all the fully qualified IBMUSER data sets with their respective location.’ The same applies to the USER alias. Upon a request for SYS1.A1, the location information, unit (3390) and volume (WRK001), is sent back to the requestor (Ibid, 2009).

c) Alternate master catalog. In some instances a z/OS system may lose its master catalog, or the catalog may become corrupted. Cases like these are serious threats that need prompt rectification to avoid losing data. It is for this reason that many system programmers set aside a backup for the master catalog and specify this alternate master catalog when starting the system. To keep the alternate master catalog safe, programmers normally place it on a different volume, which mitigates the risk of a situation in which both volumes become unavailable at once (Ibid, 2009).

d) User catalog. A z/OS system uses user catalogs besides the master catalog, instead of a single catalog, to improve its efficiency and reliability. User catalogs store the name and location of a data set: the data set name, the volume name, and the volume device type or unit. In catalog management, a user catalog holds the locations of all the data sets prefixed by a given HLQ (alias). The master catalog points to the user catalogs through its HLQ entries, in the example above IBMUSER and USER; accordingly, the entry describing IBMUSER includes the data set name of the user catalog containing all the fully qualified IBMUSER data sets with their respective locations, and the same applies to the USER HLQ (Ibid, 2009).
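In practice, a user catalog and the alias that connects it to the master catalog are defined with the IDCAMS utility. A minimal sketch, in which the catalog name UCAT.IBMUSER, the volume, and the space figures are invented for the illustration:

//DEFCAT   JOB (ACCT123),'SYSPROG',CLASS=A,MSGCLASS=X
//* Define a user catalog, then an alias relating an HLQ to it
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE USERCATALOG -
    (NAME(UCAT.IBMUSER) VOLUME(WRK001) CYLINDERS(10 5) ICFCATALOG)
  DEFINE ALIAS -
    (NAME(IBMUSER) RELATE(UCAT.IBMUSER))
/*

Once the alias exists, any data set whose HLQ is IBMUSER is cataloged in UCAT.IBMUSER rather than in the master catalog, which is exactly the division of labor described above.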

e) Generation data group. Generation data groups (GDGs) hold successive updates, or generations, of related data, which can be cataloged. Each data set within a generation data group is referred to as a ‘generation or generation data set (GDS).’ Hence a generation data group (GDG) can be described as a collection of historically related non-VSAM data sets arranged in chronological order; in other words, each data set has a historical relationship to the others in the same group. Within a generation data group, the generations can have like or unlike DCB attributes and data set organizations. If the attributes and organizations of all generations in a group are identical, the generations can be retrieved together as a single data set.

Benefits of cataloging related data sets as a group:

All the data sets in the group can be referred to by a common name.

The z/OS system is able to keep the generations in chronological order.

Obsolete or outdated generations can be deleted automatically by the z/OS system.

Generation data sets are referred to by absolute and relative names that reflect their age. The catalog management routines of the operating system use the absolute generation name, and older data sets have smaller absolute numbers. By contrast, ‘the relative name is a signed integer used to refer to the latest (0), the next to the latest (-1), and so forth, generation’ (Ebbers, Kettner, O’Brien, & Ogden, 2009). As an illustration, Ebbers, Kettner, O’Brien, & Ogden (2009) state that if the name of a data set is LAB.PAYROLL(0), it is the most recent data set within the group; LAB.PAYROLL(-1) would in turn be the second most recent data set, and so on.

Relative numbers such as +1, +2, +3, and so on are used to catalog new generation data sets. In catalog management, a generation data group base is allocated in a catalog before the generation data sets themselves are cataloged; each generation data group base entry represents one generation data group. A sketch of both halves of this process follows.
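First the GDG base is defined with IDCAMS, then a later step catalogs a new generation by its relative number, using the LAB.PAYROLL example above. The LIMIT and space figures are illustrative, and MYPAY is a placeholder for whatever program actually writes the payroll data:

//DEFGDG   JOB (ACCT123),'SYSPROG',CLASS=A,MSGCLASS=X
//* Define the GDG base: keep at most 5 generations, scratch old ones
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GENERATIONDATAGROUP -
    (NAME(LAB.PAYROLL) LIMIT(5) SCRATCH)
/*
//* A later step creates and catalogs the next generation, (+1)
//STEP2    EXEC PGM=MYPAY
//NEWGEN   DD DSN=LAB.PAYROLL(+1),DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(10,5)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)

Within the job, LAB.PAYROLL(+1) refers to the generation one beyond the current latest; once the job ends, that data set becomes generation (0).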

Ebbers, Kettner, O’Brien, & Ogden (2009) state that ‘for new non-system-managed data sets, if you do not specify a volume and the data set is not opened, the system does not catalog the data set.’ New system-managed data sets, by contrast, are always cataloged when allocated, with the volume assigned from a storage group.
