1. Data Conversion & Migration Strategy

The scope of this section is to define the data migration strategy from a CRM perspective. By its very nature, the CRM programme is not a wholesale replacement of legacy systems with BSC CRM but rather the coordination and management of customer interaction within the existing application landscape. Therefore, a large-scale data migration in the traditional sense is not required; only a select few data entities will need to be migrated into BSC CRM.

Data migration is typically a ‘one-off’ activity prior to go-live. Any ongoing data loads required on a frequent or ad-hoc basis are considered to be interfaces, and are not part of the data migration scope.

This section outlines how STEE-Infosoft intends to manage the data migration from the CAMS and HPSM legacy systems to the BSC CRM system.

STEE-InfoSoft will provide a comprehensive data conversion and migration solution to migrate the current legacy databases of CAMS and HPSM. The solution adopts the most suitable and appropriate technology for database migration, using our proven methodology and professional expertise. STEE-InfoSoft's data migration methodology assures customers of the quality, consistency, and accuracy of the results.

Table 11 shows the STEE-InfoSoft data migration value proposition using our methodology.

Table 11: STEE-InfoSoft data migration value proposition

Cost Effective: STEE-InfoSoft adopts a cost-effective data migration solution. Minimal downtime can be achieved for the data migration. Extensive use of automation speeds up the work and makes post-run changes and corrections practical. Error tracking and correction capabilities help to avoid repeated conversion re-runs, and customization ensures the job is done the correct way.

Very Short Downtime: Downtime is minimized because most of the migration processes are external to the running application system and do not affect its normal workflow. Downtime is further reduced by allowing the data conversion to be performed in stages.

Assured Data Integrity: Scripts and programs are automatically generated for later use when testing and validating the data.

Control Over the Migration Process: Unique ETL (Extract, Transform and Load) scripts are created to run the extract and load processes in order to reduce the downtime of the existing systems. These scripts handle the merging of fields, filtering, splitting of data, changing of field definitions and translation of field content, together with addition, deletion, transformation, aggregation and validation rules for cleansing data.
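
To illustrate the kind of ETL script referred to above, the following minimal Python sketch shows an extract step, a transform step that filters, merges, splits and validates fields, and a load step into a staging area. The record layout, field names and rules are assumptions made purely for illustration; the actual scripts would be generated per data entity from the agreed mapping specifications.

"""Minimal ETL sketch (illustrative only; field names and rules are assumed)."""

from typing import Iterable


def extract(legacy_rows: Iterable[dict]) -> list[dict]:
    # In practice this step would query the legacy (CAMS/HPSM) database;
    # here the rows are simply passed in.
    return [dict(row) for row in legacy_rows]


def transform(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Filter, merge, split and validate records; return (clean, rejected)."""
    clean, rejected = [], []
    for row in rows:
        # Filtering: records marked for deletion in the legacy system are not migrated.
        if row.get("status") == "DELETED":
            continue
        # Merging fields: combine separate name fields into a single full name.
        row["full_name"] = f'{row.pop("first_name", "")} {row.pop("last_name", "")}'.strip()
        # Splitting data: break a freeform address into structured parts.
        parts = [p.strip() for p in row.pop("address", "").split(",")]
        row["street"], row["city"] = (parts + ["", ""])[:2]
        # Validation rule: mandatory fields must be populated before load.
        if not row["full_name"] or not row.get("customer_id"):
            rejected.append(row)  # routed to error tracking for cleansing
        else:
            clean.append(row)
    return clean, rejected


def load(rows: list[dict]) -> None:
    # In practice this step would insert into the CRM staging tables.
    for row in rows:
        print("LOAD", row)


if __name__ == "__main__":
    legacy = [
        {"customer_id": "C001", "first_name": "Tan", "last_name": "Wei",
         "address": "1 Marina Blvd, Singapore", "status": "ACTIVE"},
        {"customer_id": "", "first_name": "", "last_name": "",
         "address": "", "status": "ACTIVE"},  # fails the mandatory-field check
    ]
    good, bad = transform(extract(legacy))
    load(good)
    print(f"{len(bad)} record(s) rejected for cleansing")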

1.1. Data Migration Overview

Data migration is the transfer of data from one location, storage medium, or hardware/software system to another. Migration efforts are often prompted by the need for upgrades in technical infrastructure or by changes in business requirements.

Best practice in data migration recommends two principles that are inherent to successful data migration:

  1. Perform data migration as a project dedicated to the unique objective of establishing a new (target) data store.
  2. Perform data migration in four primary phases: Data Migration Planning, Data Migration Analysis and Design, Data Migration Implementation, and Data Migration Closeout, as shown in 1.1.

In addition, successful data migration projects are those that maximize opportunities and mitigate risks. The following critical success factors have been identified:

Perform data migration as an independent project.

Establish and manage expectations throughout the process.

Understand current and future data and business requirements.

Identify individuals with expertise regarding legacy data.

Collect available documentation regarding legacy system(s).

Define data migration project roles & responsibilities clearly.

Perform a comprehensive overview of data content, quality, and structure.

Coordinate with business owners and stakeholders to determine importance of business data and data quality.

1.2. STEE-Info Data Migration Project Lifecycle

Table 12 lists the high-level processes for each phase of the STEE-Info Data Migration Project Lifecycle.

While all data migration projects follow the four phases of the Data Migration Project Lifecycle, the high-level and low-level processes may vary depending on the size, scope and complexity of each migration project. Therefore, the following information should serve as a guideline for developing, evaluating, and implementing data migration efforts. Each high-level and low-level process should be included in a Data Migration Plan. For those processes not deemed appropriate, a justification for exclusion should be documented in the Data Migration Plan.

Table 12: Data Migration Project Lifecycle with high-level tasks identified.

Data Migration Planning Phase: Plan Data Migration Project; Determine Data Migration Requirements; Assess Current Environment; Develop Data Migration Plan; Define and Assign Team Roles and Responsibilities.

Data Migration Analysis & Design Phase: Analyze Assessment Results; Define Security Controls; Design Data Environment; Design Migration Procedures; Validate Data Quality.

Data Migration Implementation Phase: Develop Procedures; Stage Data; Cleanse Data; Convert/Transform Data (as needed); Migrate Data (trial/deployment); Validate Migration Results (iterative); Validate Post-migration Results.

Data Migration Closeout Phase: Document Data Migration Results; Document Lessons Learned; Perform Knowledge Transfer; Communicate Data Migration Results.

During the lifecycle of a data migration project, the team moves the data through the activities shown in 1.2.

The team will repeat these data management activities as needed to ensure a successful data load to the new target data store.

1.3. Data Migration Guiding Principles

1.3.1. Data Migration Approach

1.3.1.1. Master Data – (e.g. Customers, Assets)

The approach is that master data will be migrated into CRM provided the following conditions hold:

The application where the data resides is being replaced by CRM.

The master records are required to support CRM functionality post-go-live.

There is a key operational, reporting or legal/statutory requirement.

The master data is current (e.g. records marked for deletion need not be migrated) OR is required to support another migration.

The legacy data is of sufficient quality so as not to adversely affect the daily running of the CRM system, OR will be sufficiently cleansed or enhanced by the business within the data migration process to meet this requirement.

Note: Where the master data resides in an application that is not being replaced by CRM, but is required by CRM to support specific functionality, the data will NOT be migrated but will be accessed from CRM using a dynamic query look-up. A dynamic query look-up is a real-time query that accesses the data in the source application as and when it is required (a minimal sketch of this pattern follows the list below). The advantages of this approach are:

Avoids the duplication of data throughout the system landscape.

Avoids data within CRM becoming out-of-date.

Avoids the development and running of frequent interfaces to update the data within CRM.

Reduces the quantity of data within the CRM systems.
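
As an indicative sketch of the dynamic query look-up pattern, the following Python fragment retrieves a record from the source application at the moment it is needed rather than holding a migrated copy in CRM. The service URL and response format are assumptions; the actual call would use whatever query interface the source application exposes.

"""Dynamic query look-up sketch (illustrative only; the endpoint is assumed)."""

import json
from urllib.request import urlopen


def lookup_asset(asset_id: str) -> dict:
    # Real-time query against the source application; nothing is stored in CRM,
    # so the data can never become out of date and no interface run is needed.
    url = f"http://legacy-asset-service.example/assets/{asset_id}"  # assumed endpoint
    with urlopen(url, timeout=5) as response:
        return json.load(response)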

1.3.1.2. ‘Open’ Transactional data (e.g. Service Tickets)

The approach is that ‘open’ transactional data will NOT be migrated to CRM unless ALL these conditions are met:

There is a key operational, reporting or legal/statutory requirement

The legacy system is to be decommissioned as a result of the BSC CRM project in timescales that would prevent a ‘run down’ of open items

The parallel ‘run down’ of open items within the legacy system is impractical due to operational, timing or resource constraints

The CRM build and structures permit a correct and consistent interpretation of legacy system items alongside CRM-generated items

The business owner is able to commit resources to own data reconciliation and sign-off at a detailed level in a timely manner across multiple project phases

1.3.1.3. Historical Master and Transactional data

The approach is that historical data will not be migrated unless ALL these conditions are met:

There is a key operational, reporting or legal/statutory requirement that cannot be met by using the remaining system

The legacy system is to be decommissioned as a direct result of the BSC CRM project within the BSC CRM project timeline

An archiving solution could not meet requirements

The CRM build and structures permit a correct and consistent interpretation of legacy system items alongside CRM-generated items

The business owner is able to commit resources to own data reconciliation and sign-off at a detailed level in a timely manner across multiple project phases

1.3.2. Data Migration Testing Cycles

In order to test and verify the migration process, it is proposed that there will be three testing cycles before the final live load:

Trial Load 1: Unit testing of the extract and load routines.

Trial Load 2: The first test of the complete end-to-end data migration process for each data entity. The main purpose of this load is to ensure the extract routines work correctly, the staging area transformation is correct, and the load routines can load the data successfully into CRM. The various data entities will not necessarily be loaded in the same sequence as will be done during the live cutover.

Trial Cutover: a complete rehearsal of the live data migration process. The execution will be done using the cutover plan in order to validate that the plan is reasonable and possible to complete in the agreed timescale. A final set of cleansing actions will come out of trial cutover (for any records which failed during the migration because of data quality issues). There will be at least one trial cutover. For complex, high-risk, migrations several trial runs may be performed, until the result is entirely satisfactory and 100% correct.

Live Cutover: the execution of all tasks required to prepare BSC CRM for the go-live of a particular release. A large majority of these tasks will be related to data migration.

1.3.3. Data Cleansing

Before data can be successfully migrated, it needs to be clean; data cleansing is therefore an important element of any data migration activity:

Data needs to be consistent, standardised and correctly formatted to allow successful migration into CRM (e.g. CRM holds addresses as structured addresses, whereas some legacy systems might hold this data in a freeform format)

Data needs to be complete, to ensure that upon migration all fields which are mandatory in CRM are populated. Any mandatory field left blank will cause the migration to fail.

Data needs to be de-duplicated and be of sufficient quality to allow efficient and correct support of the defined business processes. Duplicate records can either be marked for deletion at source (the preferred option) or be excluded in the extract/conversion process.

Legacy data fields could have been misused (holding information different from what the field was originally intended for). Data cleansing should pick this up, and a decision needs to be made whether such data should be excluded (i.e. not migrated) or transferred into a more appropriate field.
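
Much of the duplicate and misused-field checking described above can be automated. The following minimal Python sketch illustrates two such cleansing checks; the field names and matching rules are assumptions, and the actual rules would be agreed with MOM as the data owner.

"""Data cleansing check sketch (illustrative only; field names and rules are assumed)."""

from collections import defaultdict


def find_duplicates(records: list[dict]) -> list[list[dict]]:
    """Group candidate duplicate records on a simple normalised key (name + postcode)."""
    groups = defaultdict(list)
    for rec in records:
        key = (rec.get("name", "").strip().upper(), rec.get("postal_code", "").strip())
        groups[key].append(rec)
    return [recs for recs in groups.values() if len(recs) > 1]


def find_misused_fields(records: list[dict]) -> list[dict]:
    """Flag records where a field appears to hold something other than intended,
    e.g. free text typed into a fax-number field."""
    flagged = []
    for rec in records:
        fax = rec.get("fax", "")
        digits_only = fax.replace("+", "").replace("-", "").replace(" ", "")
        if fax and not digits_only.isdigit():
            flagged.append(rec)  # decision needed: exclude, or move to a better field
    return flagged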

It is the responsibility of the data owner (i.e. MOM) to ensure that the data provided to STEE-Info for migration into BSC CRM (whether from a legacy source or from a template populated specifically for BSC CRM) is accurate.

Data cleansing should, wherever possible, be done at source, i.e. in the legacy systems, for the following reasons:

Unless a data change freeze is put in place, extracted datasets become out of date as soon as they have been extracted, due to updates taking place in the source system. When the data is re-extracted at a later date to pick up the most recent updates, earlier cleansing actions are overwritten, so cleansing would have to be repeated each time a new dataset is extracted. In most cases this is impractical and requires a large effort.

Data cleansing is typically a business activity. Cleansing in the actual legacy system therefore has the advantage that business users already have access to the legacy system and are familiar with the application, which is not the case when data is stored in staging areas. In certain cases it may be possible to develop a programme to perform a degree of automated cleansing, although this adds additional risk of data errors.

If data cleansing is done at source, each time a new (i.e. more recent) extract is taken, the results of the latest cleansing actions will automatically come across in the extract without additional effort.

1.3.4. Pre-Migration Testing

Testing breaks down into two core subject areas: logical errors and physical errors. Physical errors are typically syntactical in nature and can be easily identified and resolved; they have nothing to do with the quality of the mapping effort, as this level of testing deals with the semantics of the scripting language used in the transformation effort. Logical errors, by contrast, are identified and resolved through testing of the mapping itself. The first step is to execute the mapping. Even if the mapping completes successfully, we must still ask questions such as:

How many records did we expect this script to create?

Did the correct number of records get created?

Has the data been loaded into the correct fields?

Has the data been formatted correctly?
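
These questions can be answered with simple reconciliation routines run after each trial load. The following minimal Python sketch checks record counts and mandatory-field population; the entity and field names are assumptions for illustration.

"""Pre-migration reconciliation sketch (illustrative only)."""


def reconcile_counts(expected: int, loaded: int, entity: str) -> bool:
    # Did the correct number of records get created?
    if expected != loaded:
        print(f"{entity}: expected {expected} records, loaded {loaded}")
        return False
    return True


def check_field_population(rows: list[dict], mandatory: list[str]) -> list[dict]:
    # Has the data been loaded into the correct fields and formatted correctly?
    # Here we simply flag rows where mandatory target fields are empty after the load.
    return [r for r in rows if any(not r.get(field) for field in mandatory)]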

The fact is that data mapping often does not make sense to most people until they can physically interact with the new, populated data structures. Frequently, this is where the majority of transformation and mapping requirements will be discovered. Most people simply do not realize they have missed something until it is not there anymore. For this reason, it is critical to unleash them upon the populated target data structures as soon as possible. The data migration testing phase must be reached as soon as possible to ensure that it occurs prior to the design and building phases of the core project. Otherwise, months of development effort can be lost as each additional migration requirement slowly but surely wreaks havoc on the data model. This, in turn, requires substantive modifications to the applications built upon the data model.

1.3.5. Migration Validation

Before the migration can be considered a success, one critical step remains: validating the post-migration environment and confirming that all expectations have been met prior to committing. At a minimum, network access, file permissions, directory structures, and databases/applications need to be validated, which is often done via non-production testing. Another good strategy for validating the migration is to benchmark the way the business functions pre-migration and then compare that benchmark with the behaviour after migration. The most effective way to collect benchmark measurements is to collect and analyze Quality Metrics for the various Business Areas and their corresponding business functions.
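
As an indicative sketch of the benchmark comparison described above, the following Python fragment flags any quality metric that drifts between the pre- and post-migration measurements; the metric names and tolerance are assumptions, and the actual Quality Metrics would be agreed per business area.

"""Pre-/post-migration benchmark comparison sketch (illustrative only)."""


def compare_benchmarks(pre: dict, post: dict, tolerance: float = 0.0) -> list[str]:
    """Return the metrics whose post-migration value drifts beyond the tolerance."""
    issues = []
    for metric, before in pre.items():
        after = post.get(metric)
        if after is None or abs(after - before) > tolerance * abs(before):
            issues.append(f"{metric}: pre={before}, post={after}")
    return issues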

1.3.6. Data Conversion Process

The mapped information and the data conversion programs will be put into use during this period. The duration and timeframe of this process will depend on:

Amount of data to be migrated

Number of legacy systems to be migrated

Resource limitations, such as server performance

Errors generated by this process

The conversion error management approach aims to reject any record containing a serious error as early as possible during the conversion process. Correction facilities are provided during the conversion; where possible, these will use the existing amendment interface.

Errors can be classified as follows:

Fatal errors – which are so serious that they prevent the record from being loaded onto the database. These include errors that would cause a breach of database integrity, such as duplicate primary keys or invalid foreign key references. These errors will be the focus of data cleansing both before and during the conversion. Attempts to correct such errors without user interaction are usually futile.

Non-fatal errors – which are less serious. The affected record is loaded onto the database still containing the error, and the error is communicated to the user via a work management item attached to the record. The error will then be corrected with information from the user.

Auto-corrected errors – for which the offending data item is replaced by the conversion modules with a previously agreed value. These values are agreed with the users before the conversion process starts.
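
The following minimal Python sketch shows how records might be routed according to these three error classes; the field names, integrity rule and agreed default value are assumptions made for illustration.

"""Conversion error-handling sketch (illustrative only; rules and defaults are assumed)."""

AUTO_CORRECT_DEFAULTS = {"country_code": "SG"}  # assumed values agreed with users up front


def classify_and_handle(record: dict, existing_keys: set) -> str:
    """Return 'fatal', 'non_fatal' or 'clean' and apply the agreed handling."""
    # Auto-corrected errors: replace the offending data item with the agreed value.
    for field, default in AUTO_CORRECT_DEFAULTS.items():
        if not record.get(field):
            record[field] = default

    # Fatal errors: breaches of database integrity, e.g. a missing or duplicate primary key.
    if not record.get("primary_key") or record["primary_key"] in existing_keys:
        return "fatal"  # rejected; the record goes back for cleansing

    # Non-fatal errors: load the record anyway and raise a work management item.
    if not record.get("contact_number"):
        record["_work_item"] = "Missing contact number - obtain from customer"
        return "non_fatal"

    return "clean"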

One of the important tasks in the data conversion process is data validation. Data validation in a broad sense includes checking the translation process itself, or checking the information to see to what degree the conversion process is an information-preserving mapping.

Some of the common verification methods used will be:

Financial verifications (verifying pre- to post-conversion totals for key financial values, and verifying subsidiary to general ledger totals) – to be conducted centrally in the presence of accounts, audit, compliance & risk management;

Mandatory exceptions verifications and rectifications (on those exceptions that must be resolved to avoid production problems) – to be reviewed centrally but branches to execute and confirm rectifications, again, in the presence of network management, audit, compliance & risk management;

Detailed verifications (where full details are printed and the users will need to do random detailed verifications with legacy system data) – to be conducted at branches with final confirmation sign-off by branch deployment and branch manager; and

Electronic files matching (matching field by field or record by record) using pre-defined files.
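
For the electronic file matching method, the following minimal Python sketch compares a pre-conversion extract with a post-conversion extract record by record and field by field. It assumes both extracts are CSV files sharing a common key column; the real matching would follow the pre-defined file layouts.

"""Electronic file matching sketch (illustrative only; file layouts and key are assumed)."""

import csv


def match_files(pre_path: str, post_path: str, key: str = "customer_id") -> list[str]:
    """Compare the two extracts and return a list of mismatch descriptions."""
    def load(path: str) -> dict:
        with open(path, newline="", encoding="utf-8") as f:
            return {row[key]: row for row in csv.DictReader(f)}

    pre, post = load(pre_path), load(post_path)
    mismatches = []
    for rec_key, pre_row in pre.items():
        post_row = post.get(rec_key)
        if post_row is None:
            mismatches.append(f"{rec_key}: record missing after conversion")
            continue
        for field, value in pre_row.items():
            if post_row.get(field) != value:
                mismatches.append(f"{rec_key}.{field}: '{value}' -> '{post_row.get(field)}'")
    return mismatches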

1.4. Data Migration Method

The primary method of transferring data from a legacy system into Siebel CRM is the Siebel Enterprise Integration Manager (EIM). This facility enables the bidirectional exchange of data between non-Siebel databases and the Siebel database. It is a server component in the Siebel eAI component group that transfers data between the Siebel database and other corporate data sources. This exchange of information is accomplished through intermediary tables called EIM tables, which act as a staging area between the Siebel application database and other data sources.
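
As an indicative sketch of how a record is staged through an EIM table, the following Python example inserts a row into the EIM_ACCOUNT interface table ready for the EIM import task. An in-memory SQLite table stands in for the Siebel database, and the column list is illustrative; the actual columns, batch handling and EIM configuration (.ifb) file would be defined from the Siebel repository and the agreed mapping specifications.

"""EIM staging sketch (illustrative only; columns and batch handling are indicative)."""

import sqlite3  # stands in for the Siebel database connection in this sketch


def stage_account(conn, batch_num: int, row_id: str, name: str, location: str) -> None:
    # Populate the EIM interface table; the EIM server task later moves the data
    # from the interface table into the Siebel base tables.
    conn.execute(
        "INSERT INTO EIM_ACCOUNT (ROW_ID, IF_ROW_BATCH_NUM, IF_ROW_STAT, NAME, LOC) "
        "VALUES (?, ?, 'FOR_IMPORT', ?, ?)",
        (row_id, batch_num, name, location),
    )


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE EIM_ACCOUNT (ROW_ID TEXT, IF_ROW_BATCH_NUM INTEGER, "
        "IF_ROW_STAT TEXT, NAME TEXT, LOC TEXT)"
    )
    stage_account(conn, batch_num=1, row_id="1-AB123", name="Example Customer", location="HQ")
    print(conn.execute("SELECT * FROM EIM_ACCOUNT").fetchall())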

The following figure illustrates how data from the HPSM, CAMS, and IA databases will be migrated to the Siebel CRM database.

1.5. Data Conversion and Migration Schedule

The following is the proposed data conversion and migration schedule for migrating the HPSM, CAMS, and IA databases into the Siebel CRM database.

1.6. Risks and Assumptions

1.6.1. Risks

  1. MOM may not be able to confidently reconcile large and/or complex data sets. Since the data migration will need to be reconciled a minimum of three times (system test, trial cutover and live cutover), the effort required within the business to comprehensively test the migrated data set is significant. In addition, technical data loading constraints during cutover may mean a limited time window is available for reconciliation tasks (e.g. overnight or during weekends).
  2. MOM may not be able to comprehensively cleanse the legacy data in line with the BSC CRM project timescales. Since the migration to BSC CRM may be dependent on a number of cleansing activities to be carried out in the legacy systems, the effort required within the business to achieve this will increase proportionately with the volume of data migrated. Failure to complete this exercise in the required timescale may result in data being unable to be migrated into BSC CRM in time for the planned cutover.
  3. The volume of data errors in the live system may be increased if reconciliation is not completed to the required standard. The larger/more complex a migration becomes, the more likely it is that anomalies will occur. Some of these may initially go undetected. In the best case such data issues can lead to a business and project overhead in rectifying the errors after the event. In the worst case this can lead to a business operating on inaccurate data.
  4. The more data that is migrated into BSC CRM, the more complex and lengthy the cutover becomes, resulting in an increased risk of not being able to complete the migration task on time. Any further resource or technical constraints can add to this risk.
  5. Due to the volume of the task, data migration can divert project and business resources away from key activities such as initial system build, functional testing and user acceptance testing.

1.6.2. Assumptions

  1. Data Access – Access to the data held within the CAMS, HPSM and IA applications is required to enable data profiling, the identification of data sources, and the writing of functional and technical specifications.
  2. An access connection to the HPSM, CAMS, and IA databases is required to enable execution of the data migration scripts.
  3. MOM is to provide workstations to run the ETL scripts for the data migration of the HPSM, CAMS, and IA databases.
  4. There must not be any schema changes to the legacy HPSM, CAMS, and IA databases during the data migration phase.
  5. MOM is to provide a sample of production data for testing the developed ETL scripts.
  6. MOM business resource availability:

Required to assist in data profiling, the identification of data sources and to create functional and technical specifications.

Required to develop and run data extracts from the CAMS & HPSM systems.

Required to validate/reconcile/sign-off data loads.

Required for data cleansing.

  7. Data cleansing of the source data is the responsibility of MOM. STEE-Info will help identify data anomalies during the data migration process; however, STEE-Info will not cleanse the data in the CAMS & HPSM applications. Depending on the data quality, data cleansing can require considerable effort and involve a large amount of resources.
  8. The scope of the data migration requirements has not yet been finalised; as data objects are identified, they will be added to the data object register.