The Chaos Report By Standish Group

Recent research shows that many IT projects have 'failed' through some combination of budget overruns, schedule overruns, and failure to meet users' requirements. The well-known and now widely quoted Chaos Report by the Standish Group declared that IT projects are in chaos.

Table 1 provides a summarized report card on project outcomes based on the Report. Type 1 projects are those completed on time and within budget, with all the functions and features initially specified. The 'challenged' projects, though completed and operational, suffered budget overruns and/or schedule slips, and offered fewer functions and features than originally specified. The 'impaired' projects are those cancelled or abandoned at some point during the development cycle. It is anticipated that many IS projects will continue to be 'challenged' or 'impaired'; truly 'successful' stories will remain relatively rare.

The Standish study defined project failure as either a project that has been cancelled or a project that does not meet its budget, delivery, and business objectives. Conversely, project success is defined as a project that meets its budget, delivery, and business objectives. By this definition, the average IT project success rate in the Standish study was an abysmal 16.2%.

Success for IT projects is not a 'black and white' concept. It can be viewed as a combination of project implementation success and systems success. Systems success can be separated into three levels: technical development, deployment to the user, and delivery of business benefits; or it can be treated as a four-dimensional construct consisting of the success of the development process, success of the use process, quality of the product, and impact on the organization. The research by DeLone and McLean (1992) proposes six major dimensions of project success, which they refine to include the project's system quality, information quality, service quality, user satisfaction and net benefits to the organization.

The DeLone and McLean (1992) research clearly states that the quality level at each phase of the project life cycle affects project success. It is not only the quality of the final deliverable that determines project success or failure; the quality of the processes and systems used in each phase to achieve the final deliverable also has an impact on project success. According to Knopefel (1989), quality is the absence of deviations from the planned new state and the usefulness of this new state for future operations. Typical examples of quality attributes are safety and reliability, beauty and comfort, and economic and financial performance. Quality is achieved if the system possesses the expected attributes, and the system can be a combination of the processes used in the project.

The Yaseen and Marashly (1989) research shows that project managers of IT companies in developing countries experience difficulty with the quality control management function, i.e. managing the implementation of the project to a certain quality level conforming to predetermined specifications. The reason behind the problem, in the researchers' opinion, is the absence of a well-structured framework for quality control in each phase of the project life cycle. The research shows that the absence of suitable quality control measures at each phase of the project life cycle increases the repetition of errors.

David S. Alberts examined the impact of programming errors on the life cycle cost of IT projects. Such errors may occur early in the development phase, or later during the operational phase when software maintenance (i.e., change) is required. He found that about half of IT project life cycle costs are attributable to errors, which are made with equal probability in these two phases. Our inability to hold down software error levels, especially in the development phase, is further supported by the work of Marc Bendick. He analyzed several software products of thirty to two hundred thousand lines of code. He discovered that the average cost of repairing an error made during the development phase but not discovered until the software became operational was 139 times greater than the cost of writing that line correctly in the first place.

The research by these two researchers therefore clearly indicates that, in the absence of a proper quality control framework, just one line of code can cause a huge difference in the project budget, and budget overrun is one of the major factors behind project failure. If quality control measures are firmly in place and executed during the project life cycle phases, then the budget overrun can be controlled. In addition, using quality techniques such as the DMAIC (Define, Measure, Analyse, Improve and Control) cycle of Six Sigma helps in better understanding and recording customer perception. It is an effective way to measure the deliverables and check whether the project is on the right track.
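
As a rough, purely hypothetical illustration of the cost multiplier reported above (all figures below are invented; only the 139:1 ratio comes from the text):

    # Hypothetical illustration of the 139:1 repair-cost ratio reported by Bendick.
    # All monetary figures are assumed for this example, not taken from the study.
    cost_per_line = 10.0              # assumed cost of writing one line correctly
    repair_multiplier = 139           # cost ratio for an error found after go-live
    errors_found_in_operation = 50    # assumed number of late-detected errors

    early_fix_cost = errors_found_in_operation * cost_per_line
    late_fix_cost = errors_found_in_operation * cost_per_line * repair_multiplier

    print(f"Fixing during development: {early_fix_cost:,.0f}")
    print(f"Fixing after go-live:      {late_fix_cost:,.0f}")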


Project Management Life Cycle and PMBOK:

According to the Project Management Body of Knowledge (2004), the field of project management represents a broad spectrum of management disciplines encompassing the theory and practice of general management and application-specific knowledge domains. Depending upon the scope and nature of the project, project managers interface with various stakeholders, from external clients, vendors, and suppliers to internal team members, enterprise staff, and executive management (Christenson & Walker, 2004).

Brought in at the inception of a project, project managers need to visualize the outcome while thinking in abstract terms, thrive in uncertainty while setting the pace, and manage conflicting priorities among stakeholders while charting the direction to the outcome by eliminating obstacles throughout the project management life cycle of initiation, planning, execution, control, and closure (Bucero, 2004).

The five major project management life cycle steps are plan, define, construct, test and deploy. Each of these project management steps yields standard deliverables, such as business requirements, system requirements, information flow diagrams, test cases, system architectures, etc.
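
A minimal sketch, purely illustrative, of how these steps and deliverables could be recorded for tracking; the assignment of individual deliverables to particular steps is an assumption, not taken from the text:

    # Illustrative mapping of the life cycle steps to standard deliverables named above.
    # Which deliverable belongs to which step is assumed for this sketch.
    LIFE_CYCLE_DELIVERABLES = {
        "plan":      ["business requirements"],
        "define":    ["system requirements"],
        "construct": ["information flow diagrams", "system architectures"],
        "test":      ["test cases"],
        "deploy":    ["deployment documentation"],   # assumed placeholder, not named in the text
    }

    for step, deliverables in LIFE_CYCLE_DELIVERABLES.items():
        print(f"{step}: {', '.join(deliverables)}")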

Leading organisations use selection, implementation and evaluation processes uniformly at an enterprise level and within each business unit of their organisation. By contrast, there is very little or no uniformity in how risks, benefits, and costs of various IT projects are evaluated. Moreover, many organisations appear to approach the whole management of IT in an unstructured or ad hoc manner throughout its life cycle. Such approaches have evolved due to a limited understanding of the relationship between IT project implementation and traditional business performance metrics.

For each stage of the IT project life cycle, the organisation needs to estimate direct and indirect IT project costs. The organisation should set up a series of activity cost matrices, one for each stage of the IT project life cycle. Ownership costs include all direct and indirect costs that can be attributed to the initiation, design, development, operation and maintenance of the proposed IT project. Therefore, all costs for the proposed IT project, over its entire life cycle, must be included in the costing process.
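
A minimal sketch of such an activity cost matrix; the stage names follow the ownership-cost stages listed above, while every cost figure is assumed for illustration:

    # Hypothetical activity cost matrix: direct and indirect costs per life cycle stage.
    # All figures are invented for this example.
    cost_matrix = {
        "initiation":  {"direct": 20_000,  "indirect": 5_000},
        "design":      {"direct": 60_000,  "indirect": 15_000},
        "development": {"direct": 180_000, "indirect": 40_000},
        "operation":   {"direct": 90_000,  "indirect": 30_000},
        "maintenance": {"direct": 70_000,  "indirect": 25_000},
    }

    # Total cost of ownership over the entire life cycle.
    total_cost_of_ownership = sum(
        stage["direct"] + stage["indirect"] for stage in cost_matrix.values()
    )
    print(f"Total life cycle cost: {total_cost_of_ownership:,}")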

Selection of IT projects:

There have been numerous examples where IT projects have failed to meet expectations. This is sometimes due to a lack of prior assessment of risks and returns before management commitment is made and funding approval is provided. A well-structured IT project selection process during the initiation phase helps to ensure that an organisation selects those IT projects that will best support organisational needs, and identifies and analyses the IT project's risks and proposed benefits before a significant amount of funds and resources is allocated. The absence of quality control tools and techniques makes a project vulnerable to failure at the initiation phase itself. If a quality tool such as Root Cause Analysis (RCA) is applied at this stage, the likely causes of failure can be traced before significant funds are committed.
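
As a sketch of how a simple root cause analysis record (for example, a 'five whys' chain) might be kept during initiation; the chain of causes below is invented purely for illustration:

    # Illustrative 'five whys' root cause analysis record for the initiation phase.
    # The chain of causes below is invented purely for demonstration.
    five_whys = [
        "Project was selected without a business case",           # why did it fail?
        "No selection criteria were defined",                     # why no business case?
        "No initiation-phase quality control checklist existed",  # why no criteria?
        "Quality control is only applied during testing",         # why no checklist?
        "No quality control framework spans the life cycle",      # root cause
    ]

    for depth, cause in enumerate(five_whys, start=1):
        print(f"Why #{depth}: {cause}")
    print(f"Root cause: {five_whys[-1]}")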

To date, many researchers have focused on developing generic appraisal approaches that can deal with all types of IT projects in all circumstances. This has resulted in the development and use of traditional appraisal techniques. However, these appraisal techniques fail to accommodate the intangible benefits and risks associated with IT projects. According to Farbey et al., the IT project selection process should go beyond traditional 'business value' techniques and introduce concepts of quality control at each phase, while accounting for value and risk.

Several methods have been proposed to help organisations make good IT project selection decisions. However, many reported methods have several limitations and tend not to provide a means to combine tangible and intangible ‘business value’ and risk criteria. Others are too complex in structure and have little appeal to practitioners.

To overcome the limitations of existing frameworks, Stewart suggests a five-step IT project selection process (SelectIT).

The IT project proposals are ranked using the ranking index method. Once the projects are ranked according to their index value, the organisation can select which project(s) will be implemented. Naturally, the number of projects selected will depend on the designated budget.
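
A minimal sketch of such a ranking-index selection, assuming a simple weighted benefit-minus-risk index; the weights, scores, costs and budget are invented and are not the actual SelectIT formulation:

    # Hypothetical ranking-index selection: index = weighted benefit minus weighted risk.
    # Projects are then selected in index order until the budget is exhausted.
    projects = [
        {"name": "CRM upgrade",     "benefit": 8, "risk": 3, "cost": 120_000},
        {"name": "ERP rollout",     "benefit": 9, "risk": 7, "cost": 400_000},
        {"name": "Intranet portal", "benefit": 5, "risk": 2, "cost": 60_000},
    ]
    w_benefit, w_risk = 0.7, 0.3   # assumed weights

    for p in projects:
        p["index"] = w_benefit * p["benefit"] - w_risk * p["risk"]

    budget = 250_000               # assumed designated budget
    selected, spent = [], 0
    for p in sorted(projects, key=lambda p: p["index"], reverse=True):
        if spent + p["cost"] <= budget:
            selected.append(p["name"])
            spent += p["cost"]

    print("Selected projects:", selected, "- total cost:", spent)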

Within most sectors of government and private industry there are suggestions that IT investments are often accompanied by poor vision and implementation approaches and insufficient planning and coordination, and are rarely linked to business strategies. The successful implementation of new and innovative IT requires the development of strategic implementation plans prior to IT project commencement. Effective planning should go some way towards reducing the current gap between the output of IT investments and expectations. Only recently has there been growing interest in developing planning frameworks to aid IT implementation.

Frederick P. Brooks observes that planning a software project is its most important aspect. He recommends budgeting one-third of the total resources to it. One-sixth of the project funds should be allocated to coding, and a quarter each to testing of the initial program modules and of the final system. Brooks also warns against the crash assignment of additional staff to a software project that has slipped its schedule: indoctrinating the new team members will only eat into the time of the old team, delaying the project even further.
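
Brooks' rule of thumb can be restated as a simple allocation; the total effort figure below is assumed for illustration:

    # Brooks' suggested split of a software project's resources:
    # 1/3 planning, 1/6 coding, 1/4 component testing, 1/4 system testing.
    from fractions import Fraction

    split = {
        "planning":          Fraction(1, 3),
        "coding":            Fraction(1, 6),
        "component testing": Fraction(1, 4),
        "system testing":    Fraction(1, 4),
    }
    assert sum(split.values()) == 1   # the four fractions cover the whole project

    total_effort_weeks = 48   # assumed total effort for the example
    for activity, share in split.items():
        print(f"{activity}: {float(share) * total_effort_weeks:.1f} weeks")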


As Brooks states, the planning phase is the most important phase of the IT project life cycle. If we integrate quality improvement tools into the planning phase, they will make the project management plan far stronger and more effective, since they can be used in risk management activities throughout the project life cycle to identify the probable causes behind recurring risks.

Yaseen and Marashly (1989) presented a conceptual framework for quality control at the different phases of the project life cycle. This quality control conceptual framework can be used in the IT project life cycle to implement quality tools at different phases and reduce the recurrence of errors.

System:

The system intended to be quality controlled – in any phase of project development cycle – is generally composed of three parts:

Input

Process

Output

Quality control (QC):

This activity is generally composed of three successive actions:

Measuring

Comparing

Correcting

These three parameters, after being detailed, can be structured into the 'quality control cube' shown in Figure 2.

From Figures 1 and 2 it should be noted that the quality control activity is illustrated on three faces of the quality control cube:

Face A represents the system quality control (Figure 3) and contains three segments:

(i) input QC,

(ii) process QC,

(iii) output QC.

Each of these three segments is broken down into three successive actions, namely:

Measuring, comparing and correcting the input quality (3 cells)

Measuring, comparing and correcting the process quality (3 cells)

Measuring, comparing and correcting the output quality (3 cells).

The other two faces of the cube (faces B, C in Figure 2) are the two references for conducting the quality control activity presented in face A of the cube.
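
A minimal sketch of face A of the quality control cube as a data structure, with the three segments and three actions giving the nine cells; the cell contents and the example values are placeholders only:

    # Face A of the QC cube: 3 segments x 3 actions = 9 cells.
    # Faces B and C (the system and QC specifications) act as references; here the
    # reference spec is passed in as a simple value for illustration.
    SEGMENTS = ("input", "process", "output")
    ACTIONS = ("measure", "compare", "correct")

    face_a = {(segment, action): None for segment in SEGMENTS for action in ACTIONS}

    def run_qc(segment, measured_value, spec_value):
        """Illustrative QC pass for one segment: measure, compare against the
        reference spec, and record whether a correction is needed."""
        face_a[(segment, "measure")] = measured_value
        deviation = measured_value - spec_value
        face_a[(segment, "compare")] = deviation
        face_a[(segment, "correct")] = "adjust" if deviation != 0 else "none"
        return deviation

    run_qc("process", measured_value=7.2, spec_value=7.0)   # assumed example values
    print(face_a)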

DEVELOPING THE QC CUBE:

The QC cube is developed in parallel with the project development phases. These phases are presented in Figure 6. In the first three phases of the project development cycle (i.e. study, design and contracting & procurement), the specs of both the system and QC (faces B and C of the QC cube) are developed, while in the fourth phase (implementation) the system QC (face A of the QC cube) is activated.

Hence the specs are sequentially indicated and developed in the successive project phases:

The responsibilities of the three participants regarding the specs (Table 1) and the system quality control (Table 2) are summed up in the following figure:

Development of the Quality control cube according to Project life cycle phases

Overview of Six Sigma

Six Sigma was developed by Motorola Corporation in the early 1980s as a method of reducing product failure and eliminating defects. Six Sigma is defined as a data-driven methodology for eliminating defects that seeks to drive manufacturing and service efforts so that there are six standard deviations between the mean and the nearest specification limit in any process. Six Sigma provides organizations with measurable ways to track performance improvement and identify processes that are working well. Organizations using Six Sigma focus on managing processes at all organizational levels, linking senior executives' priorities to operations and the front line in a team-based, collaborative approach (Van Tiem, Dessinger, & Moseley, 2006, p. 694).

The six sigma methodology is a rigorous and focused application of quality improvement tools. The term six sigma refers to the idea of striving for near perfection, at the infinitesimal rate of 3.4 defects per million opportunities. The centerpiece of the six sigma approach is the DMAIC (define, measure, analyze, improve, and control) project cycle (Brue 2002).
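
The 3.4-per-million figure corresponds to a six sigma process under the conventional 1.5-sigma long-term shift; a quick check of that arithmetic:

    # Defects per million opportunities (DPMO) implied by a given sigma level,
    # using the conventional 1.5-sigma long-term shift assumed in Six Sigma.
    import math

    def dpmo(sigma_level, long_term_shift=1.5):
        z = sigma_level - long_term_shift                      # effective z-score after the shift
        tail_probability = 0.5 * math.erfc(z / math.sqrt(2))   # P(outcome beyond the nearest limit)
        return tail_probability * 1_000_000

    print(f"{dpmo(6):.1f} defects per million at six sigma")    # ~3.4
    print(f"{dpmo(3):.0f} defects per million at three sigma")  # ~66,807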

Beyond its role in quality assurance, six sigma helps the organization become more profitable. While it uses many traditional quality methods found in TQM, six sigma’s ability to measure project success in terms of profit or cost savings is appealing to many executives.

Six Sigma as a methodology

From a methodology perspective, Six Sigma is a roadmap to help improve customer satisfaction, reduce process related defects, and thereby reduce costs (Siviy, 2001). The methodology supports project prioritization and selection based on the project’s relationship to variables such as executive strategies within the organization, risk associated with the project, and estimated benefit resulting from the project. The methodology consists of phases and toll gates – or check points – at the conclusion of each phase to help ensure that all work is complete. The Six Sigma Academy (2002) cites the five phases of Six Sigma as Define, Measure, Analyze, Improve, and Control, or DMAIC.


The Define phase, according to the Six Sigma Academy (2002), begins with a problem. This phase helps to clarify what the problem is and why the problem requires a solution. The common chain of activities in the Define phase includes constructing a business case, clarifying how the problem is linked to the customer, understanding the current process, and forming the project team. The Define phase consists of tools such as voice of the customer (VOC), which includes reviewing customer complaints and using interviews and surveys to gain a better understanding of the customer's perception of the problem; critical to quality (CTQ) trees, which are intended to provide more clarity around the VOC; and SIPOC mapping, which is a high level process map that takes into consideration the suppliers, inputs, outputs, and customers of a process. Once activities are completed in the Define phase, a toll gate review of the work is completed, and the project team then moves to the next phase of the project, which is the Measure phase.

In the Measure phase, statistical tools are applied to establish and validate the measurement system that will be used to measure the process for both baseline and target performance of the process (Six Sigma Academy, 2002). The common chain of activities in the Measure phase includes developing process measures; collecting data from the process; checking the quality of the data; understanding the current performance of the process; and determining the potential capability of the process (Brook, 2004). The Measure phase of Six Sigma is filled with statistical tools and techniques. Data must be correctly categorized so that the relevant statistical models can be applied. For example, a normal distribution is most relevant to continuous data while a Poisson distribution is more relevant to count data. Minimum sample sizes of data are computed so that statistical results of the data have greater significance. The measurement system itself is tested using gauge repeatability and reproducibility (GR&R), which is a statistical study to quantify precision errors in the measurement system. Time series plots and histograms are used to provide an understanding of the current process performance. Future process capability is computed by taking into consideration the upper and lower specification limits of the current process and details from the histogram, to arrive at a potential sigma level.
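
A minimal sketch of the baseline capability calculation described above, assuming normally distributed continuous data and hypothetical specification limits:

    # Baseline process capability (Cp, Cpk) from sample data against spec limits.
    # The sample data and the specification limits are assumed for illustration.
    import statistics

    samples = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.4]
    lsl, usl = 9.0, 11.0   # lower / upper specification limits (assumed)

    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)   # sample standard deviation

    cp = (usl - lsl) / (6 * sd)                     # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sd)    # capability allowing for centring

    print(f"mean={mean:.2f} sd={sd:.3f} Cp={cp:.2f} Cpk={cpk:.2f}")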

Once these activities in the Measure phase have been completed and approved through a toll gate review, the project team moves on to the next phase, the Analyze phase. The Analyze phase of Six Sigma offers a toolbox of techniques to identify the critical factors of a good process as well as the root causes of process defects (Brook, 2004). The typical flow of activities in the Analyze phase is analyzing the current process; understanding why defects occur in the process; and analyzing the data from the process and verifying the reasons for defects, particularly any cause-and-effect relationships. This phase also includes many statistical tools and techniques, beginning with detailed process maps, failure mode and effect analysis (FMEA), brainstorming, and Ishikawa diagrams to analyze the current process and to search for root causes of defects. Data are then analyzed graphically using histograms, dot plots, time series plots, box plots, scatter plots, and Pareto charts to understand what the data are saying. Data are also analyzed statistically using normality testing, statistical process control (SPC), hypothesis testing, simple and multiple regression, and design of experiments to help understand the causes of process defects (Six Sigma Academy, 2002). Once the causes of process defects have been identified, the project team, after completing a toll gate review, proceeds to the next phase of the project.
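
A small sketch of the Pareto analysis mentioned above; the defect categories and counts are invented for illustration:

    # Pareto analysis of defect causes: rank categories and report cumulative share.
    # The defect categories and counts are invented for this example.
    defect_counts = {
        "requirements misunderstanding": 42,
        "coding error": 25,
        "configuration mistake": 14,
        "test data problem": 6,
        "other": 3,
    }

    total = sum(defect_counts.values())
    cumulative = 0
    for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += count
        print(f"{cause}: {count} ({100 * cumulative / total:.0f}% cumulative)")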

The Improve phase of Six Sigma focuses on developing, selecting, and implementing the best solution to improve the process (Brook, 2004). The typical flow of activities in the Improve phase consists of generating potential solutions, selecting the best solutions, assessing the risks of each solution, and finally, implementing the best solution. Tools and techniques in this phase include brainstorming, benchmarking, solution screening, and FMEA to generate and screen potential solutions, as well as analysis of variance (ANOVA), regression analysis, and simulation to help determine optimum settings for process outputs (Six Sigma Academy, 2002).

The final phase of the Six Sigma methodology is the Control phase. During this phase, controls are implemented so that the process improvement can be sustained over time. This phase employs tools such as control plans, hypothesis testing, statistical process control (SPC), and control logs.
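
A minimal sketch of the kind of statistical process control check used in the Control phase, an individuals chart with limits estimated from the average moving range; the measurements are assumed:

    # Simple SPC check for the Control phase: flag points outside the control limits
    # of an individuals chart. The measurements below are assumed for illustration.
    import statistics

    measurements = [4.9, 5.1, 5.0, 5.2, 4.8, 5.1, 5.0, 6.5, 5.0, 4.9]
    mean = statistics.mean(measurements)

    moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
    sigma_est = statistics.mean(moving_ranges) / 1.128   # d2 constant for subgroups of 2

    ucl = mean + 3 * sigma_est   # upper control limit
    lcl = mean - 3 * sigma_est   # lower control limit

    out_of_control = [x for x in measurements if x > ucl or x < lcl]
    print(f"UCL={ucl:.2f} LCL={lcl:.2f} out-of-control points: {out_of_control}")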
