Examples of the Software Crisis

Keywords: software crisis examples, software crisis real life examples

The Software Crisis

What was it?

The software crisis of the 1960s, 1970s and 1980s arose because companies were discovering the potential of computer software over manual systems. This led to companies demanding more and more from programmers, which was nearly impossible for a programmer working alone to cater for. For instance, a programmer working on a piece of software could not cope with all the customer's demands in time, so delivery came after the date that had been settled. Because there was not enough planning, projects also ran over the budget stipulated at the beginning of the contract.

Most of the time the software produced did not reach the specification and functionality the customer had requested; on top of that, the programmer would hand over a sheet full of the software's known bugs, which customers accepted at the time. As Booch put it, "a malady that has carried on this long must be called normal", i.e. it was considered acceptable that the software customers bought was of a low standard. The software was also accompanied by poor documentation. Eventually customers saw that, instead of making a profit, they were spending money on software, and that the manual technique was cheaper and more efficient. The production of low-standard software led to damage of property, and there were even casualties caused by the incompetence of the software produced. As stated by oppapers.com, "the software crisis was at first defined in terms of productivity but evolved to emphasize quality". By damage of property it is meant that the programmer would have built the software with poor security, which hackers could breach easily, so valuable data could be stolen from the software's database.
By casualties it is meant that there were embedded systems in machines, for example radiotherapy machines (such as the Therac-25) that delivered lethal doses of radiation; this could have been avoided with more testing to ensure that the system worked correctly.

What caused it?

It was firstly caused by a lack of programmers working together: a great deal of work was concentrated on one programmer, who had to attend to many aspects of the software at once, so there was a good probability of forgetting something, which produced software with bad specifications. When companies tried to counter this by hiring more programmers, it still failed: the same problem persisted, because the programmers did not know how to work in a team.

Programmers did not ask customers what they needed from the software, so the programmer would create a program with the wrong specifications, which led to more maintenance after the software was delivered in order to make it what the customer actually needed. The programmer did not interact with the customer, unlike nowadays, when developers try to get inside the customer's head to produce software with the most desired specifications, so that it works efficiently and effectively in its field.


The complexity of the tailored software that had to be produced was increasing, and complex software needs time to be solved efficiently in a programming language, but the user always wanted to pay as little as possible and demanded that the software be finished quickly. This brought about conflicting requirements: producing a complex system takes time, but the user would not give the time, and a low budget implied that the system could not be very complex. Therefore the system requirements were never reached, which caused discomfort in users, who started to see the computerised system as more of a nuisance than a help.

According to sa-depot.com, another cause might have been "failure to manage risk". The same site states that the waterfall life cycle delays problem identification: verification that the software works correctly happens only just before implementation, with no testing in parallel with the making of the software, which leads to more maintenance after the software is implemented.

Another cause mentioned on sa-depot.com is that "legacy systems must be maintained, but the original developers are gone". This implies that no good documentation was made from which other programmers could get the hang of the structure of the tailored software.

Inadequate teaching of software engineering might have been another cause, resulting in more maintenance and missed deadlines because tasks had to be redone the way they were supposed to be done in the first place, which meant less productivity and wasted time.


The production of the OS/360 operating system is a good example of the software crisis. OS/360 was to be produced alongside the System/360 mainframe. Its production started in the early 1960s, with delivery planned for 1966. The software was the biggest and most complex of its day, with over a million lines of code and an initial investment of 125 million dollars. In spring 1964 the development task got underway with about 70 programmers working on the project; when it was calculated that schedules were slipping, more programmers were hired, bringing the number up to around 150. But as the number of programmers increased, their average standard fell. Despite the sudden increase in staff, development was still estimated to be running roughly six months late. Furthermore, a test run found the system to be very slow, which meant reprogramming work that had already been done, delaying progress further. By the end of 1965 fundamental flaws had been found, and there appeared to be no easy way to fix them. The development plan was rescheduled, and it was announced that the software was running nine months late. At the peak of development a staff of about 1,000 people was employed. Finally, by mid-1967, the system was produced a year later than the date initially stipulated, and IBM took a loss of approximately half a billion dollars. This is a good example of the software crisis: a system too complex for the number of programmers assigned to it, combined with the hiring of programmers of low skill, resulted in late production and over-budget expenditure, a loss for the company producing it.




According to accglobal.com, the "software crisis persists. Software is still difficult to develop and often fails to meet user expectations", and zappa.ubvu.vu.nl gives a reason why: "the software crisis persists because ISs remain as complex and rigid as before to the people who have to maintain them".

Nowadays programmers work in teams: everybody creates part of the software and no programmer works individually. Systems analysts are hired to serve as translators between the programmer and the user who needs tailor-made software, so that the analyst can tell the programmer what the user wants by means of diagrams and structural tools. But software is still produced over budget and past the deadline, and there is still a lot of maintenance on a system after it is delivered, i.e. patches and fixes, because bugs were found in the system, a feature malfunctions, or the software stops responding altogether.

Structure theorem:

The structure theorem was first introduced by Böhm and Jacopini of Italy in 1966, and it was later built upon by Dijkstra, Jackson, Yourdon and Wirth, developing into structured analysis. Basically, the theorem proves that every computable function can be derived using only three control structures, used together or on their own: sequence, selection and iteration.

Sequence means that the code of the program is always executed line by line: line 1 runs before line 2, and so on.
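The example that should follow here was omitted in the original; a minimal Python sketch of sequence could look like this:

```python
# Sequence: statements execute one after another, top to bottom.
a = 5          # this line runs first
b = a * 2      # this line runs next, using the result of the first
print(b)       # prints 10
```

Each statement depends only on the statements that ran before it, which is exactly what the sequence structure guarantees.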

Selection means there is a choice: for instance, if A>B then A is printed out, and if B>A then B is printed out on the screen, using a Boolean value, i.e. true or false. This technique is sometimes referred to as the if/then/else construct (if A>B, then print A, else print B). Another use of selection is the case statement, where the computer is given several cases to compare against a result; when the result matches one of the cases, the lines of code related to that particular case are processed.
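Both forms of selection described above can be sketched in Python (the dictionary-based "case" is one common idiom; the day names used here are just illustrative):

```python
# Selection with if/then/else: choose a branch using a Boolean test.
A, B = 7, 3
if A > B:
    chosen = A   # the "then" branch
else:
    chosen = B   # the "else" branch
print(chosen)    # prints 7

# A case-style selection: compare one value against several cases.
# (Python 3.10+ has match/case; a dictionary lookup is a common equivalent.)
def day_type(day):
    cases = {"Sat": "weekend", "Sun": "weekend"}
    return cases.get(day, "weekday")   # unmatched values fall to the default case

print(day_type("Sun"))   # prints weekend
```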


Iteration is when the computer has to compute the same thing a specified number of times, i.e. a loop.
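A minimal Python sketch of iteration, summing the numbers 1 to 5 with a loop:

```python
# Iteration: repeat the same computation a specified number of times.
total = 0
for i in range(1, 6):   # the loop body runs 5 times, with i = 1..5
    total += i
print(total)            # prints 15 (1+2+3+4+5)
```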

Figure 5: the iteration, taken from http://www.waycross.edu/Faculty/ckikuchi/COMP1301/StructureTheorem.ht

The structure theorem brought about the use of modular design: a program is divided into different modules, which are then joined to produce the final product. A module contains functions that are related to one another, for example the GUI of a program.
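A hypothetical sketch of modular design in Python; in a real project each "module" below would live in its own file (e.g. billing.py, gui.py), and the names here are made up for illustration:

```python
# --- module: billing (business logic) ---
def compute_total(prices):
    # Sum up a list of item prices.
    return sum(prices)

# --- module: gui (presentation) ---
def render_total(total):
    # Format the total for display to the user.
    return f"Total: {total}"

# The final program joins the modules together.
print(render_total(compute_total([2, 3, 5])))   # prints "Total: 10"
```

Because each module groups related functions, one module can be changed (say, a new GUI) without touching the others.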

Gane and Sarson.

The Gane and Sarson technique was invented by Chris Gane and Trish Sarson in the late 1970s. It produces a DFD (data flow diagram), a logical diagram built from graphical models so that any person can get a clear picture of the whole system in an easy way.

The Gane and Sarson notation uses four symbols:

The external entity indicates data entering the system or data coming out of the system. It is represented by a rectangle.

The data flow shows data flowing from one part to another. It is represented by an arrow, with the tip pointing to where the data is flowing.

The data store is used when data has to be saved. It is represented by a rectangle with the left side open (no edge).

The process changes the form of the data flowing through the system. It is represented by a rectangle with arcs instead of corners.
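The four symbols above can be recorded in code as plain data; this is only a hypothetical sketch (the element names "Customer", "Take Order" and "Orders" are invented for illustration), not part of the Gane and Sarson notation itself:

```python
# Each DFD element is tagged with one of the four symbol types,
# and each data flow is an arrow from a source element to a destination.
elements = {
    "Customer":   "external entity",  # rectangle
    "Take Order": "process",          # rectangle with rounded corners
    "Orders":     "data store",       # open-ended rectangle
}

flows = [
    ("Customer", "Take Order"),   # order details flow into the process
    ("Take Order", "Orders"),     # the process saves the order
]

for src, dst in flows:
    print(f"{src} ({elements[src]}) -> {dst} ({elements[dst]})")
```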

Gane and Sarson notation is good because you do not need to be an IT professional to use it: it can be easily understood by the users who want a system built, and the IT people working on the same project get a clear view of what is needed from the system, with none of the ambiguity there is in natural language.

Unlike Yourdon/DeMarco, the Gane/Sarson diagram does not divide the structure into levels, so you look at the whole system on one page; Yourdon/DeMarco, on the other hand, divides the structure into different levels, and with every level you increase the detail of each process.
