The iPremier Co.: Denial of Service Attack
On January 12, 2007, iPremier's Web servers were brought to a halt by a denial-of-service (DoS) attack. (https://services.hbsp.harvard.edu/services/proxy/content/55482727/55482733/bc0bf879de2a3b14574a611f54ec52c6)
A distributed denial-of-service (DDoS) attack is one in which multiple compromised systems, which may be infected with a Trojan, are used to target a single system, causing a denial of service. Victims of a DDoS attack include both the targeted end system and all of the systems maliciously used and controlled by the hacker in the distributed attack. (http://www.ddosprotection.com/about/ddos-information/)
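The mechanics described above can be sketched with a toy access-log analysis: many compromised hosts flood one resource, which shows up as an abnormal concentration of requests. This is a minimal illustration of the pattern, not a real defense; the IP addresses, paths, function name, and threshold are all hypothetical.

```python
from collections import Counter

# Hypothetical access-log entries: (source_ip, path). In a DDoS,
# requests arrive from many compromised hosts aimed at one target.
requests = (
    [("10.0.0.%d" % i, "/checkout") for i in range(50)]   # simulated botnet flood
    + [("192.168.1.5", "/home"), ("192.168.1.6", "/products")]  # normal users
)

def flag_flood(entries, threshold=10):
    """Return paths whose request volume exceeds the threshold --
    a crude signal that a single resource is being flooded."""
    counts = Counter(path for _, path in entries)
    return {path for path, n in counts.items() if n > threshold}

print(flag_flood(requests))  # {'/checkout'}
```

Even a crude count like this would have told iPremier's team early on which resource was under attack and roughly how large the flood was.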
- How well did the iPremier Company perform during the 75-minute attack?
- In what ways were the company’s operating procedures deficient in responding to this attack? In what ways were they adequate? What additional procedures might have been in place to better handle the attack?
In my opinion, Qdata and iPremier really dropped the ball by not thinking steps ahead. They did not have a contingency plan of any sort for this worst-case scenario. iPremier had placed too much faith in Qdata's ability to handle the situation or threat. The first thing I noticed was that the company panicked, since there was no crisis strategy or disaster plan. The attack could not have come at a worse time, since it occurred during a high-traffic period. If the attack was carried out by competitors, then they got what they were looking for by hurting the company's reputation. If I were Bob Turley, I might be worried about whether I would still have a position, since I was not prepared for this infrastructure breach. Bob had not gone over all known threats in an infrastructure risk matrix or developed procedures to immediately identify an attack's type and risk. These threats would need to be continually assessed as new ones emerge, and that identification would have helped determine the right procedures for defending against them.

The first move I would have made is to open a line of communication with Qdata to discuss any risk measures we might have to take. I would not have let the attack go on for so long without pulling the plug on our servers, so that customer information could not be stolen. I would also increase security against attackers: have systems and users adopt stronger, encrypted passwords; put better real-time monitoring in place, with a backup plan that has gone through testing; train employees to understand the types of attacks and how to handle emergency situations; then build a business continuity plan, test it end to end, and repeat. Keeping all software up to date will better protect against viruses and attacks, and it may be worth hiring an outside audit team as a check and balance.
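The real-time monitoring recommended above could be as simple as comparing each minute's request volume against a rolling baseline and raising an alert on a large spike. A minimal sketch follows; the function name, window size, multiplier, and traffic numbers are illustrative assumptions, not anything from the case.

```python
def detect_spikes(per_minute_counts, window=5, multiplier=3.0):
    """Return the indexes of minutes whose traffic exceeds `multiplier`
    times the average of the preceding `window` minutes."""
    alerts = []
    for i in range(window, len(per_minute_counts)):
        baseline = sum(per_minute_counts[i - window:i]) / window
        if baseline > 0 and per_minute_counts[i] > multiplier * baseline:
            alerts.append(i)
    return alerts

# Normal traffic around 100 requests/minute, then a flood at minutes 5-6.
traffic = [100, 110, 95, 105, 100, 2000, 2500, 98]
print(detect_spikes(traffic))  # [5, 6]
```

An alarm like this, wired to an on-call procedure, would have replaced the panicked midnight phone tree with an immediate, documented response.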
The biggest problem is the hosting provider. If I were Bob, I would want to build a much stronger relationship with my provider, stressing the importance of this never happening again, since in a sense it is my company's reputation that is on the line. If that did not work, I would move to a more reliable, reputable hosting provider with high-class support and infrastructure and better security measures. Besides the software updates I would make, the firewalls would also need to be updated. This will protect the company from viruses and also from the careless employee. Again, training employees on what not to do is really important: train them on email and which types of email carry risk, and tell them to always inform somebody of any strange behavior their computer may be exhibiting, especially any "ha" emails.
- In the aftermath of the attack, what actions would you recommend?
My biggest concerns after the attack are legal exposure, public relations, the stock price, and customer information, with network security the least important. The attack just proved to any competitor that my firewalls can be breached. In looking at who could be responsible, I would examine my competition and what they would have to gain from the attack. If I had pulled the plug, it would have taken at least 24 hours to get back up and running. Even if I did not pull the plug and rode the attack out, I would still have to shut down the business because of the security breach. No matter which route I took, I would still be at a loss once my firewalls proved vulnerable.
There are a lot of factors to weigh here. This is the main reason I would use an outside Network Operations Center (NOC): it would provide all the monitoring I might need for any issue that arises, including increases in bandwidth.