The intermittent, ongoing outage of the consumer side of Bank of America's website seems to have finally been resolved.
“Our online banking systems are available to customers,” BofA spokeswoman Tara Murphy Burke said early this afternoon in an interview.
“Given the last few days, we are rigorously monitoring our online banking system, and chose to continue deploying an alternate home page to ensure that customers get to the right destination quickly,” she added.
Such outages tend to feed on themselves. Once customers cannot get in, they return frequently and try to log in again and again, creating even more traffic. Thus, companies deploy alternate sites to help handle the load.
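That retry storm is a well-known failure mode, and the standard client-side mitigation is exponential backoff with jitter: each failed attempt waits a randomized, growing delay so thousands of retrying clients spread out instead of hammering the site in sync. A minimal sketch of the idea follows; the function name and the numbers are purely illustrative, not anything BofA is known to use.

```python
import random

def backoff_delays(base=1.0, cap=60.0, attempts=6, seed=42):
    """Retry delays using exponential backoff with full jitter.

    After failure number i, a client waits a random time between 0 and
    min(cap, base * 2**i) seconds before trying again, so retries from
    many clients do not all arrive at the same moment.
    """
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    return [rng.uniform(0, min(cap, base * 2 ** i)) for i in range(attempts)]

print(backoff_delays())
```

Without jitter, every client that failed at the same moment would retry at the same moment, recreating the original traffic spike on each retry cycle.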
I heard about the current outage Friday along with the rest of the world, but frankly it did not interest me, mainly because I deal with the advisory side of BofA's business (aka Merrill Lynch). A quick call to a contact there Monday, plus a couple of off-the-grid e-mails to brokers, indicated nothing was wrong.
When Tuesday came and I was still hearing about intermittent unavailability of their site I thought, “Whoa, there has to be something to this.”
For one thing, this is BofA's third significant outage this year (there were similar outages back in January and March).
I questioned Ms. Burke about the possibility of some type of attack and she assured me that every indication is that recent performance issues have not been the result of hacking, malware or a denial of service attack.
Even so, this sort of outage resembles certain types of denial of service attacks, especially those that are not meant to kill a site but just keep the defenders guessing at what the problem could be.
I spoke to my former colleague Matt Sarrel, whose eponymous Sarrel Group consulting firm carries out proactive load and security testing on its clients' websites to prevent just this type of event from disrupting customer service.
“Right now, BofA must have hundreds of security analysts reviewing hundreds of device and server logs in order to determine what is happening,” he said.
“It sounds unlikely that they don't know what is really happening, but on the other hand this could be a difficult problem to troubleshoot as it involves multiple functional groups such as developers, security administrators, server administrators, and network administrators,” added Mr. Sarrel.
And that points to just how tricky these things can be to diagnose.
Even with all the intrusion prevention and intrusion detection systems out there (usually referred to simply by their acronyms these days, IPS and IDS), which include dedicated hardware and software, clever hackers continue to come up with creative and at times unrecognizable ways to attack.
As more than one security expert source of mine noted, it could even have been some type of worm planted months ago and only just activated -- not something that came in at the same time the systems went down.
Admittedly, this is mere speculation. Still, it seems highly unlikely that some run-of-the-mill upgrade or traffic spike was the cause; dealing with those things is an IT team's bread-and-butter, day-to-day operation. In other words, high availability/disaster recovery/failover 101.
We won't know the cause definitively until someone in IT at BofA specifically addresses the issue. I'm told, however, that such disclosure is not within BofA's corporate policy.