Application Security

We’re Hackers for the Telco: BT Group Finds the Flaws You Missed

11 Mar 2016 6:00am

It’s still a little shocking to some that one of the world’s largest telecommunications providers should run an IT security consultancy for major organizations. But it has been a decade now since BT acquired Counterpane, the maverick security firm founded by world-renowned cryptography expert Bruce Schneier. Now that so many of the world’s mobile applications are carried over telecommunications systems, BT’s security practice finds itself in the catbird seat, overseeing the collision between the old order and the new stack.

What’s happening now is that everyday flaws are having an immediate, direct impact on global networks. So BT’s engineers have a greater interest than ever before in resolving security flaws in mobile and cloud-based apps.

Be a Doctor, Not a Warrior

Ever since the Counterpane acquisition, Konstantinos Karagiannis has led BT’s ethical hacking and business development team, as chief technology officer of BT Americas’ Security Consulting Practice. A self-declared Whovian, Karagiannis and his team are frequently hired by major firms to assess the resilience and access control of their applications, more often these days before those apps are released, but not always.

Applications, says Karagiannis, have become today’s portals to the network, assuming that role from Web browsers. A decade ago, you might not have had to hack a browser at all; you could read its open source code and take a guess at where its flaws might be. Today, finding the flaws has become more of an adventure in time and space — an exploration that he feels automated tools don’t have the ingenuity or the bravery to undertake.

“Humans have to hack… applications, because they’re written custom, by developers, for a specific purpose,” Karagiannis told The New Stack. “There’s no signatures to look for. It’s not like a network, where you can scan, looking for weak operating system versions, patch levels, or running services.

“With applications, you have to have a hacker who really understands how they work, and what dangerous flaws might be coded into them, because of mistakes that developers are making,” Karagiannis said.

The BT team of “white hats” might or might not work directly with developers, depending upon the whims and desires of the client. Once hired, they don’t report to the development teams, but instead to business units of the client. With certain smaller clients that use third-party applications, the team may be given consent to hack those apps, which means the team has sometimes actually seen them once or twice before.

And in all too many cases, new government regulations or compliance mandates will compel these clients not only to look into code right now, but also to produce detailed reports in only a few weeks’ time.

“A very big customer came to us with a mobile app,” Karagiannis told us, “and it was the eleventh hour. And we worked furiously, and we had to have a call at 1:00 in the morning because I discovered something that was an ultimate showstopper. And they had to pull advertising that was ready to run, and it was a major — but it was their fault. They came to me too late.”

You might wonder why all these clients evidently contact BT, or any of its competitors, at or beyond the last minute. While there’s no one answer that fits everyone, here’s something to think about: Many of these clients have adopted continuous integration and delivery (CI/CD) pipelines, where automated testing and compliance checkpoints are put in place for each build iteration. Nonetheless, these automated procedures are no substitute for human-powered investigations, all of which pause the continuous delivery machine for some time, and some of which shut it down completely.
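
The per-build checkpoints described above can be sketched as simple gates that run on every iteration. This is a hypothetical illustration, not any specific CI product’s API; the gate names, the secret-matching regex, and the vulnerability list are all assumptions.

```python
import re

# One illustrative automated checkpoint: scan source text for
# hard-coded credentials. The pattern is a deliberately simple example.
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|password)\s*=\s*['\"].+?['\"]", re.IGNORECASE
)

def secret_scan(files):
    """Return findings for files that appear to embed secrets."""
    findings = []
    for name, text in files.items():
        for match in SECRET_PATTERN.finditer(text):
            findings.append(f"{name}: possible hard-coded secret: {match.group(0)}")
    return findings

def dependency_check(deps, known_bad):
    """Flag pinned dependencies that appear on a known-vulnerable list."""
    return [f"vulnerable dependency: {d}" for d in deps if d in known_bad]

def run_gates(files, deps, known_bad):
    """Run every automated gate for one build; any finding fails the build."""
    findings = secret_scan(files) + dependency_check(deps, known_bad)
    return ("fail" if findings else "pass", findings)
```

Gates like these catch known patterns on every build; the human-powered investigations the article describes are what find the flaws no pattern anticipates.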

In “blind” situations where the hacker and developer teams remain separate, developers are asked to do a code freeze, and the hackers are given a single iteration for private staging. Karagiannis’ people then conduct tests as both authenticated and falsified users, knowing that an attacker may always be someone the system trusts.
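
The idea of testing as both authenticated and falsified users can be made concrete: replay the same request under different trust levels and confirm the application checks authorization, not just identity. The toy service, tokens, and account names below are hypothetical stand-ins for a client’s staging environment.

```python
# A minimal in-memory service with a session check and an ownership check.
VALID_SESSIONS = {"tok-alice": "alice", "tok-bob": "bob"}
ACCOUNT_OWNER = {"acct-1": "alice", "acct-2": "bob"}

def get_account(token, account_id):
    """Return an HTTP-style status for one account lookup."""
    user = VALID_SESSIONS.get(token)
    if user is None:
        return 401, None                      # unauthenticated (falsified user)
    if ACCOUNT_OWNER.get(account_id) != user:
        return 403, None                      # trusted user, but not the owner
    return 200, {"account": account_id, "owner": user}

def probe(account_id):
    """Replay one request at three trust levels, as a tester would."""
    return {
        "owner":   get_account("tok-alice", account_id)[0],
        "forged":  get_account("tok-fake", account_id)[0],
        "insider": get_account("tok-bob", account_id)[0],  # someone the system trusts
    }
```

The “insider” case is the one Karagiannis’ remark is pointing at: an attacker may already hold a valid session, so a 200 there would be a finding even though authentication succeeded.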

From there, the team can conduct code reviews, in the interest of producing what BT calls threat models. Here, he says, “we do talk to the developers and try to get a sense of all the boundaries of the application, and then try to craft a look at all the types of threats it might face, even in the future — things that might not be there now but, because of the way it’s designed, we see a potential for this or that down the road.”

Real Time Rift

Threat modeling can best be achieved in the QA stage of a product’s development, said Karagiannis, when hackers can utilize an environment that mirrors production as closely as possible. This mirroring typically requires the use of false data, because it may be improper at best, and illegal at worst, for clients to hand over QA code with live customer data. It also involves generating false customer traffic by way of proxies, so that conditions under stress may be taken into account.
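
Seeding the QA mirror with false data might look like the sketch below: synthetic customer records with realistic shapes but no real PII. The field names and formats are illustrative assumptions, not any client’s schema.

```python
import random
import string

def fake_customer(rng):
    """Build one synthetic customer record from a seeded RNG."""
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "name": name.title(),
        "email": f"{name}@example.invalid",  # .invalid is reserved, never routable
        "card": f"4000-0000-0000-{rng.randrange(10000):04d}",  # obviously fake number
    }

def seed_qa_environment(n, seed=1):
    """Generate n reproducible fake records for the QA mirror."""
    rng = random.Random(seed)  # fixed seed: the same data set on every run
    return [fake_customer(rng) for _ in range(n)]
```

A fixed seed means testers and developers can reproduce the exact data set a finding was discovered against, without any live customer record ever entering the test environment.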

“We don’t really like to test things in production,” he said, “because then it’s too hard to promise that we’re not going to corrupt data. Sometimes an application ends up inadvertently accepting data and overwriting valid data. And also, then, it becomes a privacy issue. We don’t like to hack a system that has customer data in it.”

Karagiannis’ consultancy stands as an antithesis to the argument that quality can best be infused into software through automated testing. Conceivably, automated QA may have the advantage of being able to work with live customer data from inside a protected environment, within the corporate firewalls. However, it’s only capable of finding the types of behavioral and architectural flaws that could be identified before — patterns which are the basis for automation in the first place.

“Security metrics haven’t really given us what we want,” remarked Bryan Fite, BT Global Services’ senior cyber-physical consultant. “We have the dashboards, and maybe we’re not showing the right stuff.”

Fite gives credit to “organizations that acknowledge that they have a challenge — as opposed to lawyering up — that actually publish and say they’re going to fix this, or actually engage the community in saying, ‘We need help!’” But Fite believes that, of the small amount of information that organizations do share about threat metrics, much of it isn’t even the right information, due in large measure to reliance upon automated tools and instantaneous dashboards to produce this data.

BT’s case is that skeptical, objective, independent humans have to be involved as early in the development process as possible, or at least toward the end rather than not at all. At issue now is how these objective code review and penetration testing processes become integrated with the current practices of organizations — practices that are already evolving toward more automation, not less.

Karagiannis said his team can test, and has tested, software builds in the midst of code drops. But BT cannot then certify the product to be vulnerability-free.

“The best we can do is say there was a best-effort test in a moving target environment,” he said. “Because sometimes I’ll look at a piece, then they’ll change it and all of a sudden introduce a flaw, and I don’t see it. So it’s dangerous to not have a code freeze.”

Automation vs. Ingenuity

Ironically, the monitoring and automated development platforms used by some of its clients are generating loads upon loads of traffic data, which BT can only ingest into its test environment through batch inputs. Put another way, when the client automates more, the white-hat team has to automate less. Last week, BT entered into a new partnership with Intel Security (which includes the McAfee brand), in an effort — says BT — to normalize the security data that monitoring tools produce.

“One of the biggest challenges is the speed to ingest that information,” said Bryan Fite. “Believe it or not, some threat monitoring systems are even dumb SIEMs that use batch import. I think we need more real-time because it can all happen in the blink of an eye, and the defender’s dilemma is that they have to have twenty clicks, or they might have to wait two days, or they might have to have a service ticket.

“All the data in the world, the ‘big data,’ doesn’t matter if it’s not current. Data does have a shelf life unless all you’re doing is post-mortems and forensics,” Fite said. “We need real-time, to be able to do decision support, to make our human analysts more effective.”
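
Fite’s shelf-life point can be made concrete: a decision-support feed should drop events that are no longer current before an analyst ever sees them. This is an illustrative sketch, not any monitoring product’s API; the 300-second freshness window is an assumption.

```python
FRESHNESS_WINDOW = 300.0  # seconds an event stays actionable (assumed)

def fresh_events(events, now):
    """Keep only events recent enough to support a live decision.

    `events` is an iterable of (timestamp, payload) pairs. Stale events
    belong in the forensics archive, not in the analyst's queue.
    """
    return [(ts, payload) for ts, payload in events
            if now - ts <= FRESHNESS_WINDOW]
```

The design choice is that staleness is decided at ingest time rather than left for the analyst to notice, which is exactly the batch-import failure mode Fite describes.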

“With threat modeling,” added Karagiannis, “it works best if you have too much data from too many sources around the world, all the time. Then you can see patterns and warn customers that this wave of attacks is coming.”
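
The pattern-spotting Karagiannis describes can be sketched as pooling attack reports from many sources and flagging a signature whose recent volume suddenly exceeds its historical baseline. The threshold factor, field shapes, and signature names below are illustrative assumptions.

```python
from collections import Counter

def detect_wave(reports, window, now, baseline, factor=3.0):
    """Flag attack signatures surging above their historical baseline.

    `reports` is an iterable of (timestamp, signature) pairs pooled from
    many sources; `baseline` maps each signature to its typical count
    per window. A signature is a "wave" when its count in the last
    `window` seconds exceeds `factor` times that baseline.
    """
    recent = Counter(sig for ts, sig in reports if now - ts <= window)
    return sorted(sig for sig, count in recent.items()
                  if count > factor * baseline.get(sig, 0.0))
```

With data pooled globally, a surge seen against one customer becomes an early warning for every other customer running similar applications.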

What’s getting trickier, though, is the process of distilling (for lack of a more formalized word) the data that emerges from modern threat modeling — which deals with individual applications and millions of protected customers — so that it actually can be shared with vulnerability assessment firms legally. For all the unique value that only humans can provide, it would still be nice if the process of sharing the potential flaws in software and systems architecture could be automated.

Intel is a sponsor of The New Stack.

