New Wave of DDoS Attacks

 

Hear what the experts say about the new wave of DDoS attacks—how they’re larger, more complex and dangerous than ever. What are the risks to your business? How can you prepare? Learn from Neustar’s Rodney Joffe and other security leaders.

 

Video Transcript

Tom Field

Matt, what’s different about this latest wave of DDoS attacks as compared to what we’ve seen before?

Matt Speare

From my perspective, the major difference is that these really were not fraud-related; these were almost political statements. That being said, while we were all looking for fraud in conjunction with the denial-of-service attacks, it didn’t appear to be there. The other surprising piece was the absolute sheer volume, and the fact that we knew about the attacks several days in advance, with the attackers naming their time and target, and we could still do nothing to stop them. That was very shocking.

Michael Smith

I don’t see that there’s anything completely revolutionary about these attacks. Web-application attacks, that’s not new. Compromising content management systems, that’s not new. Talking trash on Pastebin, that’s not new. Hacktivists have been doing that for quite a while. Installing booter scripts and using those to make requests from someplace else, that’s not new. It’s combining all these techniques and then being able to hit specific targets at the same time. That’s where the attackers showed a little bit of genius. It’s an incremental refinement of existing techniques, and overall it has had a large impact.

Rodney Joffe

I’d like to add to that. I think you’re absolutely correct. What we see that’s very surprising is that this time they demonstrated a very interesting knowledge of the way that the network works, which we haven’t really seen in the past. Very often, we’ve seen attacks that use specific techniques but are relatively unknowledgeable about how the network works. This time, they were very smart about the way they went about it, and I think from our point of view that’s what’s certainly new this time.

Stephen Mulhearn

I agree with both of you. The trend is for the actual attackers to do research. They’re investigating and they’re formulating attacks in a much more organized manner. Rather than just a single vector of attack, there are actually multiple vectors, and they’re even using large volumetric attacks to obfuscate the true destination of the attack itself.

Field

Matt, you’ve been at M&T for some time and have been in banking for some time. Prior to the past few months, what was your previous experience with DDoS in advance of this so-called new wave?

Speare

When we had seen denial-of-service attacks in the past, while the techniques could be different, generally they were coupled as a mechanism to hide fraud attempts. As an example, with corporate account takeovers, they had the credentials and had even figured out how to defeat the use of a token for second-factor authentication, conduct a few fraudulent wire transfers and then hit with a wave of denial of service so that the people who would be involved with trying to get systems back up and running would be diverted from looking for any type of fraud attempt. Those attacks also lasted for weeks on end, where they would go in waves. It would be on, then off. Then on, then off.

This latest wave was a very distinct period of time, generally 24 hours, and then they were gone and off to the next target. While we were all scrambling to look for evidence of fraud that was occurring, we were all surprised when we could not find any.

Field

Does that jibe with what others have seen, not just in financial services but in other industries with DDoS prior to this new wave?

Smith

Different industries have different attacker motivations. What we’ve seen a lot in the commerce space with regular online merchants is that they get hit a lot with protection racket scams. Somebody shows up, talks to an online support representative, and says, “We are DDoS. Give us $6,000. You have two hours.” That’s fairly typical. But when you’re attacking financial services institutions, they’re not going to pay protection rackets. You have to have a different model as an attacker for a way to monetize that. Historically, we’ve seen it tied with fraud. In fact, this time last year we saw a lot of Dirt Jumper activity associated with Zeus. It was just like Matt talked about. Conduct lots of fraud and hit the wire funds transfer system so that they couldn’t reverse the charges, and keep the receiving bank from receiving the “this is fraudulent” message.

We’ve also seen it with hacktivists where they want to do it for prestige, where it’s really about getting their message out there and not really about who the target is. They’ll pick whatever they can pick that’s the easiest target just because they want to cause an outage so that way they get free publicity. There are different actors out there. They have different motivations; they have different toolsets; they have different capabilities. It’s just that I believe at this point we have a new set of threat actors and a new set of capabilities that we have to consider.

Mulhearn

The other aspect of it that we’re seeing, not just from an enterprise or a hosting-center perspective but also from a service provider perspective, is that an awful lot of these guys are actually getting wise to the tracking and traceability that the service providers apply when they see a DDoS attack. Obviously, there’s a certain amount of human latency there. What we’re seeing, and what we’re getting as feedback from some of these communities, is that they understand it takes a certain amount of time to even deduce that a DDoS attack is ongoing. What they’re actually doing is pulsing their attack within that window, ending it before the service providers are able to put mitigation in place. The service providers are almost chasing ghosts as well.

Joffe

While I think all those things are relevant, one of the most concerning things is that in this particular case, when you look at the volumetric side of the attack, this is a battle we’re not going to be able to win by adding capacity. It’s very clear from the way that we saw this attack go on that they continued to monitor the targets. As they saw the targets were coping, they added systems. They were also very smart in that, as we took down compromised systems, they were scanning for and adding new ones. I don’t know that looking at this in the way we traditionally look at DDoS is going to work. I don’t think that adding bandwidth is going to solve this problem. We have to find another way.

Field

Rodney, in your presentation you described what you called the perfect DDoS. Based on what you were just talking to us about, what would you say would be the perfect response to this DDoS?

Joffe

That’s part of the problem. It wouldn’t be a perfect DDoS if there were a perfect response, or a response that would actually work. I think what we have to be able to do - and it’s one of the things we talk about in the DDoS world all the time with spoofed attacks - is to actually implement and enforce identification at the end. But that’s not going to solve this problem either. I think this is going to be a combination of having best practices in place and using IDS and IPS, which help you to differentiate between the attack traffic and legitimate traffic.

The second thing is that we’re going to have to find a way of being able to identify the sources that traffic is coming in from and in some other way quench those.

Thirdly, we have to, without question, find a way of being able to apply sanctions against the actors behind it so that over time the message gets across. That will certainly take care of a large group of the attackers, but it’s not going to solve the problem ultimately. I don’t have an easy solution, unfortunately. I think it’s going to be a combination of factors, as well as perhaps a different way of looking at how we move bits around the Internet, unfortunately.

Mulhearn

One of the things that we have seen - and we’ve had numerous discussions internally now about this - is that traditionally the way we’ve always looked at DDoS is what the attackers could do and how we could react to it or how we could protect something. I think now we have to start looking at what our end play is and what our end goal is. It’s maintaining the service. That’s what we’re trying to do. If we then start reversing that and actually look at what would put that service under threat, we can work from the service backwards and start looking at what mitigation and controls we can put in place. Because whether we like it or not, there will be volumetric attacks. There will be other types of attacks and new types of attacks that come through. What we have to start understanding is what would put that service in jeopardy. We should start thinking of it like that, rather than “they’re going to bombard us with traffic, how am I going to limit or restrict it,” and approach it from the other way around.

Smith

We’ve looked at this from a couple of different angles. One is the unique nature of what Akamai does. We’re really a platypus. We do many different things and it’s very, very hard to categorize us. But one of the things that we do is we deploy servers into most of the major carriers that are out there. We’re one or two ASNs or BGP hops, depending on what vernacular you want to use, away from every user and every server on the Internet. What this means is that we have lots of relationships with the carriers. We’re looking at how we notify carriers to tell them that they have an infection problem.

Historically, we’re dealing with compromised desktop computers when you start talking DDoS. Normally, what we do then is we have mechanisms where we can go out to an ISP that we partner with and say, “You guys have a problem. It’s here, here and here. At this particular date and time, these folks were involved in this botnet.” There are people that offer that as a service out there.

It’s reached the point where DDoS and compromised servers and compromised desktops are more like a public health issue. It’s not necessarily my problem. I don’t have the infection. I have managed my IT appropriately. But the fact that somebody else now has this disease or this epidemic is actually impacting my ability to do business. We have to start looking at this with a common-good, epidemiology or public-health approach: how we can actually stop these at the point of infection, or at the point where that illegitimate traffic starts, rather than dealing with it at the point where it has a chance to collect en masse and actually attack the target.

Joffe

That’s actually a very good pointer to the one place where I think reference needs to be made. I know you’re aware that the issue is dealing with it at the point of infection. A lot of networks are getting better and better at this, and we need to get to the point where, when a network hasn’t taken those steps, we’re able to sanction it in some way and, in a perfect world, actually disconnect those networks and make life very difficult for them unless they do take action. But I think that’s going to be a really tough thing to do because you’re dealing with international borders. We actually have countries that you can’t disconnect from the Internet and that really don’t do the right thing.

Smith

You can do some things. For instance, you can filter at peering points. You can do some things like that. Also, you can do some crazy tricks with DNS, where you can do DNS black holes. Sometimes they’re effective and sometimes they’re not. When they work, they work great. The thing to consider with this particular wave of attacks is that the attackers are using compromised servers, whereas in the desktop world you have established paradigms for updating: applying anti-virus updates, applying operating system patches, doing the threat research and getting those changes down to the desktop user to fix the problem. In the content management system world, you don’t have that. Take WordPress, for example. You only know that you have patches for WordPress if you follow the WordPress RSS feeds and see that there’s a new version out, or if you log into the administrative interface of WordPress and receive a notification that there’s a new version and you need to upgrade. What we need to put a lot of effort into is actually updating a lot of these web applications, which will make most of this particular threat go away.
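
[Editor’s note: as an illustration of the visibility gap Smith describes, here is a minimal sketch that asks the public WordPress version-check API for the current release and compares it with a locally recorded version. It is not something shown on the panel; the endpoint, the hard-coded version string and the Python tooling are assumptions.]

# Editorial sketch, not from the panel: check whether a self-hosted WordPress
# install is behind the latest release. Assumes the public version-check
# endpoint at api.wordpress.org and a locally known version string.
import json
import urllib.request

INSTALLED_VERSION = "3.4.2"  # hypothetical; in practice read wp-includes/version.php

def latest_wordpress_version():
    url = "https://api.wordpress.org/core/version-check/1.7/"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # The API returns a list of "offers"; the first entry is the current release.
    return data["offers"][0]["current"]

if __name__ == "__main__":
    latest = latest_wordpress_version()
    if latest != INSTALLED_VERSION:
        print(f"WordPress {latest} is out; installed {INSTALLED_VERSION} may be vulnerable.")
    else:
        print("WordPress is up to date.")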

Speare

The way that I think about this overall is that, as an enterprise customer, I’m really looking for partners like Akamai, Fortinet and others to help with it. I’m at that interesting size where I do not have all the capabilities I need to reach very far beyond my network to get some of those early trigger points. At the same time, we have a very large customer base, and I want to be able to protect those services for them. So we leverage third parties and layer that technology in: if one particular exploit seems to have the number of a Prolexic or another service provider, I’ve got a complementary service available from an Akamai, Fortinet or Neustar to help mitigate that risk to my customer base overall. Unfortunately, as an enterprise customer we’ll never have the resources to reach much beyond the borders of our network, so we depend on the concept of a clean pipe from those providers, scrubbing as much of that bad traffic as possible so that I can let my legitimate customers in.

Field

Michael, you as well as our other sponsors have spent time studying the recent attacks. Based upon your studies, what are some of the key lessons that organizations should learn?

Smith

We’ve not only studied it, we’ve lived it. We’ve received some of this attack traffic. We have lots of bank customers. Some use us in varying capacities. Some folks just use us for delivery of “logo.jpg” - boring traffic from a security perspective. Some folks use us for transactional data: online banking, very interesting, high-profile and targeted traffic. We’ve seen lots of different patterns throughout this entire attack campaign, lots of things that worked and lots of things that didn’t work.

What’s interesting to me is that some things worked in some places but didn’t work in other places; for instance, information sharing. You never want to be the first person receiving the attack in a large campaign, simply because you don’t know the established patterns. You don’t know the attackers’ tactics, techniques and procedures. You have to talk to somebody else to get those. When you’re the first person receiving the attack out of the blue, you’re like, “Wait a minute. What is this thing?” You have to actually diagnose it. Sometimes, even though it looks like a denial-of-service attack, it’s a performance issue. We’re in the holiday season now, and sometimes it’s just a flash mob, everybody checking their account balance because it’s Cyber Monday and they want to go online and buy something. That also looks like a denial-of-service attack at the top level. Once you start digging down into it, it looks different.

Information sharing was really good, especially once you realize that this is a longer campaign and that they’ve hit multiple organizations. Information sharing suddenly becomes critical: here’s a packet capture of what we saw. You can take that packet capture and build IDS alerts and build tools. You can put them into your boundaries. You can push them upstream to your providers. You can monitor for these things so that you detect when the attack starts, and you know a pattern of what these things look like so you can block them. These things are very, very important. I think that worked fairly well.
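
[Editor’s note: as an illustration of “take that packet capture and build IDS alerts,” here is a minimal sketch that pulls the most common HTTP request line out of a shared capture and prints a Snort-style rule. It is not from the panel; the file name, port, SID and the third-party scapy package are assumptions.]

# Editorial sketch: distill a shared attack capture into a candidate IDS rule.
# Assumes a pcap named attack_sample.pcap and the scapy package; the SID and
# message text are placeholders.
from collections import Counter
from scapy.all import rdpcap, TCP, Raw

def top_request_line(pcap_path):
    lines = Counter()
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            first_line = payload.split(b"\r\n", 1)[0]
            if first_line.startswith((b"GET ", b"POST ")):
                lines[first_line.decode(errors="replace")] += 1
    return lines.most_common(1)[0] if lines else (None, 0)

if __name__ == "__main__":
    request_line, hits = top_request_line("attack_sample.pcap")
    if request_line:
        uri = request_line.split(" ")[1]
        print(f'# request line seen {hits} times in the capture')
        print(f'alert tcp any any -> $HOME_NET 80 '
              f'(msg:"Suspected DDoS request pattern"; content:"{uri}"; http_uri; '
              f'classtype:attempted-dos; sid:1000001; rev:1;)')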

Also, contrary to what people believe, the attacks didn’t have that much of an impact. Yes, it made lots of press. But the actual impact on a bank, say maybe a two-hour outage, isn’t that large. The bigger thing is the resources that you used to protect against that. The press, the perception of what’s going on, that actually has a bigger impact than the attack traffic itself.

One of the things that didn’t work there was controlling the media. I don’t think anybody has talked about that yet, but there was this point where normally with fraud you don’t want to talk about it. Nobody wants to talk about fraud or particular vulnerabilities in their infrastructure. But when you sustain a longer outage, longer being four hours or above, you need to get out there, control the message and get messaging out to your customers to say, “Our online banking is down, but go to an ATM. Those are still up and running. Go to the branch office. Here’s how you find out where your local branch office is.” Maybe it’s a phone line. Maybe you go to a map provider or something like that. From a response side, we did really well. The outages that people took were within norms for what you would expect. It’s within that acceptable risk timeframe. The thing that, as an industry, we didn’t do a really good job at was getting information out to the public as a whole on what’s going on.

Mulhearn

You make a very good point about the messaging that we put out to the public. I think the press in general, and us as an industry, have quite a challenge, because what we’re actually dealing with is an extremely technical subject being explained to a general nontechnical audience. Therefore, if you look at how the press reports on this, they’ve really only got one metric to work with that they generally home in on, and that’s the size of the attack. It’s very difficult for them to judge how successful it was. Was it successfully mitigated? What methods were used, what was the actual attack and what did it consist of? We’ve got those challenges anyway, and depending on who you talk to in the press, they will obviously report it slightly differently. But if we look historically, they’ve all reported on the size of the DDoS attack.

Speare

When I think about the messaging piece and what the media says, they use the term “attack.” For the normal consumer, what are they going to think when it’s their bank that has been attacked? It’s going to be, “Is my account safe and secure? Are my funds still there?” Part of our job, and the messaging that we have to get out, is that the word “attack” does not mean that accounts have been compromised or credentials have been stolen. A denial of service is about starvation of resources that prevents you, the customer, from being able to do business online with us.

Field

Steve, attackers learn lessons, too. What do you think they might have learned from the recent wave and what might we expect to see in another wave of attacks?

Mulhearn

The major thing that they have learned is that they’ve got lots of tools in their tool bag. What we’re seeing are a number of new types of attacks emerging with this multi-vector approach. They’ll use a volumetric attack to obfuscate a layer-seven attack. The actual goal of what they’re trying to achieve is quite difficult to find. It’s almost the needle in the haystack. They’re also learning the internal processes that an awful lot of mitigation providers or tools use to detect and mitigate, and what they’re actually trying to do is obfuscate, hide and get around the functions that we have today. That’s going to continue. We’re going to see more and more tools come out for the less technically adept people out there to start these attacks.

From the perspective of the motivation for a lot of these, I had a good conversation with a senior person at one of the large UK banks. I said to him, “Does Anonymous worry you?” And his honest answer was, “No, not really.” He said that because they actually tell us what they’re going to do before they do it, which is very clear – they give their name, they give who they’re going to attack, when they’re going to attack and how long the attack is going to last – that allows us to prepare. His biggest fear is the guys that don’t warn him, and I think those are the more complex attacks; when they get the tools right, that’s a severe risk to any service.

Joffe

From my point of view, what was critical and different about this attack compared to anything else is the fact that it provided an incredible teaching opportunity to the folks who have tried attacks in the past and have been unsuccessful, where we’ve been able to relatively quickly and easily mitigate. Those same people watched what went on, and the discussions in the underground are relatively interesting as we watch them. There’s certainly discussion about why it was that this worked when the things they’ve tried in the past have not. What we’re going to see is more and more attacks that follow the same kind of pattern because finally there’s something that - as Matt mentioned in the beginning - worked very well.

Smith

One of the things that I grappled with a lot in week two or three of this campaign was how much information we wanted to get out there. I received attack traffic in the first two weeks. I know what the pattern is. I have some source IP addresses. I’ve got other information. I want to share that with people. How do I get that information out to them? How do I tell the public this doesn’t really look like a hacktivist attack and here’s why? It doesn’t match the TTPs of hacktivists. And yet at the same time, what I don’t want to do is give the attackers better damage assessment. In other words, I don’t want to tell them how effective they’re being in some places and how they could improve their attack. And they could. There are several different techniques that they could use to actually make this attack worse.

I had to straddle that line of how much information to share with folks, and how much to share publicly versus behind closed doors, to try to give the defenders the best chance of getting the best information to defend their infrastructure.

Speare

As one of the enterprise customers on those back channels, I can tell you that it was most appreciated to be able to get that information through the ISACs. But at the same time, not everyone that was on that attack list actually goes out there and looks at that information. It’s pretty limited.

Field

Rodney, what unique qualities did you see in the attacks as well as the targets of the attacks?

Joffe

The things that we saw were the switching between DNS and web services and a very intelligent way of combining them. One of the things mentioned earlier was the fact that they did a fair bit of reconnaissance, which we were somewhat surprised by; the announced targets, for example. I won’t name any particular bank, but imagine bank A. We discovered that they were relatively intelligent in what they were doing based on the fact that we provided DNS infrastructure services for recently acquired properties of that bank, and we saw those attacked as part of the process. It was never mentioned. It was never discussed. They had done enough research to not only look at the primary bank itself, but to look at the acquisitions and subsidiaries and actually attack all of them. There were a couple of targeted financial institutions in this attack that actually made use of DNS services and names that weren’t in their primary domain for some of their operational systems. Where banks have merged and become the “bank of whatever,” they actually maintained some of the older domain names from the acquisition for enterprise services. We actually saw attacks against those domains, which was very surprising and showed that the folks this time had decided to do a lot more research. That was very surprising to us, and has given us a concern now that the old adage, “security by obscurity,” really isn’t a good way of doing things, because they are now doing a much better job.

Field

Michael, we’ve talked a lot about financial services organizations. What are the takeaways for non-financial organizations? From your perspective, and others’ perspectives as well, how prepared are they for DDoS attacks if they become the targets?

Smith

I deal a lot with those other non-financial organizations. It varies widely across the board. It really depends on who they are and what their business is. Are their websites just exploratory in nature? Are these brochure sites with visibly static content that they don’t put a lot of effort into? Or is it a full e-commerce site where they derive the majority of their income from that particular website? Each organization has a different posture, a different need for their website and a different use of their website.

One of the things that we’re saying is that no DDoS is ever the same. The attackers are different, they have different technologies and they have different abilities to target. They have different ways to do battle damage assessment. And no organization that’s defending is the same, because they have different infrastructures, different websites, different content management systems and different capabilities. Things are always going to change.

For any organization, it’s looking at what the threats are. Starting at the bottom, say we have a five-page brochure website. Somebody takes it down. This is not the end of the world. You might end up in the press, but usually not. It’s a small website, versus somebody that is an online card processor. They actually can trace how much money they make from that website per hour, per day, per week, based on business cycles. Cyber Monday it’s obviously higher. The day after Thanksgiving is obviously higher. Some day in the middle of June, it’s not as much. They can look and say, “What’s the direct impact of this attack financially for us?” Then there are other properties that are non-income creating, but at the same time it’s a brand. Think clothing, media consumption and food, where they don’t necessarily have a website that costs a lot of money or gives out a lot of information, but because they have a well-known brand, they invest a lot of time and effort into building that brand and building a particular messaging around that brand. When they have an attack that’s publicized, it negatively impacts that brand and it creates inefficiencies in their marketing and their ability to reach consumers. There’s a huge spectrum out there of what the impact of a denial-of-service attack is. It’s up to every organization to look at that and say, “What’s actually the impact of this going away for us?” It might be nothing, or it might be that our business ceases to exist after four days of attacks.

Look at that. Use that as the basis for how much involvement we need with on-premises devices. What’s our capacity in circuits? How much capacity do we have on our servers? Do we need to engage a third-party service because the attackers have so much capacity and capability that it vastly exceeds what we’re doing today? There are lots of “it depends” in there, but it all starts with what the actual risk is: what’s the impact and what’s the likelihood of it.
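
[Editor’s note: a back-of-the-envelope sketch of the impact math Smith describes, not figures from the panel; the revenue numbers and outage lengths are made-up placeholders.]

# Editorial sketch: rough revenue-at-risk from an outage, by business cycle.
# All numbers are illustrative placeholders, not data from the panel.
HOURLY_REVENUE = {
    "cyber_monday": 250_000,      # peak shopping day
    "ordinary_weekday": 40_000,
    "mid_june_lull": 15_000,
}

def revenue_at_risk(day_type, outage_hours):
    # Direct financial impact = revenue the site would have earned while down.
    return HOURLY_REVENUE[day_type] * outage_hours

if __name__ == "__main__":
    for day in HOURLY_REVENUE:
        print(day, f"2-hour outage costs ~${revenue_at_risk(day, 2):,.0f}")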

Mulhearn

Virtually all of our customers that we’re talking to today, who have either had an attack or envision having an attack, are looking at differentiating their own services. They’re actually differentiating the level of criticality of those services, and that to them is a capability they have to have. They understand that they won’t necessarily stop all of the DDoS traffic. But the critical services, the ones that either have a financial impact for them or would cause brand reputation damage, are the ones that they typically put at the top of the list.

Joffe

We’re all essentially agreeing. Where I think the differentiation comes in is that if companies haven’t been through the process of identifying the impact of an outage to specific services, then they really aren’t looking at the problem the right way. This is not necessarily about DDoS or hacking. That’s not the issue they’re going to look at. What they’re going to look at is the impact to the company or the organization if there’s a failure of particular services. They need to understand the financial impact of that and how it impacts the rest of the business, and then make decisions about what efforts they put in place to actually mitigate it themselves and what else they have to have in place in case they can’t mitigate it themselves. Very few companies do that. It’s happening more and more, but, among the companies we look at, in general it’s those that have been victimized and don’t want to see it a second time. There are some cases where once was actually too much, and you end up with companies that can never recover.

Field

Steve, we talked earlier about Rodney’s term, “the perfect DDoS,” and we talked about the perfect response. Based on discussions we’ve had about lessons that we’ve learned from the attacks, lessons the attackers might have learned, what are you thinking next-generation DDoS defense has to include?

Mulhearn

If we look at the tools in the attacker’s tool bag, generally one size doesn’t fit all in DDoS mitigation. You’ve got to look at multiple mitigation methods to actually stop the attacks that are occurring today.

That’s one of the points. The other point is that I always separate and differentiate between trigger and mitigation, and the reason I do that is relatively simple. I can use behavioral methods to trigger mitigation. However, that method of mitigation doesn’t actually have to have anything to do with the trigger. What I mean by that is I can have a volumetric attack occurring. However, if that’s the trigger, I have to separate how I mitigate it, because the actual threat to my service could be a layer-seven targeted attack. They could understand exactly how one of my databases or back-office systems operates. They then know that if they put in enough good requests, and there’s nothing bad about the data or the requests, they’re going to exhaust my resource. What I have to start doing and start understanding is where those services are under threat and at what level they’re under threat.
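
[Editor’s note: a minimal sketch of the trigger-versus-mitigate separation Mulhearn describes, not panel material; the metric names and thresholds are illustrative assumptions. The point is only that the signal that raises the alarm need not be the thing you mitigate.]

# Editorial sketch: separate the trigger (a behavioral threshold) from the
# mitigation decision (which resource is actually under threat).
from dataclasses import dataclass

@dataclass
class Metrics:
    inbound_gbps: float        # raw volume at the edge
    search_qps: float          # layer-7 requests hitting an expensive back end
    db_pool_in_use: float      # fraction of database connections busy

def triggered(m):
    # Behavioral trigger: anything far above the learned baseline wakes us up.
    return m.inbound_gbps > 5.0 or m.search_qps > 2_000

def choose_mitigation(m):
    # The mitigation targets whichever resource is genuinely threatened,
    # which may not be the metric that tripped the trigger.
    if m.db_pool_in_use > 0.8 or m.search_qps > 2_000:
        return "rate-limit or cache the expensive layer-7 endpoints"
    if m.inbound_gbps > 5.0:
        return "divert to upstream volumetric scrubbing"
    return "keep monitoring"

if __name__ == "__main__":
    sample = Metrics(inbound_gbps=7.2, search_qps=3_500, db_pool_in_use=0.93)
    if triggered(sample):
        print("Trigger fired; mitigation:", choose_mitigation(sample))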

We have to start looking at the services first, understanding what their capabilities are and then working back from that. We heard earlier about flash crowds. Flash crowds can cause a denial of service. It’s not that they’re bad, and it’s not that somebody has instigated a malicious method of causing the denial of service, but they can often cause one. Everybody has to come to the acceptance that you’ll never stop all of that bad traffic. The one thing that you have to maintain is a level of service, and that to me is extremely important.

The other point is that we have to start separating ourselves from the way we’ve historically thought about firewalls, IDSes and IPSes as stopping all the bad traffic. DDoS mitigation is all about maintaining service, and that’s where we have to go. That’s what we have to start looking at for the next generation, and that’s what the defense has to include: an understanding of what the end point is and the goal that I’m trying to achieve.

Smith

Something that people need to realize is that when you’re defending against a DDoS, it’s not a binary defense. It’s not that you’re either up or you’re down. People need to start looking at how we allow our service to degrade down to the basic minimum functionality, and do that gracefully. A great example: we have customers with dynamic applications on their websites, such as an ATM locator, a branch office locator, a search page or a log-in page, things that normally you want publicly available. And they’re publicly available; anybody can go and look for their local branch office. But during an attack, those become targets. They become a really, really nice attack surface for application-type attacks. A very simple response to that is to take that particular functionality and make the user log in to get to it. What you do is reduce the publicly available footprint or attack surface - however you want to say it - of that particular application.

However, on the other side, if the attack is of significant volume, then by switching that particular functionality to sit behind a login wall, the login itself becomes a target and something that can fall over. In that case, what you want to do is fail over to just disabling that particular functionality and blocking any requests to that particular part of the site.

Part of that is a weird Akamai capability, I’ll freely admit, just because of the distributed nature of what we do. But it’s an approach that people can start thinking about. For example, my site now is very content rich and very dynamic. It has personalized content and lots of exchanges between the user’s browser and the server. It might be a mobile application; it might be a mobile site. It might be some kind of mash-up, some kind of geo-location thing, some kind of data feed or an API. How do I take any of these services and allow them to degrade gracefully to the point where they have the minimum functionality that I need people to get to? This might even involve looking at what I need that website for and looking at the basic functionality of it. Maybe the reason for that website is to say, “Yes, we’re here and we’re online.” It might be to say this is how you go to your local branch office, because we don’t do any transactional banking. It might be this is how you find out what your current balance is. It might be this is how you grab your latest statement for the last 30 days. Each of these is a different level of involvement and interaction and a larger attack surface. You have to think about what it is that you’re doing user-experience-wise and how you limit that functionality in case of an attack.
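
[Editor’s note: a minimal sketch of the graceful-degradation idea Smith outlines, not something shown on the panel; the Flask-style app, route names, posture flag and login check are all illustrative assumptions.]

# Editorial sketch: let a site degrade gracefully instead of failing outright.
# Uses a hypothetical Flask app; the posture flag and routes are placeholders.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "change-me"

# 0 = normal, 1 = require login for expensive features, 2 = disable them
DEFENSE_POSTURE = 0

def degradable(min_posture_to_gate=1):
    def wrapper(view):
        def guarded(*args, **kwargs):
            if DEFENSE_POSTURE >= 2:
                abort(503)                      # feature switched off under heavy attack
            if DEFENSE_POSTURE >= min_posture_to_gate and "user" not in session:
                abort(401)                      # shrink the public attack surface
            return view(*args, **kwargs)
        guarded.__name__ = view.__name__
        return guarded
    return wrapper

@app.route("/branch-locator")
@degradable()
def branch_locator():
    return "nearest branches..."                # expensive, normally public

@app.route("/")
def home():
    return "Yes, we're here and we're online."  # the bare-minimum message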

Also, on the other hand, you’re looking at what the impact is. If I were to take traffic from Brazil - not to pick on Brazil specifically, but I have one particular bot that I’ve been fighting in Brazil for a couple of years - and make all traffic from Brazil go away for you, what would be the impact on your customer base? I almost guarantee that 70 percent of the security people out there won’t know what that impact is. They’ll look at it and go, “I don’t know. I need to go talk to some marketing guy.” The people who survive these attacks understand what that impact is.

For financial services in North America, I’d say you’re probably pretty safe just saying, “What we want to do ultimately is black-list the rest of the world, white-list traffic from the United States and Canada, and that will mitigate quite a bit of the attack traffic that we’re receiving.”

However, there are some corner cases where you have financial services firms that are maybe a branch of a European bank, or they have one particular region in the world that they do lots of business with. For them it would be absolutely catastrophic to block out the rest of the world except for North America. Understanding who your users are, what they actually do on the site, how they enter the site and how they exit the site, and knowing those user patterns beforehand, allows you, a mitigation provider or somebody upstream to actually block the badness, or to take a more positive security model which says that if you don’t match the profile of a typical user, you can’t get into this particular property.
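
[Editor’s note: a minimal sketch of the country allow-list approach Smith mentions, not code from the panel; the MaxMind geoip2 package, the local GeoLite2 database file and the country set are assumptions, and where the check sits - at the edge, at a proxy or with an upstream provider - is a deployment choice.]

# Editorial sketch: allow-list traffic by country during an attack.
# Assumes the third-party geoip2 package and a GeoLite2-Country.mmdb file;
# the country set is an example, not a recommendation.
import geoip2.database
import geoip2.errors

ALLOWED_COUNTRIES = {"US", "CA"}
reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def allow(client_ip):
    try:
        country = reader.country(client_ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return False            # unknown origin: fail closed while under attack
    return country in ALLOWED_COUNTRIES

if __name__ == "__main__":
    for ip in ("8.8.8.8", "203.0.113.7"):
        print(ip, "allowed" if allow(ip) else "blocked")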

Field

Rodney, what can organizations do now to ensure that they’re prepared to detect and respond to DDoS, and how can they test these preparations?

Joffe

I can answer that in a couple of ways. The first thing is to be aware of the fact that you’re actually under attack. One of the things that companies really need to do is think about the instrumentation that tells them that what they’re looking at is an attack rather than the results of some success. The second thing they need to be able to do is ask: if it’s an attack, how did I detect it, and what can I do to test my ability to detect it in the future? If I missed it this time, how do I go about testing my ability to detect the next attack? It’s really an enabling process, which unfortunately many times comes from real-life experience, which is not the way you want to do it. It depends on the kind of business that you’re running, whether you operate specifically as an online-only business or happen to have a major portion of your business come from electronic commerce. You need to think through that and say, “How do I go about the process now, in advance, of testing my capabilities?”

One of the problems is that it’s very difficult to model the way the network works. We know how routing works, but one of the biggest problems is, if something happens in this area, how do those things change? That’s a very difficult thing to model. There are no easy ways of doing it, but there are some companies out there that do it, and I recommend that organizations think about that in advance. The worst time to be learning is in the middle of a firefight. You really want to prepare beforehand, not during.

Field

Steve, what’s your advice for organizations on how to be prepared to detect and respond, and how do they test those preparations?

Mulhearn

There are two different things that need to be addressed here. The first one is education. Many times I go in and start talking about DDoS, and I’m almost shocked at the naivety of the technical understanding of what DDoS is, the different levels of DDoS, and the different attack vectors and vulnerabilities that can be exploited in a DDoS attack. That’s the first thing. It’s down to us in the industry to help that education process and talk to those potential customers more and more about it. We also need to remove some of the myths. Size isn’t everything in a DDoS attack. It’s all about the goal that the attackers are trying to achieve.

Next is actually painting the path forward. That then leads on to the other factor of how they can prepare for that.

First and foremost, understand and prioritize your services. Understand what would put those services under threat. It may be something as simple as the number of transactions per second; no single resource is inexhaustible, and it will get exhausted at some point. The more you understand that, the more potential you have to actually protect that service.

It’s a combination of that education and preparation. If they can do that and understand what services are critical to them, then they’re in a far, far better situation. There are lots of solutions out there today. We know that. But the most important thing is to understand that sometimes you need a combination of solutions, because what we’re actually dealing with is a huge subject, and the attackers have got all the tools. We have to figure out how they’re going to use those tools. We’ve got a far tougher job, and so do the customers. That’s what they need to look at.

Field

Michael, how do organizations ensure that they’re prepared and how do they test those preparations?

Smith

You have to realize the threat evolves. The attackers develop new techniques. They develop new capabilities, just like they do in the web-application security world or in the desktop malware world. I don’t think we’ve necessarily realized that yet, that this information and threat research actually has a place in this world. There’s that part, which is to find out more information. What techniques are happening? What capabilities do the attackers have today? What are they going to have six months from now? It’s going to be completely different.

Understand that, and then implement controls. There are people out there that will do testing. You can do some of the testing yourself. In fact, some of the testing looks a lot like load testing. When testing the application in a lab, throw some requests at it. Testing infrastructure gets interesting to me, simply because if you’re testing the DDoS protection capabilities of your network, you don’t actually want a DDoS in your network. It makes things a little bit more difficult. A lot of times it’s more along the lines of the process piece: what gateways or thresholds you have for when you divert that service to mitigation. It might be that you sustain an outage for 15 minutes and then flip it to mitigation. It might be the point at which a service becomes degraded enough that you consider mitigation. It might also be the point at which you share SSL keys and certificates with your service provider, because the attack is low volume but it’s embedded inside of SSL. That actually has a more significant impact in some ways than just a regular network volume attack. Sharing that information and allowing a service provider to look inside of your user traffic is not a decision that you ever want to make under duress, because it impacts other things. What you want to do is have gateway criteria that say if this happens, if this happens and if this happens, then let’s go ahead and share keys with our provider.
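
[Editor’s note: a minimal sketch of the “gateway criteria” idea Smith describes - conditions agreed on ahead of time that tell you when to fail over to mitigation or share TLS material with a provider. The specific criteria and thresholds are placeholders, not panel recommendations.]

# Editorial sketch: pre-agreed gateway criteria for escalation decisions,
# evaluated against current conditions. Thresholds are placeholders.
CRITERIA = [
    ("divert to scrubbing provider",
     lambda s: s["outage_minutes"] >= 15),
    ("enable degraded mode on public site",
     lambda s: s["error_rate"] > 0.25),
    ("share TLS keys with mitigation provider",
     lambda s: s["attack_inside_tls"] and s["outage_minutes"] >= 30),
]

def decisions(state):
    # Return every pre-agreed action whose criterion is currently met.
    return [action for action, test in CRITERIA if test(state)]

if __name__ == "__main__":
    now = {"outage_minutes": 35, "error_rate": 0.4, "attack_inside_tls": True}
    for action in decisions(now):
        print("Gateway criterion met:", action)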

In summary, know what the bad guys are doing, know what your capabilities are and then understand what the processes are so that you can actually get the right resources involved to mitigate the right threats.

Field

Matt, based on what you have experienced and what you’ve seen happen to other financial institutions, what advice do you offer to your own peers regarding detection and response to DDoS?

Speare

There’s a people, process and technology issue. Certainly we talk a lot about the technologies, and there’s a fair amount that can be brought to bear to help mitigate this issue. But it’s really about having a playbook that you’ve developed based upon scenario analysis of how your teams would react and what the decision criteria are for making certain moves. Hopefully those are educated chess moves on the board.

Finally, with your people, drill through those scenarios at minimum quarterly. You can pick something out of the media: if what happened is something totally outside of our particular vertical, how would we have reacted to that? Let’s do a tabletop so that how they’re going to react is not unfamiliar to your team. Modify and follow the playbook. Going through those scenarios also gives you the opportunity to improve the playbook, because the detection part is relatively easy; the realization and reaction is where we tend to be slower to make certain decisions because of the potential downside impact. When we have it documented and we’ve drilled the team, those moves are going to become second nature. They will be clean, crisp and, hopefully, have a better effect for our customers as well.