More on how the COVID-19 lockdown affected the Internet


What happens to the Internet when the entire world changes its behavior at the same time? Internet traffic patterns have been growing and changing for many years. However, the rate at which these traffic patterns changed during the spring of 2020 was unprecedented. So, what did this mean for the Internet, and how did networks cope with the heavy influx of traffic?

As we have previously stated, it was no secret that human behavior patterns drastically changed due to COVID-19. As government-mandated lockdowns confined people to the safety of their own homes, people needed to find new ways to work, learn, socialize and entertain themselves without setting foot outside. This meant many normal face-to-face interactions needed to be digitized.

In this post, we explore the effects the lockdown had on network traffic with information provided by the ACM IMC 2020 paper The Lockdown Effect: Implications of the COVID-19 Pandemic on Internet Traffic. This paper, to which BENOCS contributed network insights and data, set out to quantify exactly how much traffic patterns changed, and to find out whether these changes were proportionate to the new behaviors reported in the German and international media.

Examining data from several network vantage points in Europe – one ISP, three IXPs and one educational network (comprising 16 different educational institutions) – between March and June 2020, the paper identifies significant changes in traffic volumes resulting from the shift in human behavior and shows how they compare to the expected behavior as reported in the media.

Increases in ISP and IXP traffic

Over the last two decades, Internet user profiles changed significantly but at a steady pace, giving operators ample time to adjust their networks accordingly. The effects of COVID-19, however, hit the Internet like a great storm, flooding networks with traffic and testing the stability of their infrastructure – as storms typically do. Within just a few weeks of the first forced lockdowns, ISP and IXP traffic, particularly in southern Europe, increased by more than 20%; it then decreased steadily in May 2020, as those areas began to loosen their lockdown restrictions. Not surprisingly, the study found an overall traffic increase of 15-20%, with significant growth during traditionally off-peak hours (weekdays from 8AM-7PM), and mainly in residential networks.


Regular Weekday Traffic Reaches Volumes Similar to Peak-Hours

Looking at the behavior of the traffic, there was a significant shift, especially on weekdays, both during the day and in the evening. In fact, weekday daytime traffic – typically less utilized – began to behave like weekend traffic – typically highly utilized. Across the different vantage points in the European Union, communication, entertainment and social media applications showed strong growth, particularly on weekdays during standard working hours. This tells us that many weekday face-to-face interactions such as meetings, shopping and schooling were now performed at home and online.

The migration of employees to a working-from-home environment was also reflected in network traffic. Due to the influx of employees suddenly working from home, VPN traffic showed substantial weekday growth, with rates up 200% between February and March 2020. The traffic then dropped back to pre-lockdown levels as lockdown restrictions loosened and more employees returned to their offices.

Lastly, the academic education network, which connects 16 different institutions, showed a drastic change during the lockdown. More specifically, daily peaks shifted to the middle of the night, suggesting students were working from abroad, possibly due to restrictions on entering Germany or even Europe.

The Outcome

Taking all of the information from the vantage points into consideration, we can conclude that the network behaved much as one would predict, given how much news coverage at the time focused on the digitalization of work, education and events. And how did the ISPs fare in all of this? Despite the rise in traffic volumes and the extreme shift in traffic behavior, ISPs, specifically in Germany, proved the preparedness of their infrastructure. There is no evidence that the increased traffic impacted Internet operations, even with link utilization up 15-20%; where capacity did become tight, it was easily addressed by upgrading and building out. Moreover, network operators had contingency plans for spikes in network traffic long before the COVID-19 outbreak, ensuring such volumes would not disrupt their services.

How to stay future-proof

How do network operators know when an upgrade or a build-out is necessary? Network analytics has been around for years; however, now more than ever it has become a crucial part of routine network operations. As more companies announce their plans to grow their businesses digitally in a post-Corona society, network providers and operators can expect traffic to keep growing rapidly. In order to guarantee the best quality of service, you need to stay ahead, and the best way to do that is through network analytics.

The information in this post comes from the paper “The Lockdown Effect: Implications of the COVID-19 Pandemic on Internet Traffic.” To read the full study, please click here.

Not every network’s traffic increased during the COVID-19 lockdowns


Since most governments started mandating lockdowns and encouraging social distancing at the beginning of 2020, many network operators, SaaS and OTT providers, and CDNs have reported unprecedented spikes in traffic volumes and content demand. But what about the other side of the coin? If demand for the Internet increased, did traffic volumes on every network increase? The answer may surprise you.

In the late winter of 2020, Internet utilization and network traffic volumes rose to unprecedented levels on most networks. But although traffic volumes on ISP and mobile networks grew, IPX (IP Exchange) networks experienced a sharp plunge during the government-mandated lockdowns, despite billions of Internet users being left homebound for several weeks. The reason: IPX networks are used when users are mobile, not stationary.

What is an IPX network?

IPX networks (not to be confused with IXPs) have been part of the network ecosystem since 2008, serving mobile operators and other service providers. Their primary purpose is to allow Internet Protocol traffic to be exchanged securely and at the expected quality of service, providing reliable Internet access for mobile users outside of their home country. This is especially relevant in regions such as Europe, where users regularly cross country (and network) borders but still use the mobile data plan of their home provider. For example, a French mobile data user can travel in Spain (within a Spanish network) and still receive service with the same or similar quality as in their home (French) network. Additionally, IPX networks make handling roaming easier for operators through multilateral or bilateral interconnection: IPX providers to some extent either handle the contracts and connectivity for operators or let operators sort that out among themselves.

Why did IPX traffic decrease during lockdown?

One of the most significant outcomes of the COVID-19 lockdowns was the new demand for Internet services. In order to maintain a sense of “normality”, most everyday activities moved online, which challenged the sustainability of many network operators and content providers. Roaming, however, all but vanished: with travel restricted and users stationary at home, the cross-border traffic that IPX networks exist to carry simply dried up. For these reasons, it has become even more crucial for network operators to have excellent visibility into and knowledge of network traffic behavior.

At BENOCS, we offer network analytics solutions for ISP and IPX networks. Our solutions are backed by powerful data to allow you and your team the visibility you need to make informed decisions about your network operations. To learn more about our analytics products, please click here.

Network traffic patterns change during COVID-19 lockdowns


COVID-19 proved to be a challenge for Internet Service Providers (ISPs). As countries began locking down, consumer demand for the Internet surged, flooding the network pipelines. Despite these challenges, however, the Internet managed to maintain its strength.

Whether due to mandatory lockdowns and quarantines or simply following the advice to stay home, end-users all over the world found new ways to stay entertained and connected with one another while maintaining social distance. During the global pandemic – COVID-19 – that struck at the end of 2019 and in early 2020, more people than ever began moving their lives into the virtual realm. Schools, workplaces, social gatherings, conferences, leisure activities, etc., which would normally rely very little on the Internet, moved substantially online in an attempt to keep functioning. On top of that, staying indoors meant that many relied on streaming services more than ever to keep themselves entertained. For the Internet, this meant an unprecedented increase in traffic in a very short amount of time. In Germany alone, where citizens faced a loose lockdown/contact ban – meaning they could go outside for solo activities but needed to keep 1.5 meters distance from everyone else – network patterns changed to being highly utilized almost all of the time.

A Surge in traffic during office hours

Before countries and states began COVID-19 lockdowns, the Internet usually carried the most traffic during typical leisure times, i.e. evenings (6PM to 10PM), weekends, and public holidays, with much lower traffic during normal working hours (8AM-6PM). However, during the loose lockdown/contact ban in Germany, which started on March 23rd, 2020, network operators saw a significant increase in Internet traffic during workday hours and weekends, revealing that people were replacing face-to-face activities – everyday office conversations, school attendance and social activities, as well as entertainment – with online versions. This induced much longer periods of heavy network utilization.

A shift in traffic patterns

If we look at the figure above, we can compare pre-COVID-19 traffic patterns with those sampled during the loosened lockdown to understand how user behavior changed. We see that, since 2018, most traffic in Germany behaved according to the assigned day types – weekends behaved like weekends and weekdays behaved like weekdays – with a few exceptions. However, if we look at the period between the end of March and the beginning of June, traffic patterns for specific days no longer fall within their respective categories. Instead, most days fall into the holiday category – a day type associated with heavy utilization.
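To make the day-type idea concrete, here is a minimal sketch in Python of how a day's traffic profile can be matched to the closest category. The template numbers are invented for illustration, not values from the study; real templates would be averaged from historical per-hour measurements.

```python
import numpy as np

# Share of daily traffic in four segments: night (0-6h), working hours
# (6-18h), evening (18-22h), late evening (22-24h). Invented placeholders.
TEMPLATES = {
    "weekday": np.array([0.10, 0.40, 0.35, 0.15]),  # evening-heavy, quieter daytime
    "weekend": np.array([0.10, 0.45, 0.30, 0.15]),  # high utilization most of the day
    "holiday": np.array([0.12, 0.48, 0.28, 0.12]),  # heavy nearly all day long
}

def classify_day(profile: np.ndarray) -> str:
    """Assign a day's profile to the category with the closest template."""
    profile = profile / profile.sum()  # compare the shape of the day, not its volume
    return min(TEMPLATES, key=lambda c: np.linalg.norm(TEMPLATES[c] - profile))

# A lockdown weekday whose daytime load looks holiday-like:
print(classify_day(np.array([0.12, 0.47, 0.29, 0.12])))  # -> 'holiday'
```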

What these traffic patterns tell us

With most days during the loose lockdown/contact ban in Germany falling into the holiday category, we conclude that people were replacing the usual face-to-face and outdoor activities with internet-based versions. Speaking with colleagues, attending classes or chatting with friends, as well as streaming videos and games, all add more traffic to the network.

What we might see in the future

As scientists learn and reveal more about how to prevent the spread of COVID-19, German government officials are beginning to re-open states and loosen social contact rules. However, the quick shift to virtual contact, combined with the network's ability to cope, has many people wondering whether they want to return to their pre-COVID-19 ways. With that in mind, we can most likely expect the network utilization we saw during the crisis to remain an ongoing trend rather than a one-hit wonder.

To stay ahead of the trends, it is important for ISPs and network operators to have visibility into their network traffic behavior. With BENOCS Analytics, you too can ensure your network is ready for increasing traffic volumes as well as any other long-term spikes in utilization.

The internet during a pandemic: maintaining the quality of service


As COVID-19 continues to spread globally at alarming rates, governments and large companies alike are taking drastic preventative measures to slow the spread of the disease, such as cancelling large events, limiting travel, and requiring employees to work from home. These initiatives are moving a large influx of people into the digital realm at a rapid rate, stressing networks and degrading quality of service. With so many people now relying on the internet for their livelihood during this pandemic, it is important for service providers to have control over traffic volumes in order to maintain their quality of service. For that, they need excellent network visibility and analytics.

Telecoms in Europe see surges in traffic

In just a few days' time, countries such as France, Spain and Italy started to see the effects the new social restrictions are having on the network. Telecoms in Spain, for example, have asked customers to "adhere to some best practice in an attempt to maintain some sort of tolerable experience". French telecoms, on the other hand, are starting to practice bandwidth discipline by limiting video streaming sites such as Netflix and YouTube in order to prioritize those working from home.

On top of that, mobile networks are also taking a large hit, with an increase in network traffic as loved ones try to stay connected with one another and colleagues collaborate via video calls, messaging apps, and regular telephony.

Network Analytics is important for QoS

In order to cope with the sudden increase in traffic while maintaining quality of service for end-users, network operators need to understand their traffic: where it originates, where it terminates, what type of traffic is traveling on their network, and how much. With this information, operators can work with OTTs on decreasing traffic volumes – for example, reducing video quality from HD to 480p or less during hours of high congestion – so that services keep running without reduced speed or connectivity.

During times of emergency, it is especially important that the network does not fail as more people turn to the digital realm to stay connected to work and loved ones. Network operators therefore need access to detailed network insights for targeted network management – in other words, they need network analytics.

It is important for analytics to be backed by powerful data


It's that time of year again. Another great year is over, and it is time to start making plans for an even better year to come. As we begin to map out our goals for the next 366 days, we thought it might be fun to look further into the future at the potential solutions and services BENOCS could offer its users… when the time is right.

Today, network analytics tools are typically used for four main purposes: DDoS protection, peering and transit optimization, network planning, and network debugging. However, as networks continue to grow and change, it is important not only to choose analytics tools that support these tasks now, but also to invest in those that can adapt to whatever new challenges emerge on the network in the future. Here are a few areas of data-driven network management that we believe will one day be relevant and that BENOCS could provide.

Probe Data Integration

In order to guarantee a great quality of service for end-users, network service assurance departments on both the network side and the content-producer side use network probes to determine user experience in real time and on a granular level. This provides great insight; however, debugging is still a manual process, which will eventually become insufficient as the network expands and traffic volumes grow. In addition to knowing when service performance has decreased, service assurance departments will need the ability to see where the problem lies for accurate troubleshooting. BENOCS is working on solving that issue by creating a probe integration tool that will operate on the network topology data already collected by our Core Engine.* Not only could it show when the quality of service has decreased, but also the exact path the traffic traveled, allowing you to see where the quality problems occurred and immediately take action.

*The BENOCS Core Engine collects, processes and cross-correlates protocol data from network routers including: BGP, IS-IS, OSPF, Netflow, Netconf, SNMP and DNS.
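As a rough illustration of the idea, here is a hypothetical sketch in Python: given probe alerts for degraded services, count how often each link appears on the affected traffic paths. The paths and alert format are invented stand-ins for the topology data a system like the Core Engine would supply.

```python
from collections import Counter

# Invented topology: the links each (service, region) pair's traffic traverses.
PATHS = {
    ("video-cdn", "south"): ["edge-9", "agg-4", "core-2", "peer-7"],
    ("video-cdn", "north"): ["edge-2", "agg-1", "core-2", "peer-7"],
}

def suspect_links(alerts):
    """Rank links by how many degraded probe paths they appear on.
    Links shared by many degraded paths are the first troubleshooting targets."""
    counts = Counter()
    for service, region in alerts:
        for link in PATHS.get((service, region), []):
            counts[link] += 1
    return counts.most_common()

# Two degraded probe measurements from different regions point to shared links:
print(suspect_links([("video-cdn", "south"), ("video-cdn", "north")]))
# -> [('core-2', 2), ('peer-7', 2), ('edge-9', 1), ...]
```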

Cost and Revenue Implications

As a transit provider, one of your goals is to decide how to send traffic at the lowest cost without sacrificing quality of experience for end-users. Typically, traffic can be sent via very different alternative paths: the cheapest route that often fails during peak hours, the mediocre route that is slightly more expensive, or the highest-quality and most expensive route, to name a few. As technology for changing traffic patterns is limited, transit providers are often faced with the decision to either provide great service all of the time OR pay the lowest price. What if there were a way to know exactly when to switch between routes in order to provide the targeted service levels at the lowest cost? Using the network data already integrated in BENOCS, there is the potential to create a cost and revenue integration solution that automatically re-routes traffic during network failures and peak hours, depending on commercial and quality parameters. This means sending your traffic on the low-priced route until it no longer satisfies your quality standards. On top of that, you would obtain more differentiated billing capabilities, charging your customers more accurately through visibility into the revenue and applicable cost of each customer network.
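To illustrate the decision logic, here is a minimal sketch in Python that always picks the cheapest route still meeting a quality target. Routes, prices and quality scores are invented; in practice, the quality values would be refreshed continuously from network measurements.

```python
ROUTES = [
    {"name": "cheap",    "price_per_mbps": 0.8, "quality": 0.90},
    {"name": "mediocre", "price_per_mbps": 1.1, "quality": 0.96},
    {"name": "premium",  "price_per_mbps": 1.9, "quality": 0.995},
]

def pick_route(min_quality: float) -> dict:
    """Cheapest route that currently satisfies the quality target;
    falls back to the best available route if none qualifies."""
    ok = [r for r in ROUTES if r["quality"] >= min_quality]
    return min(ok, key=lambda r: r["price_per_mbps"]) if ok else \
           max(ROUTES, key=lambda r: r["quality"])

print(pick_route(0.95)["name"])  # -> 'mediocre'
# Off-peak, the cheap route's measured quality may rise above the target,
# and the same call would automatically switch traffic back to it.
```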

DNS Data Integration

The role of CDNs on the network has changed significantly over the last decade. As the demand for internet content continues to increase, so does the demand for getting that content from the content provider to the end-user. For this reason, CDNs have grown in size and market share, and now carry diverse content types from multiple content providers. At the same time, public CDNs are increasingly used to offload traffic during peak hours and in hard-to-reach networks. Given the diversity and volatility of traffic carried by a single CDN, it is becoming increasingly important for networks to understand the structure and profile of a CDN's traffic stream to avoid overload and abuse. Since DPI is no longer viable, integrating DNS information with traffic data from the BENOCS Core Engine could provide the visibility into CDN traffic flows needed to fully understand the traffic in your network.
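A toy sketch of that join, in Python with invented records: recent DNS answers map server IPs to hostnames, which splits an otherwise opaque CDN traffic stream into per-service shares.

```python
from collections import defaultdict

# DNS log: which hostname did resolvers recently map to which server IP?
DNS = {"203.0.113.10": "video.example-service.com",
       "203.0.113.20": "updates.example-vendor.com"}

# Flow records from the network edge: (source IP, bytes transferred).
FLOWS = [("203.0.113.10", 7_000_000),
         ("203.0.113.20", 2_500_000),
         ("203.0.113.10", 1_000_000)]

by_service = defaultdict(int)
for src_ip, nbytes in FLOWS:
    # One CDN IP can serve many hostnames; the DNS mapping attributes the
    # bytes to the content actually requested rather than to "the CDN".
    by_service[DNS.get(src_ip, "unknown")] += nbytes

print(dict(by_service))
# -> {'video.example-service.com': 8000000, 'updates.example-vendor.com': 2500000}
```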

SDN Controller Input

In some cases, the future of the network is already taking place, as new technology is developed and tested every day. We see SDN technology being deployed not only at the customer edge, but also in core-backbone segments. As the network backbone's hardware becomes softwarized and its configuration and automation capabilities grow, you will need deep, automated network visibility in order to take full advantage of the new kit. Bringing network-external parameters (like QoE and price) into routing decision algorithms greatly enhances the cost-efficiency and quality of your network at lower capex. And how could one achieve this visibility and automation? You guessed it: with BENOCS. Such external parameters, combined with the data already collected and processed in the BENOCS Core Engine, become important SDN controller input, which would trigger network automation in SDN-empowered networks.
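As a sketch of what such controller input could look like, the Python below scores candidate paths by combining QoE, price and utilization with illustrative weights; an SDN controller would then translate the winning path into flow rules. All names and numbers are invented.

```python
def path_score(path, w_qoe=0.5, w_price=0.3, w_load=0.2):
    """Higher is better: reward QoE, penalize price and current load."""
    return w_qoe * path["qoe"] - w_price * path["price"] - w_load * path["utilization"]

candidates = [
    {"id": "path-a", "qoe": 0.95, "price": 0.40, "utilization": 0.85},
    {"id": "path-b", "qoe": 0.90, "price": 0.25, "utilization": 0.50},
]

best = max(candidates, key=path_score)
print(best["id"])  # -> 'path-b': slightly lower QoE, but cheaper and less loaded
```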

The future of the Internet is exciting; however, knowing how it will develop and what is necessary for its maintenance is important for its overall sustainability. That is why it is important to invest in products and tools that are already backed by powerful data architectures and have the potential to evolve into the products you need to bolster your future network's performance.

Please visit our Analytics and Director pages to learn more about what BENOCS provides today, or contact us anytime.

How the BENOCS Flow Director went from research to product


Back in 2010, before BENOCS, our founder, along with a few other researchers, presented the new challenges facing ISPs: CDNs/hyper-giants and their poorly mapped traffic. In that proposal[1], they opened the research community to a new idea: a system implemented in the ISP network that would use the network's data to help CDNs/hyper-giants map their traffic better. Now, nine years later and after six years of development, the system – called BENOCS Flow Director – is not only running, but has also proven its viability and is showing positive results.

This year, BENOCS, along with researchers from the Max Planck Institute for Informatics, TU Berlin and TU Munich, is proud to report on the progress made since its system, the BENOCS Flow Director, enabled the first ISP/CDN cooperation in 2017. The results will be presented at both ACM CoNEXT 2019 and APNIC 49.

Hyper-giants and their behavior on the network

So what are we talking about when we talk about hyper-giants and ISP/CDN cooperation? For us, a hyper-giant is a large network that sends at least 1% of the total traffic delivered to broadband customers in the ISP network and publicly identifies as a CDN, cloud, content or enterprise network. Since the introduction of video and social networks, demand for content has risen rapidly, making CDNs ever more relevant for ISP networks: the top ten – mostly hyper-giants – are responsible for more than 50% of the traffic on the network. With traffic shares that significant, it is important that CDNs of all kinds deliver their traffic as accurately as possible in order to reduce congestion on ISP networks and provide the best quality of service.

Given the rapid growth, ISPs and CDNs have also been challenged with trying to keep up with one another's development, forcing them into unavoidable upgrades just to maintain quality of service. This lack of visibility has resulted in an extraordinary amount of stress on ISP networks as well as higher costs and maintenance for both sides. Before the BENOCS Flow Director – and in some cases still today – ISPs and CDNs implemented their own methods to cope with the growth of content demands and traffic, such as performing their own measurements, which can only provide "best guesses" of the current network state. This is becoming more challenging as CDNs and hypergiants expand and connect more ingress points to the ISP network, creating more path options for their traffic.

BENOCS Flow Director improves network traffic

Since both ISPs and CDNs need to co-exist on the network and require visibility of the other in order to maintain their performance, why not get them working together? That is how the BENOCS Flow Director was born.

The BENOCS Flow Director is "an ISP service enabling ISP-CDN collaboration with the goal of improving the latter's mapping," according to the research paper, thus reducing costs and improving services for both parties. It does so by collecting and processing the necessary data at scale from the ISP in order to compute mapping recommendations. It then uses this information to rank every possible ingress point from best to worst and hands the ranking to the CDN. It is a unique system that was intentionally designed to be vendor agnostic, deployment agnostic, easy to integrate, fully automated and scalable, so that it can fit into any network and work with any CDN/hypergiant.
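As a simplified sketch of the ranking idea, the Python below orders a hypergiant's ingress points for one user prefix by hop distance, breaking ties by ingress utilization. The metrics are invented; the real system computes its recommendations from full ISP topology and traffic data.

```python
INGRESS = [
    {"name": "ingress-A", "hops_to_prefix": 2, "utilization": 0.60},
    {"name": "ingress-B", "hops_to_prefix": 5, "utilization": 0.30},
    {"name": "ingress-C", "hops_to_prefix": 2, "utilization": 0.95},
]

def rank_ingress(points):
    """Best first: prefer short paths, then less-loaded ingress links."""
    return sorted(points, key=lambda p: (p["hops_to_prefix"], p["utilization"]))

for rank, p in enumerate(rank_ingress(INGRESS), start=1):
    print(rank, p["name"])
# -> 1 ingress-A, 2 ingress-C, 3 ingress-B
```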

According to the research paper, two years after connecting the first hypergiant to a large ISP, it is already working. When the cooperation started, the hypergiant was delivering 70% of its traffic optimally, with the trend declining. The combined interest in optimizing network traffic ultimately motivated the CDN to test the BENOCS Flow Director. As of today, this metric is in the range of 75-84% and increasing.

Benefits of the Director for ISPs and CDNs

So, what would happen if all major hypergiants were to activate the BENOCS Flow Director? According to the research paper:

  1. Theoretically, an ISP could see a 20% reduction in long-haul traffic, given that the CDNs do not have constraints.
  2. Every CDN is different, with some able to reduce their long-haul traffic in an ISP network by 40% – this happens because CDNs interconnect with the ISP at different PoPs, and their traffic matrices differ.
  3. If the system were used by all CDNs, traffic on long-haul links would fall to less than 80% of its current level.

All of this would eliminate guesswork and create a better, more stable network ecosystem, leading to higher quality of service along with a reduction in the network infrastructure in use.

This post is based on the research paper Steering Hyper-Giants' Traffic at Scale. We are proud to announce that it won the "Best Paper Award" at ACM CoNEXT 2019 in Orlando, FL, and earned the IRTF's "Applied Networking Research Prize" for our CTO, Ingmar Poese, in 2020. You can read the full paper here.

[1]     Poese, I., Frank, B., Ager, B., Smaragdakis, G., Uhlig, S., & Feldmann, A. (2011). Improving content delivery with PaDIS. IEEE Internet Computing, 16(3), 46-52.

DDoS attacks are too easy to launch: How effective are booter take downs?


DDoS attack services are becoming so cheap and easy to use that even grandma can launch an attack. Are current protective measures enough?

The widely known DDoS (Distributed Denial of Service) attacks have been a major threat to Internet security and availability for over two decades. Their goal: to interrupt services by consuming critical resources such as computation power or bandwidth, with motivations ranging from political and financial to cyber warfare, deflecting from other attacks, or hackers simply testing their limits. Since they were first observed, these attacks have grown in both size and sophistication: by spoofing source IP addresses, attackers have their traffic amplified and reflected towards their victims. Additionally, attacks are becoming more available. One can purchase booters and stressers for as little as the price of a cup of coffee and launch them with little technical background, giving anyone the power to launch an attack. This is enough to keep cyber security experts on their toes as well as draw the attention of law enforcement agencies.

The more prevalent DDoS attacks become in everyday network operations, the more necessary it is to study their behavior and impact, as well as the impact of current mitigation methods, in order to design better network security measures. Earlier this year, BENOCS, DE-CIX, the University of Twente and the Brandenburg University of Technology teamed up to study this topic and answer the following questions:

  • What does an attack look like, and how is it amplified on larger networks such as Tier-1 and Tier-2 ISPs as well as IXPs?
  • What happens to booter websites after they have been taken down by law enforcement?

While an abundance of research has focused on different aspects of booters – especially their financial impact – little attention has been given to empirical observation of attacks, or to how effective an FBI takedown of booters actually is. With something this serious stalking the network, thorough investigations and studies need to be performed in order to learn their behavior and find better ways to combat booter services.

What happens to a network under a booter attack?

In order to draw conclusions about the DDoS attack landscape, one must first observe it in the wild. However, given the short duration of attacks (just a few minutes), it is hard to predict where and when one will be launched. Instead of waiting for an attack, the researchers launched their own: they purchased four booter services selected from the booter blacklist – including the more expensive VIP booters – and aimed them at a network they had built themselves.* The attacks were recorded by passively capturing all traffic on the measurement platform. From this, the researchers could see where the booters directed the traffic, how they worked, which services were used for amplification, and roughly how much amplification took place. On top of that, they provided the first look into whether or not the attacks live up to their sales promises. The booters did attack the specified targets as advertised (mostly using NTP amplification attacks), but the VIP services did not deliver on their promised attack volumes – falling short by as much as 75%.

*Given the legal and ethical implications of purchasing such booters as well as the damage they can cause on a network, extra steps were taken by the team to comply with laws and cause minimal destruction.
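The leverage these services exploit is easy to quantify: the bandwidth amplification factor (BAF) is the ratio of response bytes reaching the victim to request bytes sent by the attacker. The byte counts below are illustrative, not measurements from the study.

```python
# One small UDP request with a spoofed source IP ...
request_bytes = 234
# ... triggers a far larger response aimed at the victim.
response_bytes = 48_000

baf = response_bytes / request_bytes
print(f"amplification factor: {baf:.0f}x")
# -> ~205x: each request byte becomes roughly 205 bytes of attack traffic
```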

How effective is an FBI booter take-down?

In an attempt to seize some control over DDoS attacks, the FBI tracks down booter websites and removes booter services. But how effective is that, really? In addition to observing how large these attacks can be, the research team also wanted to know what is gained by taking these sites down. The answer: not a lot. By taking weekly snapshots of all .com/.net/.org domains and the top 1M domains, they were able to identify 58 booter services and follow them over a period of 122 days. They discovered that takedowns do lead to a sharp reduction of DDoS traffic on the network; however, there is no significant reduction in attacks hitting the victims' systems. Additionally, booter services are capable of relaunching under different domains after a takedown, allowing them to continue business as usual just weeks after being deactivated. Therefore, simply taking down a booter service is not a long-term, sustainable solution.

DDoS attacks are a serious threat to the network ecosystem – anyone and everyone connected to the internet is a target, especially IoT devices that never receive updates – and current methods of trying to eliminate them, as this study has shown, are neither sufficient nor sustainable. In order to find better solutions that reduce the number of DDoS attacks bought and sold, further research is required, especially on the effects on the booter economy after an FBI takedown. This particular study will remain ongoing until 2022 and will test the capabilities of artificial intelligence and new developments in DDoS protection.

The information in this post comes from the ACM IMC paper “DDoS Hide and Seek: On the Effectiveness of a Booter Service Takedown”.

See the press release for this paper here.

What if we turned the conversation from Netflow vs SNMP to Netflow & SNMP?


There is no “I” in “team”, just like there is no “I” in “Netflow vs. SNMP”. It’s time to stop comparing Netflow and SNMP, and instead start talking about how well they work together when it comes to network monitoring.

If you do a quick internet search on the network protocols SNMP and Netflow, you will find an array of articles that either try to sell you the idea that accurate network monitoring is done with Netflow, not SNMP, or that simply compare the two, usually with a slight bias against SNMP. But what if we changed perspective and stopped looking at these protocols as one or the other, and instead looked at them as one AND the other? What if, instead of comparing them, we cross-correlated them with one another, giving you the advantages of both? In this post we will explain why accurate network monitoring is done with Netflow AND SNMP.

What are SNMP and Netflow?

For those of you not in the know, let’s start with the basics:

SNMP, or Simple Network Management Protocol, is a standard internet protocol for collecting and organizing information about managed devices on IP networks, and for modifying that information to change device behavior. This protocol was developed in the early days of the internet, back when the job was getting bits across the network and traffic could be easily measured. When it comes to collecting measurements, SNMP will simply show you the amount of traffic that has gone through your network.

Netflow, on the other hand, is a network protocol that collects information on all traffic running through a Netflow-enabled device, examining sampled packets to create flow records that show the paths taken by the traffic. This protocol was designed for monitoring modern networks, which transfer large and diverse amounts of traffic instead of just bits. Netflow will show you a statistical measurement of exactly where your traffic came from and where it is going.

Why SNMP and Netflow work well together

Due to its age and capabilities, some network operators have deemed SNMP and other quantity-measuring protocols "inefficient" for monitoring today's networks, given the volume and diversity of network traffic and players. Interest has thus shifted from knowing just how much traffic is on the network to knowing what kind and from/to whom – which can determine so much about how you plan and maintain your network. Therefore, Netflow and other statistical measurements are promoted for insight into how traffic is behaving, while SNMP and other quantity-measuring protocols are disregarded.

However, given how complex networks have become, seeing only the statistical data is still not the best way to monitor them. The first problem is that a lot of guessing is involved. Since Netflow sampling rates are very low (e.g. one in every 10,000 packets), the data can be very misleading during off-peak hours, when it is harder to extrapolate from low traffic flows, showing you either less or more traffic than is actually there. The second is that you cannot detect whether sampled data is missing due to various factors – different default configurations on each router, routers running different OS versions, or network bottlenecks – which cause an unknown amount of packet loss. For these reasons, traffic values can be misrepresented, especially during peak hours. In most cases, these errors are only discovered months later, when they have become so large that they need to be manually investigated. And for those of you who understand network economics and/or operations, this can mean lost revenue, poor performance and/or serious long-term network failures.

So, what can we do to prevent these false readings? As you already know, each protocol on its own shows one side of the picture: SNMP can show you a near-exact amount of traffic passing through the network (e.g. one counter reading per 5-minute interval), while Netflow can show you where it went. By cross-correlating this information and comparing the data provided by both protocols, you can catch mistakes in calculations or measurements with relatively little effort.
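Here is a minimal sketch of that cross-check in Python, with invented numbers: scale the sampled flow bytes up by the sampling rate and compare the estimate against the interface byte counters an SNMP poller reads over the same window.

```python
SAMPLING_RATE = 10_000  # 1 out of every 10,000 packets is sampled

# Bytes seen in sampled flow records during one 5-minute window,
# extrapolated to an estimate of the real traffic volume:
sampled_flow_bytes = 41_000
estimated_bytes = sampled_flow_bytes * SAMPLING_RATE

# SNMP: difference of the interface octet counter between two 5-minute polls.
snmp_bytes = 500_000_000_000 - 499_530_000_000

deviation = abs(estimated_bytes - snmp_bytes) / snmp_bytes
print(f"flow estimate: {estimated_bytes / 1e9:.2f} GB, "
      f"SNMP: {snmp_bytes / 1e9:.2f} GB, deviation: {deviation:.1%}")
# A persistent, large deviation hints at dropped flow exports, mis-configured
# sampling rates or counter problems -- exactly the silent errors described above.
```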

With the BENOCS Analytics SNMP line feature, you can directly compare statistical flow traffic (all things Netflow) with measured traffic from SNMP for a more accurate view of your network’s traffic.

Here’s what we do:

Comparing Netflow and SNMP
  • SNMP (top line): via SNMP, Telemetry, Netconf and similar protocols we collect inventory information, auto-detect new interfaces, and gather byte counts, packet loss and capacity in bucket sizes as small as 1 minute.
  • NetFlow (colorful area): we collect all flow-based information via sFlow, Netflow, IPFIX and similar protocols, including sender and receiver IP, traffic volume, and interface and protocol information.

By adding the SNMP line feature to your current traffic display, you can create a more accurate picture of your network’s traffic measurements to prevent network failures, avoid costly mistakes, and generate savings just like this customer.

Would you like to learn more about the SNMP feature, or BENOCS Analytics? Contact us today! You can also request a free demo account to see what our tool can do for you!

When it comes to monitoring your network, it's time to stop guessing and start knowing!

What are self-operated Meta CDNs, and should ISPs be concerned?


Before 2014, Apple – one of the largest content generators on the web today – relied on external content delivery networks (CDNs), such as Akamai and Level 3, to deliver everything from music and video streaming to iOS updates. In 2014, Apple launched its own CDN in an effort to take control over the quality of its content delivery, adding the final puzzle piece that gives it control over the entire customer experience (hardware, online platforms, etc.). Interestingly enough, as Dan Rayburn predicted, Apple was in no hurry to move all of its traffic to its own CDN and would still need some time before completely ceasing to offload traffic onto third-party CDNs.

A 2017 study supported by BENOCS shows that Apple, three years later, still relies on third-party CDNs, such as Akamai and Limelight, to deliver its iOS updates. Why? Because when a company as large as Apple needs to deliver an operating system update multiple times per year to over 1 billion devices, it needs a way to handle overload and supplement its own infrastructure's limitations. Therefore, it relies on a self-operated Meta-CDN to carry its traffic. No big deal, right? Wrong. As this study shows, traffic is not running as smoothly as originally thought.

What are Meta-CDNs?

Before we talk about the main issue, let us first look at the evolution of CDNs. As the internet continues to expand with more content and users, CDNs are increasingly challenged with providing users the fastest delivery speeds possible while building as little infrastructure as possible. To solve this, CDNs are getting closer to their users in the form of Meta-CDNs – multihoming content amongst multiple CDNs and thereby gaining access to servers holding content closer to the users. Publishing content on multiple CDNs requires additional request mapping – ways to discover the additional servers holding the content. So it is not just one CDN moving the traffic for Apple, but rather a Meta-CDN. Because this arrangement can direct traffic both to its own infrastructure and to third-party CDNs, it is called a self-operated Meta-CDN.

How Meta-CDNs affect the network

If self-operated Meta-CDNs exist in the network to help companies such as Apple provide their customers with the smoothest iOS update possible, then why is this a problem? Well, according to this study, looking at the behavior of a self-operated Meta-CDN through the eyes of the internet service provider (ISP) reveals that it actually causes more chaos in the network than one would expect. Given that this type of CDN is rather new, not much is known about them to begin with.

By observing the iOS update in September 2017 through the eyes of a major European ISP, researchers found the following behaviors to occur:

  • If the Apple CDN is selected by the DNS resolver (aka the traffic director), the traffic moves through Apple's infrastructure.
  • If a third-party CDN is selected by the DNS resolver, Apple offloads the traffic onto that third-party CDN (one way to observe this selection from the outside is sketched below).
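A hedged sketch of that outside view: resolve a delivery hostname and inspect its alias (CNAME) chain for known CDN domains, as the Python below does using only the standard library. The hostname and suffix list are illustrative assumptions, not taken from the study.

```python
import socket

# Illustrative mapping of DNS name suffixes to the CDNs that use them.
CDN_SUFFIXES = {"akamaiedge.net": "Akamai",
                "llnwd.net": "Limelight",
                "aaplimg.com": "Apple"}

def classify(hostname: str) -> str:
    """Resolve a hostname and guess which CDN serves it from its CNAME chain."""
    canonical, aliases, _addrs = socket.gethostbyname_ex(hostname)
    for candidate in [canonical, *aliases]:
        for suffix, cdn in CDN_SUFFIXES.items():
            if candidate.endswith(suffix):
                return cdn
    return "unknown"

# e.g. classify("updates.example.com") -- repeating the lookup via different
# resolvers may yield different CDNs, which is the Meta-CDN effect in action.
```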

Since Apple's infrastructure is not as developed in Europe as it is in North America, using CDNs such as Akamai – which have a global infrastructure – gives Apple the advantage of reaching its customers worldwide with ease.

The consequence for ISPs is the strain put on the network. Because the traffic is offloaded, the ISP is unable to predict how much traffic to expect on its links: it cannot see which CDN the overarching Meta-CDN selects, nor how much traffic each CDN is carrying. On top of that, the individual CDNs carry traffic further than necessary, which creates overflow – traffic being forced to take longer paths.

Why networks need to be aware of Meta-CDNs’ behavior

Why is this kind of unpredictability risky? Because links that were originally thought to be unaffected are actually over capacity, which causes perilous behavior in the network. This makes it necessary for ISPs to further investigate their assumptions about how the network actually behaves during high-stress situations such as operating system updates.

The information in this post comes from the paper “Dissecting Apple’s Meta-CDN during an iOS update.” To read the full study, please click here.

Why geolocation is important for enforcing the General Data Protection Regulation and other privacy policies


If you live in Europe or do business with European companies, you are probably already familiar with the General Data Protection Regulation (GDPR), which has been in effect since May 2018. What you may not know, however, is how this law is actually administered. After all, a law is only as good as its ability to be enforced. Given that internet content is shared globally, how can anyone ensure that those within the European Union (EU) borders are actually protected by this law, especially when content needs to travel across borders? How do we know if data is just passing through, or if it is terminating in Europe? What can be used as evidence of violations? The answer: geolocation and tracking flows.

With most researchers in the network measurement community focusing on data collection and its financial worth, little attention has been given to tracking flows in relation to geolocation, which can show whether or not information crosses national or international borders, and whether or not any information is leaked. Additionally, it can show how adequately internet service providers (ISPs) handle the distribution of tracking flows on different networks, i.e. mobile or broadband.

Geolocations show traffic flows

For those of you unfamiliar with network measurement, a tracking flow is a flow between an end user and a web tracking service; its geographical footprint shows where it originated, where it went, and where it terminated. For a user, this means: where have you been on the network, and who knows you were there? In relation to the GDPR, this is the evidence that proves whether or not data is being collected on EU users without their permission or knowledge.

Extracting geolocation in a GDPR era

So why aren't more people researching this method? The simple answer: because extracting the geolocation of users requires access to real tracking flows that originate from users and terminate at trackers. On top of obtaining actual user permission for investigating tracking flows, the precision of user locations and the completeness of measurements prove to be further challenges.

Despite the barriers, one research group, with support from BENOCS, recognized its necessity and found a solution: an extensive measurement methodology for quantifying the amount of tracking flows that cross data-protected borders, built on a browser extension that renders advertising and detects tracking flows. This is especially groundbreaking because the method captures, according to the study, double the number of tracking flows of previous studies, and shows the actual extent to which trackers stay within the EU. The study also managed to find out whether trackers track sensitive data – such as religion, health and sexuality, to name a few – in violation of the GDPR.
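At the heart of such a methodology sits a simple border test: geolocate both endpoints of each tracking flow and check whether data that originates in the EU terminates outside it. The Python sketch below illustrates the test with an invented IP-to-country table standing in for a real geolocation database.

```python
EU = {"DE", "FR", "ES", "IT", "NL", "SE"}  # abbreviated member-state list

GEO = {"198.51.100.7": "DE",   # invented IP -> country entries;
       "203.0.113.9": "US",    # a real system would query a geolocation DB
       "192.0.2.44": "FR"}

def crosses_border(user_ip: str, tracker_ip: str) -> bool:
    """True if a flow originating in the EU terminates outside of it."""
    return GEO.get(user_ip) in EU and GEO.get(tracker_ip) not in EU

print(crosses_border("198.51.100.7", "203.0.113.9"))  # DE -> US: True
print(crosses_border("198.51.100.7", "192.0.2.44"))   # DE -> FR: False
```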

By tracking 350 test users over a period of four months, the researchers found that, in contrast to the popular belief that most tracking flows are conducted by trackers located outside of Europe, around 90% of the tracking flows that originate in Europe also terminate in Europe. This small sample serves as a baseline intended to be correlated with datasets covering millions of users in the future.

They also found that, despite the regulations on tracking flows with sensitive and protected data categories, around 3% of the total tracking flows identified in this study fall within the protected categories. This 3% is evidence of violations.

These results are especially important when trying to figure out how well companies are actually abiding by this law.

Now that we live in an era of the GDPR and other privacy regulations, it is important that we also have the necessary tools to enforce them. Following the success of these tracking flow measurements, research and development on this subject will continue, with the goal of providing the evidence and footprint of tracking flows in real time to anyone who requires it.

The information in this post comes from the paper “Tracing Cross Border Web Tracking.” To read the full study, please click here.

*This study received a distinguished paper award at ACM Internet Measurement Conference in 2018.