BENOCS Analytics supports KPN with network intelligence

KPN logo

Berlin, Germany – June 19, 2020 – BENOCS GmbH announced today that its analytics platform is now officially running in the KPN network, providing KPN with detailed network traffic visibility through an easy-to-use interface. By installing BENOCS Analytics, KPN has acquired a high-performing tool that enables it to monitor traffic, troubleshoot problems, identify new business opportunities and more.

Before installing BENOCS, KPN had to rely on data from several legacy tools, which cost time and effort to consolidate. KPN also wanted a quick and intuitive way to view traffic flows for forecasting, for optimizing peering relationships, and for guaranteeing effective, high-quality routing to end customers. “We are very glad to be providing KPN with the network visibility they need in order to optimize their network and QoE,” stated Stephan Schroeder, CEO of BENOCS.

When it came to the actual setup of BENOCS Analytics, it “went very quickly. The BENOCS Engineering team provided fast and responsive support and are very knowledgeable in their field. It is always a pleasure to work with such a team,” stated KPN Network Specialist Joris de Mooij.

By installing BENOCS Analytics on-premises, KPN is able to fulfill strict security requirements that prevent it from exporting any data outside of its network.

Furthermore, BENOCS Analytics could not have arrived at a better time for KPN. Just as BENOCS was installed and running, internet traffic began to reach record levels as countries went into lockdown due to COVID-19. “We had to do some smart upgrades to maintain high quality routes for our end users. BENOCS was of crucial importance in this process. It also helps us with capacity forecasting,” said Rob de Ruig, Interconnection Manager at KPN.

About BENOCS

BENOCS GmbH – a spin-off of Deutsche Telekom – is a small company with big plans to revolutionize the way network traffic is managed. Its intelligent and fully automated solutions fit networks of any size and provide ISPs and CDNs with strategic ways of coping with growing network traffic. With BENOCS Analytics, network operators, transit and wholesale carriers, hosting providers and CDNs gain end-to-end visibility into their entire traffic flows.

About KPN

KPN is a leading telecommunications and IT provider and market leader in the Netherlands. With fixed and mobile networks for telephony, data and television, KPN serves customers at home and abroad. It focuses on both private customers and business users, from small to large, and also offers other telecom providers access to its widespread networks.

The internet during a pandemic: maintaining the quality of service

Internet during COVID19

As COVID-19 continues to spread globally at alarming rates, governments and large companies alike are taking drastic preventative measures to slow the spread of the disease, such as cancelling large events, limiting travel, and requiring employees to work from home. These initiatives are moving a large influx of people into the digital realm at a rapid rate, straining networks and degrading quality of service. With so many people now relying on the internet for their livelihood during this pandemic, it is important for service providers to have control over traffic volumes in order to maintain their quality of service. For that, they need excellent network visibility and analytics.

Telecoms in Europe see surges in traffic

In just a few days’ time, countries such as France, Spain and Italy have already started to see the effects that the new social restrictions are having on the network. Telecoms in Spain, for example, asked customers today to “adhere to some best practice in an attempt to maintain some sort of tolerable experience”. French telecoms, on the other hand, are starting to practice bandwidth discipline by limiting video streaming sites such as Netflix and YouTube in order to prioritize those working from home.

On top of that, mobile networks are also taking a large hit, with an increase in network traffic as loved ones try to stay connected with one another and colleagues collaborate via video calls, messaging apps, and regular telephony.

Network Analytics is important for QoS

In order to cope with the sudden increase in traffic while maintaining quality of service for end-users, network operators need to understand their traffic: where it originates, where it terminates, what type of traffic is traveling on their network, and how much. With this information, operators can work with OTTs on decreasing traffic volumes, for example by reducing video quality from HD to 480p or less during hours of high congestion, and thereby keep providing their services without reducing speed or connectivity.
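To put rough numbers on that idea, here is a back-of-the-envelope sketch. The bitrates are illustrative assumptions (typical published streaming rates), not measurements from any particular network or OTT:

```python
# Illustrative estimate of bandwidth freed by an HD -> 480p downgrade during
# congestion. The bitrates below are assumed typical values, not measurements.
HD_BITRATE_MBPS = 5.0    # ~1080p video stream
SD_BITRATE_MBPS = 1.1    # ~480p video stream

def peak_savings_gbps(concurrent_streams: int) -> float:
    """Bandwidth freed if every stream drops from HD to 480p."""
    saved_mbps = concurrent_streams * (HD_BITRATE_MBPS - SD_BITRATE_MBPS)
    return saved_mbps / 1000.0

# 100,000 concurrent viewers in one region would free roughly 390 Gbps:
print(f"{peak_savings_gbps(100_000):.0f} Gbps")
```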

During times of emergency, it is especially important that the network does not fail as more people turn to the digital realm to stay connected to work and loved ones. Network operators therefore need access to meaningful network insights for targeted network management – in short, they need network analytics.

It is important for analytics to be backed by powerful data

Analytics Data

It’s that time of year again. Another great year is over, and it is time to start making plans for an even better year to come. As we begin to map out our goals for the next 366 days, we thought it might be fun to look further into the future at the potential solutions and services BENOCS could offer its users… when the time is right.

Today, network analytics tools are typically used for four main purposes: DDoS protection, peering and transit optimization, network planning, and network debugging. However, as the network continues to grow and change, it is important not only to choose analytics tools that can support these tasks now, but also to invest in tools that can adapt to whatever new challenges emerge on the network in the future. Here are a few areas of data-driven network management that we believe will one day be relevant and that BENOCS could provide.

Probe Data Integration

In order to guarantee a great quality of service for end-users, network service assurance departments on both the network side and the content-producer side use network probes to determine user experience in real time and on a granular level. This approach provides great insight; however, debugging is still a manual process, which will eventually become insufficient as the network expands and traffic volumes grow. In addition to knowing when service performance has decreased, service assurance departments will need the ability to see where the problem lies for accurate troubleshooting. BENOCS is working to solve that issue by creating a probe integration tool that would operate on the network topology data already collected by our Core Engine.* Not only could it show when the quality of service has decreased, but also the exact path the traffic traveled, allowing you to see where the quality problems occurred and immediately take action.

*The BENOCS Core Engine collects, processes and cross-correlates protocol data from network routers including: BGP, IS-IS, OSPF, Netflow, Netconf, SNMP and DNS.
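To make the idea concrete, here is a minimal sketch of what such a probe integration could look like. All names and structures here are hypothetical illustrations, not the BENOCS API:

```python
# Hypothetical sketch: join a probe's QoE alert with the router-level path
# that the affected traffic took (as reconstructed from IGP/BGP/flow data),
# so troubleshooting starts at a concrete set of devices.
from dataclasses import dataclass

@dataclass
class QoeAlert:                 # what a probe might report
    customer_prefix: str        # e.g. "198.51.100.0/24"
    service: str                # e.g. "video-streaming"
    degraded_since: str         # ISO timestamp

@dataclass
class FlowPath:                 # what topology correlation might yield
    ingress_router: str
    hops: list[str]             # router-level path
    egress_router: str

def localize_fault(alert: QoeAlert,
                   paths: dict[str, FlowPath]) -> FlowPath | None:
    """Map a degraded prefix to the path its traffic traveled."""
    return paths.get(alert.customer_prefix)
```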

Cost and Revenue Implications

As a transit provider, one of your goals is to decide how to send traffic at the lowest cost without sacrificing quality of experience for end-users. Typically, traffic can be sent via very different alternative paths: the cheapest route that often fails during peak hours, the mediocre route that is slightly more expensive, or the highest-quality and most expensive route, to name a few. As technology for changing traffic patterns is limited, transit providers are often faced with the decision to either provide great service all of the time OR pay the lowest price. What if there were a way to know exactly when to switch between the routes in order to provide the targeted service levels at the lowest cost? Using the network data already integrated in BENOCS, there is the potential to create a cost-and-revenue integration solution that allows you to automatically reroute traffic during network failures and peak hours, depending on commercial and quality parameters. This means sending your traffic on the low-priced route until it no longer satisfies your quality standards. On top of that, you would obtain more differentiated billing capabilities, charging your customers more accurately by having visibility into the revenue and applicable cost of each customer network.
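The decision logic described above can be sketched in a few lines. This is a simplified illustration under assumed quality thresholds, not the actual product logic:

```python
# Minimal sketch: keep traffic on the cheapest route that still meets the
# quality targets; if every route is degraded, fall back to the best-quality
# route regardless of cost. Thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    cost_per_mbps: float   # commercial parameter
    loss_pct: float        # currently measured quality
    latency_ms: float

def pick_route(routes: list[Route], max_loss_pct: float = 0.1,
               max_latency_ms: float = 60.0) -> Route:
    acceptable = [r for r in routes
                  if r.loss_pct <= max_loss_pct and r.latency_ms <= max_latency_ms]
    if not acceptable:   # all routes degraded: prioritize quality over cost
        return min(routes, key=lambda r: (r.loss_pct, r.latency_ms))
    return min(acceptable, key=lambda r: r.cost_per_mbps)
```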

DNS Data Integration

The role of CDNs on the network has changed significantly over the last decade. As the demand for internet content continues to increase, so does the demand for getting that content from the content provider to the end-user. For this reason, CDNs have grown in size and market share, and now carry diverse content types from multiple content providers. At the same time, public CDNs are increasingly used to offload traffic during peak hours and in hard-to-reach networks. Given the diversity and volatility of traffic carried by a single CDN, it is becoming increasingly important for networks to understand the structure and profile of a CDN’s traffic stream to avoid overload and abuse. Since DPI is no longer viable, integrating DNS information with traffic data from the BENOCS Core Engine could provide the visibility into CDN traffic flows needed to fully understand the traffic in your network.
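As a sketch of the principle (the cache structure and field names are illustrative assumptions): remember which hostnames recently resolved to which server IPs, then attribute flow volumes to hostnames instead of bare addresses:

```python
# Sketch: attribute flow bytes to the hostname (and hence CDN/content service)
# that DNS answers mapped to the destination IP. Structures are illustrative.
from collections import defaultdict

dns_cache = {  # answer IP -> hostnames recently resolved to it
    "203.0.113.10": {"videos.example-cdn.net"},
    "203.0.113.11": {"static.example-cdn.net"},
}

def traffic_by_hostname(flows: list[dict]) -> dict[str, int]:
    totals = defaultdict(int)
    for flow in flows:              # flow = {"dst_ip": ..., "bytes": ...}
        for host in dns_cache.get(flow["dst_ip"], {"unresolved"}):
            totals[host] += flow["bytes"]
    return dict(totals)

print(traffic_by_hostname([{"dst_ip": "203.0.113.10", "bytes": 1_500_000}]))
```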

SDN Controller Input

In some cases, the future of the network is already taking place, as new technology is developed and tested every day. We see SDN technology being deployed not only at the customer edge, but also in core-backbone segments. As the network backbone's hardware becomes softwarized and its configuration and automation capabilities grow, you will need deep and automated network visibility in order to take full advantage of the new kit. Bringing network-external parameters (such as QoE and price) into routing decision algorithms greatly enhances the cost-efficiency and quality of your network at lower capex. And how could one achieve this visibility and automation? You guessed it: with BENOCS. Such external parameters, combined with the data already collected and processed in the BENOCS Core Engine, become important SDN controller input that would trigger network automation in SDN-empowered networks.

The future of the internet is exciting; however, knowing how it will develop and what is necessary for its maintenance is important for its overall sustainability. That is why it is important to invest in products and tools that are already backed by powerful data architectures and have the potential to evolve into the products you need to bolster your future network's performance.

Please visit our Analytics and Director pages to learn more about what BENOCS provides today, or contact us anytime.

How the BENOCS Flow Director went from research to product

BENOCS Analytics - Flow Director for CDNs and CSPs explained

Back in 2010, before BENOCS, our founder, along with a few other researchers, presented the new challenges facing ISPs: CDNs/hyper-giants and their poorly mapped traffic. In that proposal[1], they introduced the research community to a new idea: a system implemented in the ISP network that would use the network's data to help CDNs/hyper-giants map their traffic better. Now, nine years later and after six years of development, the system, called BENOCS Flow Director, is not only running, but has also proven its viability and is showing positive results.

This year, BENOCS, along with researchers from the Max Planck Institute for Informatics, TU Berlin and TU Munich, is proud to report on the progress made since its system, the BENOCS Flow Director, enabled the first ISP/CDN cooperation in 2017. The results will be presented at both ACM CoNEXT 2019 and APNIC 49.

Hyper-giants and their behavior on the network

So what are we talking about when we talk about hyper-giants and ISP/CDN cooperation? For us, a hyper-giant is a large network that sends at least 1% of the total traffic delivered to broadband customers in the ISP network and publicly identifies as a CDN, cloud, content or enterprise network. Since the introduction of video and social networks, demand for content has risen rapidly, making CDNs ever more relevant for ISP networks: the top ten networks – mostly hyper-giants – are responsible for more than 50% of the traffic on the network. With traffic volumes that significant, it is important that CDNs of all kinds deliver their traffic as accurately as possible in order to reduce congestion on ISP networks and provide the best quality of service.
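Expressed as a predicate, that working definition looks roughly like this (a sketch: the 1% threshold and category list come straight from the definition above, everything else is illustrative):

```python
# The hyper-giant definition from the paragraph above, as a simple predicate.
HYPERGIANT_CATEGORIES = {"cdn", "cloud", "content", "enterprise"}

def is_hypergiant(traffic_share_to_broadband: float,
                  self_identified_category: str) -> bool:
    """traffic_share_to_broadband: fraction of all traffic delivered to the
    ISP's broadband customers that this network sends, e.g. 0.013 = 1.3%."""
    return (traffic_share_to_broadband >= 0.01
            and self_identified_category in HYPERGIANT_CATEGORIES)
```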

Given this rapid growth, ISPs and CDNs have also been challenged with trying to keep up with one another's development without visibility into each other's side, forcing upgrades just to maintain the quality of service. This lack of visibility has resulted in an extraordinary amount of stress on ISP networks as well as higher costs and maintenance for both sides. Before the BENOCS Director – and in some cases still today – ISPs and CDNs implemented their own methods to cope with the growth of content demands and traffic on the network, such as performing their own measurements, which can only provide “best guesses” of the current network state. This is becoming more challenging as CDNs and hypergiants expand and connect more ingress points to the ISP network, creating more path options for them to take.

BENOCS Flow Director improves network traffic

Since ISPs and CDNs must co-exist on the network and each requires visibility of the other in order to maintain performance, why not get them working together? That is how BENOCS Flow Director was born.

BENOCS Flow Director is “an ISP service enabling ISP-CDN collaboration with the goal of improving the latter’s mapping,” according to the research paper, thus reducing costs and improving services for both parties. It does so by collecting and processing the necessary data at scale from the ISP in order to compute mapping recommendations. It then uses this information to rank every possible ingress point from best to worst and hands the ranking to the CDN. It is a unique system that was intentionally designed to be vendor agnostic, deployment agnostic, easy to integrate, fully automated and scalable, so that it can fit into any network and work with any CDN/hypergiant.
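A highly simplified sketch of the ranking step (the real system computes its metric from rich topology data; the plain cost numbers here are assumptions for illustration):

```python
# Sketch: for one destination prefix, order the hypergiant's ingress points
# from best to worst by a cost metric derived from the ISP's topology data.
def rank_ingress_points(ingress_cost: dict[str, int]) -> list[str]:
    """ingress_cost: ingress point -> cost to reach the destination prefix."""
    return sorted(ingress_cost, key=ingress_cost.get)

# Recommendation handed to the CDN for one customer prefix (example values):
print(rank_ingress_points({"ingress-frankfurt": 2,
                           "ingress-amsterdam": 5,
                           "ingress-vienna": 9}))
# -> ['ingress-frankfurt', 'ingress-amsterdam', 'ingress-vienna']
```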

According to the research paper, two years after the first hypergiant was connected to a large ISP, the system is already working. When the cooperation started, the hypergiant was delivering 70% of its traffic optimally, with the trend declining. The shared interest in optimizing network traffic ultimately motivated the CDN to test BENOCS Flow Director. As of today, this metric is in the range of 75–84% and increasing.

Benefits of the Director for ISPs and CDNs

So, what would happen if all major hypergiants were to activate BENOCS Flow Director? According to the research paper:

  1. Theoretically, an ISP could see a 20% decrease in long-haul traffic, given that the CDNs have no constraints.
  2. Every CDN is different, with some able to reduce their long-haul traffic in an ISP network by 40% – this happens because CDNs interconnect with the ISP at different PoPs, and their traffic matrices differ.
  3. If the system were used by all CDNs, the traffic on long-haul links would fall to less than 80% of its current volume.

All of this would eliminate guesswork and create a better, more stable network ecosystem, leading to higher quality of service along with a reduction in the network infrastructure in use.

This post is based on the research paper “Steering Hyper-Giants’ Traffic at Scale”. We are proud to announce that it won the Best Paper Award at ACM CoNEXT 2019 in Orlando, FL, and that it earned the IRTF’s Applied Networking Research Prize for our CTO, Ingmar Poese, in 2020. You can read the full paper here.

[1]     Poese, I., Frank, B., Ager, B., Smaragdakis, G., Uhlig, S., & Feldmann, A. (2011). Improving content delivery with PaDIS. IEEE Internet Computing, 16(3), 46-52.

DDoS attacks are too easy to launch: How effective are booter take downs?

DDoS Attack

DDoS attack services are becoming so cheap and easy to use that even grandma can launch an attack. Are current protective measures enough?

The widely known DDoS (Distributed Denial of Service) attacks have been a major threat to internet security and availability for over two decades. Their goal: to interrupt services by consuming critical resources such as computation power or bandwidth, with motivations ranging from politics, finance and cyber warfare to deflecting attention from other attacks or hackers simply testing their limits. Since they were first observed, these attacks have grown in both size and sophistication. By spoofing source IP addresses, attackers can have their traffic amplified and reflected towards their victims. Additionally, attacks are becoming more accessible: one can purchase booters and stressers for as little as the price of a cup of coffee and launch them with little technical background, giving anyone the power to launch an attack. This is enough to keep cyber security experts on their toes as well as draw the attention of law enforcement agencies.

The more prevalent DDoS attacks become in everyday network operations, the more necessary it is to study their behavior and impact, as well as the impact of current mitigation methods, in order to improve network security. Earlier this year, BENOCS, DE-CIX, the University of Twente and the Brandenburg University of Technology teamed up to study this topic and answer the following questions:

  • What is it like to be attacked, and how are attacks amplified on larger networks such as Tier-1 and Tier-2 ISPs as well as IXPs?
  • What happens to booter websites after they have been taken down by law enforcement?

With an abundance of research focused on different aspects of booters – especially their financial impact – little attention has been given to empirical observation of attacks or to how effective an FBI takedown of booters actually is. With something this serious stalking the network, thorough investigations and studies need to be performed in order to learn their behavior and figure out better ways to combat booter services.

What happens to a network under a booter attack?

In order to draw conclusions about the DDoS attack landscape, one must first observe it in the wild. However, given the short duration of attacks (just a few minutes), it is hard to predict where and when one will be launched. Instead of waiting for an attack, the researchers launched their own, purchasing four booter services selected from the booter blacklist – including the more expensive VIP booters – and aiming them at a network they had built themselves.* The attacks were observed by passively capturing all traffic on the measurement platform. From this, the researchers were able to see where the booters directed the traffic, how they worked, which services were used for amplification, and roughly how much amplification occurred. On top of that, they provided the first look into whether or not the attacks live up to their sales promises. The booters did attack the specified targets (mostly using NTP amplification attacks), but the VIP services did not deliver on their promised attack volumes – falling short by as much as 75%.

*Given the legal and ethical implications of purchasing such booters as well as the damage they can cause on a network, extra steps were taken by the team to comply with laws and cause minimal destruction.
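For readers unfamiliar with amplification: the usual measure is the bandwidth amplification factor, i.e. response payload bytes divided by request payload bytes. The packet sizes below are illustrative of NTP-style amplification in general, not measurements from this study:

```python
# Bandwidth amplification factor (BAF): response bytes / request bytes.
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    return response_bytes / request_bytes

# A small spoofed NTP query can trigger kilobytes of response toward the
# victim (example sizes for illustration only):
print(f"{amplification_factor(234, 48_000):.0f}x")   # ~205x
```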

How effective is an FBI booter takedown?

In an attempt to seize back some control over DDoS attacks, the FBI tracks down booter websites and removes booter services. But how effective is that, really? In addition to observing how large these attacks can be, the research team also wanted to know what is gained by taking these sites down. The answer: not a lot. By taking weekly snapshots of all .com/.net/.org domains and of the top 1M domains, they were able to identify 58 booter services and follow them over a period of 122 days. They discovered that takedowns do lead to a sharp reduction of DDoS traffic on the network; however, there is no significant reduction in attacks hitting the victims' systems. Additionally, booter services are capable of relaunching under different domains after a takedown, allowing them to continue business as usual just weeks after being deactivated. Simply taking down a booter service is therefore not a sustainable long-term solution.

DDoS attacks are a serious threat to the network ecosystem – anyone and everyone connected to the internet is a target, especially IoT devices that never receive updates – and current methods of trying to eliminate them, as this study has shown, are neither sufficient nor sustainable. In order to find better solutions that reduce the number of DDoS attacks bought and sold, further research is required, especially on the booter economy after an FBI takedown. This particular study will remain ongoing until 2022 and will test the capabilities of artificial intelligence and new developments in DDoS protection.

The information in this post comes from the ACM IMC paper “DDoS Hide and Seek: On the Effectiveness of a Booter Service Takedown”.

See the press release for this paper here.

What if we turned the conversation from Netflow vs SNMP to Netflow & SNMP?

Screenshot BENOCS Demolytics SNMP-Flow

There is no “I” in “team”, just like there is no “I” in “Netflow vs. SNMP”. It’s time to stop comparing Netflow and SNMP, and instead start talking about how well they work together when it comes to network monitoring.

If you do a quick internet search on the network protocols SNMP and Netflow for network measurement, you will find an array of articles that either try to sell you the idea that accurate network monitoring is done with Netflow, not SNMP, or that simply compare the two, usually with a slight bias against SNMP. But what if we changed perspective and stopped looking at these protocols as one or the other, and instead looked at them as one AND the other? What if, instead of comparing them, we cross-correlated them, giving you the advantages of both? In this post we will explain why accurate network monitoring is done with Netflow AND SNMP.

What are SNMP and Netflow?

For those of you not in the know, let’s start with the basics:

SNMP, the Simple Network Management Protocol, is a standard internet protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behavior. This protocol was developed in the early days of the internet, back when networking was about getting bits across the network and traffic could be easily measured. When it comes to collecting measurements, SNMP will simply show you the amount of traffic that has gone through your network.

Netflow, on the other hand, is a network protocol that collects information on traffic running through a Netflow-enabled device, examining packets to create flow records that capture the path taken by the traffic. This protocol was designed for monitoring modern networks, which transfer large and diverse amounts of traffic instead of just bits. Netflow will show you a statistical measurement of where your traffic came from and where it is going.
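For concreteness, a flow record carries roughly the following fields (a minimal illustrative subset; real NetFlow/IPFIX templates carry many more):

```python
# A minimal flow record of the kind NetFlow/IPFIX-style exporters emit.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int       # e.g. 6 = TCP, 17 = UDP
    in_interface: int   # SNMP ifIndex of the ingress interface
    octets: int         # bytes observed for this (sampled) flow
    packets: int
```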

Why SNMP and Netflow work well together

Due to its age and capabilities, some network operators have deemed SNMP and other volume-measuring protocols “inefficient” for monitoring the network, given the large amount and diversity of network traffic and players. Interest has thus shifted from knowing merely how much traffic is on the network to knowing what kind and from/to whom – which can determine a great deal about how you plan and maintain your network. Therefore, Netflow and other statistical measurements are promoted for getting insight into how traffic is behaving, while SNMP and other volume-measuring protocols are disregarded.

However, given how complex the network has become, seeing only the statistical data is still not the best way to monitor the network. The first problem is that a lot of guessing is involved. Because Netflow sampling rates are very low (e.g., one in every 10,000 packets), the data can be very misleading during off-peak hours, when it is harder to extrapolate from low traffic flows, showing you either less or more traffic than is actually there. The second is that you cannot detect whether sampled data is missing due to various factors – different default configurations on each router, routers running different OS versions, or network bottlenecks – which cause an unknown amount of packet loss. For these reasons, traffic values can be misrepresented, especially during peak hours. In most cases, these errors are discovered months later, once they have become so large that they need to be manually investigated. And for those of you who understand network economics and/or operations, this can mean lost revenue, poor performance and/or serious long-term network failures.

So, what can we do to prevent such misrepresentation? As you already know, each protocol on its own shows one side of the picture: SNMP can show you a near-exact amount of traffic passing through the network (typically one counter value per 5-minute bucket), while Netflow can show you where it went. By cross-correlating and comparing the data provided by both protocols, you can uncover mistakes in calculations or measurements with relatively little effort.
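A minimal sketch of that cross-correlation: scale the sampled flow bytes up by the sampling rate, compare against the SNMP octet counter for the same interface and interval, and flag deviations. The 10% tolerance is an assumption for illustration:

```python
# Sketch: compare extrapolated flow volume against SNMP counters per interface.
SAMPLING_RATE = 10_000   # 1-in-10,000 packet sampling, as discussed above

def check_interface(sampled_flow_bytes: int, snmp_octets: int,
                    tolerance: float = 0.10) -> str:
    if snmp_octets == 0:
        return "no SNMP data for this interval"
    estimated = sampled_flow_bytes * SAMPLING_RATE   # extrapolated flow bytes
    deviation = (estimated - snmp_octets) / snmp_octets
    if abs(deviation) > tolerance:
        return f"MISMATCH: flow estimate off by {deviation:+.0%}"
    return "OK"

# e.g. a router exporting with a different sampling rate than assumed:
print(check_interface(sampled_flow_bytes=9_000, snmp_octets=150_000_000))
# -> MISMATCH: flow estimate off by -40%
```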

With the BENOCS Analytics SNMP line feature, you can directly compare statistical flow traffic (all things Netflow) with measured traffic from SNMP for a more accurate view of your network’s traffic.

Here’s what we do:

Comparing Netflow and SNMP
  • SNMP (top line): via SNMP, telemetry, Netconf and similar protocols we collect inventory information, auto-detect new interfaces, and gather byte counts, packet loss and capacity in bucket sizes as small as 1 minute.
  • NetFlow (colorful area): we collect all flow-based information via sFlow, Netflow, IPFIX and similar protocols, including sender and receiver IP, traffic volume, and interface and protocol information.

By adding the SNMP line feature to your current traffic display, you can create a more accurate picture of your network’s traffic measurements to prevent network failures, avoid costly mistakes, and generate savings just like this customer.

Would you like to learn more about the SNMP feature, or BENOCS Analytics? Contact us today! You can also request a free demo account to see what our tool can do for you!

When it comes to monitoring your network, it's time to stop guessing and start knowing!

What are self-operated Meta CDNs, and should ISPs be concerned?

Apple Store

Before 2014, Apple – one of the largest content generators on the web today – relied on external content delivery networks (CDNs), such as Akamai and Level 3, to deliver everything from music and video streaming to iOS updates. In 2014, Apple launched its own CDN in an effort to take control over the quality of its content delivery, adding the final puzzle piece that gives it control over the entire customer experience (hardware, online platforms, etc.). Interestingly enough, as Dan Rayburn predicted, Apple was in no hurry to convert all of its traffic to its own CDN and would still need some time before it completely stopped offloading traffic onto third-party CDNs.

A 2017 study supported by BENOCS shows that Apple, three years later, still relies on third-party CDNs, such as Akamai and Limelight, to deliver its iOS updates. Why? Because when a company as large as Apple needs to deliver an operating system update multiple times per year to over 1 billion devices, it needs a way to handle overload that exceeds its own infrastructure's limits. Therefore, it relies on a self-operated Meta-CDN to carry its traffic. No big deal, right? Wrong. As this study shows, traffic is not running as smoothly as originally thought.

What are Meta-CDNs?

Before we address the main issue, let us first look at the evolution of CDNs. As the internet continues to expand with more content and users, CDNs are increasingly challenged with providing users the fastest delivery speeds possible while building as little infrastructure as possible. To solve this, CDNs are getting closer to their users in the form of Meta-CDNs: multihoming content amongst multiple CDNs, and thereby gaining access to servers holding content closer to the users. Publishing content on multiple CDNs requires additional request mapping – a way to select among the additional servers holding the content. Thus, it is not just a single CDN moving traffic for Apple, but a Meta-CDN. When the operator can direct traffic both to its own infrastructure and to third-party CDNs, such a collaboration is called a self-operated Meta-CDN.

How Meta-CDNs affect the network

If self-operated Meta-CDNs exist in the network to help companies such as Apple provide their customers with the smoothest iOS update possible, then why is this a problem? Well, according to this study, looking at the behavior of a self-operated Meta-CDN through the eyes of the internet service provider (ISP) reveals that it actually causes more chaos in the network than one would expect. And given that this type of CDN is rather new, not much is known about it to begin with.

By observing the iOS update in September 2017 through the eyes of a major European ISP, researchers found the following behaviors to occur:

  • If the Apple CDN is selected by the DNS resolver (aka the traffic director), the traffic will move through Apple's infrastructure.
  • If a third-party CDN is selected by the DNS resolver, Apple will offload the traffic onto that third-party CDN.

Since Apple's infrastructure is not as developed in Europe as it is in North America, using CDNs with global infrastructure, such as Akamai, gives Apple the advantage of reaching its customers worldwide with ease.
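One way to observe this request mapping from the outside is to resolve a delivery hostname and inspect the CNAME chain, which often reveals which CDN the resolver was steered to. A small sketch using the dnspython library; the hostname is a placeholder, not Apple's actual delivery domain:

```python
# Sketch: follow the CNAME chain of a delivery hostname to see which CDN's
# domain it lands in. Requires dnspython (pip install dnspython).
import dns.resolver

def resolve_delivery_target(hostname: str) -> str:
    answer = dns.resolver.resolve(hostname, "A")
    # canonical_name is the end of the CNAME chain, e.g. a name under a
    # third-party CDN's domain vs. the content owner's own CDN domain.
    return str(answer.canonical_name)

print(resolve_delivery_target("updates.example.com"))  # placeholder hostname
```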

The consequence for ISPs is the strain put on the network. Because the traffic is offloaded, the ISP is unable to predict how much traffic to expect on its links, since it cannot see which CDN the overarching Meta-CDN selects, nor how much traffic each CDN is carrying. On top of that, the individual CDNs carry traffic further than necessary, which creates overflow – traffic being forced to take longer paths.

Why networks need to be aware of Meta-CDNs’ behavior

Why is this kind of unpredictability risky? Because links that were originally thought to be unaffected can actually run over capacity, which causes perilous behavior in the network. This makes it necessary for ISPs to re-examine their assumptions about how the network actually behaves during high-stress situations such as operating-system updates.

The information in this post comes from the paper “Dissecting Apple’s Meta-CDN during an iOS update.” To read the full study, please click here.

Why geolocation is important for enforcing the General Data Protection Regulation and other privacy policies


If you live in Europe or do business with European companies, you are probably already familiar with the General Data Protection Regulation (GDPR), which has been in effect since May 2018. What you may not know, however, is how this law is actually enforced. After all, a law is only as good as its ability to be enforced. Given that internet content is shared globally, how can anyone ensure that those within the borders of the European Union (EU) are actually protected by this law, especially when content needs to travel across borders? How do we know if data is just passing through, or if it terminates in Europe? What can be used as evidence of violations? The answer: geolocation and tracking flows.

With most researchers in the network measurement community focusing on data collection and its financial worth, little attention has been given to tracking flows in relation to geolocation, which can show whether or not information crosses national or international borders and whether any information is leaked. It can also show how adequately internet service providers (ISPs) handle the distribution of tracking flows on different networks, i.e. mobile or broadband.

Geolocations show traffic flows

For those of you who are unfamiliar with network measurement, a tracking flow is a flow between an end user and a tracking service such as a web analytics tool. Its geographical footprint shows where the flow originated, where it went, and where it terminated. For a user, this means: where have you been on the network, and who knows you were there. In relation to the GDPR, this is the evidence that proves whether or not data is being collected on EU users without their permission or knowledge.
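The border check at the heart of this idea can be sketched with an IP geolocation database. This example uses the MaxMind geoip2 library; the database path is a placeholder, and the EU country set is an assumption for illustration:

```python
# Sketch: geolocate both endpoints of a tracking flow and flag flows that
# originate at an EU user but terminate outside the EU.
import geoip2.database  # pip install geoip2

EU = {"AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE",
      "IT","LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE"}

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")  # placeholder path

def leaves_eu(user_ip: str, tracker_ip: str) -> bool:
    """True if a flow starts at an EU user but terminates outside the EU."""
    user_cc = reader.country(user_ip).country.iso_code
    tracker_cc = reader.country(tracker_ip).country.iso_code
    return user_cc in EU and tracker_cc not in EU
```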

Extracting geolocation in a GDPR era

So why aren't more people researching this method? The simple answer: because extracting the geolocation of users requires access to real tracking flows that originate from users and terminate at trackers. On top of obtaining actual user permission for investigating tracking flows, the precision of user locations and the completeness of measurements prove to be further challenges.

Despite the barriers, one research group, with support from BENOCS, recognized the necessity and found a solution: an extensive measurement methodology for quantifying the number of tracking flows that cross data-protected borders, using a browser extension that renders advertising and detects tracking flows. This is especially groundbreaking because the method captures, according to the study, double the number of tracking flows of previous studies, and shows the actual extent to which trackers stay within the EU. The study also managed to find out whether trackers handle sensitive data categories – such as religion, health or sexuality, to name a few – without violating the GDPR.

By tracking 350 test users over a period of four months, the researchers found that, contrary to the popular belief that most tracking flows are conducted by trackers located outside of Europe, around 90% of the tracking flows that originate in Europe also terminate in Europe. This small sample serves as a baseline intended to be correlated with datasets covering millions of users in the future.

They also found that, despite the regulations on tracking flows involving sensitive and protected data categories, around 3% of the total tracking flows identified in the study fall within the protected categories. This 3% is evidence of violations.

These results are especially important when trying to figure out how well companies are actually abiding by this law.

As we now live in an era of the GDPR and other privacy regulations, it is important that we also have the necessary tools to enforce them. Following the success of these tracking-flow measurements, research and development on this subject will continue, with the goal of providing the evidence and footprint of tracking flows in real time to anyone who requires it.

The information in this post comes from the paper “Tracing Cross Border Web Tracking.” To read the full study, please click here.

*This study received a distinguished paper award at ACM Internet Measurement Conference in 2018.

Episode 5: Vision Gaps: How to overcome them by evolving towards ISP-CDN interplay

The future of the internet

In the previous episodes, we mentioned that internet demand is continuously growing and that the network infrastructure is no longer able to efficiently support the heavy traffic without costly upgrades and extensions. So far, we have discussed the tools currently used to support content delivery, why we think they are no longer efficient, and the solutions we can provide to help the network operate better. What we have not yet answered is why: why has the network evolved this way if it is inefficient? To answer that, let us have a look at some of the internet's history to determine what made it this way, what challenges resulted, and how BENOCS fits into this timeline.

Where the internet came from and where it is going

What started in 1969 as a military-funded project linking a handful of research institutes in the USA for academic research sharing has become a necessity in both social and professional life. The internet as we know it today began its development as the ARPANET, which tested a new technology called "packet switching" via "nodes" to transmit data. From this development came the first email in 1971 and the first international connections in 1973. From there, the internet began connecting more institutions across the USA and Europe, not just academically but socially as well. In the 1980s, the number of connected institutions reached over one hundred, and the network's growing popularity led the military to open it up for public use via TCP/IP. With the number of users increasing, the World Wide Web, built on HTTP and HTML, was developed in the early 1990s, and the internet was commercialized in 1995, with internet service providers (ISPs) controlling the backbone. By the year 2000, about half of the United States' population was online ("40 Maps that Explain the Internet" by Timothy Lee on vox.com).

As a direct result of the growing user population and growing network, content distribution became more difficult and less accurate. Content delivery networks (CDNs) began to emerge in the late 1990s to resolve these issues, using algorithms to route traffic accurately ("Company History" on Akamai.com). However, the continuous growth of internet users around the globe led to more players controlling the network and, as a result, a more complicated network. As internet culture rose, the network's value kept increasing, and complex business deals between ISPs, CDNs, and content producers began to challenge the way content was delivered. The internet backbone today is no longer a simple route from content to backbone to user. Instead, it consists of multiple players, among whom business mergers and direct peering have created a power struggle in the network backbone. As a result, content delivery has become harder to manage and travels blindly across the network, creating bottlenecks and the need for expensive network upgrades to help distribute traffic evenly.

BENOCS fits directly into the future of the network

One of the largest challenges facing ISPs today – who provide the connection between the backbone and the user – is to keep their infrastructure operating efficiently. This is difficult, since the ISP controls neither the source, the destination, nor the behavior of content delivery traffic. CDNs, on the other hand, control the source in order to deliver content to its destination, the end user. In the first three episodes, we introduced the methods used to deal with the evolution of the network in order to keep it running smoothly. From this, we learned that the products on the market sustain the network but, based on our research, are outdated or lack information. As two different entities sharing the same network space, our mission at BENOCS is to get these two players playing together to improve traffic for the CDN and reduce costly infrastructure for the ISP. We discovered that by communicating network topology information from the ISP to the CDN, CDNs make better delivery decisions, thus reducing traffic and delivery times ("Content-aware Traffic Engineering" by Frank, Poese, Smaragdakis, Uhlig, and Feldmann). So instead of trying to cope with the continuously growing network through premature upgrades and costly extensions, it is time to start managing it with information it already has. It is time to involve BENOCS in the evolution of the internet, and to start improving the internet's quality while saving on unnecessary costs!

Episode 4: Vision Gaps – How BENOCS tackles the delivery of different types of content

Online Content Delivery

In the previous episodes, we explored the different approaches currently implemented in the network to keep up with increasing customer demands and the delivery of content across the internet. Although these approaches are currently capable of sustaining the higher demands and expectations, they are not efficient and require frequent, costly infrastructure updates to manage future congestion. At BENOCS, we introduce a new way of managing internet traffic: collecting and sharing information that is already available on the network in order to balance the system and facilitate the best and fastest delivery for all types of content, such as transactions/clicks and video streaming. Each of these types of content has special delivery needs, and BENOCS's unique system can optimize the performance of all of them. To understand why this matters, let us return to our pizza scenario to see what kinds of issues the stores could face, and how delivery could be improved with real-time traffic reports for the best performance and customer satisfaction.

Different types of content have different delivery needs

Transactions/clicks are the webpages that internet users want to view – and view without waiting. This is similar to what we discussed in episodes 2 and 3, where the pizza hotline needs to figure out which of the three locations is not only the closest to the customer, but also has the fastest path to follow. This way, the pizza order is not sent to the closest pizza store when the second-closest store is faster. Don't forget that time is money, and customers who expect their pizzas to arrive hot and on time will take their business elsewhere if their demands are not met.

When internet customers want to stream videos, it is not so important that the content arrives as fast as transactions/clicks, but rather that it arrives at a constant pace for the duration of the video to prevent buffering interruptions while the video is playing. In this case, the CDN would need to find a path that will not just deliver the content quickly, but also at a consistent speed for a specific amount of time. If we were to put this into our pizza scenario, we could imagine that our pizza store also owns several pizza carts in the inner city that are only open during the business lunch rush between 11:00 and 14:00. In order to keep these carts stocked throughout the duration of these busy hours, we need to pay attention to where the carts are located and the road traffic to each of them in order to ensure that the carts are regularly stocked with the freshest pizza possible – because an empty pizza cart is bad for business. This means that the pizza store needs to figure out which roads between the store and the cart are the least likely to become congested between those hours to ensure a consistent flow of pizza to each cart.

Don’t let content travel blind. BENOCS can help it arrive with minimal delays!

Although these two scenarios are not issues a real pizza store is likely to face, they are real on the network, and if left unsolved they lead to poorer quality, slower delivery speeds, and costly infrastructure updates. To solve the dilemmas in our pizza scenario, we would want a device that provides traffic reports to ensure our products meet customer expectations. While we are no experts in devices for pizza delivery, we at BENOCS do offer solutions to the equivalent content delivery problems on the internet. Our products serve as the device connecting those with knowledge of network traffic and topology (ISPs) to those who need it (CDNs). Instead of letting content blindly cross the network, which often leads to bottlenecks and traffic jams, we provide CDNs with information on the current network topology so content reaches its destination with minimal delays. By balancing network traffic, internet infrastructure is spared frequent extensions and updates – just as a road with less traffic suffers less damage and less need for expansion. Why not spend a little to save a lot? It's time to improve your network's quality and efficiency while saving on premature network investments and needless maintenance costs!

Tune in next time for our final episode where we will discuss the BENOCS vision in relation to the history of the internet.