What if we turned the conversation from Netflow vs SNMP to Netflow & SNMP?

Screenshot BENOCS Demolytics SNMP-Flow

There is no “I” in “team”, just like there is no “I” in “Netflow vs. SNMP”. It’s time to stop comparing Netflow and SNMP, and instead start talking about how well they work together when it comes to network monitoring.

If you run a quick internet search on the internet protocols SNMP and Netflow for network measurement, you will find an array of articles that either try to sell you the idea that accurate network monitoring is done with Netflow, not SNMP, or simply compare the two, usually with a slight bias against SNMP. But what if we changed perspective and stopped looking at these protocols as one or the other, and instead looked at them as one AND the other? What if, instead of comparing them, we cross-correlated them, giving you the advantages of both? In this post we will explain why accurate network monitoring is done with Netflow AND SNMP.

What are SNMP and Netflow?

For those of you not in the know, let’s start with the basics:

SNMP, the Simple Network Management Protocol, is a standard internet protocol for collecting and organizing information about managed devices on IP networks, and for modifying that information to change device behavior. The protocol was developed in the early days of the internet, when networking was about getting bits across the wire and traffic could be easily measured. When it comes to collecting measurements, SNMP will simply show you how much traffic has gone through your network.

Netflow, on the other hand, is a network protocol that collects information on traffic running through a Netflow-enabled device, examining sampled packets to build flow records that show where the traffic travels. It was designed for monitoring modern networks, which transfer large and diverse amounts of traffic instead of just bits. Netflow will show you a statistical measurement of where your traffic came from and where it is going.

Why SNMP and Netflow work well together

Due to its age and limited capabilities, some network operators have deemed SNMP and other volume-measuring protocols "inefficient" for monitoring today's networks, given the sheer amount and diversity of traffic and players. Interest has shifted from knowing just how much traffic is on the network to knowing what kind and from/to whom – which can determine a great deal about how you plan and maintain your network. As a result, Netflow and other statistical measurements are promoted for insight into how traffic behaves, while SNMP and other volume-measuring protocols are disregarded.

However, given how complex networks have become, seeing only the statistical data is still not the best way to monitor them. The first problem is that a lot of guessing is involved. Because Netflow sampling rates are very low (typically on the order of 1 in 10,000 packets), the data can be very misleading during off-peak hours, when it is harder to extrapolate from low traffic volumes, showing you either less or more traffic than is actually there. The second is that you cannot detect whether sampled data is missing due to various factors – different default configurations on each router, routers running different OS versions, or network bottlenecks – which cause an unknown amount of packet loss. For these reasons, traffic values can be misrepresented, especially during peak hours. In most cases, these errors are only discovered months later, when they have grown so large that they need to be manually investigated. And for those of you who understand network economics and/or operations, this can mean lost revenue, poor performance and/or serious long-term network failures.
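
To make the extrapolation problem concrete, here is a minimal sketch. It models deterministic 1-in-10,000 packet sampling (real routers may sample randomly, and the traffic numbers are purely illustrative): the collector can only count in multiples of the sampling rate, which barely matters at peak traffic but badly distorts quiet hours.

```python
def netflow_estimate(true_packets: int, rate: int = 10_000) -> int:
    """Deterministic 1-in-N sampling: only every `rate`-th packet is exported,
    and the collector multiplies the sample count back up by `rate`."""
    sampled = true_packets // rate
    return sampled * rate

def relative_error(true_packets: int, rate: int = 10_000) -> float:
    est = netflow_estimate(true_packets, rate)
    return abs(est - true_packets) / true_packets

# A quiet hour (~15k packets) vs a peak hour (~500M packets)
for label, pkts in [("quiet", 14_999), ("peak", 499_995_000)]:
    print(f"{label}: true={pkts:,} estimated={netflow_estimate(pkts):,} "
          f"error={relative_error(pkts):.3%}")
```

The quiet hour comes out roughly a third too low, while the peak hour is off by about a thousandth of a percent – the same sampling rate, wildly different reliability.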

So, what can we do to prevent such misleading pictures? As you already know, each protocol on its own shows one side of the story: SNMP can show you a near-exact amount of traffic passing through the network (e.g. one counter reading per 5-minute bucket), while Netflow can show you where it went. By cross-correlating the data provided by both protocols, you can uncover mistakes in calculations or measurements with relatively little effort.
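
As a sketch of what such a cross-correlation might look like, the snippet below compares per-interface SNMP byte counters against NetFlow-derived totals and flags interfaces where the two disagree by more than a tolerance. The interface names, numbers and the 10% threshold are illustrative assumptions, not BENOCS's actual algorithm.

```python
def flag_discrepancies(snmp_bytes, netflow_bytes, tolerance=0.10):
    """Compare SNMP interface counters against extrapolated NetFlow volumes
    and return interfaces whose NetFlow figure deviates from the SNMP
    measurement by more than `tolerance` (relative)."""
    flagged = {}
    for iface, measured in snmp_bytes.items():
        estimated = netflow_bytes.get(iface, 0)
        if measured == 0:
            continue  # skip idle interfaces to avoid dividing by zero
        deviation = (estimated - measured) / measured
        if abs(deviation) > tolerance:
            flagged[iface] = deviation
    return flagged

# One toy 5-minute bucket: SNMP counters vs NetFlow extrapolations
snmp = {"eth0": 1_000_000, "eth1": 480_000, "eth2": 2_500_000}
flow = {"eth0": 970_000, "eth1": 300_000, "eth2": 2_460_000}
print(flag_discrepancies(snmp, flow))  # eth1's flow data under-reports by 37.5%
```

In a real deployment the flagged interfaces would be the starting point for a manual investigation, instead of waiting months for the error to grow visible on its own.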

With the BENOCS Analytics SNMP line feature, you can directly compare statistical flow traffic (all things Netflow) with measured traffic from SNMP for a more accurate view of your network’s traffic.

Here’s what we do:

Comparing Netflow and SNMP
  • SNMP (top line): via SNMP, telemetry, NETCONF and similar protocols we collect inventory information, auto-detect new interfaces, and gather byte counts, packet loss and capacity in bucket sizes as small as 1 minute
  • NetFlow (colorful area): we collect all flow-based information via sFlow, NetFlow, IPFIX and similar protocols, including sender and receiver IP addresses, traffic volume, and interface and protocol information.

By adding the SNMP line feature to your current traffic display, you can create a more accurate picture of your network’s traffic measurements to prevent network failures, avoid costly mistakes, and generate savings just like this customer.

Would you like to learn more about the SNMP feature, or BENOCS Analytics? Contact us today! You can also request a free demo account to see what our tool can do for you!

When it comes to monitoring your network, it's time to stop guessing and start knowing!

What are self-operated Meta CDNs, and should ISPs be concerned?

Apple Store

Before 2014, Apple – one of the largest content generators on the web today – relied on external content delivery networks (CDNs), such as Akamai and Level 3, to deliver everything from music and video streaming to iOS updates. In 2014, Apple launched its own CDN in an effort to take control of the quality of its content delivery, adding the final puzzle piece that gives it control over the entire customer experience (hardware, online platforms, etc.). Interestingly enough, as Dan Rayburn predicted, Apple was in no hurry to move all of its traffic onto its own CDN and would still need some time before it completely stopped offloading traffic onto third-party CDNs.

A 2017 study supported by BENOCS shows that Apple, three years later, still relies on third-party CDNs, such as Akamai and Limelight, to deliver its iOS updates. Why? Because when a company as large as Apple needs to deliver an operating system update several times per year to over one billion devices, it needs a way to handle overload and supplement the limits of its own infrastructure. It therefore operates a self-operated Meta-CDN to carry its traffic. No big deal, right? Wrong. As the study shows, traffic is not running as smoothly as originally thought.

What are Meta-CDNs?

Before we talk about the main issue, let us first look at the evolution of CDNs. As the internet continues to expand with more content and users, CDNs are increasingly challenged to provide users with the fastest possible delivery speeds while building as little infrastructure as possible. To solve this, CDNs are moving closer to their users in the form of Meta-CDNs: content is multihomed across multiple CDNs, giving access to servers that hold the content closer to the users. Publishing content on multiple CDNs requires additional request mapping – ways to locate the additional servers holding the content. So it is not just one CDN moving the traffic for Apple, but a Meta-CDN. When the operator can direct traffic either to its own infrastructure or to a third-party CDN, this arrangement is called a self-operated Meta-CDN.

How Meta-CDNs affect the network

If self-operated Meta-CDNs exist to help companies such as Apple give their customers the smoothest iOS update possible, why is this a problem? Well, according to the study, looking at the behavior of a self-operated Meta-CDN through the eyes of an internet service provider (ISP) reveals that it causes more chaos in the network than one would expect. And given that this type of CDN is rather new, not much is known about it to begin with.

By observing the iOS update in September 2017 through the eyes of a major European ISP, researchers found the following behaviors to occur:

  • If the Apple CDN is selected by the DNS resolver (aka the traffic director), the traffic will move through Apple's infrastructure.
  • If a third-party CDN is selected by the DNS resolver, Apple will offload the traffic onto that third-party CDN.
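
The resolver-driven selection above can be sketched as a weighted random choice. This is a toy model, not Apple's actual logic – the CDN names and weights are invented – but it illustrates why an ISP that only sees the outcomes struggles to predict per-link load.

```python
import random

def meta_cdn_select(hostname: str, weights: dict, rng=random) -> str:
    """Model a Meta-CDN's authoritative DNS choosing a delivery CDN.
    The ISP sees only the outcome, never the hidden weights."""
    cdns, probs = zip(*weights.items())
    return rng.choices(cdns, weights=probs, k=1)[0]

# Hypothetical split for an OS update in a region with little first-party capacity
weights = {"apple": 0.2, "akamai": 0.5, "limelight": 0.3}
tally = {cdn: 0 for cdn in weights}
rng = random.Random(42)  # fixed seed so the run is repeatable
for _ in range(10_000):
    tally[meta_cdn_select("updates.example.com", weights, rng)] += 1
print(tally)  # counts land roughly proportional to the hidden weights
```

From the ISP's side, only the tally is observable – and it shifts whenever the Meta-CDN silently changes its weights.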

Since Apple's infrastructure is not as developed in Europe as it is in North America, using CDNs such as Akamai – which have a global infrastructure – gives Apple the advantage of reaching its customers worldwide with ease.

The consequence for ISPs is the strain this puts on the network. When traffic is offloaded, the ISP cannot predict how much traffic to expect on its links, since it can see neither which CDN the overarching Meta-CDN selects nor how much traffic each CDN is carrying. On top of that, the individual CDNs carry traffic further than necessary, which creates overflow – traffic being forced to take longer paths.

Why networks need to be aware of Meta-CDNs’ behavior

Why is this kind of unpredictability risky? Because links that were originally thought to be unaffected are actually over capacity, which causes perilous behavior in the network. This makes it necessary for ISPs to re-examine their assumptions about how the network actually behaves during high-stress situations such as operating system updates.

The information in this post comes from the paper “Dissecting Apple’s Meta-CDN during an iOS update.” To read the full study, please click here.

Why geolocation is important for enforcing the General Data Protection Regulation and other privacy policies


If you are living in Europe or doing business with European companies, you are probably already familiar with the General Data Protection Regulation (GDPR), which has been in effect since May 2018. However, what you may not know is how this law is actually administered. After all, a law is only as good as its ability to be enforced. Given that internet content is shared globally, how can anyone ensure those within the European Union (EU) borders are actually protected by this law, especially when content needs to travel across borders? How do we know if data is just passing through, or if it is terminating in Europe? What can be used as evidence of violations? The answer: geolocation and tracking flows.

With most researchers in the network measurement community focusing on what data is collected and what it is financially worth, little attention has been given to tracking flows in relation to geolocation, which can show whether or not information crosses national or international borders, and whether or not any information is leaked. It can also show how adequately internet service providers (ISPs) handle the distribution of tracking flows on different networks, i.e. mobile or broadband.

Geolocations show traffic flows

For those of you unfamiliar with network measurement, a tracking flow is a flow between an end user and a tracking service such as a web analytics tool. Its geographical footprint shows where the flow originated, where it went, and where it terminated. For a user, this means: where have you been on the network, and who knows you were there? In relation to the GDPR, this is the evidence that proves whether or not data is being collected on EU users without their permission or knowledge.
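
As an illustration of how such evidence can be derived, here is a minimal sketch that classifies tracking flows by whether they leave a protected region. The country codes and flows are made up, and a real study resolves IP addresses to countries with a geolocation database rather than using ready-made labels.

```python
EU_SAMPLE = frozenset({"DE", "FR", "NL", "ES", "IT"})  # illustrative subset of EU members

def leaves_region(flow: dict, region: frozenset = EU_SAMPLE) -> bool:
    """True if a flow originating inside `region` terminates at a tracker outside it."""
    return flow["origin"] in region and flow["tracker"] not in region

flows = [
    {"origin": "DE", "tracker": "NL"},  # stays inside the region
    {"origin": "FR", "tracker": "US"},  # crosses the border
    {"origin": "US", "tracker": "US"},  # never in scope
]
crossing = [f for f in flows if leaves_region(f)]
print(f"{len(crossing)} of {len(flows)} flows leave the protected region")
```

Aggregated over real traffic, the share of border-crossing flows is exactly the kind of statistic the study reports (e.g. its ~90% stay-in-EU figure).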

Extracting geolocation in a GDPR era

So why aren't more people researching this method? The simple answer: because extracting the geolocation of users requires access to real tracking flows that originate from users and terminate at trackers. On top of obtaining actual user permission to investigate tracking flows, the precision of user locations and the completeness of measurements prove to be further challenges.

Despite these barriers, one research group, with support from BENOCS, recognized the need and found a solution: an extensive measurement methodology that quantifies the number of tracking flows crossing data-protected borders, using a browser extension that renders advertising and detects tracking flows. This is especially groundbreaking because the method captures, according to the study, double the number of tracking flows of previous studies, and shows the actual extent to which trackers stay within the EU. The study also examined whether trackers collect sensitive data – such as religion, health or sexuality, to name a few – in violation of the GDPR.

By tracking 350 test users over a period of four months, the researchers found that, contrary to the popular belief that most tracking flows are conducted by trackers located outside Europe, around 90% of the tracking flows that originate in Europe also terminate in Europe. This small sample serves as a baseline, intended to be correlated with datasets covering millions of users in the future.

They also found that, despite the regulations on tracking flows with sensitive and protected data categories, around 3% of the total tracking flows identified in this study fall within the protected categories. This 3% is evidence of violations.

These results are especially important when trying to figure out how well companies are actually abiding by this law.

As we now live in an era of the GDPR and other privacy regulations, it is important that we also have the tools to enforce them. Following the success of these tracking-flow measurements, research and development on this subject will continue, with the goal of providing anyone who requires it with evidence of tracking flows and their footprints in real time.

The information in this post comes from the paper “Tracing Cross Border Web Tracking.” To read the full study, please click here.

*This study received a distinguished paper award at ACM Internet Measurement Conference in 2018.

Episode 5: Vision Gaps: How to overcome them by evolving towards ISP-CDN interplay

The future of the internet

In the previous episodes, we mentioned that demand on the internet is continuously growing, and that the network infrastructure is no longer able to support the heavy traffic efficiently without costly upgrades and extensions. So far, we have discussed the tools currently used to support content delivery, why we think they are no longer efficient, and the solutions we can provide to help the network operate better. What we have not yet answered is why: why has the network evolved this way if it is inefficient? To answer that, let us have a look at some of the internet's history to determine what made it this way, what challenges result from that, and how BENOCS fits into this timeline.

Where the internet came from and where it is going

What started in 1969 as a military-funded project linking a handful of research sites in the USA for academic resource sharing has become a necessity in both our social and professional lives. The internet as we know it today began its development in the 1970s with the ARPANET, which tested a new technology called "packet switching" between "nodes" to transmit data. From this development came the first email in 1971 and the first international connections in 1973. From there, the network began connecting more institutions across the USA and Europe, not just academically but socially as well. In the 1980s, the number of connected networks passed one hundred, and the growing popularity led the military side to open the network for public use on TCP/IP. With the number of users increasing, the World Wide Web, using HTTP and HTML, was developed in the early 1990s, and the internet was commercialized in 1995 with internet service providers (ISPs) controlling the backbone. By the year 2000, about half of the United States' population was online ("40 Maps that Explain the Internet" by Timothy Lee on vox.com).

As a direct result of a growing user population and a growing network, distributing content became more difficult and less accurate. Content delivery networks (CDNs) began to emerge in the late 1990s to resolve these issues, using algorithms to route traffic accurately ("Company History" on Akamai.com). However, the continuous growth of internet users around the globe led to more players controlling the network and, as a result, a more complicated network. As internet culture rose, the internet's value kept increasing, and complex business deals between ISPs, CDNs and content producers began to change the way content was delivered. The internet today is no longer a simple route from content to backbone to user. Instead, it consists of multiple players, and business mergers and direct peering have created a power struggle in the network backbone. As a result, content delivery has become harder to manage and travels blindly across the network, creating bottlenecks and the need for expensive network upgrades to distribute traffic evenly.

BENOCS fits directly into the future of the network

One of the largest challenges facing ISPs today – who provide the connection between the backbone and the user – is keeping their infrastructure operating efficiently. This is difficult because the ISP controls neither the source, the destination, nor the behavior of content delivery traffic. CDNs, on the other hand, control the source in order to deliver content to its destination, the end user. In the first three episodes we introduced the methods used to cope with the evolution of the network and keep it running smoothly. From this we learned that the products on the market sustain the network but, based on our research, are outdated or lack information. As two different entities sharing the same network space, our mission at BENOCS is to get these two players playing together, improving traffic for the CDN and reducing costly infrastructure for the ISP. We discovered that by communicating network topology information from the ISP to the CDN, CDNs make better delivery decisions, reducing congestion and delivery times ("Content-aware Traffic Engineering" by Frank, Poese, Smaragdakis, Uhlig, and Feldmann). So instead of trying to cope with a continuously growing network through premature upgrades and costly extensions, it is time to start managing it with information it already has. It is time to involve BENOCS in the evolution of the internet, and start improving the internet's quality while saving on unnecessary costs!

Episode 4: Vision Gaps – How BENOCS tackles the delivery of different types of content

Online Content Delivery

In the previous episodes, we explored the different approaches currently implemented in the network to keep up with increasing customer demands and the delivery of content across the internet. Although they are currently capable of sustaining the higher demands and expectations, these systems are not efficient and require frequent, costly infrastructure updates to manage future congestion. At BENOCS, we introduce a new way of managing internet traffic: collecting and sharing information already available on the network in order to balance the system and facilitate the best and fastest delivery for all types of content, such as transactions/clicks and video streaming. Each of these types of content has special delivery needs, and BENOCS's system can optimize the performance of all of them. To understand the significance, let us return to our pizza scenario to see what kinds of issues the stores could face, and how delivery could be improved with real-time traffic reports for the best performance and customer satisfaction.

Different types of content have different delivery needs

Transactions/clicks are the webpages that internet users want to view, and to view without waiting. This is similar to what we have been discussing in episodes 2 and 3, where the pizza hotline needs to figure out which of the three locations is not only the closest to the customer, but also has the fastest path to follow. This way the pizza order is not sent to the closest pizza store, when the second closest store is faster. Don’t forget that time is money and customers, who expect their pizzas to arrive hot and on time, will take their business elsewhere if their demands are not met.

When internet customers want to stream videos, it is not so important that the content arrives as fast as transactions/clicks, but rather that it arrives at a constant pace for the duration of the video to prevent buffering interruptions while the video is playing. In this case, the CDN would need to find a path that will not just deliver the content quickly, but also at a consistent speed for a specific amount of time. If we were to put this into our pizza scenario, we could imagine that our pizza store also owns several pizza carts in the inner city that are only open during the business lunch rush between 11:00 and 14:00. In order to keep these carts stocked throughout the duration of these busy hours, we need to pay attention to where the carts are located and the road traffic to each of them in order to ensure that the carts are regularly stocked with the freshest pizza possible – because an empty pizza cart is bad for business. This means that the pizza store needs to figure out which roads between the store and the cart are the least likely to become congested between those hours to ensure a consistent flow of pizza to each cart.

Don’t let content travel blind. BENOCS can help it arrive with minimal delays!

Although these two scenarios are not issues a real pizza store is likely to face, they are real on the network, and if left unsolved they can lead to poorer quality, slower delivery speeds, and costly infrastructure updates. To solve the dilemmas in our pizza scenario, we would want a device that provides traffic reports to ensure our products meet customer expectations. We are not experts in devices for pizza delivery, but at BENOCS we do offer solutions to the equivalent problems on the internet. Our products serve as the device that connects those with knowledge of network traffic and topology (ISPs) to those who need it (CDNs). Instead of letting content cross the network blindly, which often leads to bottlenecks and traffic jams, we provide CDNs with information on the current network topology so content reaches its destination with minimal delays. By balancing network traffic, the internet infrastructure is spared frequent extensions and updates – just as a road with less traffic suffers less damage and less need to be expanded. Why not spend a little to save a lot? It's time to improve your network's quality and efficiency while saving on premature network investments and needless maintenance costs!

Tune in next time for our final episode where we will discuss the BENOCS vision in relation to the history of the internet.

Episode 3: Vision Gaps – One step closer with ECS

Computer Keyboard

In the previous episodes, we took you on an "internet road trip" to help explain some of the issues concerning content delivery across the network. However, the network does not work with the efficiency and simplicity we have previously alluded to. In fact, our previous metaphors are still missing some important features used in this process. To complete the picture, let us return to our pizza hotline, but this time include EDNS Client Subnet (ECS). To explain DNS indirection, we imagined that the pizza hotline had to guess where you were based on the information they had: you are in a hotel. This time, let's say you provided an address – or, in internet terms, we enable ECS. With an address, the hotline operator can look up your location and the proximity of each pizza store on a map, choose the closer store and, theoretically, ensure the fastest delivery time.

ECS forwards the client subnet to the CDN, but does accurate mean fastest?

Today, when an internet user requests content, the request first goes to the DNS. The CDN selects a well-placed server based on the location of the DNS resolver (DNS-R), and delivery proceeds from there to the user. This, of course, is in the simplest terms. The user's IP address, however, does not leave the network, creating a guessing game for the CDN when deciding which server is closest to the user. If ECS is activated, the client subnet is forwarded to the CDN as well, giving it the knowledge it needs to make a more accurate delivery. But does accurate mean fastest?
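
A toy model makes the difference concrete. Below, server selection is simply "nearest point": without ECS the CDN measures distance to the resolver, with ECS to the client. The coordinates and server names are invented for illustration.

```python
def nearest(servers, point):
    """Pick the server with the smallest squared distance to `point`."""
    return min(servers, key=lambda s: (s[1][0] - point[0]) ** 2 + (s[1][1] - point[1]) ** 2)

# Illustrative coordinates: three CDN edge servers, a user, and the user's DNS resolver
servers = [("edge-a", (0, 0)), ("edge-b", (10, 0)), ("edge-c", (20, 0))]
user = (9, 0)
resolver = (19, 0)   # the ISP resolver the request happens to reach

without_ecs = nearest(servers, resolver)  # the CDN only sees the resolver
with_ecs = nearest(servers, user)         # ECS forwards the client subnet
print("without ECS:", without_ecs[0])  # edge-c, near the resolver but far from the user
print("with ECS:   ", with_ecs[0])     # edge-b, actually near the user
```

Note that "near" here is purely geometric; as the next section argues, the geometrically closest server is still not guaranteed to be the fastest one.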

Now imagine that we are back on our road trip. You, the hungry customer just ending a long day of driving, arrive at your hotel. You call the number for a pizza hotline, place your order and provide your address. The pizza hotline forwards the order to the pizza store closest to your hotel and tells you that your order will be at your door in 30 minutes. Perfect! In the meantime, you fall asleep on the bed. When you finally awaken, you check the clock. An hour has passed. Anxious with the thought that you somehow missed the sound of someone knocking on the door, you call the front desk of the hotel and ask if the delivery driver left the pizza there. They assure you that no delivery driver has come through the hotel doors. You then call the hotline, who contacts the driver, and then tells you the bad news. Your pizza is stuck in traffic and will take another 30 minutes. Apparently, there was a terrible accident on the road between the shop and your hotel, which caused the road to temporarily close. Had the hotline known this, they could have forwarded your order to the next closest shop, which would have made your wait time 50 minutes instead of the now predicted 90. If only the hotline had a device that provided up to date road conditions to ensure the order was sent to the pizza store that would reach the customer the fastest.

Fastest delivery speeds come from the best path, not always the closest server

Just like in our scenario, merely applying ECS on the internet highway does not guarantee the fastest results. Sometimes the closest server has the heaviest traffic, and there is no traffic report to warn the CDN. At BENOCS, our products provide exactly that. By looking into the network's vision gaps – as discussed in episode 1 – we create a traffic report using the information naturally stored within the ISP. We can therefore give the CDN the most accurate map for choosing the fastest path to the user. Stop making your customers wait and start choosing the store with the fastest delivery time!

Tune in next time where we will explain what information we leverage from the network and how this helps CDNs achieve their different delivery objectives.

Episode 2: Vision Gaps Deep Dive – DNS Indirection

Locations on a Map

After a long and stressful day of driving (almost stranded in the middle of nowhere without gas), you finally arrive at your destination tired and hungry. While scouring the local phone book in your hotel room's desk drawer, you stumble upon an advertisement for a pizza delivery chain with three locations nearby. You decide to call their call center and order the largest pizza available. After placing the order, the person asks, as expected, for your location. You provide the address of your hotel. With this information, the call center can forward your order to the pizza store closest to you, ensuring that you receive your pizza in the shortest amount of time, still hot on arrival. How is the pizza hotline able to provide such seemingly effortless service and prevent you from collapsing of hunger? It's easy: with your address, they can look up your exact location.

In this case, the best Quality of Experience (QoE) comes from the pizza hotline being well equipped with the information of your exact location. However, if we imagine that, instead of telling them your address, you only told them that you are in a hotel, how would it look in a town with more than one hotel? Perhaps the hotline would use a similar method to the one currently being used by the content delivery networks (CDN).

CDNs systematically make wrong assumptions about the location of the user

Along the content delivery path today, CDNs often rely on the address of the domain name system resolver (DNS-R) as a stand-in for the user when connecting content to the user. Since the CDN does not know the user's actual address, it uses the location it does know – the DNS-R the request came from – and chooses the content server closest to that location. As outside observers, we would assume that the user and the DNS-R are not far apart. But we would be wrong. When a user wants to view content, the network decides which DNS-R the user should use, and in many cases it rotates requests across resolvers to distribute load evenly over the DNS system: one resolver is closest to you, one is in the middle and one is very far away. The CDN handling the request then assumes the DNS-R's location is the user's, and delivers content toward the DNS-R's location – meaning two out of three cases in our scenario are wrong! This is known as DNS indirection: the CDN serves the user from one of the more distant servers because it does not know the user's real location. To put DNS indirection into our pizza scenario, imagine that you only told the operator you were in a hotel, unaware that there are two other hotels in town. The operator would have to guess which hotel you were in and which pizza location is closest to it, sending your order to pizza store B if the last person's order went to pizza store A. That means two out of every three pizza orders are delivered from a shop further away than necessary – and hungry customers don't like to wait!
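
The two-out-of-three effect can be reproduced in a few lines. This sketch assumes a strict round-robin over three resolvers (real load balancing varies), with only one resolver mapping to the store that is actually closest to the user; all names are illustrative.

```python
from itertools import cycle

# Which pizza store (read: CDN server) each resolver's location maps to
store_for_resolver = {
    "resolver-near": "store-A",  # the store actually closest to the user
    "resolver-mid": "store-B",
    "resolver-far": "store-C",
}
best_store = "store-A"

rotation = cycle(store_for_resolver)  # the network rotates requests round-robin
picks = [store_for_resolver[next(rotation)] for _ in range(9)]
suboptimal = sum(1 for store in picks if store != best_store)
print(f"{suboptimal} of {len(picks)} requests served from a suboptimal store")
```

With three resolvers in rotation, exactly two thirds of the requests are served from the wrong store, matching the scenario above.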

The network holds the information CDNs need, BENOCS digs it up and voilà!

Now, let's put the address back into the pizza hotline scenario. When you place your order, the hotline can easily figure out where you are and where to send it so that you have the shortest possible wait. At BENOCS, our products do just that. We discovered how to communicate the "address" to the network and the CDN, and have created a context-enriched model as well as the mechanism to distribute it to the CDN. We are the ones the CDN can contact to see which server is closest to the user, so that only the closest server receives the request, ensuring the fastest delivery possible. Stop serving your customers cold pizza, and get your map today!

For more information on DNS indirection at Google, check out Leonidas Kontothanassis’s talk starting at 37:00 minutes.

Tune in next time, when we will explain EDNS Client Subnet, which, when activated, forwards the client's subnet from the network to the CDN.

Episode 1: Vision Gaps: How to see through the internet forest

Thick forest

Imagine that you are on a road trip, almost exactly halfway to your destination, at that part where it is simply you, the road, and a beautiful landscape that seems to go on forever. This should be a relaxing and enjoyable drive, but instead you become frantic, having just learned that your car is about to run out of gas. With only a few minutes left until you are stranded in the middle of nowhere, unfamiliar with your surroundings, you come to a fork in the road with three possible paths. Each path has a sign for a gas station; however, the distance and potential road obstacles are unclear, since each road leads directly into a lush forest, making it almost impossible to see past the entrances. Running out of time, you have to decide which road will get you to a gas station the fastest using an old submarine sonar that your grandfather gave you from his days in the navy. Of course, the sonar can only give you an idea of which gas station is closest; it cannot warn you of obstacles such as traffic, road damage, or a queue at the pumps, all of which can delay getting gasoline. If you think this is a ridiculous way to find a gas station, you are not alone. Yet it is similar to what currently happens across the internet as content is delivered to the user.

Measuring the network with a ping is like measuring your distance to a gas station with a sonar.

Like our scenario above, the delivery of content (the gasoline) between a requester (you) and many available content sources (the gas stations) over the network can be a difficult and disorganized journey due to vision gaps (the forest). These vision gaps prevent the content from taking the best possible path, causing longer travel times and slower internet service for the user. If you could see past these vision gaps, you could figure out which path gets you to your destination the fastest. Unfortunately, the market today offers inefficient and outdated technology for making these measurements, which brings us to the role of grandpa's old sonar. Currently, many content delivery networks rely on a network-measuring tool known as a ping. Like a sonar, the ping sends an echo request to a content host and measures the distance between them by how long the request takes to come back; the response makes a "ping" when it returns, hence the name. The ping, however, has been the measurement of choice since the 1980s and is unfit for today's large and continuously growing networks. Not only do path conditions change within milliseconds, so the ping's information is already outdated by the time it returns, but studies also show it is hard to determine how accurate a ping measurement actually is. Just as grandpa's sonar is not the best tool for finding the fastest path to the gas station, the ping is not the best tool to measure network traffic.
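The staleness problem can be shown with a minimal sketch. All round-trip times below are invented: three candidate paths are "pinged" at one instant, but by the time the answers arrive and a decision is made, congestion has moved and the winner of the ping race is no longer the best path.

```python
# Toy illustration of why a ping snapshot goes stale (invented numbers).
# RTT in ms of each candidate path at two instants a few hundred ms apart.
rtt_at_send   = {"path-a": 40, "path-b": 55, "path-c": 70}
rtt_at_answer = {"path-a": 90, "path-b": 45, "path-c": 70}  # congestion moved

chosen   = min(rtt_at_send,   key=rtt_at_send.get)    # what the ping suggests
best_now = min(rtt_at_answer, key=rtt_at_answer.get)  # what is actually best

print(chosen, best_now)  # → path-a path-b: the snapshot is already outdated
```

The ping correctly reported path-a as fastest at send time, yet by decision time path-b is the better choice; the measurement was accurate and useless at once.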

At BENOCS, we provide the GPS system.

Now imagine that, instead of using your grandfather’s sonar, you had a GPS system tracked by satellite. Your device could not only show you which gas station is the closest, but also advise you on which path has the least obstacles and the shortest wait time, therefore getting your car fueled as fast as possible. At BENOCS, our products do just that. We explore the vision gaps using information already provided by traffic generators and operators (the satellite) within the network in order to show the content delivery network the best possible path to the server, thus providing customers with the best possible quality of experience. So get rid of that old sonar and start seeing past the vision gaps!

Tune in next time, when we will explain what information inside the network BENOCS gleans to equip our "GPS for the internet" with the right map material.

Network analytics will play a key role for future networks

Network Analytics

In just a few years, the internet has changed significantly. Starting as a hierarchical Tier-1, Tier-2, and Tier-3 topology, it is evolving more and more into a mesh of directly interconnected networks, increasing its complexity both physically and logically. Driven by higher quality demands and lower transit costs, content providers have been working to increase content-to-user speed by shortening the path, positioning content as close to the consumer as possible. Content delivery networks (CDNs) have started to develop enhanced algorithms to choose the "best", or at least a better, path to the user in order to make the connection faster.

Every participant in the internet supply chain, which runs from the content, to the CDN, to the upstream provider, to the transit provider, to the ISP, to the end user, and vice versa, has an influence on the quality, reachability, and security of content. To ensure quality and detect risks, ISPs monitor and record different sources of data using various tools.

We at BENOCS seek to leverage these information sources in a completely new way, based on the principles of completeness, transparency, and simplicity. We have developed the BENOCS collector, a multi-dimensional collector that stores all the different sources in a single place and, with our analytics, correlates them to gain new and deeper insights into one's network.

Using analytics to understand what is happening in a network is not a new concept. The current tools on the market are built for specific questions about the current state of the network; some are often misappropriated to gain "deeper" insights in a foreign domain (accepting the inherent risks of their constraints and misleading assumptions). In the past, however, two major problems prevailed: the first was the sheer amount of data and the number of different systems, and the second was processing and correlation. The BENOCS collector solves the first problem by gathering information from the different protocols (e.g. IS-IS, OSPF, SNMP, BGP, DNS, NetFlow, etc.) in real time. The second is resolved by BENOCS Analytics, which is driven by real-world cases from the departments of network operations, quality assurance, planning & forecasting, and sales & support. In fact, we believe the impact of analytics will be so rapid and dramatic that many network professionals will wonder how they ever functioned without these capabilities.
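One concrete flavor of such correlation is cross-checking sampled NetFlow records against SNMP interface counters. The sketch below is purely illustrative, not the collector's actual logic; the interface names, sampling rate, and byte counts are all invented.

```python
# Hypothetical cross-correlation of two sources: SNMP gives trusted
# per-interface byte totals, NetFlow gives sampled per-flow detail.
# All names and numbers are invented for illustration.

# SNMP: bytes counted on each interface over a 5-minute window.
snmp_bytes = {"if-1": 1_000_000, "if-2": 2_000_000}

# NetFlow: flow records under an assumed 1-in-100 packet sampling.
SAMPLING_RATE = 100
netflow_records = [
    {"ifc": "if-1", "dst_as": 64500, "bytes": 6_000},
    {"ifc": "if-1", "dst_as": 64501, "bytes": 3_800},
    {"ifc": "if-2", "dst_as": 64500, "bytes": 19_000},
]

# Scale the samples up and compare per interface against the SNMP totals.
flow_bytes = {}
for r in netflow_records:
    flow_bytes[r["ifc"]] = flow_bytes.get(r["ifc"], 0) + r["bytes"] * SAMPLING_RATE

for ifc, total in snmp_bytes.items():
    coverage = flow_bytes.get(ifc, 0) / total
    print(f"{ifc}: NetFlow explains {coverage:.0%} of the SNMP-counted traffic")
```

SNMP answers "how much traffic?", NetFlow answers "whose traffic, going where?", and holding the two against each other reveals sampling gaps or miscounts that neither source exposes alone.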

Being able to answer the following questions in real-time will be the USP of next generation ISPs:

“Are subscribers having trouble with video services like YouTube, Netflix or Amazon? Which users in what locations are affected? Is it caused by an internal or external network? What could be done to solve it?”

Using the power of network analytics and developing a sophisticated intelligence are mandatory for dealing with the above more-variables-than-equations types of problems. Analytics will therefore generate revenue and pay for itself, both from the cost-savings and the quality standpoints.

Overcoming today’s network visibility limits

Network Graphic

In times of rapidly increasing internet traffic, it is becoming important for internet service providers (ISPs) to seek more visibility and control over how and where content enters their networks, and how this affects the user's quality of experience. Content delivery is still a blind flight, and how can you equip for the future when you know neither your demand nor how it affects your network assets? Instead of facing rising infrastructure costs, let us help you get past the "best effort" principle and make content delivery and Quality of Experience (QoE) great for your end users.

Most of the players offering traffic management and analytics tools provide a limited scope of network visibility by collecting only NetFlow or HTTP data, and rarely look beyond their own noses to use more complete network data. As a result, they offer a narrowly focused (and thus limited) view with some selection of pre-defined filters, and fall short of clear-cut possibilities for addressing a wider array of business-critical questions. Data is often aggregated too early and too intensely, which leads to a short-term "memory" of the collected traffic. Moreover, they usually rely on case-specific data structures and lack the comprehensive structure that can only be brought about by correlating the right data sources in the right way and with the right timing and resolution. Demands on tomorrow's network monitoring go beyond merely counting traffic flows and detecting "insular" configuration errors. The lack of context awareness and of powerful flow granularity management renders most of today's solutions unfit for the job inside large carrier networks and for the diversity of business questions.
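Why does aggregating "too early and too intensely" shorten the analytical memory? A toy example, with invented flow records: once records are rolled up, the finer dimensions are gone and cannot be asked about later.

```python
# Toy example of information lost to early aggregation (invented data).
raw_flows = [
    {"t": 0, "src_as": 64500, "dst_prefix": "198.51.100.0/24", "bytes": 100},
    {"t": 1, "src_as": 64500, "dst_prefix": "192.0.2.0/24",    "bytes": 300},
    {"t": 2, "src_as": 64501, "dst_prefix": "192.0.2.0/24",    "bytes": 50},
]

# Early, intense aggregation: keep only total bytes per source AS.
per_as = {}
for f in raw_flows:
    per_as[f["src_as"]] = per_as.get(f["src_as"], 0) + f["bytes"]

print(per_as)  # → {64500: 400, 64501: 50}
# A question we can no longer answer: "which destination prefix drove
# AS 64500's traffic, and when?" The raw records are gone, and the
# per-prefix and time dimensions went with them.
```

Keeping fine-grained records longer is exactly what makes retrospective questions and predictive analytics possible.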

We at BENOCS believe that only a comprehensive data structure built from a wide array of the network's innermost real-time data gives you the opportunity to gain the problem-relevant insight for tomorrow's carrier business. BENOCS combines the deepest network data sources (IGP, BGP, SNMP, NetFlow, IPFIX, DNS, and more) into a real-time network database that provides a dynamically updated, high-resolution map of your core asset. Our "Insight Engine" then allows the highly flexible production of exactly the answers your department needs; this may include identifying potential peering customers but stretches all the way to network capex projections for 5+ year planning discussions. BENOCS also offers fine-resolution retrospection on raw NetFlow data – more than twice as long as competing solutions – which allows you to detect historic patterns and apply predictive analytics. With the outstanding time resolution and "finer-than-AS-level" detectability offered by BENOCS Flow Analytics, you can make real-time decisions based on your actual, and even your expected, traffic to get the most out of your network at all times.

In order to improve QoE not only by passively "seeing" your given infrastructure and traffic, but also by directing the traffic in more intelligent ways, BENOCS Flow Director gives you the opportunity to coordinate traffic between your AS and content delivery networks (CDNs) – see chart below. With our network monitoring feature and API, we leverage the network information from the AS to offer selectable network parameters (e.g. delay) to CDNs and other content providers. CDNs can then choose the most efficient paths based on your published network parameters and implement a content delivery strategy together with you to enhance cooperative traffic steering. This enables ISPs to inform partnering CDNs how to reduce congestion, smooth the overall flow of traffic, and optimize end users' QoE.
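The idea of publishing selectable network parameters can be sketched as follows. This is not the Flow Director API: the document format, field names, ingress identifiers, and values are all hypothetical, invented only to show the ISP-publishes / CDN-chooses pattern.

```python
import json

# Hypothetical document an ISP-side API might publish for a user prefix.
published = json.dumps({
    "prefix": "203.0.113.0/24",
    "ingresses": [
        {"id": "peering-fra", "delay_ms": 8,  "congested": False},
        {"id": "peering-ams", "delay_ms": 14, "congested": False},
        {"id": "transit-lon", "delay_ms": 11, "congested": True},
    ],
})

def choose_ingress(doc):
    """CDN side: prefer uncongested ingresses, then pick the lowest delay."""
    options = json.loads(doc)["ingresses"]
    usable = [o for o in options if not o["congested"]] or options
    return min(usable, key=lambda o: o["delay_ms"])["id"]

print(choose_ingress(published))  # → peering-fra
```

The point of the pattern is the division of knowledge: the ISP knows its own network state and publishes it, and the CDN turns that state into a steering decision, so congestion avoidance becomes cooperative rather than guesswork.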