Towards application identification with a novel DNS-based approach

Application-oriented view of traffic sources in the form of a sankey diagram

Today’s internet revolves more around applications and less around networks. A telling example of this application-oriented mindset is a global outage this year[1]. Nobody remembers that AS13414 reported an outage; however, many people remember that X (formerly Twitter) had slowdowns and outages affecting many international users.

In this context, network players (e.g., ISPs) have been trying for decades to understand how application traffic is delivered to end-users. Existing tools are limited, and DPI (Deep Packet Inspection) has been the dominant technology for providing such insight; however, it faces increasing challenges with encryption and scaling.

In this post, we present a BENOCS implementation of a DNS-based correlation framework, called DNS Flow Analyzer (DFA), which annotates and classifies traffic flows with information about applications (e.g., TikTok, Disney+, AmazonPrime, DAZN) and CDN domains. This novel solution allows network providers to expand their traditional network-oriented view with an application-oriented view.

A network-oriented view is not enough

A few decades ago, content providers built big data centers to serve different Internet-based applications to end-users. In recent years, however, Content Delivery Networks (CDNs) have been used to meet the increasing demand for online applications (including video, gaming, and social networks). These media contents, riding on top of the network, are known as Over-The-Top applications (OTT-Applications), and they use globally distributed CDNs to deliver their content. Today, large content providers leverage more than one CDN, and CDNs in turn carry traffic for multiple OTT-Applications.

In order to work efficiently, network operators need better knowledge of how traffic from the CDNs and OTT-Applications is delivered to their end-users. However, they have historically focused on obtaining information only about Autonomous Systems (ASes), transit providers, and peers. This network-oriented approach is not enough to answer one key question: how do OTT-Applications use the different CDN domains to distribute their traffic?

An application-oriented approach with DFA

Answering the above question has been a daunting task for network actors. Existing network-focused solutions such as legacy flow tools or DPI are limited in tying traffic information to individual applications. The latter also becomes increasingly inefficient due to encryption and requires substantial hardware, especially at large scale.

At BENOCS, we have developed a methodology that includes the analysis, design, and implementation of an application identification system called DNS Flow Analyzer (DFA). DFA annotates and extends the traffic flows with domain name information, so that two new layers are effectively obtained: (i) OTT-Application domain and (ii) CDN domain.

Specifically, we propose a large-scale, real-time network data correlation system that uses a set of different data sources (e.g. Netflow, BGP) but primarily feeds on DNS streams to obtain multi-dimensional traffic information. As a result, we obtain an application-oriented view that identifies how a source OTT-Application (e.g. Disney+) delivers traffic to a network using different CDN domains.

DFA architecture and workflow

The high-level DFA architecture and entire workflow rely on two developed components:

  1. DNS-Netflow Correlation. The output of this component includes extended and correlated data: Netflow plus a list of URLs representing a DNS domain name resolution. The sequence of events is:

1.1) Live DNS records are classified into two lists: (i) DNS A/AAAA records, which map an IP address to a domain name, and (ii) DNS CNAME records, which map a domain name to another domain name.

1.2) In parallel, live Netflow records are captured at the network ingress interfaces. Each Netflow record contains, among other fields, a timestamp, srcIP, dstIP, and byte count.

1.3) DFA looks up the srcIP of a Netflow record in the DNS A/AAAA list to find the domain name it corresponds to (using getName(IP)). Then, looking at the DNS CNAME list, DFA searches for that domain name to find the CNAME it corresponds to (using getName(Name)). The search in the CNAME list continues until no further domain names are found (or a pre-defined loop limit is reached).
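The steps above can be sketched in a few lines of Python. The table contents, names, and the loop limit below are illustrative assumptions; the real DFA feeds these tables from live DNS A/AAAA and CNAME streams.

```python
# Illustrative sketch of the DFA lookup chain (steps 1.1-1.3).
# All table contents and LOOP_LIMIT are made-up examples.

# (i) DNS A/AAAA records: IP address -> domain name
a_records = {"203.0.113.7": "edge42.cdn-provider.example"}

# (ii) DNS CNAME records: domain name -> domain name
cname_records = {"edge42.cdn-provider.example": "media.ott-app.example"}

LOOP_LIMIT = 10  # pre-defined limit to stop pathological CNAME loops

def get_name(ip):
    """Map a Netflow srcIP to its DNS name (getName(IP) in the text)."""
    return a_records.get(ip)

def resolve_chain(ip):
    """Follow CNAMEs until no further name is found or the loop limit hits."""
    chain = []
    name = get_name(ip)
    while name is not None and len(chain) < LOOP_LIMIT:
        chain.append(name)
        name = cname_records.get(name)  # getName(Name) in the text
    return chain

netflow_record = {"timestamp": 1718000000, "srcIP": "203.0.113.7",
                  "dstIP": "198.51.100.9", "bytes": 12345}
print(resolve_chain(netflow_record["srcIP"]))
# -> ['edge42.cdn-provider.example', 'media.ott-app.example']
```

The resulting list of domain names is what gets attached to the Netflow record and handed to the classification stage.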

Diagram of DFA architecture
  2. CDN-APP Classification. The final output extends the traffic flows with CDN domain and OTT-Application information (including BGP). See the sequence of events below:

2.1) DNS-Netflow data is correlated with BGP to gain more knowledge about the traffic paths (source AS, handover AS, nexthop AS, and destination AS).

2.2) Regarding the CDN domain, the getCDN() function uses the first URL in the list of domain names and selects the second-level domain (2LD) and top-level domain (TLD). For the latter, this component makes use of the Public Suffix List (PSL) database[2] published by Mozilla.

2.3) A second lookup goes through the list of domain names to obtain an OTT-Application. The getAPP() function uses a URL-APP database to associate a specific domain name or URL with the OTT-Application it belongs to (e.g., domains belonging to Disney+, AmazonPrime, etc.). This URL-APP database is a customized, curated list that continually evolves as new sources are discovered.
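Steps 2.2 and 2.3 can be sketched as follows. The public-suffix subset and the URL-APP mapping are stand-in examples; the real DFA uses Mozilla's full Public Suffix List and its curated URL-APP database.

```python
# Hedged sketch of the CDN/APP classification step (2.2 and 2.3).
PUBLIC_SUFFIXES = {"com", "net", "co.uk"}  # tiny stand-in for the full PSL

def get_cdn(domain_chain):
    """Keep the 2LD + public suffix of the first URL in the chain (getCDN())."""
    labels = domain_chain[0].split(".")
    for i in range(1, len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            return ".".join(labels[i - 1:])  # 2LD + suffix
    return ".".join(labels[-2:])  # fallback: last two labels

URL_APP = {"media.ott-app.example": "Disney+"}  # hypothetical URL-APP database

def get_app(domain_chain):
    """Return the first OTT-Application matched along the chain (getAPP())."""
    for name in domain_chain:
        if name in URL_APP:
            return URL_APP[name]
    return "unknown"

chain = ["edge42.cdn-provider.com", "media.ott-app.example"]
print(get_cdn(chain), get_app(chain))  # -> cdn-provider.com Disney+
```

Note that consulting the PSL matters for suffixes like "co.uk", where the registrable domain spans three labels rather than two.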

DFA architecture to front end (diagram)

DFA correlates flow and DNS data to determine where network traffic originates. It identifies CDN domains and OTT-Applications within the source AS based on the pairing of DNS A and CNAME records. This novel and future-proof way of identifying applications can typically be used by:

  • Firstline maintenance (NOC) teams, to respond to customer complaints, which are generally about applications, not IP addresses or ASes.

DFA also includes an easy-to-understand, multi-dimensional dashboard with a network-oriented view by default and the option to unlock two new dimensions, allowing traffic flows to be visualized in an application-oriented view across various OTT-Applications and CDN domains.
Screenshot BENOCS DNS Flow Analyzer

Get in touch with us if you’d like to learn more about DNS Flow Analyzer and see it in action!



Happy birthday, BENOCS!

BENOCS 10 years logo

It’s time to celebrate! BENOCS turns 10 years old in June. We spoke to BENOCS CTO and co-founder Ingmar Poese to learn about the early days and his experiences since co-founding the company back in 2013.

How did BENOCS come about?

Believe it or not, it was kind of an accident. At the time I was doing research at T-Labs for my PhD, and they were looking for ideas for founding a company. Oliver Holschke and I came up with an idea for a business to serve large telco operators. He took care of the business side of things, while I did the technical stuff (I didn’t know much about business – and frankly, I’m still not really interested in that part).

The project was then in limbo for a year; we weren’t sure if it would take off at all. Then, in 2013, we got the go-ahead, and it was then that the company was conceptually founded. Back then the name ‘Berlin Networks Engineering’ (BNE), which is what we wanted to call the company, was already taken, so we couldn’t use it.

We wanted something with “Berlin” in the title, so we looked through the available web domains for something that started with “BE”. We came across “BENOCS” and thought it sounded good.

What was the first BENOCS product and how did it look?

The product was “invisible”; it simply looked cryptic and mathematical. It was originally an ISP-CDN collaboration tool and was active in the backend only. It was designed to be completely transparent, not to be seen. Kind of like IP addresses: they exist, but generally no-one thinks about them; they just work.

However, there was one crucial issue: to get enough ISPs, you need a critical mass of CDNs, and vice versa. Today we still have this product and it’s called Flow Director. I still think it has huge potential in the market. Because it’s hard to see, though, it’s hard to sell.

Incidentally, BENOCS Flow Analytics was also an accident, which happened while we were developing Flow Director. So that’s something.

What have you learned in the past 10 years about the telco industry and business in general?

The first lesson I learned about the industry is also the biggest: telcos move slowly. It’s incredibly hard to convince them of new ideas, especially for a product as close to the core as ours, which makes it almost impossible to deploy in new networks. When it does work out, though, it’s extremely rewarding for both parties. I like working with large, complex systems, and you don’t get much larger than telcos.

Regarding business in general, I’ve learned one needs to be persistent. Because you can never make each and every customer 100% happy all the time, it’s often necessary to make compromises, finding the thing that works best for the majority. Then you can build out and develop the product further from there. This requires persistence and keeping your eye on the bigger goal.

Thirdly, I learned a few interesting things about going from academia into industry. Some see you as “giving up” on research. Others decide you’ve not yet gained enough industry experience. You’re stuck between a rock and a hard place; it’s a tough place to be in. For me, research is still the biggest inspiration for my work.

Ingmar Poese, CTO & co-founder of BENOCS

And what have you learned about yourself?

I’ve come to realize just how much I dislike bureaucracy. Having processes in place for the sake of processes sucks. I’ve also learnt that I don’t enjoy not writing code, and unfortunately I spend more time not writing code than I would prefer. I’m constantly learning soft skills: my people skills have come a long way, but I’m still working on them.

And I’ve learned that if I don’t stop myself, I work too much.

How do you ensure your team works healthily?

I encourage healthy working hours. I try not to contact people outside of regular working hours except for work-related emergencies, and I try to avoid overly tight deadlines. That said, I also expect everyone in my team to take responsibility for themselves. If you work too much or too little, you need to tell me so we can sort it out.

As a company we generally want to keep our employees happy and healthy. After all, our team is the secret to making the best products for the telco industry: without them, we cease to exist.

What would you do differently if you started BENOCS today, in 2023?

Building our software would be a very different process today. The technology of 2023 is different; there are many tools now available that didn’t exist 10 years ago. As a result, some technical decisions could today be made differently.

I wouldn’t do much differently regarding the team itself and building up the business. A negative experience with someone in the team at the very beginning taught me that it is better to leave a position unfilled for longer than to hire the wrong person for the job. As I already mentioned: each member of our team plays an integral part in keeping BENOCS running smoothly and successfully. We are very conscious of whom we take on board and whether they are the right fit for the company. We are definitely doing something right: we have a fantastic team, and I am grateful that they also chose us.

The next 10 years are going to be awesome.

Anomaly detection done right

Screenshot of an Anodot anomaly alert and next to it a screenshot of the corresponding incident in BENOCS Flow Analytics

Network visibility is our specialty: We can show you things about your internet traffic that you never knew. By utilizing various internet protocols such as IGP, BGP, SNMP, NetFlow and DNS, BENOCS Flow Analytics already helps you and your company – among other things – improve peering negotiations, prospect new customers, improve capacity planning and, of course, improve customer service.

You’ll forgive me the 1980’s infomercial reference, but here it comes:

But wait, there’s more!

Recently we teamed up with AI champs Anodot to give our customers even more insights into their network traffic.

You might be familiar with the following scenario: You sign up for notifications to alert you of important incidents affecting your network.  You set manual thresholds, outside of which the alarm bells start ringing: Here! Something important needs your immediate attention!

So, you quickly check the situation. False alarm.

Soon after, the next alert lands in your inbox: This time it’s worked. Thank goodness for alerts!

Then another alert comes in. And another. More false alarms.

Suddenly you are inundated with alerts, only a small proportion of which are useful. As a result, you generally don’t even bother much with them, deleting them with little more than a brief scan.

Problem solved

Anodot uses artificial intelligence and machine learning to learn normal traffic patterns, detect and correlate anomalies, and create real-time alerts based on deviations from normal network traffic. This approach eliminates the problem of alert “noise”, that is, being spammed by alerts that shouldn’t be alerts in the first place.

Thanks to the integration of Anodot’s autonomous monitoring into BENOCS, one click in your email alert will send you straight to the relevant event in Flow Analytics, where you can study the issue in your network and remediate failures before they impact revenues.

Download the white paper to dive deeper into the combined Anodot and BENOCS anomaly detection solution and get in touch with us if you’d like to know more.

Fast Indexing for Data Streams

Screenshot of the sankey diagram in BENOCS Flow Analytics

Our customers, some belonging to the biggest telecommunications providers in the world, need to monitor and analyze huge amounts of traffic. For this reason, Flow Analytics needs a substantial database behind it. There is no shortage of database management systems on the market, which means we had to do a lot of testing before deciding which one would power BENOCS Flow Analytics.

While the internet is home to massive amounts of data, this data is not static, but rather hurtling through cyberspace like William Shatner on a rocket joyride into space. And there’s not just one William Shatner taking a 10-minute trip: There are countless data transfers happening all the time. This movement means we need to factor in another dimension: time. BENOCS Flow Analytics users need to investigate incidents that occurred in specific time frames, making fast access to specific time ranges while ignoring the rest of the data a basic requirement.

To visualize network traffic in this way we need to measure traffic volume over time, showing the user how the data is behaving on its journey from its origin to its final destination.

Self-healing push architecture

Analyzing network traffic at high complexity and speed is challenging, especially in diverse environments with asynchronous data feeds. However, we love a challenge, and this is exactly the setup BENOCS operates in and has to deal with. Across different network setups, BENOCS unifies the data sources and correlates the incoming network information.

At BENOCS, we process and correlate data feeds of dozens of terabytes each day. The data processing is built around data becoming available from different sources, then being pushed through several jobs. This essentially becomes a data push architecture that processes data as it becomes available.

In the above scenario, three data feeds produce three results of different data types. Furthermore, each feed has its own time resolution as well as its own delay before data should be available; sometimes, however, the data is late. When data is late, processing should not stop, but rather skip the late pieces until they become available. Once they arrive, the corresponding results must be produced as well.

So why ClickHouse?

At BENOCS, we chose to build this architecture with ClickHouse at its core for several reasons. In summary, those are fast indexing and fuzzy matching on data streams.

BENOCS ClickHouse Pipeline

Let’s consider result 2 as an example. It can only be processed when Feeds A and C both have data. However, partial processing is possible when some data from Feed A is missing. In numbers: if Feed A has data ready for 10 of the 5-minute timestamps in a specific hour and Feed C has matching timestamps for that same hour, at least two of the four timestamps in result 2 can be calculated. The other two timestamps must wait until Feed A makes their data available.

ClickHouse solves this problem for BENOCS with fast lookups on the time dimension. By running SELECT DISTINCT queries on the primary indexing column, terabytes of data can be searched in a matter of seconds. This makes checking data availability light-weight despite the heavy data volume.
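The availability check itself is simple once the distinct timestamps per feed are known. Here is a minimal Python sketch of the logic; plain sets stand in for the results of the DISTINCT queries on the time column, and the timestamps are made-up examples.

```python
# Sketch of the availability check: which result timestamps can be computed
# now, and which must wait for late feed data? In production, the per-feed
# timestamp lists come from SELECT DISTINCT queries on the ClickHouse
# primary (time) column; here they are plain lists.

def processable(needed, feed_a_times, feed_c_times):
    """Split result timestamps into ready-now vs. waiting-for-late-data."""
    available = set(feed_a_times) & set(feed_c_times)
    ready = sorted(t for t in needed if t in available)
    waiting = sorted(t for t in needed if t not in available)
    return ready, waiting

# Feed C has the full hour; Feed A is late for the second half.
needed = [0, 900, 1800, 2700]        # result timestamps (15-minute grid)
feed_a = [0, 300, 600, 900, 1200]    # 5-minute grid, data missing later on
feed_c = [0, 900, 1800, 2700]

ready, waiting = processable(needed, feed_a, feed_c)
print(ready, waiting)  # -> [0, 900] [1800, 2700]
```

The waiting timestamps are skipped rather than blocking the pipeline, and are picked up on a later pass once the late data arrives; this is the self-healing behavior described above.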

However, efficiently searching through the timestamps and finding gaps is not all. The same principle applies to the actual data-processing correlation. ClickHouse’s ability to skip data based on time makes table sizes almost irrelevant, as it can zoom in on the needed data efficiently. This makes the processing time for a single time range independent of both the actual table size and the position within the data. This mechanism allows BENOCS to run efficient, self-healing data pipelines in the face of unreliable data feeds.

Fast indexing

Fast indexing is the most important reason BENOCS heavily utilizes ClickHouse. It boils down to ClickHouse offering extremely fast lookups on specific dimensions thanks to its MergeTree table design. Based on the primary key, ClickHouse can skip vast amounts of data in a matter of seconds without having to read irrelevant data at all.

For BENOCS, this dimension is time. In the ClickHouse pipeline we run, lookups based upon time are the first step towards any job being scheduled.

Fuzzy matching

When dealing with different time scales, joining tables usually means unifying the matching columns to obtain exact matches. However, with vastly different timescales (see Feeds B and C), this becomes highly complicated, as Feed B might have multiple different matches for one key in Feed C. Furthermore, other dimensions complicate things due to missing or incomplete data.

This is where ClickHouse’s ASOF JOIN comes to the rescue for BENOCS: it finds the nearest match instead of requiring an exact match in a join. Combined with well-selected WHERE clauses, this becomes a powerful feature that expedites and simplifies queries massively.
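To make the matching rule concrete, here is a small pure-Python illustration of what an ASOF join does: for each row of the fine-grained feed, it picks the nearest earlier timestamp from the coarse feed instead of requiring an exact match. The feed contents are made up; ClickHouse does this natively and at scale.

```python
# Minimal illustration of ASOF-join semantics (nearest earlier match).
import bisect

def asof_join(fine, coarse):
    """fine/coarse: time-sorted lists of (timestamp, value) pairs.
    Returns (t, fine_value, coarse_value) triples."""
    coarse_ts = [t for t, _ in coarse]
    out = []
    for t, v in fine:
        i = bisect.bisect_right(coarse_ts, t) - 1  # nearest ts <= t
        if i >= 0:
            out.append((t, v, coarse[i][1]))
    return out

feed_b = [(0, "b0"), (300, "b1"), (600, "b2")]  # 5-minute resolution
feed_c = [(0, "c0"), (600, "c1")]               # 10-minute resolution
print(asof_join(feed_b, feed_c))
# -> [(0, 'b0', 'c0'), (300, 'b1', 'c0'), (600, 'b2', 'c1')]
```

Note how the 300-second row of Feed B, which has no exact counterpart in Feed C, still joins against the nearest earlier Feed C row.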


BENOCS processes vast amounts of data in ClickHouse, utilizing its powerful engine. The ability to zero in on the needed data while ignoring irrelevant data lets BENOCS build a self-healing data pipeline that turns unreliable and volatile data feeds into stable analysis for its customers.

If you’re a telco provider wanting to optimize your network traffic, drop us a line and register for a free Demolytics account to see BENOCS Flow Analytics in action.

This blog post originally appeared on

Jobs for Ukrainians

BENOCS sources talent from

Like so many others around the world, we at BENOCS are horrified by the invasion of Ukraine. Many of us are engaged as individuals in supporting refugees fleeing the country and those still there. However, we also wondered what we as a company could do to help.

We are 25 employees from 10 different nations, so are no strangers to working in an international team. In fact, we are convinced that our multicultural office enriches our team and creates an all-round great atmosphere.

So we’ve decided to help in the best way we can: by especially inviting Ukrainian full stack developers and data analysts to apply for open positions in our team.

We would like to give two Ukrainian refugees (and their families) a chance to gain a foothold in Berlin, so we have posted our two open positions on, a job platform for displaced persons from and within Ukraine.

If you are from Ukraine and would like to know more, jump on the website above or get in touch with us at! Ми раді вітати вас у нашій команді BENOCS! (We would be happy to welcome you to our BENOCS team!)

Everyone else: Please spread the word!

Work and network traffic after COVID-19

Video streaming traffic during COVID19

Like many of you, we here at BENOCS like to ponder current happenings around the globe. Whether it’s the political state of affairs in various countries, the cost of public transport in a chosen city, or the quality of service at the Vietnamese restaurant down the street: you name it, we’ll debate it. The SARS-CoV-2 pandemic, of course, is no exception. It is a topic that has fueled many a heated debate.

Doing what we do best (traffic intelligence as a service), we decided to take a closer look at the data to see how lockdowns have affected traffic during the SARS-CoV-2 pandemic and what this means for 2022 and beyond.

The Data

Looking at different network vantage points, what really stuck out was the surge in traffic coinciding with the first lockdown in the spring of 2020. That in itself was no surprise. What was surprising, however, and contrary to the expectations of the cynics amongst us, was that as countries went into lockdown, most of the working population was not sitting at home streaming videos. In fact, streaming services in some networks showed no increase in traffic that could not be attributed to seasonal changes and “organic” growth.

What did happen when lockdown arrived was a major shift in the world of work and study. Suddenly, face-to-face communication was in many industries no longer possible; workers and students had to pick up the phone, use online chat, or engage with colleagues, customers, and teachers in video conferences. If you are one of the millions of former office-goers who was thrown suddenly into home office in March 2020, or a student who had to adjust to online classes overnight, you’ll no doubt recall recurring connection issues as your internet connection struggled to keep up with all your video calls.

Business Applications Traffic during COVID19

Also interesting is the increase in traffic during periods of high SARS-CoV-2 case numbers. This suggests a fundamental behavioral change in the way we work: workers and their employers have discovered that presence in the office is no more conducive to productivity than working from one’s sofa (which we do not recommend, by the way; please be sure to work ergonomically!).

What does this mean for 2022 and beyond?

Will these levels of traffic remain? Our answer: yes, partly. When comparing current traffic trends to those of previous years, we found no connection between the severity of lockdowns and the corresponding traffic. Traffic seems to have plateaued at a new level with each wave. This, as stated above, points to behavioral changes and suggests a “new normal” in how we work. Eventually the growth will slow, but it probably won’t return to pre-pandemic levels. We’ve come too far for that.

What do you think will happen to work and study traffic when the pandemic is over?

How can we prepare?

In increasingly complex networks, which are also at the mercy of outside influences such as the SARS-CoV-2 pandemic, network oversight has never been more important.

The scope of the SARS-CoV-2 pandemic was unprecedented: no-one can be blamed for struggling with the situation. Is it even possible to prepare for such an event? Perhaps not directly. However, it has become obvious that effective network oversight is crucial, as is the ability to act quickly and in a targeted way when network traffic surges. Having access to network analytics that deliver fast and reliable data is essential for customer and investor satisfaction. Remember my mentioning our BENOCS debate about the service at the Vietnamese place earlier? Well, the service is great: fast, reliable, and delicious (obviously). They really know how to keep their customers happy. Let us help you keep your customers happy too.

Heatmaps are the best way to view your external links’ behavior

Screenshot of BENOCS Border Planner (formerly known as "External Links")

Network traffic is growing every day. We’ve known this for years; however, the Internet events revolving around the COVID-19 lockdowns showed us how dramatically it can grow over a short period of time. That is why BENOCS created the External Links Heatmap for capacity planning.

If you were following the news in spring 2020, you might remember Netflix announcing a reduction in stream quality for 30 days to ease the strain of video traffic on ISPs. So why did this happen? Well, the (at the time) new lockdown regulations turned the world as we knew it upside down, with the Internet bearing the load. Activities that had been face-to-face, such as work, socializing, education, and entertainment, went virtual, which led to an unprecedented strain on the network. How, you may wonder? For one thing, image content, specifically video, is very heavy and creates a lot of traffic. Additionally, more users went online at the same time, clogging the Internet pipes and straining any through traffic. As a popular video streaming company, you, too, would probably conclude that reducing network traffic was better than cutting off customers.

So, what does this anecdote have to do with BENOCS’ capacity planning? Well, as you can imagine, operators were overwhelmed by those surges in Internet traffic. Additionally, the tools they used to monitor the traffic, specifically external links, were too slow and complex to solve the issues fast enough. How could they know the very basics of their network’s performance? Who was filling their pipes? Where could they logically add more capacity?

What we mean by external links

As someone living in the era of the Internet, you are probably most familiar with the term “link” as an abbreviation for “hyperlink” or website address. As someone who is working in network operations, architecture, engineering or IT, you probably think of the connection or “link” between routers. When we talk about external links, we mean the links between routers sitting at the edge of the Internet backbone. These are the links that exchange traffic between different networks in the internet, e.g. a CDN and an ISP.

Although operators have always kept a close eye on these links, the pandemic proved that their methods needed an upgrade. What they needed was a tool that gave them an immediate overview of their links’ traffic behavior, in as few clicks as possible, so they could see potential overload of interconnects in real time. That is what inspired BENOCS to create the External Links Heatmap.

The External Link Heatmap

The BENOCS External Links Heatmap consists of two main graphs: a time series and a daily peak-utilization table. Both elements contain customizable filters, such as a date range and a utilization-threshold slider. These features allow users to determine which links are over-utilized and for how long, and to define at what percentage of utilization a link is considered highly utilized. On top of that, users can filter for specific routers, interfaces, or AS numbers, as well as by whether the traffic is going into or out of the network.

The pandemic might be slowing down, but Internet traffic growth is not. Therefore, it’s time to consider exploring new options for network traffic analytics before the next sudden surge in traffic.

Are you interested? Check out our product webpage for more information or get in touch directly with us today!

Fastly outage has also led to mixed outcomes

Traffic drop during Fastly CDN outage

Yesterday, Fastly made news headlines after a configuration error led to a widespread failure of commercial and critical apps and websites. This event has not only sparked new general interest in the inner workings of the Internet but also, as most news sources reported, exposed how fragile the Internet can be.

However, in a world as complex as the Internet, it is easy to overlook the networks whose experience differed from those starved of Fastly traffic during this event. As a network-monitoring-as-a-service provider (amongst many other things), we at BENOCS found that not all networks we monitor showed a drop in Fastly traffic. To show you what we mean, let’s compare two networks.

What we saw

In the image above, we can see that one of the networks showed exactly what we would have expected: a large drop in traffic (about half of the average value) with a deep dip shortly before 12:00 CEST, which matches Fastly’s reported time of 9:47 UTC (11:47 CEST).

However, comparing that to a separate network in the image below, we actually see a sharp rise in traffic on that very same day, with a deep dip around the time of the outage followed by a major spike in traffic in the hours that followed.

Traffic spike during Fastly CDN outage

What could have caused the differences in these networks?

As much as we would like it to be, the answer is not so simple. One theory could be that an Internet event, such as an operating system update, a gaming event or download, or possibly a large streaming event, occurred in the second network but not in the first. A second theory could be that traffic was off-loaded into the second network from neighboring networks. Whatever the cause, we can clearly see that the traffic in the second network is nearly double its average throughout the day, while the other network remains at roughly half of its average. Very unusual behavior.

What can we learn from these two networks?

The examples above show that, when we look behind the curtain, the real world is more complex and may show different or mixed outcomes than we originally expected. In a world as complex as the Internet, it is important to remember that not all networks react to events in the same way, and to be aware of what your network is capable of.

Are you looking for a smarter way to see your network traffic and its behavior? BENOCS Analytics may be the product for you. Get in touch with us today to learn more!

Who filled my pipe? Flows on Links can help you figure that out

Screenshot BENOCS Analytics - Module - Flow Explorer

Flow analytics tools, such as our own, are great for seeing how traffic enters and leaves your network in order to determine its behavior. However, when it comes to answering questions such as “who filled my pipe (aka who is consuming valuable backbone capacity)?” or “do I need to raise transit prices?”, you often find yourself limited by the information these tools can provide.

If you are using any near real-time flow analytics today, you are probably impressed with how much it can show you about your network’s traffic in just a Sankey diagram. BENOCS’ Sankey, for example, can show you the dimensions of your traffic spanning all the way from source to destination. It also reveals how traffic has entered and exited the network backbone. However, despite all of these dimensions, the ability to see how traffic travels through your network is often missing. That is why BENOCS now offers Flows on Links: displaying information between network routers that don’t usually export flow (e.g. LSRs). Using information collected from the Interior Gateway Protocol (IGP), the Flows on Links feature gives users the ability to see who is sending traffic through each individual link in the network backbone.

What is IGP and why do we use it?

Although widely ignored in analytics tools, Interior Gateway Protocol (IGP) offers significant benefits when integrated in the data pool. That’s why IGP has been an integral element of BENOCS Analytics from the very start.

IGP provides the full topology overview of a network and, when combined with flow information, can describe the entire path a packet takes through that network. This fills the blind spot between the ingressing and egressing network borders. It also allows flow-based information to be applied even to links which are not exporting any flow. Sounds like magic? That’s probably because it is.
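To make the idea concrete, here is a minimal sketch of how flows can be projected onto internal links: build the IGP topology as a graph, compute the lowest-metric path between a flow’s ingress and egress routers, and attribute the flow’s bytes to every link on that path. The router names, metrics, and flow record below are invented for illustration; a production system would of course use the live topology and real flow exports.

```python
import heapq
from collections import defaultdict

# Hypothetical IGP topology: (router_a, router_b, igp_metric).
# Two possible paths exist from PE1 to PE2; the IGP prefers the cheaper one.
LINKS = [
    ("PE1", "P1", 10), ("P1", "P2", 10), ("P2", "PE2", 10),
    ("PE1", "P3", 20), ("P3", "PE2", 20),
]

def build_graph(links):
    graph = defaultdict(list)
    for a, b, metric in links:
        graph[a].append((b, metric))
        graph[b].append((a, metric))  # treat IGP links as bidirectional
    return graph

def shortest_path(graph, src, dst):
    """Dijkstra over IGP metrics: returns the router sequence src..dst."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, metric in graph[node]:
            if neighbor not in seen:
                heapq.heappush(queue, (cost + metric, neighbor, path + [neighbor]))
    return None

def project_flow_on_links(graph, flow):
    """Attribute a flow's bytes to every link on its IGP shortest path."""
    link_bytes = defaultdict(int)
    path = shortest_path(graph, flow["ingress"], flow["egress"])
    for a, b in zip(path, path[1:]):
        link_bytes[tuple(sorted((a, b)))] += flow["bytes"]
    return dict(link_bytes)

graph = build_graph(LINKS)
usage = project_flow_on_links(
    graph, {"ingress": "PE1", "egress": "PE2", "bytes": 5000}
)
# The lowest-metric path PE1-P1-P2-PE2 now carries the flow's bytes on
# each of its three hops, including links that never exported any flow.
```

Summing these per-link projections over all flows is what yields a traffic figure for links that themselves never export flow records.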

How you can benefit from IGP

With the information provided by IGP, we are able to project individual flows onto links at any point in the network backbone. Then, by correlating IGP with BGP, we are able to display the flow of traffic into, through, and out of the network. These flows include all known dimensions, such as Source, Handover, Ingress, Egress, Nexthop, and Destination, which can be displayed and filtered for each network segment you want to examine. As a user, this means you receive an image of your network traffic that shows you right away which companies are “filling your pipes”.
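As a rough sketch of what this answers in practice, once each flow record carries its dimensions plus the list of internal links it traverses, “who filled my pipe?” becomes a simple per-link aggregation by source. The records, source names, and link IDs below are made up for the example and are not real BENOCS data structures.

```python
# Illustrative flow records carrying the dimensions named above, plus the
# internal links each flow traverses (as derived from IGP).
flows = [
    {"source": "CDN-A", "handover": "IXP-1", "ingress": "PE1",
     "egress": "PE2", "nexthop": "AS64500", "destination": "Eyeball-1",
     "links": ["PE1-P1", "P1-P2", "P2-PE2"], "bytes": 7000},
    {"source": "CDN-B", "handover": "Transit-1", "ingress": "PE3",
     "egress": "PE2", "nexthop": "AS64500", "destination": "Eyeball-1",
     "links": ["PE3-P2", "P2-PE2"], "bytes": 3000},
]

def who_filled_my_pipe(flows, link):
    """Sum bytes per traffic source over all flows traversing a given link."""
    totals = {}
    for flow in flows:
        if link in flow["links"]:
            totals[flow["source"]] = totals.get(flow["source"], 0) + flow["bytes"]
    return totals

# Both flows share the P2-PE2 link, so both sources show up there,
# while PE1-P1 is loaded by CDN-A alone.
shared_link = who_filled_my_pipe(flows, "P2-PE2")
```

Filtering the same records by any other dimension (Handover, Nexthop, Destination, and so on) gives the per-segment views described above.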

If you were to imagine multiple streams all leading to the same river, BENOCS Flows on Links adds a different color to each stream. Therefore, when you look at the river, you can see exactly which drops of water came from which stream (as long as you imagine the colors won’t blend together).

How Can I Access Flows on Links?

BENOCS’ Flows on Links is designed for anyone looking to know who is loading their links, who is (ab)using their network backbone, who should be charged more for transit, and how to plan network upgrades.

Sound like something you need? Get in touch with us today.

BENOCS paired with your current network analytics for a high performing network

Screenshot of BENOCS Demolytics showing the Sankey diagram, flow data and SNMP line, and time series

The Internet is a large and complex system that requires experts across diverse backgrounds to function. Common knowledge right? We think the same logic should apply to the different network analytics tools.

When it comes to choosing the analytics tool that is right for your network, you might be enticed by companies that claim to be a “jack-of-all-trades”. This may sound like the best deal on multiple levels (e.g. installation, hardware, fees), but different types of networks, and the different functions needed to run a network, require different types of network analytics. Would you build a house with a Swiss Army Knife or with a tool box filled with high-quality tools?

With different departments such as network architecture or network security requiring different information, it always seems like the perfect solution is one that can provide features for both of them. What networks really need, however, are several different specialized tools for the same price or less. BENOCS Analytics offers customers a specialized tool that focuses on displaying large network traffic flows for peering optimization, network operations and maintenance.

If you're going to spend the time and money, spend it well

If you were to start researching network analytics tools for enterprise or service providers, network security, DDoS defense, etc., you would find many of those products on the same website from the same company. Given the costs of installation, hardware, and operations, not to mention the time spent just researching the products, these bundles often feel like the best deal. In the end, however, you may end up with a tool that only slightly improves your company’s performance.

At BENOCS, we focus on service provider analytics. Our analytics collect a combination of Netflow, BGP, IGP, DNS and SNMP from your network, aggregate it, and present it to you in a way that makes the most sense to network engineers, peering and transit managers and quality assurance departments. Our data model makes queries zippy, history accessible, overview holistic and hardware affordable.

A company that focuses mainly on DDoS protection, on the other hand, extracts more refined data from the network and presents it in a way that best identifies and combats a DDoS attack the moment it hits. This is a completely different specialization from the one required for network optimization, and one that would only benefit your network security department.

Remember that Swiss Army Knife we mentioned earlier? A great tool for putting together a temporary shelter in the woods during a weekend camping trip, but not for professional craftspeople building a sturdy house. Your employees are craftspeople looking to build and maintain a house. For the same price as buying them all a Swiss Army Knife, you could invest in a tool box filled with specialized tools to fit their individual needs.

Start filling up your tool box

When searching for the right analytics tools to meet your company’s needs, purchase the highest-quality tool each company has to offer, whether it be DDoS protection, network security, enterprise network analytics, or anything else, and get BENOCS Analytics for your service provider network analytics.

Our customers have done the same. Not only have they saved money on service fees, but they have also seen vast improvements in the performance of their employees, which has trickled down to the performance of their network. You know what they say: good network performance leads to happy customers.