The following text is a project done as part of academic work I have been involved with for the last few years at the University of Rijeka – Department of Informatics. It is a short overview of the latest achievements in the field of network automation, with some lab experiments done to test different paths across the network. The work was presented at the 6th International Conference on Information Technologies and Information Society (ITIS2014).

The scope of ITIS events is the application of IT, particularly in the social sciences. The conference also covers a wider range of topics related to IT and computational modeling and analysis, in the context of our Creative Core project “Simulations” and our Research Program “Complex networks”. These include cloud computing, complex systems and complex networks, bioinformatics, graph theory and optimization, statistical analysis, business and industrial processes, logistics, information systems and security.

Okay, let’s go…


dr. sc. Božidar Kovačić & Valter Popeškić (me)
University of Rijeka – Department of Informatics


This work can be described as a progress review that follows the strictly theoretical view of cognitive network proposals and techniques presented in the article “Cognitive networks – the networks of the future” [1]. In the year since that overview of cognitive network theory was presented there has been plenty of news in the field, essentially all of it from the area of network device control plane centralization and its evolution into real products this year.

The buzzword SDN – Software Defined Networking – has taken over from other very popular words like virtualization and cloud. The main goal of this work is to present a short overview of newly featured ideas, protocols and products from the field. The overview of existing technology is then extended with a proposal of possible future improvements that are the main focus of the described research.

The work covers current developments in the networking field and correlates the suggested automated network management methods with network security, network virtualization and a few possible improvements to the whole ecosystem that may run the networks of tomorrow.

Key Words: Software Defined Network, NFV – Network Function Virtualization, BGP AS-PATH prepending, metric

1 Introduction

A cognitive network is a network consisting of elements that reason and have the ability to learn. In this way they self-adjust to different unpredictable network conditions in order to optimize data transmission performance. In a cognitive network, decisions are made to meet the requirements of the network as an entire system, rather than those of individual network components. The main reason for the emergence of cognitive networks is to build intelligent self-adjusting networks and at the same time improve performance. Intelligent self-adjusting networks will be able to measure network state, run different probes, and convert all of that into statistical data used to determine the ideal network operating state for many tunable parameters.

2 New type of networks

2.1 Network planes and their function

To be able to further explain the centralization of network device configuration, it is important to list the different parts of common networking. The first word that will emerge is the plane. A plane is a networking context describing one of the three components of networking function: the control plane, the data plane and the management plane. Each plane carries a different type of traffic. The data plane carries user traffic and is often called the forwarding plane. The control plane carries signaling traffic and is used by the protocols running on network devices to synchronize their functions. Finally, the management plane carries administrative traffic and is basically a subset of the control plane. We configure a particular device using the management plane. Once configured, that device can calculate and make routing decisions using its routing protocol, which in turn uses the control plane to communicate with the same routing protocol on other devices in the network. In the end the device will forward received traffic based on decisions made in the control plane, which are more or less determined by our settings in the management plane.

Today's networks are distributed. The devices mostly used in the real world have all planes implemented on each network device, which means that management and every other network function are distributed and every device is managed locally. For example, the routing process runs on every router in the network, and based on information received from its neighbors each router chooses where to forward traffic. Note that information about the best path is processed locally in the device's operating system.

2.2 Software defined networks – SDN

“SDN is the physical separation of the network control plane from the forwarding plane, and where a control plane controls several devices.” [2]

Open Networking Foundation

To give some insight into the future of network technology, this part shortly describes the different terms of the software defined networking world. As stated earlier in this work, network device functionality comprises three planes of operation: the management plane, the control plane, and the forwarding or data plane. In the old-fashioned mode all three planes are part of every networking device.

The idea behind the proposed modern software-driven devices is that they do not have a completely distributed management and control plane on every unit. The idea behind SDN implementations is to have a centralized SDN controller that enables centralized control over configuration. Although this sounds reasonable, it is more accurate to say that an SDN deployment comprises a few SDN controllers, because like any other networking system it needs some sort of redundancy[1] in order to have stability, resiliency and even better scaling of the system.

Each SDN controller is able to connect to network devices and collect network state details, from which it builds a network model. From this model the controller can show the network state to the user through a specific user interface, and it can collect user commands that make it generate and apply changes to the devices in the network.
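The controller loop described above can be sketched in a few lines. This is a minimal, illustrative model, not tied to any real controller platform: the class name, link-report method and cost values are all hypothetical, and path computation is plain Dijkstra over the collected link state.

```python
from collections import defaultdict
import heapq

class NetworkModel:
    """Toy model of the state an SDN controller builds from collected device data."""

    def __init__(self):
        self.links = defaultdict(dict)  # device -> {neighbor: link cost}

    def report_link(self, device, neighbor, cost):
        # In a real controller this data would arrive via southbound polling/telemetry.
        self.links[device][neighbor] = cost
        self.links[neighbor][device] = cost

    def shortest_path(self, src, dst):
        """Dijkstra over the collected model -- the controller's path computation."""
        dist, prev = {src: 0}, {}
        pq = [(0, src)]
        while pq:
            d, node = heapq.heappop(pq)
            if node == dst:
                break
            if d > dist.get(node, float("inf")):
                continue
            for nbr, cost in self.links[node].items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(pq, (nd, nbr))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path))

model = NetworkModel()
model.report_link("r1", "r2", 1)
model.report_link("r2", "r3", 1)
model.report_link("r1", "r3", 5)
print(model.shortest_path("r1", "r3"))  # the two-hop path is cheaper than the direct link
```

A real controller would additionally push the computed path into the devices' forwarding tables; here the model only answers the path question.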

2.3 Most advantageous SDN feature – Automation

The best thing about SDN is not the ability to configure all devices from one centralized place, though that is also a good thing. The best thing is that the SDN controller can do it automatically. The controller has the ability to adapt, making its own decisions and changing the configuration based on end-to-end visibility of the current network state. The controller makes decisions by computing the flow paths in its software; that is the main reason this new networking concept is called SDN – Software Defined Networking.

This kind of functionality will enable us to have more than just the destination-based, hop-by-hop layer 3 forwarding we do today. Maybe we will finally see an easy-to-manage, large-scale orchestration and provisioning toolset that lets us apply security and QoS policies at different places in the network, dynamically tuned to the current traffic state. Maybe even dynamically adjusted policy-based routing.

2.4 Misconception regarding SDN

SDN is regularly misinterpreted. There are more than a few sources stating that SDN is all about centralizing the control plane. In fact, SDN is a concept that defines how to centralize the configuration of all devices in the data network system.

Decisions and forwarding are still done in a distributed fashion, and that is the way it should remain in the future. The main goal of the SDN concept is strictly defined: to let us configure all devices from a centralized place and to provide a means of automatically tuning network parameters based on the current network state. That way we no longer need to connect to all 500 different devices in order to configure how they forward traffic or, for example, to tune some routing path to avoid congestion. The controller that is in charge of controlling and configuring our 500 devices needs to be distributed into several segments or places across the datacenter in order to provide redundancy and make the system more scalable. You would then connect to one of those controllers and make some changes; that controller syncs with the other controllers and pushes the configuration to the devices where it is needed. The most important thing to emphasize here is that there will be no fully centralized networking system. The system needs to stay distributed, because that is one of the main characteristics of networking after all.

2.5 Changing how the network planes function

SDN in theory separates the data plane and the control plane by centralizing the control plane. As in the Open Networking Foundation definition of software defined networking quoted above, the short version is that we want to take the control plane out of every device and centralize it on some sort of controller. The idea behind this has proven to be a good one. Centralizing the control plane, together with a simple yet robust protocol (OpenFlow[2]) used to send configuration change information to each network device, could result in a stable and highly automated network system.

2.6 OpenFlow

An SDN controller uses the OpenFlow protocol to send changes to the forwarding table of every network device. OpenFlow is a low-level tool that brings a new way of controlling the configuration of the forwarding table in a network switch from a central location. It is basically a protocol with an API that the SDN controller uses to configure network devices [3]. The SDN controller has a user interface where users can configure some network component, and the controller then sends the configuration to the device using the OpenFlow protocol.
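The match-plus-actions shape of an OpenFlow flow entry can be illustrated schematically. The sketch below is in the spirit of an OpenFlow 1.0 flow_mod, but the field names and the dict representation are illustrative only, not the actual OpenFlow wire format.

```python
def make_flow_mod(in_port, dst_ip, out_port, priority=100):
    """Build a schematic flow entry: match on header fields, apply actions."""
    return {
        "match": {"in_port": in_port, "ipv4_dst": dst_ip},
        "actions": [{"type": "OUTPUT", "port": out_port}],
        "priority": priority,
    }

class Switch:
    """A switch that installs flow entries pushed down by the controller."""

    def __init__(self):
        self.flow_table = []

    def install(self, flow_mod):
        # Higher-priority entries are matched first, as in a real flow table.
        self.flow_table.append(flow_mod)
        self.flow_table.sort(key=lambda f: -f["priority"])

    def forward(self, in_port, dst_ip):
        for f in self.flow_table:
            m = f["match"]
            if m["in_port"] == in_port and m["ipv4_dst"] == dst_ip:
                return f["actions"][0]["port"]
        return None  # table miss -- in OpenFlow this would be punted to the controller

sw = Switch()
sw.install(make_flow_mod(1, "10.0.0.2", out_port=2))
print(sw.forward(1, "10.0.0.2"))  # -> 2
```

The key point the sketch shows is the division of labor: the controller decides and constructs the entry; the switch only installs and matches it.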

2.7 Network Function Virtualization – NFV

There are several advantages that come with virtualized network management. One of them is surely cutting the cost of equipment. In the future there is a big chance that you will be able to buy very cheap, simple hardware for your network that supports OpenFlow. Of course, there will always be the possibility to get very complex, feature-rich devices running full IOS[3] or JUNOS[4] firmware but still controllable with OpenFlow. But that is only the management virtualization.

Network function virtualization also covers firewalls and load balancers, SIP gateways and network appliances of different kinds. NFV basically means putting network services on generic x86 hardware. The thing to point out here is that NFV also includes bare-metal installation of network services and operating systems, not only virtualizing them on some sort of hypervisor[5]. The idea is to run those services on generic hardware, skipping single-vendor appliances.

Virtualizing network devices and their services allows us to simply move the network part together with the applications running on our servers. This facilitates disaster recovery and cloud migration: load balancers and firewalls will follow the application when you move it somewhere else in the datacenter, to another datacenter, or to the cloud.

2.8 Virtual firewalls today

Virtualization of firewalls is one of the most interesting parts of the whole NFV process. At the same time it is the most controversial one, because of the security, stability and performance implications. This work describes some of the most significant characteristics of today's virtualized firewalls, with performance and security mostly in focus. A later chapter is specifically intended to give some ideas for security measurement and security improvement inside a virtual network segment.

Most networking devices running layer 4[6] to layer 7 functions today run on x86 CPUs. Because of this, it is possible to virtualize all of those appliances and offer them as VM[7] appliances running inside a particular virtual machine.

The first reason to virtualize firewalls is the inability to move a physical appliance, which is particularly limiting for firewalls. A virtual firewall can migrate with the workload: when we migrate a VM to another host, the virtual firewall moves together with it. Of course, this is only possible if we have a separate virtual firewall for each tenant, for each VM. That may sound like a drawback, but it is actually the positive, best-case scenario. We virtualize firewalls because we want to move them together with the whole VM server, but we also want a separate firewall in front of every server. This kind of implementation is per se futuristic, but it would give us granularity and simplicity in security management and firewall configuration. We could, in this way, have a template for a newly created VM that creates a separate instance of a virtual firewall, puts preconfigured rules on that firewall, and attaches it to the newly provisioned VM. Deployment and provisioning flexibility enables us to deploy per-tenant/per-application firewalls that get their first-time configuration from a template and are much easier to configure and, later on, to manage.

From here we see that some of the most obvious advantages are transport network independence, easy configuration management, workload mobility, and simple, automatic deployment and provisioning.

On the other side, the main objections are focused on performance, attacks on hypervisors and multi-tenant attacks. Here we need to point out that the drawback is mostly about packet processing in the CPU, which is very expensive from every point of view, particularly for throughput. The main performance issues are related to the commonly used Linux TCP/IP stack in virtual appliances and to hypervisor virtual switch implementations, which add their own CPU-processed I/O overhead.

Those issues are being steadily reduced using different techniques. Different tests were done on TCP/IP stack performance in firewall VMs, and the result of one of them was that replacing the Linux TCP/IP stack with a proprietary one increased performance from 2 Gbps to 15 Gbps. The process includes connecting the VM directly to the physical interface card; if you do not tie the VM directly to the physical interface, you lose the optimizations of that TCP/IP stack because you are going through the hypervisor virtual switch. Another method to increase throughput is TCP offload. TCP offload is basically a technique where you send a large TCP segment to the virtual switch, and the virtual switch forwards that whole segment directly to the physical network interface card. The interface card then slices the data into TCP segments that are sent across the network, skipping the hypervisor processing overhead. This showed a performance increase from 3 Gbps to 10 Gbps with the normal Linux TCP/IP stack. TCP offload can currently handle VLAN-tagged traffic, and the next generation of network interface cards promises support for VXLAN as well.

Bypassing the TCP stack by bypassing the kernel, and in this manner allowing the process to directly access the memory where the packet buffers are, gives great throughput results of up to 40 Gbps through one Xeon server.

3 Security feature proposal

3.1 Introduction

The foremost reason for going down the road of network virtualization in this work was the challenge of giving a measure to one important mechanism that is mostly left out of the virtualized networking layer: security.

The main goal and focus of this work is the measurement of network path security, be it in a virtualized or physical networking environment or in today's mixed environments. In the conditions described here we need to treat the virtualized part of the network communication path as the most difficult part to monitor, and with that the most difficult part on which to protect and secure the communication passing through.

The main goal of this chapter was to solve the main challenge in the process of data path security measurement: data path selection. We needed a traffic engineering solution that would enable us to send probes across different, mostly non-best paths towards a specific destination. To get this to work we decided to influence BGP outbound path selection, mostly with AS-PATH[8] prepending.

Creating the experiment and studying methods of path selection on “normal”, non-virtualized network segments will help us later bring that knowledge into the virtualized networking environment. Starting the experiment with BGP and AS-PATH prepending was a natural and simple way to get results with path selection. Path selection will then be the starting point for building a system of probes that tests actual paths in virtual and non-virtual environments.

Looking ahead to future development of the mentioned method, the main purpose of this metric is its usage in path selection mechanisms. The metric could be incorporated into future versions of routing protocols and used for metric fine-tuning in current robust routing protocols. As a more concrete use case, it would help calculate a real-time security metric that lets our routing protocols select the most secure path for every communication across the network that needs this kind of communication quality [4, 5, 6, 7]. In the virtualized world it would enable us to select different virtual paths for VM intercommunication: one path, perhaps through a virtual firewall between VMs, for secure communication of user applications, and a direct, completely “free” L2 path between those same VMs where non-private system communication is needed.

3.2 Self-estimating the need to enforce the usage of network path security metric

The security metric implementation assumes that it will be set up to fall back to standard routing whenever there is no need for a secure communication channel. This can be the case for video and audio streaming in multimedia serving, where transfer speed and jitter control are of far greater importance than transfer security. One suggestion may be to apply this security policing only to TCP and not to UDP traffic. For the details of the concrete final implementation of the metric, more detailed testing and simulations need to be done; simulating different real-life usage situations will let us learn what the best implementation scenario would be.
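The fall-back policy above can be captured as a single decision function. This is only a sketch of one possible policy; the application categories, the TCP/UDP rule and the function name are all illustrative assumptions, not a finalized design.

```python
def use_security_metric(protocol, application):
    """Decide whether path selection should use the security metric,
    or fall back to standard (fastest) routing.

    Policy values are illustrative: latency-sensitive multimedia traffic
    and UDP are routed normally, everything else gets the secure path.
    """
    latency_sensitive = {"voip", "video-stream", "audio-stream"}
    if application in latency_sensitive:
        return False   # speed and jitter matter more than path security here
    if protocol == "udp":
        return False   # suggested policy: apply the metric to TCP only
    return True

print(use_security_metric("tcp", "banking"))       # True  -> secure path
print(use_security_metric("udp", "video-stream"))  # False -> standard routing
```

In a full implementation this predicate would sit in front of the route selection step, choosing between the security-metric path and the protocol's ordinary best path per flow.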

3.3 Idea

The idea of a security metric starts from standard routing protocols. In the beginning we decided to use BGP – Border Gateway Protocol – implemented for an IPv6 network. Multiprotocol BGP for IPv6 was the best candidate for the start of the experimentation, giving the ability to transfer different information inside the extended community attribute. Other suggestions and ideas included the usage of IPv6 RH0 headers. The additional IPv6 RH0 extension header has the ability to select way-points towards the destination for each packet sent. IPv6 extension header space is practically unlimited in size, since it can expand into data payload space; from our calculation it can be used to insert up to 90 way-point IPv6 addresses in every packet. Using these two methods, layer 3 (IPv6) and layer 7 (MBGP) respectively, gives us flexibility in measuring and applying path security metrics.
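A type 0 routing header is simple enough to construct by hand. The sketch below packs the fixed RH0 fields (per the RFC 2460 layout: next header, header extension length in 8-octet units, routing type 0, segments left, reserved word) followed by the way-point addresses; the addresses used are documentation-prefix examples, not values from the experiment.

```python
import struct
from ipaddress import IPv6Address

def build_rh0(next_header, waypoints):
    """Pack an IPv6 type 0 routing header with the given way-point
    addresses and return the raw bytes."""
    n = len(waypoints)
    # Hdr Ext Len is counted in 8-octet units, excluding the first 8 octets;
    # each IPv6 address is 16 octets = 2 units.
    hdr_ext_len = 2 * n
    if hdr_ext_len > 255:
        raise ValueError("too many waypoints for a single RH0 header")
    header = struct.pack("!BBBBI", next_header, hdr_ext_len, 0, n, 0)
    for addr in waypoints:
        header += IPv6Address(addr).packed
    return header

rh0 = build_rh0(6, ["2001:db8::1", "2001:db8::2"])  # next header 6 = TCP
print(len(rh0))  # 8 fixed octets + 2 * 16 octets of addresses = 40
```

Note that the 8-bit Hdr Ext Len field is what ultimately bounds how many way-points fit in one header; the 90-address figure quoted above comes from the authors' own calculation including payload-space expansion.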

In the routing process there are always multiple paths to each destination. More routes exist in the process of searching for a way to get packets to their destination: those routes are processed inside the router and inserted into the RIB[9] (Routing Information Base), but only the best one is selected for insertion into the FIB[10] (Forwarding Information Base). The route inserted into the FIB is the one the device uses to forward traffic to the next device towards the destination. The question we are asking is whether the path we are inserting into the FIB is also the most secure one. To determine that, we need one more test before selecting which RIB route will be inserted into the FIB. For testing purposes, in order to test paths that are not yet in the FIB, we used IPv6 RH0 headers. IPv6 has the ability to send packets across specific intermediary next-hop addresses before going to the actual destination. In this way we can force test datagrams to cross different paths and calculate which of them was received on the other side with fewer or no transmission errors. There are different ways to determine whether a datagram experienced attacks while crossing the path towards the destination, so we can tell whether a path has some attacker activity going on at that specific moment. Further experimentation showed that the usage of RH0 headers in IPv6 is an excellent choice only in theory. In the first phase of testing it became clear that today's networking equipment is, more often than not, configured with source-based routing disabled by default, and will simply drop all of our packets carrying RH0 headers. The next step was therefore to run the experiment using BGP and the AS-PATH prepending traffic engineering technique.
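The extra selection step described above, probing RIB candidates before installing one into the FIB, can be sketched as follows. The route structure, the probe callback and the 0.8 threshold are illustrative assumptions standing in for the real per-path security measurement.

```python
def select_fib_route(rib_routes, probe):
    """From all RIB candidates for a prefix, pick the route to install in
    the FIB: among routes whose probed security score passes a threshold,
    prefer the one with the lowest protocol metric.

    `probe(path)` is a stand-in for the per-path security measurement
    (e.g. sending test datagrams across that path and scoring the result).
    """
    SECURE_ENOUGH = 0.8
    secure = [r for r in rib_routes if probe(r["path"]) >= SECURE_ENOUGH]
    candidates = secure or rib_routes  # no secure path -> plain best-path fallback
    return min(candidates, key=lambda r: r["metric"])

rib = [
    {"path": ["as100", "as200"], "metric": 10},            # shorter, but risky
    {"path": ["as100", "as300", "as200"], "metric": 20},   # longer, but clean
]
scores = {("as100", "as200"): 0.4, ("as100", "as300", "as200"): 0.9}
best = select_fib_route(rib, lambda p: scores[tuple(p)])
print(best["path"])  # the longer but more secure path wins
```

The fallback branch mirrors the fall-back-to-standard-routing behavior from section 3.2: when no path clears the threshold, the ordinary metric decides alone.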

4 Experiment and Analysis

The experiment was done using 90 virtual Quagga routers running Zebra OS. Those routers run specially written routing daemons that enable BGP, OSPF and other routing protocols on them; the protocols are implemented according to the RFC standards. The fact that the solution is open source enabled us to take the experiment to the next level. Unix-based Quagga routers not only let us run a simplified version of an IPv6 Internet network, but also gave us the possibility to influence and change the mechanics inside the routing protocols by different means, directly inside the router OS [8]. That would not be possible with vendor-specific hardware that does not give the owner control over the internals of the operating system.

The experiment was done in the following steps. First we established eBGP peering between every router and its neighboring devices, making more than 300 peering connections between the 90 devices. This gave us a real-world scenario of Internet customer and ISP interconnection inside our virtualized environment.
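With 90 routers and 300+ peerings, the per-router configuration is best generated rather than typed. The sketch below emits a minimal Quagga bgpd stanza for one router's IPv6 eBGP peerings; the addresses and AS numbers are illustrative lab values, not those of the actual experiment.

```python
def bgpd_config(local_as, neighbors):
    """Generate a minimal Quagga bgpd configuration for one lab router.

    `neighbors` is a list of (ipv6_address, remote_as) pairs. IPv6 peers
    must also be activated under the ipv6 address-family for MP-BGP.
    """
    lines = [f"router bgp {local_as}"]
    for addr, remote_as in neighbors:
        lines.append(f" neighbor {addr} remote-as {remote_as}")
    lines.append(" address-family ipv6")
    for addr, _ in neighbors:
        lines.append(f"  neighbor {addr} activate")
    lines.append(" exit-address-family")
    return "\n".join(lines)

cfg = bgpd_config(65001, [("2001:db8:12::2", 65002), ("2001:db8:13::2", 65003)])
print(cfg)
```

Running one such generator over an adjacency list of all 90 routers produces every bgpd.conf in one pass, which is exactly the kind of repetitive task that makes the lab reproducible.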

Figure 1: Virtual Quagga routers running Zebra OS in a GNS3 lab, simulating an Internet BGP network for the AS-PATH prepending experiment.

By selecting different AS numbers on the devices, every device was basically a representation of a whole customer or a whole ISP, which was simple enough for our purposes. After making the connections and configuring IPv6 addressing and BGP peering for all devices, we continued to the main part of our experiment.

One router at the edge of the environment was selected as the source of the testing, and another, from the other side of the environment, played the role of the destination.

We advertised the first /48 prefix out of our router to the “Internet”, and after a short period of time all routers had learned the prefix and decided what their best path to it across the network would be. Our source router also learned about the networks of the other ASes across the whole network. After this short period we could look at the BGP routing table inside our source router and read out all AS numbers that exist on the network, and also see which way our router decided to send packets forwarded towards our selected destination device.

Looking at the route to our destination device on the source router, we can read out all the AS numbers that form the interconnections towards the destination.

The next step was to subnet our /48 prefix into a large number of /128 host networks. This gives us the possibility to advertise about 2^80 ≈ 1.2 × 10^24 different host prefixes.

We selected the AS-PATH prepending method so that each of those host prefixes can be advertised with one or more AS numbers prepended to its AS_PATH attribute. If our destination prefix route in BGP showed 10 different interconnecting AS numbers, we can advertise different /128 routes, each with a different permutation of those AS numbers prepended, to get different paths back to our subnet. This traffic engineering method is then used to test each of those paths in order to find the path with the best security metric among them. After the test, we can use that path for the whole subnet by advertising the /48 prefix with that combination of AS numbers prepended to the AS-PATH.
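The pairing of host prefixes with AS-number permutations can be sketched directly with the standard library. This is an illustrative enumeration of the probe advertisements, not the actual experiment code; the prefix and AS numbers are example values.

```python
from ipaddress import IPv6Network
from itertools import islice, permutations

def probe_advertisements(prefix, transit_ases, count):
    """Pair /128 host prefixes carved out of `prefix` with permutations of
    transit AS numbers to prepend -- one permutation per probe advertisement.
    Each prepend combination repels a different set of transit ASes on the
    return path, steering traffic onto a different route back to us."""
    hosts = IPv6Network(prefix).subnets(new_prefix=128)  # lazy generator, ~2**80 subnets
    perms = permutations(transit_ases)
    return [(str(net), list(p)) for net, p in islice(zip(hosts, perms), count)]

ads = probe_advertisements("2001:db8::/48", [100, 200, 300], count=4)
for net, prepend in ads:
    print(net, "prepend", prepend)
```

Because `subnets()` is a generator, enumerating a handful of the ~2^80 possible /128s costs nothing; the number of usable probes is bounded by the permutations of the transit ASes, not by the address space.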

The experiment showed that the idea of sending data packets across different paths towards the same destination works. It proved that bypassing the best-path BGP route selection mechanism is possible, which lets us test non-best paths (from BGP's point of view) for possibly better performance than BGP's selected best path. The BGP decision process is complex, but it is unable to select a path based on real-time security measurement or any other live performance metric.

This could lead to a new, reactive routing protocol feature that tests different routing paths before selecting the best one, and thus has the ability to circumvent network congestion. Congestion of this kind is a major issue in the US today, and high-quality multimedia streaming services in an increasing number of countries will further increase congestion on ISP interconnection peerings. A reactive BGP metric controlled by an SDN controller of some kind, able to react when a congestion threshold is crossed, would surely give SDN one more reason to exist.

5 Conclusion

This work is written as an early insight into the new virtualization technology trends called SDN and NFV. Furthermore, considerable effort was invested in testing the idea that there is a better way to calculate the best path for data traffic, by taking into consideration real-time probing of secondary way-point paths. SDN is a new, more flexible and responsive way to implement our idea.

SDN as a technology is not new, but for a long time it was trapped in theory. There was a considerable delay in network device virtualization, considering the impact virtualization technology has had on other IT departments and their existing devices such as servers. Virtual networking devices were not on the path to becoming reality for a long time. The emerging virtualization technology in data centres, with millions of virtual server instances, gave a big push to network function virtualization as well as to the development of centralized automated controllers to run those devices.

This article is not only about SDN controllers in future networking implementations, although it does try to give an overview of that part of the networking technology. The main reason for this work is to lay the ground for future research in making networks more secure. Analyses were made of network flows within networks of different kinds, virtual and physical. The idea that emerged was that some security measurement methods are missing which would enable us to improve network path performance by a large percentage. After a period of research using different sources, the idea becomes clearer every day. Perhaps this work will be a starting point for the development of a part of the SDN feature set that is yet to be made.

6 References

  • V. Popeškić, B. Kovačić, “Cognitive networks the network of the future.” Society and Technology 2012 / Plenković, Mario (ed.). Zagreb: HKD & NONACOM, pp. 243-254 (ISBN: 978-953-6226-23-8), July 2012.
  • N. McKeown et al., “OpenFlow: Enabling innovation in campus networks.” ACM SIGCOMM Computer Communication Review, April 2008. Retrieved 2009-11-02.
  • A. Abdelkefi, Y. Jiang, B. E. Helvik, G. Biczók, A. Calu, “Assessing the service quality of an Internet path through end-to-end measurement.” Computer Networks, vol. 70, 9 September 2014, pp. 30-44, ISSN 1389-1286.
  • N. Brownlee, “Traffic Flow Measurement: Meter MIB.” RFC 2720, 1999.
  • Y. Zhang, M. Roughan, C. Lund, and D. Donoho, “An information-theoretic approach to traffic matrix estimation.” In SIGCOMM ’03: Proceedings of the 2003 conference on Applications, technologies, architectures, and protocols for computer communications, pp. 301-312, Karlsruhe, Germany, August 2003.
  • M.-S. Kim, et al., “A flow-based method for abnormal network traffic detection.” Network Operations and Management Symposium, 2004. IEEE/IFIP, vol. 1, 19-23 April 2004, pp. 599-612.
  • P. Jakma, D. Lamparter, “Introduction to the quagga routing suite.” IEEE Network, vol. 28, no. 2, pp. 42-48, March-April 2014.
  • W. E. Leland, et al., “On the self-similar nature of Ethernet traffic.” IEEE/ACM Transactions on Networking, vol. 2, no. 1, February 1994.
  • K. Park and W. Willinger, “Self-Similar Network Traffic: An Overview.” In Self-Similar Network Traffic and Performance Evaluation, K. Park and W. Willinger (eds.), John Wiley & Sons, New York.
  • Y. Vardi, “Network tomography: estimating source-destination traffic intensities from link data.” Journal of the American Statistical Association, 91:365-377, 1996.
  • Y. Zhang, M. Roughan, N. Duffield, and A. Greenberg, “Fast accurate computation of large-scale IP traffic matrices from link loads.” ACM SIGMETRICS, 31(1):206-217, 2003.
  • Y. Zhang, M. Roughan, C. Lund, and D. Donoho, “Estimating point-to-point and point-to-multipoint traffic matrices: an information-theoretic approach.”
  • S.-Y. Tseng, Y.-M. Huang, C.-C. Lin, “Genetic algorithm for delay- and degree-constrained multimedia broadcasting on overlay networks.” Computer Communications, 29(17):3625, 2006.


[1] Redundancy – making duplicate critical components and functions inside a system in order to increase reliability of the system.

[2] OpenFlow is a communications protocol that gives access to the forwarding plane of a network switch or router over the network [3].

[3] IOS – Cisco IOS – Internetwork Operating System is software used on Cisco switches and routers.

[4] JUNOS – Juniper Junos is the FreeBSD-based operating system used in Juniper Networks routers and switches.

[5] Hypervisor – A virtual machine monitor (VMM) is software, firmware or hardware that creates and runs virtual machines.

[6] Layer 4 – refers to the OSI (Open Systems Interconnection) model, a conceptual model that standardizes the internal functions of a communication system by partitioning it into abstraction layers.

[7] VM – Virtual Machine

[8] AS-PATH prepending applies only to eBGP sessions, that is, when advertising prefixes to another AS: the local AS number is prepended to the AS_PATH attribute the specified number of times. BGP always prefers the prefix with the shortest AS_PATH. The length of this attribute is probably the best approximation of a classic IGP metric when mapping that concept to BGP, directly comparable to the hop count used in RIP. Using this attribute we can influence inbound traffic path selection, making it one of the most interesting traffic engineering techniques in the BGP routing protocol.

[9] RIB – Routing Information Base

[10] FIB – Forwarding Information Base


