Cloud Native / Networking

How OPNFV Operationalizes Network Functions Virtualization

6 Jan 2016 11:54am

Editor’s Note: This is the second part of a multi-part series detailing Network Functions Virtualization, an emerging set of technologies to virtualize the network layer. The first part of the series, the introduction to NFV, can be found here.

Thanks to cloud computing, networks are experiencing exponential traffic growth. Service providers are under immense pressure to meet the demands of end users and enterprises, so it is not surprising that they are looking toward technologies such as software-defined networking (SDN) and network functions virtualization (NFV) to improve service agility and operational efficiency.

Open source software has become a key ingredient in addressing these challenges. In this article, we will focus on one such open source project, the Open Platform for NFV (OPNFV), a carrier-grade open source reference platform for NFV.

The Big Picture

Backed by the Linux Foundation, OPNFV brings together service providers, cloud and infrastructure vendors, developers and users alike to define a new type of reference platform for the industry, integrating existing open source building blocks with new components and testing to accelerate the development and deployment of NFV.

OPNFV is a collaborative project, involving service providers such as AT&T, China Mobile, NTT DOCOMO, Telecom Italia and Vodafone as well as IT vendors such as Brocade, Cisco, Dell, Ericsson, HP, Huawei, IBM, Intel, Juniper Networks, NEC, Nokia Networks, and Red Hat.

All these industry leaders together aim to advance the evolution of NFV by building a carrier-grade reference platform. The focus is making multiple open source components interoperable, along with achieving consistency and high performance. In this regard, OPNFV works with multiple upstream projects to coordinate continuous integration and testing while filling development gaps.

OPNFV does not aim to develop new standards. Instead, it works closely with various standards groups, such as ETSI's NFV Industry Specification Group, the ONF and the IEEE, to ensure consistent implementation of standards in the NFV reference platform.

OPNFV’s initial focus is on virtualized infrastructure, with OpenStack as the virtual infrastructure manager. The figure below depicts this initial focus.


OPNFV’s initial step is to assemble a minimal set of base infrastructure to enable real-world deployments. This minimal set consists of software for compute (OpenStack Nova, KVM), storage (OpenStack Glance and Cinder), networking (OpenDaylight, OVS and ONOS), infrastructure (RabbitMQ, Pacemaker, MySQL), operations (OpenStack Horizon, Keystone and Heat) and testing (OpenStack Tempest, Robot and Rally).

The first version of OPNFV, called Arno, was released in June 2015. The next release, to be called Brahmaputra, is scheduled for early 2016. With the Arno release, ISO images are made available for deployment using the Fuel and Foreman deployment tools. The release includes various open source components, including OpenStack (the Juno release) and OpenDaylight (the Helium SR3 release), as well as supporting technologies such as Ceph and KVM.


The figure above summarizes OPNFV usage and the associated process. As a user, one can add new open source components, propose feature enhancements, come up with new use cases or run custom virtualized network functions on the reference platform.

The OPNFV installation process can be summarized in two phases. The first phase focuses on setting up the virtual infrastructure manager (VIM). The second phase takes care of OPNFV-specific installation and maintenance, such as common configuration with Puppet manifests and system-level tests with Rally and Robot.
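The ordering of these two phases can be sketched in a few lines of Python. This is purely illustrative: every function name below is invented, and the real Arno installers are Fuel and Foreman, not a script like this.

```python
# Hypothetical sketch of the two-phase OPNFV install flow described above.
# None of these function names come from a real installer; they only
# illustrate the ordering: VIM first, then OPNFV-specific config and tests.

log = []

def deploy_vim():
    """Phase 1: stand up the virtual infrastructure manager (e.g., OpenStack)."""
    log.append("vim-deployed")

def apply_common_config():
    """Phase 2a: apply common configuration (the article mentions Puppet manifests)."""
    log.append("config-applied")

def run_system_tests():
    """Phase 2b: run system-level tests (the article mentions Rally/Robot)."""
    log.append("tests-passed")

def install_opnfv():
    deploy_vim()            # phase 1
    apply_common_config()   # phase 2
    run_system_tests()

install_opnfv()
print(log)  # ['vim-deployed', 'config-applied', 'tests-passed']
```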

OPNFV Projects

OPNFV projects come in a number of different flavors: (a) requirements projects, (b) development projects, (c) integration and testing projects, and (d) documentation.

Requirements projects focus on identifying and addressing gaps in upstream projects such as OpenStack and OpenDaylight. These projects are driven by the need to define use cases and to ensure that the upstream components meet the demands of carrier-grade NFV deployments. Hence, the output of these projects typically includes use cases, gap analyses, requirements, architecture and implementation plans.

The figure below may help the reader appreciate the range of requirements projects:


Here are brief summaries of some of the projects listed above:

DPACC: Improving the performance of VM-centric communications has long been a strong focus in both academia and industry. This project focuses on building a framework for VNF data plane acceleration. The framework includes APIs for VNF portability and, predominantly, resource management functions in environments that include hardware accelerators.
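The portability idea behind such a framework can be sketched as follows: a VNF asks for an acceleration capability and receives whichever backend the platform provides, hardware or software. All class and method names here are invented for illustration; this is not DPACC's actual API.

```python
# Hypothetical sketch of the kind of portability layer DPACC describes:
# a VNF requests a capability and gets whatever backend the platform has,
# falling back to software when no hardware accelerator is registered.

class SoftwareCrypto:
    backend = "software"
    def encrypt(self, data: bytes) -> bytes:
        return bytes(b ^ 0xFF for b in data)  # toy stand-in for a real cipher

class AcceleratorManager:
    def __init__(self):
        self._backends = {}  # capability name -> driver instance

    def register(self, capability, driver):
        self._backends[capability] = driver

    def acquire(self, capability, fallback):
        # Return a hardware driver if one was registered, else the software
        # fallback -- the VNF code path is identical either way, which is
        # what makes the VNF portable across platforms.
        return self._backends.get(capability, fallback)

mgr = AcceleratorManager()
crypto = mgr.acquire("crypto", fallback=SoftwareCrypto())
print(crypto.backend)           # software (no hardware registered)
print(crypto.encrypt(b"\x00"))  # b'\xff'
```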

Promise: Promise is a resource reservation and management project that addresses a specific service provider requirement: operators need to reserve a set of resources for future use, whether in anticipation of traffic bursts or as preparation for natural disasters. Promise addresses this with a capacity management system that manages resource pools for compute, network and storage.
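The core idea of reserving capacity in a future time window can be sketched in Python. This is a minimal illustration of the concept, not Promise's design: the class, the resource names and the conservative overlap check are all assumptions.

```python
# Hypothetical sketch of Promise-style capacity reservation: a pool tracks
# total capacity per resource type and refuses reservations that would
# oversubscribe a future time window.

class CapacityPool:
    def __init__(self, capacity):
        self.capacity = capacity     # e.g., {"vcpus": 8, "gb_ram": 32}
        self.reservations = []       # (start, end, demand) tuples

    def _used(self, start, end):
        """Total demand from reservations overlapping [start, end)
        (a conservative upper bound on peak usage)."""
        used = {k: 0 for k in self.capacity}
        for s, e, demand in self.reservations:
            if s < end and start < e:   # time windows overlap
                for k, v in demand.items():
                    used[k] += v
        return used

    def reserve(self, start, end, demand):
        used = self._used(start, end)
        if any(used[k] + demand.get(k, 0) > self.capacity[k] for k in self.capacity):
            return False                # would oversubscribe the pool
        self.reservations.append((start, end, demand))
        return True

pool = CapacityPool({"vcpus": 8, "gb_ram": 32})
print(pool.reserve(10, 20, {"vcpus": 6, "gb_ram": 16}))  # True
print(pool.reserve(15, 25, {"vcpus": 4, "gb_ram": 8}))   # False: vCPUs would peak at 10
print(pool.reserve(20, 30, {"vcpus": 4, "gb_ram": 8}))   # True: windows don't overlap
```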

Copper: Copper was established to define specific use cases for intent and policy auditing. Various examples are provided, including local and geo-redundancy and security controls. Copper aims to ensure that the virtualized infrastructure complies with the needs of VNF developers and users.

Doctor: Because of the strong high-availability requirements in telecom deployments, the majority of modules and nodes come in redundant configurations, which need to be maintained and managed under all circumstances. Three use cases illustrate typical requirements and solutions for automated fault management and maintenance in NFV: (a) auto-healing, triggered by a critical error; (b) recovery based on fault prediction, preventing service interruption by handling warnings; and (c) VM retirement, keeping a service running during hardware maintenance. Doctor is a fault-management project that works closely with OpenStack to develop a framework for realizing such fault management and maintenance.
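The auto-healing use case boils down to a notification flow: a monitor reports a fault, an inspector marks the host down, and subscribers (such as a VNF manager) are told so they can recover. The sketch below illustrates that flow only; the class and state names are invented, not Doctor's actual interfaces.

```python
# Hypothetical sketch of a Doctor-style notification flow: a critical fault
# flips the host to "down" and immediately notifies subscribers, so recovery
# can start before users notice the failure.

class Inspector:
    def __init__(self):
        self.host_state = {}
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def report_fault(self, host, severity):
        # Only critical errors trigger the auto-healing path in this sketch;
        # warnings would instead feed the fault-prediction use case.
        if severity == "critical":
            self.host_state[host] = "down"
            for notify in self.subscribers:
                notify(host)

healed = []
inspector = Inspector()
# A stand-in for a VNF manager reacting to the notification:
inspector.subscribe(lambda host: healed.append(f"evacuate VMs from {host}"))

inspector.report_fault("compute-3", "critical")
print(inspector.host_state)  # {'compute-3': 'down'}
print(healed)                # ['evacuate VMs from compute-3']
```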

Multisite: NFV infrastructure may, in many cases, be distributed across multiple geographical locations. Such distributed deployments require coordination among multiple virtual infrastructure managers (VIMs). This coordination may range from supporting application-level redundancy across data centers, to managing the virtual and physical network infrastructure, to policy management. This project focuses on enhancing OpenStack to support such multisite NFV deployments.

Resource Scheduler: Existing solutions for resource scheduling in OpenStack consider a limited set of parameters, such as CPU and memory. For some telecom-specific applications, however, it is also necessary to take network information into account. This project aims to develop an efficient resource scheduler that considers network information for resource isolation.
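To make the idea concrete, here is a toy filter-and-weigh scheduler that adds available network bandwidth as a criterion alongside CPU and memory. The host records and field names are made up; this only illustrates the kind of decision such a scheduler would make, not this project's implementation.

```python
# Hypothetical sketch of a network-aware scheduler: filter hosts on compute
# requirements, then weigh the survivors by spare network bandwidth -- the
# extra criterion this project argues telecom workloads need.

def schedule(hosts, need_vcpus, need_ram_gb, need_bw_mbps):
    # Filter phase: drop hosts that cannot satisfy the request at all.
    fit = [h for h in hosts
           if h["free_vcpus"] >= need_vcpus
           and h["free_ram_gb"] >= need_ram_gb
           and h["free_bw_mbps"] >= need_bw_mbps]
    if not fit:
        return None
    # Weighing phase: prefer the host with the most spare bandwidth.
    return max(fit, key=lambda h: h["free_bw_mbps"])["name"]

hosts = [
    {"name": "host-a", "free_vcpus": 8, "free_ram_gb": 32, "free_bw_mbps": 500},
    {"name": "host-b", "free_vcpus": 8, "free_ram_gb": 32, "free_bw_mbps": 4000},
    {"name": "host-c", "free_vcpus": 2, "free_ram_gb": 4,  "free_bw_mbps": 9000},
]
# host-a fails the bandwidth filter, host-c fails the CPU filter:
print(schedule(hosts, need_vcpus=4, need_ram_gb=8, need_bw_mbps=1000))  # host-b
```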

Prediction: Avoiding failures is the motivation behind this failure prediction system, which consists of a data collector, a predictor and a management module. For data collection, tools such as OpenStack's Ceilometer and Monasca are enhanced to interwork with collectors such as Zabbix, Nagios and Cacti. The predictor subsystem applies real-time analysis and machine learning techniques to the data provided by the collectors. Based on the prediction made, the predictor sends notifications to the management modules, which act upon them appropriately.
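The predictor half of such a pipeline can be illustrated with the simplest possible model: fit a linear trend to recent collector samples and warn when the metric is projected to cross a threshold. The sample data and threshold below are invented, and a real predictor would use far richer models than a straight line.

```python
# Hypothetical sketch of a trend-based failure predictor: fit a least-squares
# line to recent samples and flag a future threshold breach.

def predict_breach(samples, threshold, horizon):
    """samples: metric values at t = 0, 1, 2, ...; returns True if the
    linear trend crosses `threshold` within `horizon` future steps."""
    n = len(samples)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(samples) / n
    # Least-squares slope of the trend line.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
            / sum((x - mean_x) ** 2 for x in xs)
    projected = samples[-1] + slope * horizon
    return projected >= threshold

disk_pct = [60, 63, 66, 69, 72]  # collector output: disk usage rising 3%/interval
print(predict_breach(disk_pct, threshold=90, horizon=10))  # True: projected 102%
print(predict_breach(disk_pct, threshold=90, horizon=3))   # False: projected 81%
```

On a True result, the predictor would notify the management module, which could then migrate workloads or retire the node before the failure occurs.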

Collaborative Development

OPNFV’s collaborative development projects aim to produce original code in collaboration with existing open source projects. Let me provide a brief overview of some of these projects.

  1. Open Network Operating System Framework (ONOSFW): The ONOS SDN controller is enhanced with code to integrate it with OpenStack. This integration covers ML2, L3 and some service plugins and drivers.
  2. Open vSwitch (OVS) for NFV: This is a planned collaborative development within the Open vSwitch project to improve the performance of the software-accelerated userspace Open vSwitch and increase its suitability for telco NFV deployments.
  3. OpenContrail: This project will enable OpenContrail to be selected as the virtual networking technology in OPNFV deployments. The main tasks of the project will be integrating OpenContrail artifacts into the OPNFV continuous integration infrastructure and ensuring support in the various installer projects.
  4. Moon Security: This project proposes a security management system, called Moon, that specifies users’ security requirements and enforces them through various mechanisms, such as authorization for access control, firewalls for networking, isolation for storage and logging for traceability.
  5. Fast Datapath: The “Software Fastpath Service Quality Metrics” project focuses on developing utilities and libraries that support low-latency, high-performance packet processing paths (fast paths) through the NFV infrastructure (NFVI), so that VNFs can measure telco traffic and performance against key performance indicators (KPIs), and detect and report violations that can be consumed by VNFs and higher-level EMS/OSS systems. The ability to measure and enforce traffic quality KPIs in the data plane will be mandatory for any telco-grade NFVI implementation, regardless of whether the data plane is implemented in hardware or software.
  6. Kernel-based Virtual Machine (KVM): NFV hypervisors provide crucial functionality in the NFVI. Existing hypervisors, however, were not necessarily designed to meet NFVI requirements, and work needs to be done to enable the required NFV features.
  7. OpenDaylight Service Function Chaining (ODL SFC): Service function chaining provides the ability to define an ordered list of network services (e.g., firewalls, NAT, QoS). These services are then stitched together in the network to create a service chain. This project provides the infrastructure to install the upstream ODL SFC implementation in an NFV environment.
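The ordered-list idea behind service function chaining can be shown in a few lines. The "firewall" and "NAT" functions below are toy stand-ins invented for illustration; real chains stitch together actual VNFs in the network, not Python functions.

```python
# Hypothetical sketch of service function chaining: an ordered list of
# functions applied to each packet. A packet dropped by one function in
# the chain never reaches the next.

def firewall(pkt):
    return None if pkt["dst_port"] == 23 else pkt  # drop telnet traffic

def nat(pkt):
    return dict(pkt, src="203.0.113.1")            # rewrite the source address

def apply_chain(chain, pkt):
    for fn in chain:
        pkt = fn(pkt)
        if pkt is None:     # a function in the chain dropped the packet
            return None
    return pkt

chain = [firewall, nat]     # the ordered service list the text describes
print(apply_chain(chain, {"src": "10.0.0.5", "dst_port": 80}))
# {'src': '203.0.113.1', 'dst_port': 80}
print(apply_chain(chain, {"src": "10.0.0.5", "dst_port": 23}))  # None (dropped)
```

Note that order matters: with the chain reversed, the firewall would inspect packets that had already been NATed.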

Integration and Testing

OPNFV includes systematic continuous-integration (CI)-driven testbeds in different parts of the globe. The testing projects cover both functional and performance testing:

Octopus: OPNFV works with many upstream open source projects. Typically, these projects are developed and tested independently, and the use cases they consider may not cover NFV-specific scenarios. Hence, integrating these projects may reveal gaps, which should be identified as early as possible. Octopus first aims to achieve prototype integration of an initial set of upstream projects.

BootStrap/Get Started: This project assembles and tests a minimal set of infrastructure components for OPNFV to run a set of sample VNFs. Essentially, it provides a solution to automatically install and configure the required components using existing installer and configuration tools, and then performs a set of basic system-level tests. The installation uses Ubuntu 14.04 or CentOS 7 as the base operating system.

VSPerf: This project aims to characterize the performance of virtual switches (vSwitches). VSPerf is developing both a vSwitch testing framework and the associated tests, which will help validate the use of a vSwitch in telco NFV deployments.

Functest: Functest is used to set up test tooling to run test cases, including ones that could be considered performance testing, and to integrate test cases into CI processes.

Yardstick: This project will enable NFVI verification from the VNF perspective. It offers both functional and performance test cases addressing the whole system.

Q-Tip: Q-Tip is a performance benchmarking suite for OPNFV that aims to characterize the platform starting from the bare-metal components.

IPV6: This project will create a meta-distribution of an IPv6-enabled OPNFV platform and develop a methodology for evolving IPv6 support in OPNFV. The deliverables of the IPV6 project are automation scripts, installation and user guides, test cases, gap analyses and future recommendations.

Pharos: This project focuses on creating a federated NFV test capability. This testing facility, hosted by different companies in the OPNFV community, covers both geographically dispersed and technologically diverse environments. The effort will create a specification for an OPNFV-compliant test environment, along with tools, processes and documentation. Such an infrastructure can be used to support integration, testing and collaborative development of projects.

A Potential Requirement/Development Project

I would like to end this article by sharing a project proposal, called Project 1View. I welcome your feedback and comments.

This project addresses needs in system visualization for administrators, application developers and network managers. In current NFV deployments, a number of tools offer visualization capabilities at different levels of the stack: for services, infrastructure management, physical and virtual network topologies, virtual and physical network elements, flow tables, statistics and configurations.

Some of these tools, however, lack important features; inspecting flow tables while troubleshooting, for example, is extremely challenging. I believe there is a need to develop a clear set of requirements in order to simplify the visualization problem, along with foundation elements or building blocks for visualization.

Hence, 1View will be about specifying a set of requirements for the ideal visualization tools, considering different aspects of visualization in NFV deployments, and initiating a collaborative development with upstream projects to realize those requirements.

Cisco, HPE, IBM, Intel, and Red Hat are sponsors of The New Stack.

Feature image via Pixabay.
