NETCONF: A new approach to network management

Security violations and network vulnerabilities are just two of the unwanted results of less-than-successful network configuration management.

This article by Imad Ajarmeh, Khalid Elmansor, Carl Moberg, and James Yu summarizes a joint research project on the NETCONF protocol between Tail-f Systems, Stockholm, Sweden, and DePaul University, Chicago, Illinois. The objective of the project is to develop a network management system on top of a platform that uses XML-based software installed on the managed device to act as an agent daemon. The platform can formally define the configuration requirements and collect data from various network devices to validate the provisioned configurations.

Although SNMP provides operations to configure network devices, in practice its use is largely limited to collecting statistics and status information; it is hardly used for configuration purposes. There are several reasons for this deficiency. Here, we mention the most important ones:

1. The SNMP protocol is simple, leaving the onus of manipulating configuration data on the management application. For this reason, tool development based on SNMP is expensive.

2. SET requests are sent independently. This can cause a serious network problem if a manager sends several SET requests to configure a particular device and one of them fails, as the sketch after this list illustrates.

3. SNMP does not provide any mechanism to undo recent changes to the device configuration.

4. SNMP does not provide synchronization among multiple network devices. If a manager sends a SET request to a group of devices (to apply the same configuration), some of them can succeed while others fail.

5. SNMP does not employ standard security mechanisms. Instead, security is self-contained within the protocol itself, which makes SNMP credential and key management complex and difficult to integrate with existing credential and key management systems.
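
Points 2, 3, and 4 become concrete in the following sketch, which uses Python and the open-source pysnmp library to push the same value to several devices with independent SET requests. The device addresses, community string, and contact value are placeholders. If one request fails partway through, the devices that already accepted their SETs keep the new value, and SNMP offers no way to roll them back.

    # Independent SNMP SET requests: no transaction, no rollback.
    # Addresses and the "private" community string are placeholders.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity,
                              OctetString, setCmd)

    devices = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

    for host in devices:
        error_indication, error_status, _, _ = next(setCmd(
            SnmpEngine(),
            CommunityData("private"),
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysContact", 0),
                       OctetString("noc@example.net"))))
        if error_indication or error_status:
            # Devices configured earlier keep the new value; SNMP cannot undo them.
            print("SET failed on %s: %s" % (host, error_indication or error_status))
            break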

 

Currently, the proprietary Command-Line Interface (CLI) and web interfaces remain the preferred and often the only option for configuration management.

NETCONF

In 2006, NETCONF emerged as a promising approach to automated network configuration management. It is standardized in IETF Request for Comments (RFC) 4741. The NETCONF protocol defines operations for managing network devices through which configuration data can be retrieved, uploaded, manipulated, and deleted. The standard also defines the application programming interface as well as the connectivity requirements for NETCONF.

Figure 1 shows the four NETCONF protocol layers. At the transport layer, the standard provides three mechanisms for carrying NETCONF messages: Secure Shell (SSH), the Simple Object Access Protocol (SOAP), and the Blocks Extensible Exchange Protocol (BEEP). Support for SSH is mandatory for NETCONF implementations.

Figure 1: NETCONF Protocol Layers

 

NETCONF adopts an XML-encoded Remote Procedure Call (XML-RPC) mechanism for communication between a manager and an agent. XML has several advantages that make NETCONF simple, highly flexible, and cost-effective for developing new applications. In the operations layer, configuration data can be retrieved with a <get-config> request and modified with <edit-config>, <copy-config>, and <delete-config> requests. Non-configuration data (such as status and statistical information), along with configuration data, can be retrieved with a <get> request. Figure 2 shows an example of a NETCONF message to configure a Voice over IP (VoIP) client.

Figure 2: Example NETCONF Message to Configure a VoIP Client
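
As a rough illustration of the operations layer, the sketch below uses the open-source Python library ncclient, which speaks NETCONF over SSH. The device address, credentials, and the voip-client data model (its namespace and leaf names) are hypothetical; a real payload must follow the data model that the managed device implements.

    # Retrieve and modify configuration over NETCONF.
    # The voip-client namespace and leaves are hypothetical.
    from ncclient import manager

    VOIP_CONFIG = """
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <voip-client xmlns="http://example.com/ns/voip">
        <sip-server>sip.example.com</sip-server>
        <codec>G.711</codec>
      </voip-client>
    </config>"""

    with manager.connect(host="192.0.2.1", port=830, username="admin",
                         password="secret", hostkey_verify=False) as m:
        reply = m.get_config(source="running")                 # <get-config>
        print(reply.data_xml)
        m.edit_config(target="running", config=VOIP_CONFIG)    # <edit-config>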

 

One of the big advantages of NETCONF over SNMP is the way the protocol manipulates a group of semantically related configuration data. Whereas SNMP modifies the value of a single parameter at a time, NETCONF modifies all or selected parameters in a single primitive operation. Another advantage is that NETCONF allows configuration to occur in a transactional manner. NETCONF accounts for the case in which some network devices upload the configuration successfully while others fail; in that case, it allows a managed device to roll back to a known configuration state. This is possible because NETCONF defines transactional models that synchronize, validate, and commit device configurations across an entire network deployment.
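
A sketch of this transactional style, again using ncclient, might look as follows. It assumes the device advertises the :candidate capability: the edits are staged in the candidate datastore, validated, and committed, and they are discarded if anything fails, so the running configuration is never left half-modified. The device details and payload are placeholders.

    # Transactional configuration: stage, validate, commit, or discard.
    from ncclient import manager

    CONFIG = """
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <voip-client xmlns="http://example.com/ns/voip">
        <codec>G.729</codec>
      </voip-client>
    </config>"""

    with manager.connect(host="192.0.2.1", port=830, username="admin",
                         password="secret", hostkey_verify=False) as m:
        assert ":candidate" in m.server_capabilities
        with m.locked("candidate"):                  # keep other writers out
            try:
                m.edit_config(target="candidate", config=CONFIG)
                m.validate(source="candidate")       # device-side validation
                m.commit()                           # candidate -> running
            except Exception:
                m.discard_changes()                  # drop the staged edits
                raise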

Interestingly, three of the four NETCONF layers (Transport Protocol, RPC, and Operations) have been standardized, but the Content layer has not. Given the currently proprietary nature of the configuration data being manipulated, the specification of this content depends on the NETCONF implementation. However, a separate IETF working group, NETMOD, was formed in May 2008. Its primary goal is to specify a standard data modeling language and standard content for the NETCONF protocol. The new data modeling language, YANG, provides a conceptual schema for configuration data: a concise description of data types, relationships among configuration data, and integrity constraints that ensure the correctness of uploaded values during the configuration process.
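
Until the content layer is standardized, a manager has to discover which data models a particular implementation supports. NETCONF exchanges this information in the <hello> message as a list of capabilities, which ncclient exposes directly; the device details below are placeholders.

    # List the capabilities (and hence the supported content) that a
    # NETCONF server advertises in its <hello> message.
    from ncclient import manager

    with manager.connect(host="192.0.2.1", port=830, username="admin",
                         password="secret", hostkey_verify=False) as m:
        for capability in m.server_capabilities:
            print(capability)   # e.g. urn:ietf:params:netconf:capability:candidate:1.0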

XML-based support

One example of a product that supports the NETCONF protocol is ConfD, XML-based software from Tail-f Systems (Figure 3). ConfD can be installed on a managed device to act as an agent daemon. Currently, ConfD runs on the following operating system platforms: UNIX, Linux, and QNX. ConfD provides a unified management system that accepts configuration requests through a NETCONF interface, a CLI, a Web interface, or SNMP.

Figure 3: ConfD Architecture Design

 

ConfD can automatically read a YANG data model and render the three northbound interfaces (NETCONF, CLI, and Web UI) from the YANG specification using a few commands. This capability lets network administrators focus on writing data models for the configuration parameters and on building the external daemon that interacts with ConfD and the managed object. ConfD’s rich set of API functions facilitates the interaction between external daemons and ConfD. The ConfD database system, the Configuration Database (CDB), stores configuration data efficiently. Developers are not forced to use CDB; they can use any external database system.

Validation mechanism

When a configuration parameter needs to be updated through an <edit-config> request, ConfD fulfills the request as one transaction. Figure 4 illustrates the conceptual states a ConfD transaction goes through.

Figure 4: ConfD Transaction State Machine


As shown, before a transaction enters the Write state, the write request passes through the Validate state. In this state, ConfD allows the programmer to insert C code that rejects the request, accepts it, or accepts it with a warning. The user may then choose to abort or commit the transaction.
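
The following is a conceptual sketch of such a validation hook, written in Python rather than against ConfD’s actual C API; the names are invented for illustration. The idea is the same: the callback inspects the proposed values while the transaction sits in the Validate state and decides whether it may proceed to the Write state.

    # Conceptual sketch only -- not the ConfD C API. All names are hypothetical.
    from enum import Enum

    class Verdict(Enum):
        ACCEPT = 1
        ACCEPT_WITH_WARNING = 2
        REJECT = 3

    def validate_voip_client(proposed):
        """Decide whether a proposed configuration may leave the Validate state."""
        if proposed.get("codec") not in ("G.711", "G.729"):
            return Verdict.REJECT, "unsupported codec"
        if not proposed.get("sip-server"):
            return Verdict.ACCEPT_WITH_WARNING, "no SIP server configured"
        return Verdict.ACCEPT, ""

    # REJECT aborts the transaction; ACCEPT (with or without a warning) lets it
    # move on to the Write state, after which the user may commit or abort.
    verdict, message = validate_voip_client({"codec": "G.711",
                                             "sip-server": "sip.example.com"})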

Joint research plan

The validation model adopted by ConfD ensures consistency of data configuration within a managed device itself. However, the fundamental goal of network management is the consistency of data configuration that spans the entire network. For example, to provide connectivity to all hosts in a network, a manager must consider the forwarding state for all routers. Although a network manager configures each router independently, the routers must be globally consistent to guarantee the property of connectivity among all hosts.

To provide a global validation process, a central agent that can contact all managed devices is required. Figure 5 shows a ConfD server that is responsible for validating the global forwarding state of all routers. The key questions are how global validation is performed and, given that the validation server is a ConfD agent, where the data model resides on this server.

Figure 5: Framework for Global Validation

 

To answer these questions, we need to understand the meaning and the scope of data models. In a network management paradigm, data modeling should provide a method to define and analyze the data requirements needed to support the services provided by an enterprise or Internet service provider. To construct this model, we must specify the structure of the data configurations for each network device as well as the operational requirements. By operational requirements of data configurations, we mean end-to-end requirements such as performance, security, connectivity, and fault tolerance. One objective of this research is to define a representational model that captures these operational requirements. The representational model must be encoded in YANG syntax.

Figure 6 shows our proposed framework for describing the network configuration. The first step is requirements specification and analysis. During this step, system administrators determine the set of data configurations that need to be considered. This step requires visiting each network device and specifying the required data configurations for that particular device. In parallel with specifying the data configurations, system administrators capture the users’ requirements as a set of service properties: performance properties, security properties, connectivity properties, and so on. Along with specifying users’ requirements, it is useful to define the operations that will be applied to the formal model. These operations might be associated with actions to be performed when the data configuration violates the operational requirements.

Figure 6: Main Phases of Designing a Comprehensive Data Model

 

Unifying two different network views

An important part of determining the users’ requirements is finding a way to unify two very different views of the network. The first is the network topology: the distribution of links across network devices. The second is the set of access control lists that determine device behavior. Unifying these views requires combining the policies governing route redistribution among network devices with the policies governing the properties of packets that traverse a link. This unified framework underlies our research analysis. The expected deliverable of the research is a configuration management validation system, as shown in Figure 7.

Figure 7: Configuration Management Validation System


The output of the validation system is a compliance report indicating whether the configurations of the network devices conform to the formally specified requirements. Note that the validation system only reads data from the network devices; it does not write configuration data to them. The next phase of the research project is to automate the configuration management process, which will write validated configuration data to the devices.
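
A very small sketch of such a read-only check is shown below: it pulls the running configuration from each router with <get-config> and verifies one hypothetical global property, namely that every router reports the same domain-name. The addresses, credentials, and leaf name are illustrative; the actual system checks the formally specified requirements against the YANG-based model.

    # Read-only global compliance check (property and names are illustrative).
    from xml.etree import ElementTree as ET
    from ncclient import manager

    ROUTERS = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

    def running_domain_name(host):
        with manager.connect(host=host, port=830, username="admin",
                             password="secret", hostkey_verify=False) as m:
            reply = m.get_config(source="running")   # read only, never write
        root = ET.fromstring(reply.data_xml)
        leaf = root.find(".//{*}domain-name")        # hypothetical leaf
        return leaf.text if leaf is not None else None

    observed = {host: running_domain_name(host) for host in ROUTERS}
    if len(set(observed.values())) == 1:
        print("COMPLIANT: all routers agree:", observed)
    else:
        print("VIOLATION: inconsistent configuration:", observed)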

James Yu is an associate professor at the College of Computing and Digital Media at DePaul University. He received his Ph.D. from Purdue University and worked at Bell Laboratories for 15 years.

Khalid Elmansor and Imad Ajarmeh are Ph.D. students under Dr. Yu’s supervision, and they are doing research in the areas of network management, VoIP engineering, and High Availability (HA) networks.

Carl Moberg held several management positions at ServiceFactory, a company he co-founded in 1999, before joining Tail-f Systems. Prior to ServiceFactory, Carl was one of the principal architects of Telia’s Internet service platform. This platform was an industry first, supporting a wide variety of access types and application services in an integrated fashion to millions of subscribers.

Tail-f Systems

www.tail-f.com