NFV, SDN, Cloud Computing Concepts, Terminology, and Standards – Part 2: History of SDN & its Adoption by Network Providers

Software Defined Networking (SDN) is an easy-to-grasp, transformative concept with complex and diverse implementation options, which this blog explores based on the web research listed in the bibliography, as of its writing in January 2018.

SDN is still evolving based on work at several standards bodies. The Open Networking Foundation was founded in 2011 to promote SDN and OpenFlow, and the Linux Foundation announced the OpenDaylight Project (ODL) in April 2013, followed by ONOS in 2015.

Here is the Wikipedia list of open source SDN initiatives:
• OpenDaylight (controller baseline upon which other controllers are built)
• ONOS
• Project Calico
• The Fast Data Project
• Project Floodlight
• Beacon
• NOX/POX
• Open vSwitch
• vneio/sdnc (SDN Controller from vne.io)
• Ryu Controller (supported by NTT Labs)
• Cherry
• Faucet (Python-based, built on Ryu, for production networks)
• OpenContrail

Please see Part 1 of this Blog for a review of NFV.

SDN History

The first time I heard the phrase “network operating system” was in the early 1980s regarding Novell NetWare or Banyan VINES, products that allowed shared file and printer access among multiple computers over a LAN, along with other centralized networking functions, in a client-server architecture. These functions have since been subsumed by server operating systems like Windows Server. But many of the concepts of SDN date back to this period, when separating the control and forwarding planes was discussed for SS7 networks, IP switching, and ATM networks.

In the mid-1990s, as Internet usage started to explode, the precursor research work for programmable networking that eventually led to SDN was called Active Networks, funded by DARPA into the early 2000s (DARPA had previously funded the development of the Internet). The motivation for research in Active Networks was, as it still is today for SDN, a desire to reduce the time needed to deploy new services, to leverage the falling cost of computing (and now cloud computing) to reduce OPEX, and to achieve better network management through unified control over networks that were becoming more cumbersome with the proliferation of vendor-specific network nodes, including firewalls, proxies, policy servers, and transcoders.

As I heard or read it (but cannot source it), a research project, the Clean Slate Program, was started at Stanford University in the mid-2000s to answer the question: what would we do differently if we were able to redesign the Internet from a clean design sheet? The answer was to separate the management and control planes from the data plane in routers. (Routing tables that only list choices for the next hop are problematic from a network assurance perspective because re-routed traffic can flood the re-route link, so an abstracted, global view of the network was needed to re-route traffic efficiently.)
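To make the global-view point concrete, below is a minimal Python sketch (using the networkx library; the topology, node names, and link weights are invented for illustration) of how a controller holding the whole topology can recompute an end-to-end path around a failed link, instead of each router independently choosing only its next hop.

    # Minimal sketch: global-view re-routing with networkx.
    # Topology, node names, and link weights are purely illustrative.
    import networkx as nx

    # Abstracted global view of the network (nodes = routers, edges = links).
    net = nx.Graph()
    net.add_weighted_edges_from([
        ("A", "B", 1), ("B", "D", 1),   # primary path A-B-D
        ("A", "C", 2), ("C", "D", 2),   # alternate path A-C-D
        ("B", "C", 1),
    ])

    def reroute(graph, src, dst, failed_link):
        """Recompute an end-to-end path after a link failure using the whole topology."""
        view = graph.copy()
        view.remove_edge(*failed_link)
        return nx.shortest_path(view, src, dst, weight="weight")

    print(nx.shortest_path(net, "A", "D", weight="weight"))  # primary: ['A', 'B', 'D']
    print(reroute(net, "A", "D", failed_link=("B", "D")))    # recomputed detour, e.g. ['A', 'C', 'D']

A distributed, next-hop-only view, by contrast, can push all displaced traffic onto a single alternate link and flood it.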

In any case, from a historical perspective: around 2008 (and previously in 2006 as the SANE project), the “Programmable Open Mobile Internet 2020” (POMI) project was funded by the National Science Foundation at Stanford University and the University of California, Berkeley, with the objective of defining a standard for software-defined networking (SDN) as a way of configuring network routing and network addresses through software. That standard became OpenFlow, and its appeal took off like a rocket because of its adoption by virtually all network equipment manufacturers and standards bodies. It was quickly followed by SDN controller platforms: NOX, designed by Nicira Networks and donated as open source in 2008, and later Beacon in 2010. Nicira was subsequently acquired by VMware in 2012.

SDN Architecture

The most relevant developments occurred around 2015 when it was finally determined which layers comprise the SDN architecture and which interfaces are best suited for interoperation between them.

See Figure 1, which shows SDN as a layered architecture.

NOTES for Figure 1 – SDN Terminology and Popular Products:

1. A “plane” distinguishes between different areas of operations, as opposed to a layer of software. The planes can be moved around within the SDN architecture; e.g., the control plane can be depicted as a service to the management plane when the management plane is placed above it.
2. The “Y” in the figure denotes an interface.
3. Examples of forwarding-plane abstraction models are the Forwarding and Control Element Separation (ForCES) model, OpenFlow, the YANG model, and SNMP MIBs.
4. Examples of the operational-plane abstraction model include the ForCES model, the YANG model, and SNMP MIBs.
5. Communication between control-plane entities is usually implemented through gateway protocols such as BGP, or through other protocols such as the Path Computation Element (PCE) Communication Protocol (PCEP). Other examples are SoftRouter and RouteFlow.
6. Examples of CPSIs (control-plane southbound interfaces) are ForCES and the OpenFlow protocol.
7. Examples of MPSIs (management-plane southbound interfaces) are ForCES, NETCONF, IP Flow Information Export, Open vSwitch Database (OVSDB), and SNMP; a NETCONF example is sketched below.
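As a concrete, hedged illustration of a management-plane southbound interface in action, the following Python sketch uses the ncclient library to fetch a device's running configuration over NETCONF. The device address and credentials are placeholders, and what is returned depends entirely on the YANG models the device supports.

    # Illustrative MPSI interaction: retrieve the running configuration over
    # NETCONF with ncclient. Host, port, and credentials are placeholders for
    # a hypothetical NETCONF-capable switch or router.
    from ncclient import manager

    with manager.connect(
        host="192.0.2.10",      # documentation/example address, not a real device
        port=830,               # standard NETCONF-over-SSH port
        username="admin",
        password="admin",
        hostkey_verify=False,   # acceptable only in a lab setting
    ) as m:
        reply = m.get_config(source="running")  # <get-config> RPC on the running datastore
        print(reply.data_xml[:500])             # first part of the YANG-modeled XML reply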

SDN Platforms as Network Operating Systems

Software Defined Networking (SDN) can be considered a “network operating system” in that network intelligence is centrally (logically) managed by maintaining an abstracted global view of the network, essentially a programmable router/switch model for physical or virtual networks.
In contrast to earlier research on active networks, which proposed a node operating system, a network operating system is software that abstracts the installation of state in network switches from the logic and applications that control the behavior of the network, leading to the conceptual decomposition of network operations into layers.

The OpenFlow model is the one I had in mind when researching this blog, but the picture quickly got more complicated. Nowadays there are typically at least two sets of SDN controllers:
• SDN controllers for the NFV infrastructure of a datacenter,
• Historical SDN controllers for managing the programmable switches of a network.

OpenFlow is merely one widely popular instantiation of SDN principles. Different APIs could be used to control network-wide forwarding behavior; previous work that focused on routing using BGP as an API could be considered another instantiation of SDN, and architectures from various vendors, e.g. Cisco ONE and the JunOS SDK, represent yet other instantiations that differ from OpenFlow.
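Since the Ryu controller appears in the open-source list above, here is a minimal, hedged Python sketch of what the OpenFlow instantiation looks like in practice: a Ryu application that, when a switch connects, installs a table-miss flow entry sending unmatched packets to the controller. This is a skeleton onto which real forwarding logic would be added, not a complete switch application.

    # Minimal Ryu (OpenFlow 1.3) application sketch: install a table-miss flow
    # entry when a switch connects, so unmatched packets go to the controller.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissApp(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser

            # Match everything; send unmatched packets to the controller.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]

            # Priority 0 marks this as the table-miss entry.
            mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                    match=match, instructions=inst)
            datapath.send_msg(mod)

Run under ryu-manager against an OpenFlow 1.3 switch such as Open vSwitch, this is the standard hook from which packet-in handling and flow programming are built out.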

Adoption of SDN by the Communications Industry – Cloud & Virtualization as a Use Case for SDN

Implementation of SDN by network operators for cloud-based network operations was slower than for on-premises or cloud-based datacenters, because network operators required significant organizational and skills transformation to demolish network/vendor silos, and because comprehensive, performant NFV/SDN solutions that met “carrier grade” requirements were lacking. IT datacenters with vast arrays of server farms were quicker to grasp the benefits of SDN (and cloud computing).

Since then, the pace of NFV/SDN adoption has still not matched the early vision, although progress has been made by Tier 1 operators (AT&T, Verizon, NTT, China Mobile, Telefonica, etc.) for services like the Evolved Packet Core, virtual Customer Premises Equipment, vVPN, and the IP Multimedia Subsystem. It remains to be seen if the rest of the industry will be fast followers.

Telco hybrid clouds combining public and private cloud computing as part of the upcoming 5G era are probably the biggest driver for SDN and NFV, rather than the other way around. I have heard that every 5G slice will require its own SDN controller. In the case of SDN controllers for the NFV infrastructure, NFVi including SDN represents only about 15% (roughly $5 billion) of the spend for cloud-based VNF/NFV/MANO/OSS software products, estimated by Ovum to reach $34B in 2022.

For NFVi, SDN is mostly designed to provide policy and centralized management for the OpenStack Neutron networking layer, which provides inter-working between the virtual ports created by OpenStack Nova. Technically, the de facto approach of these SDN controllers is to manage Linux kernel features: L3 IP routing, Linux bridges, iptables or ebtables, network namespaces, and Open vSwitch.
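To give a feel for the Linux plumbing such controllers drive, the following is a hedged sketch (shell commands wrapped in Python purely for illustration; the bridge, namespace, and interface names are invented) of attaching a network namespace to an Open vSwitch bridge, roughly what Neutron arranges for a virtual port. It assumes root privileges and that the Open vSwitch and iproute2 tools are installed.

    # Illustrative sketch of the Linux-level plumbing behind an NFVi virtual port:
    # create a network namespace and a veth pair, then attach one end to an OVS bridge.
    # Names (br-int, ns-vnf1, veth0/veth1) are invented; requires root, Open vSwitch,
    # and iproute2.
    import subprocess

    def sh(cmd):
        """Run one shell command, echoing it first."""
        print("+", cmd)
        subprocess.run(cmd.split(), check=True)

    sh("ovs-vsctl add-br br-int")                      # integration bridge (Open vSwitch)
    sh("ip netns add ns-vnf1")                         # namespace standing in for a VNF
    sh("ip link add veth0 type veth peer name veth1")  # virtual cable (veth pair)
    sh("ip link set veth1 netns ns-vnf1")              # one end into the namespace
    sh("ovs-vsctl add-port br-int veth0")              # other end onto the bridge
    sh("ip link set veth0 up")
    sh("ip netns exec ns-vnf1 ip link set veth1 up")
    sh("ip netns exec ns-vnf1 ip addr add 10.0.0.10/24 dev veth1")

An SDN controller for NFVi automates exactly this kind of state, along with the flow rules, routes, and filtering policies layered on top of it.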

Hyperscale players like Google, Microsoft, and Amazon, which have been cloud-based from the start, continue to lead in this regard. (The Google B4 WAN controller project for their datacenters, which may have been co-developed with NTT, was one of the first commercial SDN deployments, starting in 2012 and implementing OpenFlow/OVS/OVSDB-ONIX for Linux-based hypervisors.)

The path probably remains long for most network operators, but automating the lower part of the stack, the NFVi, which includes SDN, represents an early, useful win. While early SDN deployments focused on university campuses, data centers, and private backbones, recent work explores applications and extensions of SDN to a broader range of network settings, including home networks, enterprise networks, Internet exchange points, cellular core networks, cellular and Wi-Fi radio access networks, and joint management of end-host applications and the network.

Three Illustrative Deployments

NTT Com’s SD-WAN
NTT Communications Corporation (NTT Com) offers a software-defined wide area network (SD-WAN) Service Platform that covers more than 190 countries. SDN and SD-WAN are closely related in that both are software-defined: SDN is an architecture, whereas SD-WAN is a purchasable technology based on that architecture. NTT Com's SD-WAN real-time streaming sheds light on application performance, network security, and customer experience. NTT Com uses its SD-WAN to scale operations and to bring new services to market swiftly.

Microsoft’s virtual machine manager
Microsoft’s virtual machine manager (VMM) can deploy and manage software-defined infrastructure. It is used to configure conventional data centers and provide a unified management experience. Among its competencies, VMM provides virtualized hosts, network and library resources, and allocated storage.

When SDN is woven into the VMM tapestry, users can perform a variety of tasks. These include overseeing the infrastructure, such as network controllers, software load balancers, and gateways; defining and managing virtual network policies; and directing traffic flows between virtual networks. Furthermore, it integrates numerous technologies, such as the network controller, RAS gateway, and software load balancing.

VMware NSX
Tribune Media had arguably the biggest SDN deployment using VMware NSX, transferring more than 141 applications to an SDN infrastructure over five months. Tribune Media cut ties with the rest of the Tribune Company in 2012. As a result, the company had to replace its IT infrastructure and applications, and it chose VMware SDDC as the foundation for its new IT infrastructure.

SDN Storage
Although a significant topic, SDN storage is beyond the scope of this blog.

NFV, SDN, Cloud Computing Concepts, Terminology, and Standards – Part 1: History of Virtualization & its Adoption by Network Providers

History

Virtualization

Virtualization refers to the abstraction of logical resources away from their underlying physical resources to improve agility, reduce costs, and thus enhance business value. Computer virtualization has a long history, spanning nearly half a century. IBM developed the first virtual machine system, CP/CMS, which evolved into VM/370 in the early 1970s and is published today as z/VM. VM demonstrably reduced the amount of hardware needed to support a given number of time-sharing users and was also often used as an application development and testing environment on an otherwise production mainframe machine.

The heart of the VM architecture is a control program (CP), or hypervisor (a supervisor of the operating system's kernel, which is itself a supervisor), that creates the virtual machine environment.

Type-1, native or bare-metal hypervisors

These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. Examples include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V, and VMware ESX/ESXi. These hypervisors are inherently more efficient than Type-2 hypervisors because they do not require the resources it takes to run a traditional operating system.
Type-2 or hosted hypervisors

These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU are examples of type-2 hypervisors.

However, the distinction between these two types is not always clear. Linux's Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules that effectively convert the host operating system into a Type-1 hypervisor, but technically they are Type-2 hypervisors because a host OS is required.
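As a small, hedged example of the KVM case (Linux-specific; a quick check, not a robust detection tool), the following Python snippet tests for the hardware virtualization CPU flags and the /dev/kvm device that a KVM host relies on.

    # Sketch: check whether this Linux host can act as a KVM hypervisor.
    # Looks for CPU virtualization flags (Intel VT-x = "vmx", AMD-V = "svm")
    # and for the /dev/kvm device exposed by the kvm kernel module.
    import os

    def kvm_ready():
        """Return True if hardware virtualization flags and /dev/kvm are present."""
        try:
            with open("/proc/cpuinfo") as f:
                flag_lines = [line.split() for line in f if line.startswith("flags")]
        except OSError:
            return False
        has_hw_virt = any("vmx" in line or "svm" in line for line in flag_lines)
        return has_hw_virt and os.path.exists("/dev/kvm")

    if __name__ == "__main__":
        print("KVM available" if kvm_ready() else "KVM not available on this host")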

Unix, which was developed by Bell Labs (now part of Nokia) in the early 1970s, was the first step towards multi-user operating systems, and it was also the first step towards application virtualization at the user or workspace level. But Unix applications are not fully portable the way Java applications are; Java applications are interpreted (rather than compiled to native code) by the Java Virtual Machine, developed by Sun Microsystems in the 1990s and now owned by Oracle.

In 1998, VMware was established, and in 1999 it began selling a product called VMware Workstation. VMware is now the market leader in virtualization. In 2001, VMware released two new products as it branched into the enterprise market: ESX Server and GSX Server. ESX Server is a Type-1 hypervisor; GSX Server is a Type-2 hypervisor.

Since releasing ESX Server, VMware has enabled exponential growth for virtualization in the enterprise market as enterprise IT departments started to re-architect data centers for cloud computing and virtualized servers. By enabling IT departments to run multiple applications on the same server, server virtualization provides dramatic gains in utilization. The subsequent introduction of virtual machine management products from VMware and similar products from companies like Microsoft and Citrix fueled further rapid adoption and growth.

Virtualization makes it easier for applications to be accessed remotely, allows applications to run on more systems than originally intended, improves stability, and uses resources more efficiently, which significantly lowers costs, especially for hardware, space, and energy, but also for operations, through automation and agility that can quickly scale to meet peak demands.

Virtualization is available for mainframes, servers, networks, storage, clients, and applications. But x86 server virtualization ushered in a new era of commoditization in which virtual resources and applications run on commoditized compute resources in large data centers, bringing cost savings and other benefits.
In the 1990s, the concept of virtualization was expanded beyond virtual servers to higher levels of abstraction, including storage and network resources and virtual applications, which have no specific underlying infrastructure.

NFV

The Network Functions Virtualization (NFV) initiative was launched in October 2012 with a white paper by 13 network operator authors, presented at the SDN and OpenFlow World Congress held in Darmstadt, Germany, and titled “Network Functions Virtualization — An Introduction, Benefits, Enablers, Challenges & Call for Action”. It “aims to … leverage standard IT virtualization technology to consolidate many network equipment types onto industry standard high volume servers, switches and storage, which could be in Datacenters, Network Nodes and in the end user’s premises.” The authors formed a new operator-led Industry Specification Group (ISG) under the auspices of ETSI to work through the technical challenges of Network Functions Virtualization.

Thus, an industry-wide transformation effort was launched to disrupt the way networks are built, accessed, and managed, to dramatically reduce the time to market for network services, and to increase business agility. The ETSI ISG NFV now includes over 290 companies, 38 of which are the world’s leading service providers, and it has produced over fifty publications. NFV extends the technologies of IT virtualization to virtualize entire classes of network node functions into building blocks that may be chained together to create communication services. These virtualized network functions can be moved to, or instantiated in, various locations in the network as required, without the need to install new equipment.

NFV goes beyond previous IT virtualization efforts in that not only are applications, servers, switches, routers, and storage virtualized, but the network appliances, its nodes, such as mobile network nodes, firewalls, DPI, gateways, etc., are replaced by the network virtualization approach, with the attendant higher requirements for reliability and performance needed by network providers. Cisco Systems said in a December blog post predicting 2017 trends: “We’re going to see Network Functions Virtualization (NFV) spread from service providers to the enterprise”. So the transformation that started as IT virtualization will come full circle and progress the whole of virtualization further.

NFV decouples software implementations of network functions from the compute, storage, and networking resources through a virtualization layer. This decoupling requires a new set of management and orchestration (MANO) functions and creates new dependencies between them, thus requiring interoperable standardized interfaces, common information models, and the mapping of such information models to data models. That architecture is the topic of the next installment of this series. Stay tuned.

Bibliography

 

(JKCS), J. K. (n.d.). Cloud Computing and Virtualization. Retrieved from http://jkremer.com/White%20Papers/Cloud%20Computing%20and%20Virtualization%20White%20Paper%20JKCS.pdf

(2014, Oct). NFV White Paper #3. Retrieved from http://portal.etsi.org/NFV/NFV_White_Paper3.pdf

Ajay Patter Veetil, WIPRO. (n.d.). NFV and its impact on OSS. Retrieved from http://www.wipro.com/documents/network-function-virtualization-and-its-impact-on-OSS.pdf

AT&T. (2013, Nov). AT&T Domain 2.0 Vision White Paper.

(2013). Network Function Virtualization. Retrieved from http://portal.etsi.org/NFV/NFV_White_Paper.pdf

(2013). NFV White Paper 2. Retrieved from http://portal.etsi.org/NFV/NFV_White_Paper2.pdf

Conroy, S. P. (2010). History of Virtualization. Retrieved from http://www.everythingvm.com/content/history-virtualization

David Ramel. (2017). The Rise of NFV. Retrieved from Virtualization and Cloud Review: https://virtualizationreview.com/Articles/2017/05/22/rise-of-nfv.aspx?Page=2

NGMN Alliance. (2017, May). 5G end to End Architecture. Retrieved from https://www.ngmn.org/uploads/media/170511_NGMN_E2EArchFramework_v0.6.5.pdf

ONF, O. (2016). Impact of SDN and NFV on OSS/BSS. Retrieved from https://www.opennetworking.org/images/stories/downloads/sdn-resources/solution-briefs/sb-OSS-BSS.pdf

Digital Ecosystem Curator – New Speak, New Role

The Future

Sourced mostly from Tr3Dent – experimenting with first blog

It’s no secret that the future of digital services, ranging from smart cities and connected cars to the Internet of Things, is centered around the successful development of platform-based business models and digital ecosystems. This is not your grandfather’s Electronic Data Interchange (EDI) for supply chain management, which was a walled garden with proprietary hub-and-spoke networking and data formats.

What is a digital ecosystem?

A digital ecosystem for the approaching 5G era is a fabric of independent parties contributing to produce a highly automated, end-to-end digital service, like scheduling an ambulance as part of an eHealth digital ecosystem. A digital ecosystem, as defined by Gartner, “is an interdependent group of enterprises, people and/or things that share standardized digital platforms for a mutually beneficial purpose (such as commercial gain, innovation or common interest).”

Gartner’s primary research has revealed that 79% of top-performing digital organizations participate in a digital ecosystem vs. 49% of average performers. Source: Gartner’s 2017 CIO Agenda Report

Today’s digital ecosystems represent $4 trillion

In fact, as of today, 50% of the world’s top 30 brands and 70% of the world’s $1B unicorn start-ups achieved their success by building flexible digital platforms and leveraging complex digital ecosystems to grow their businesses.
These organizations, companies like Uber, Airbnb, Amazon, Alibaba, and Google, have achieved exponential growth by being expert at managing digital ecosystems. In fact, collectively, the leading 170 digital ecosystem managers are worth more than $4 trillion and continue to grow! Source: Digital Ecosystem Management: The New Way to Grow, BearingPoint

So, who’s managing these ecosystems? Or who should be?
Enter the role of the digital ecosystem curator.
From the Merriam-Webster dictionary:

noun cu·ra·tor \ˈkyu̇r-ˌā-tər, kyu̇-ˈrā-, ˈkyu̇r-ə-\ : one who has the care and superintendence of something; especially :  one in charge of a museum, zoo, or other place of exhibit

While the term curator has traditionally been used to describe the role of someone tasked with caring for a collection, often of art in a museum or animals in a zoo, it aptly describes the role of the individual(s) tasked with ensuring the successful delivery of value from a digital ecosystem.

The term curator is preferred to supervisor, manager, or even director, as it more accurately reflects the fact that someone tasked with ensuring that a digital ecosystem serves its purpose does not “own” or have direct financial influence or control over most of the participants and resources in the ecosystem. This is like the role of a curator in a museum, who is responsible for a collection that is only partially owned by the museum, with most pieces on loan from other organizations or individuals.

What does a digital ecosystem curator do?

A digital ecosystem curator must have the right combination of skills and expertise to fulfill the following critical tasks.

#1. Engage, build and manage partners

The curator of a digital ecosystem will absolutely have to be adept at engaging partners: the companies, organizations, and individuals involved in the digital ecosystem. Once engaged, the ecosystem curator will have to be skilled at structuring the right partnership agreements and putting the right processes and communication in place to manage those partnerships.
Some of the keys to success in building and maintaining successful partnerships within a digital ecosystem include establishing the following:

A. Clearly defined business process roles. It’s critical that each party in the digital ecosystem understands its role. The curator must also understand those roles and sponsor their acceptance on an end-to-end basis.

B. Enrich with analytics. The curator enhances the performance of the ecosystem with process-oriented analytics. All parties in a digital ecosystem need to understand the performance metrics by which they, and the ecosystem overall, will be assessed.

C. Efficient interoperability. The use of Open APIs and industry standards for data, processes and applications will ensure the longevity of the ecosystem and ease of both entry and exit by various partners.

D. Common vision. There should be a very clear and shared vision for how the stakeholders in a digital ecosystem will come together to deliver value whether that value is commercial gain, innovation or a common interest.

#2. Leverage data to succeed

In any digital ecosystem, there is a tremendous amount of data generated – both transactional and operational. Milton Keynes, one of the fastest growing cities in the UK, put together a “smart” initiative to help minimize negative impacts created by their unprecedented growth and engage citizens in the process of improving their quality of life.

Alan Fletcher, Chief Liaison Officer of that initiative, describes the role of the data curator as “exposing exactly the right things that people can make sense of and use.” The successful digital ecosystem curator must be able to do the following with massive amounts of data from a variety of internal and external sources:
A. Organize. To make sense of the data generated in a digital ecosystem it must first be organized. Many times, the data will come from a huge variety of systems and applications and not everyone will utilize the same taxonomy or glossary of terms. The digital ecosystem curator must assess which pieces of information and data need to be common, collected and stored. A savvy digital ecosystem curator will strive for adoption and implementation of industry standards where available and will insist on use of a consistent data architecture to enable organized and intelligent data.

B. Analyze. Strong digital ecosystem curators will have exceptional analytic skills and experience analyzing or interpreting large data sets. A successful digital ecosystem curator will possess the analytics skills necessary to make sense of complex data sets and understand where the data is telling them something important and where it has the potential to provide input to performance improvements.

C. Identify. In today’s economy, it’s not enough to simply analyze the data and isolate trends, predictors, and other performance indicators. The astute digital ecosystem curator will also have to identify where the data is illustrating an opportunity or issuing a warning that has a direct correlation to the business value or innovation created by the ecosystem. To take things a step further, an outstanding digital ecosystem curator may also be able to identify additional business value to be generated or potential business models to be built.

D. Articulate. Once identified, the next task the digital ecosystem curator faces is articulating his or her findings to all the ecosystem stakeholders, which, remember, can be from a huge variety of companies, organizations, geographies, and industries. The digital ecosystem curator must be an exemplary communicator.

E. Action. Let’s imagine our curator has organized her data, analyzed it, identified a tremendous opportunity and articulated it clearly to all stakeholders. She now has to action that opportunity. She needs to inspire and lead the ecosystem and provide a clear path of execution for all stakeholders to follow.

#3. Create the N² Network Value Effect

The third thing a digital ecosystem curator must be able to do is arguably the most important as well. The curator must help all stakeholders participating in a digital ecosystem to prioritize the value they are delivering. Inside any complex business model, it’s very easy for participants to get distracted by operational challenges, discrepancies in data, technology conflicts, cultural differences and more. The digital ecosystem curator will have the ability to help the entire ecosystem maintain their focus and work through challenges as they arise.

A. Keep objectives in foreground. It is the digital ecosystem curator’s role to keep the business objectives and desired outcomes at the forefront of any discussion among stakeholders. Every stakeholder should be continually reminded why the ecosystem has been created and who will benefit. Having constant reminders of the target and goals will help keep the team focused.

B. Clear, concise, easily found strategy. The strategy, the “who, what, where, when, why” of the business model supported by the digital ecosystem, should be documented in a clear and easy to understand format and shared with all stakeholders.

C. Measurable. The age-old adage “You can’t manage what you can’t measure” applies here as well. The digital ecosystem curator must identify the metrics that will be used to measure the performance of the service(s) delivered by the ecosystem and perhaps the performance of distinct stakeholders as well.

Where do you find a Digital Ecosystem Curator?

Chances are that currently there is no such role in the organization and no job candidates carrying this title. Like the role of the data librarian last decade, it’s new speak and a new role.

The best candidates might already be working within the end-to-end business process that the digital ecosystem will automate. For example, for a smart city digital ecosystem, there may be a network- and technology-oriented assistant city manager who would be ideal to groom and empower for the role.

Tools for the digital ecosystem curator

As the role of digital ecosystem curator evolves, so do the tools and techniques used to fulfill the role. Tr3Dent has introduced its Transformation Accelerator which provides the digital ecosystem curator with a highly collaborative platform he or she can use to create a single source of truth accessible by all stakeholders for each business model and its corresponding ecosystem.
The Transformation Accelerator software can be accessed by users through Tr3Dent’s secure cloud or hosted on a private cloud. The platform can also be customized and re-branded for companies or organizations and used internally or externally. An example of this is the recently launched CurateFX made available through TM Forum, the leading global member organization for digital business. In this instance, the platform is pre-integrated with the industry standards, metrics and APIs available in TM Forum’s Frameworx which is used by more than 90% of all communication service providers.

The Transformation Accelerator platform from Tr3Dent has everything a digital ecosystem curator needs to communicate and collaborate with all stakeholders including the overarching business strategy, links to relevant industry standards for data, APIs, processes and more, stakeholder roles and a clear view of the ecosystem. Having all these elements stored in a shared repository and easily accessible by all ensures there is a well understood and common vision.

Do you have someone assigned to the role of digital ecosystem curator? Do you have other skills or tasks you believe are important for the role? Comment here or contact me directly at ptaglia@s3mktg.com.