
The purpose of this introduction

This introduction attempts to explain the basic operation of OVN and OVS, particularly in the context of Genestack. However, you should refer to the canonical upstream documentation for individual projects as necessary.

Assumed background information

To keep the focus on introducing OVN in Genestack, this introduction assumes some familiarity with:

  • Some software-defined networking (SDN) concepts
  • Some OpenStack concepts
  • Some database management system (DBMS) concepts

In most cases, you will have to decide, based on the context in which these topics come up, whether you want or need to do further out-of-band reading. In some cases, a cursory explanation gets provided in passing.

Basic background information and terminology

  • You can find information on the general OVN architecture in ovn-architecture(7), available in PDF, HTML, or plain text here
  • While it probably contains some outdated information, you may wish to watch Introduction to OVN - OVS Conference 2015
    • three OVN contributors created this presentation
    • the basic architecture hasn't changed
  • Genestack installs OVN with Kube-OVN.
    • this includes OVS, described later.
  • To complete the architecture reading, you should read the Kube-OVN architecture documentation.
  • Reading and understanding the two canonical architecture references above will give you more information than shown here, but this page attempts to provide a high-level overview, covering in particular how we use the single OVN installation from Kube-OVN for both OpenStack and Kubernetes.
  • A software-defined networking controller provides centralized management for software defined networks, particularly in defining rules for how to move network packets.
  • OVN functions as a software-defined networking controller.
  • OVN has good documentation: Open Virtual Network (OVN) Documentation
    • The OVN reference guide typically gets installed as the UNIX manual pages
      • although you will not find the manual pages in the Genestack pods.
  • OVN calls its northbound clients Cloud Management Systems, often abbreviated as CMS.
  • OVN in Genestack has two CMSes:
    • the Kubernetes cluster
    • OpenStack Neutron
  • Open vSwitch provides software-defined switches.
    • This usually gets abbreviated as OVS.
    • It provides L2 switches in software.
    • In Genestack, OVN takes care of the OVS switch programming.
    • See more detailed treatment of OVS below.
  • OVN works with OVS to:
    • program OVS switches
    • provide a higher layer of abstraction than OVS.
      • In particular, OVN provides logical entities
  • OVN ultimately has OVS at the far south end of its data flow
    • and exists to provide a higher, logical layer of abstraction over OVS (a brief illustration follows this list)
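
For a concrete sense of that difference in abstraction, you can compare the logical view OVN keeps with the OpenFlow flows OVS actually executes. This is an illustrative sketch only: br-int is the conventional OVN integration bridge, and you would run these wherever the OVN and OVS utilities are available (for example, inside the appropriate pods in Genestack).

    # Logical view: switches, routers, and ports as OVN models them
    ovn-nbctl show

    # Flow view: the OpenFlow flows OVS actually executes on the integration
    # bridge (br-int); typically a long list of low-level match/action entries
    ovs-ofctl dump-flows br-int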

Basic OVN operation and architecture

  • This section covers basic OVN operation abstracted from the CMS.
  • We install OVN for Kubernetes with Kube-OVN
    • so a section will follow on the details of installation by/for Kubernetes.
  • Remember, Genestack uses OVN with two CMSes:

    • OpenStack
    • Kubernetes

    so a section will follow with details for each respective CMS. OVN's design allows more than one CMS to use a single OVN installation.

  • So, this section contains an abstract description of OVN operation, followed by installation details and per-CMS details in following sections.

the OVN plugin

  • Every CMS has an OVN plugin.
    • In Genestack, OpenStack and Kubernetes, as the two CMSes, both use their OVN plugins on the single OVN installation.
  • Some details on how this works will follow in sections and subsections below.
  • OpenStack Neutron has networking-ovn for the OVN plugin.
    • This gets developed in Neutron as a first-party ML2 plugin for using OVN
    • So Genestack implicitly uses this.
  • The plugin allows the CMS to control OVN for its needs.
    • e.g., creating a network in Neutron will result in networking-ovn, the OpenStack OVN plugin, writing a logical switch into the NB DB.
    • One of the main functions of the OVN plugin is to translate network components from the CMS into logical entities, based on standard networking concepts such as switches and routers, and write them into the OVN north database
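
As a hedged illustration of that translation: creating a network in Neutron typically results in the plugin writing a corresponding logical switch into the NB DB. The neutron-<uuid> naming shown below reflects the usual ML2/OVN convention, and the names and UUIDs are placeholders.

    # Create a network through the CMS (OpenStack Neutron)
    openstack network create example-net

    # The OVN plugin writes a matching logical switch into the NB DB;
    # ML2/OVN usually names it neutron-<network UUID>
    ovn-nbctl ls-list
    # <uuid> (neutron-<network UUID>)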

Central OVN components

  • OVN architecturally has three central components below the CMS/OVN plugin:
    • the ovn-northd daemon
    • the north database, often referred to as NB or NB DB
    • the south database, often referred to as SB or SB DB
  • As a group, these often informally get referred to collectively as OVN central.
  • OVN doesn't generally vary in implementation below the CMS/OVN plugin
    • The CMS/OVN plugin must get implemented separately for each CMS.
    • However, Kube-OVN, in use in Genestack, actually has made some minor modifications as described here
      • so you might need to know about that relative to stock OVN.

OVN databases and ovn-northd

  • As mentioned, OVN has the NB and SB.
    • both are centrally located in the OVN architecture
    • It has no other databases (unless you count the OVS databases at the far south end of the data flow)
OVSDB DBMS
  • The OVS developers (who also developed OVN) originally designed OVSDB as the DBMS for OVS, which still uses it.
  • NB and SB run OVSDB as the DBMS.
    • OVS does as well.
  • As an aside, various sources use the term "OVSDB" to refer to an application-layer network protocol, a database, or a DBMS, depending on context.
    • Used like a protocol, it refers to accessing an OVSDB database via the OVSDB protocol.
    • Used like a database, it refers to a database managed by the OVSDB DBMS, and so on.
  • OVSDB works transactionally, like InnoDB for MySQL or MariaDB.
    • So you can expect ACID semantics (a small transaction example follows this list)
    • However, it lacks many features of larger general-purpose DBMSes
      • e.g., sharding
  • OVSDB uses the Raft algorithm for high availability (HA).
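
As a small illustration of the transactional behavior, the ovn-nbctl utility can bundle several operations into a single OVSDB transaction with "--", so either all of the changes commit or none of them do. The switch and port names here are arbitrary examples, not anything Genestack creates.

    # One OVSDB transaction: create a logical switch and a port on it atomically
    ovn-nbctl ls-add demo-switch -- lsp-add demo-switch demo-port

    # Remove the example objects again
    ovn-nbctl ls-del demo-switch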
Raft algorithm
  • With the Raft algorithm, a cluster elects a leader.
  • All writes go only to the leader.
  • If you lose a leader, a new leader gets elected.
  • It takes a majority of nodes connected together to elect a new leader.
  • The cluster remains fully functional as long as you have a majority of nodes connected together.
    • This allows the cluster to maintain consistency.
    • A connected minority of nodes will not elect a leader and will not take writes
      • so a reunited cluster can consistently reconcile the data from all nodes because anything written to a majority of nodes should get written to all nodes
  • The cluster functions in read-only mode when you don't have a leader.
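
You can inspect the Raft state of either database with the cluster/status command. A hedged example in Genestack would be running it inside one of the ovn-central pods; the pod name and control socket paths below are illustrative and may differ between OVN and Kube-OVN versions.

    # Raft role (leader/follower) and cluster membership for the NB DB
    kubectl -n kube-system exec <ovn-central-pod> -- \
        ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound

    # The same for the SB DB
    kubectl -n kube-system exec <ovn-central-pod> -- \
        ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound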
ovn-northd
  • ovn-northd translates logical entities from the NB DB into logical flows, and writes the logical flows into the SB DB
  • So, ovn-northd talks with the NB and SB DBs.
  • ovn-northd doesn't talk with anything south of the SB DB, but the logical flows it writes there influence the southbound ovn-controller component
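
To see that translation, you can dump the logical flows ovn-northd has written into the SB DB. This is illustrative only; the datapath name passed to the second command is a placeholder for a logical switch that actually exists in your NB DB.

    # All logical flows ovn-northd has generated from the NB contents
    ovn-sbctl lflow-list

    # Restrict the dump to a single logical datapath (switch or router)
    ovn-sbctl lflow-list <logical-switch-name>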
OVN NB
  • Each CMS (e.g., OpenStack and Kubernetes) drives OVN with its OVN plugin
  • The OVN plugins write OVN logical entities directly into the NB DB
  • The NB contains OVN's logical entities
    • written there by the OVN plugin, based on actions taken by the CMS
    • e.g., logical switches, logical routers, etc.
  • It generally doesn't contain much of anything else.
  • OVN plugins perform CRUD-type (create, read, update, delete) operations on the NB DB directly.
  • ovn-northd automatically acts on the state of the database, so the OVN plugin doing CRUD operations plays a major and direct role in driving OVN's operation.
    • So the OVN plugin doesn't do anything like API calls or use a message queue. It modifies the NB DB, and OVN continually treats the NB DB as canonical. ovn-northd will automatically and immediately start propagating updates to the NB DB, whether applied by the OVN plugin or by anything else, southward into the SB DB.
OVN SB
  • The OVN SB contains OVN logical flows based on the logical entities in the NB DB.
  • The SB DB gets read by the ovn-controller component described below.
  • The SB DB holds information on the physical infrastructure
    • e.g., the existence of compute nodes and k8s nodes
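
A quick, hedged way to see that physical view is ovn-sbctl show, which lists each registered chassis (hypervisor or Kubernetes node) along with its encapsulation details and the ports bound to it. Hostnames and addresses in the output are deployment-specific.

    # Chassis (compute and k8s nodes) known to the SB DB, with encapsulation
    # type/IP and the logical ports bound to each
    ovn-sbctl show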

Distributed OVN components

OVS

  • While included here as an architectural component of OVN, you can use OVS without OVN.
  • A network flow refers to a sequence of packets from a source to a destination that share some characteristics, such as protocol (e.g., TCP or UDP), destination address, and port number.
  • OVS resembles Linux bridge in providing L2 switches in software,
    • but OVS switches have greater programmability through the OpenFlow protocol.
    • OVS switches get programmed with OpenFlow flows, which define the network flows.
  • OVS runs on all forwarding-plane nodes
    • "forwarding plane", from SDN terminology, also sometimes gets called the data plane
    • e.g., all Kubernetes nodes in Genestack and in Kubernetes clusters using Kube-OVN.
  • OVN manages the OVS switches (a brief inspection example follows this list).
  • (Incidentally, OVS came first, and the OVS developers also wrote OVN as the logical continuation of OVS.)
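
For a hedged look at the OVS side on any forwarding-plane node: ovs-vsctl shows the bridges and ports OVS manages (in an OVN deployment this normally includes the br-int integration bridge), and ovs-ofctl shows the OpenFlow flows programmed into a bridge.

    # Bridges and ports as OVS sees them on this node
    ovs-vsctl show

    # Count the OpenFlow flows programmed into the integration bridge
    ovs-ofctl dump-flows br-int | wc -l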

ovn-controller

  • The ovn-controller(8) component runs anywhere OVS runs, in any type of OVN installation.
  • The ovn-controller reads logical flows from the SB DB and implements them as OpenFlow flows on OVS.
  • ovn-controller also reports hardware information to the SB DB.
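
ovn-controller learns which SB DB to connect to, and how to encapsulate traffic, from keys in the local OVS database. A hedged example of checking that configuration on a node (the key names follow the ovn-controller documentation; the values are deployment-specific):

    # Where the local ovn-controller connects for the SB DB
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote

    # The encapsulation type and local endpoint this chassis advertises
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip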

OVN installation via Kube-OVN in Genestack

  • Genestack installs OVN via Kube-OVN "for" the Kubernetes cluster.
  • Genestack does not install OVN separately for OpenStack.
  • You should see the Kube-OVN architecture page for a more detailed explanation

ovn-central in Kube-OVN and Genestack

  • In Kube-OVN and Genestack, OVN central:
    • as described in the documentation, "runs the control plane components of OVN, including ovn-nb, ovn-sb, and ovn-northd."
    • runs in the kube-system namespace
    • runs on three pods:
      • with a name starting with ovn-central, and
      • labelled app=ovn-central
    • each pod runs one copy of each of the three OVN central components:
      • NB
      • SB
      • ovn-northd
    • so the informal name "OVN central" for these centralized components matches what you find running on the pods.
    • these pods get labelled according to which service each pod currently leads:

      • so you find one each of the following labels across the three pods:
        • ovn-nb-leader=true
        • ovn-northd-leader=true
        • ovn-sb-leader=true

      although one pod might have more than one of the labels.

      • these labels indicate which pod has the leader for the service in question.
      • With the Raft HA algorithm described previously, OVN should continue working normally when losing one of these pods.
      • Losing two of these pods, with one still running, should result in OVN working in read-only mode.
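
A hedged example of inspecting these pods and their leader labels with kubectl (pod names will differ per cluster):

    # The three ovn-central pods
    kubectl -n kube-system get pods -l app=ovn-central -o wide

    # Which pod currently holds the leader for each service
    kubectl -n kube-system get pods -l ovn-nb-leader=true
    kubectl -n kube-system get pods -l ovn-sb-leader=true
    kubectl -n kube-system get pods -l ovn-northd-leader=true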
      

kube-ovn-controller

  • kube-ovn-controller pods

    the linked documentation describes, in part (see the documentation link for further details):

    This component performs the translation of all resources within Kubernetes to OVN resources and acts as the control plane for the entire Kube-OVN system. The kube-ovn-controller listens for events on all resources related to network functionality and updates the logical network within the OVN based on resource changes.

kube-ovn-monitor

  • kube-ovn-monitor

    the linked documentation describes it:

    This component collects OVN status information and the monitoring metrics

per-node components

  • Each Kubernetes node has pods for OVN as well.

Components on all nodes

A number of components run on all nodes as DaemonSets.
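
A hedged way to see the per-node pieces is to list the Kube-OVN DaemonSets; in a typical Genestack/Kube-OVN deployment these include ovs-ovn, kube-ovn-cni, and kube-ovn-pinger, each scheduled onto every node.

    # Per-node OVN/OVS components installed by Kube-OVN as DaemonSets
    kubectl -n kube-system get daemonsets | grep -E 'ovs-ovn|kube-ovn'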

ovs-ovn pods
  • ovs-ovn pods

    the linked documentation describes it:

    ovs-ovn runs as a DaemonSet on each node, with openvswitch, ovsdb, and ovn-controller running inside the Pod. These components act as agents for ovn-central to translate logical flow tables into real network configurations

kube-ovn-cni pods
  • kube-ovn-cni pods:

    the linked documentation describes it, in part (see the documentation link for further details):

    This component runs on each node as a DaemonSet, implements the CNI interface, and operates the local OVS to configure the local network.

    This DaemonSet copies the kube-ovn binary to each machine as a tool for interaction between kubelet and kube-ovn-cni. This binary sends the corresponding CNI request to kube-ovn-cni for further operation. The binary will be copied to the /opt/cni/bin directory by default.

    kube-ovn-cni will configure the specific network to perform the appropriate traffic operations

    • see the documentation in full for more details
kube-ovn-pinger pods
  • kube-ovn-pinger pods

    the linked documentation describes it:

    This component is a DaemonSet running on each node to collect OVS status information, node network quality, network latency, etc. The monitoring metrics collected can be found in Metrics.

kube-ovn-speaker
  • Included only for completeness. Genestack does not use these by default.

ovn-metadata-agent on compute nodes only

  • ovn-metadata-agent pods:
    • run only on compute nodes
    • uniquely amongst all pods mentioned, Genestack installs these from the OpenStack Helm chart, and they don't come from Kube-OVN
    • These provide the metadata service associated with OpenStack Neutron for instances.

OVN and OpenStack

  • As mentioned above, networking-ovn serves as the OVN plugin for OpenStack Neutron.
    • This gets developed in Neutron as a first-party ML2 plugin for using OVN
  • To drive an OVN installation via networking-ovn, Neutron only requires:
    • NB DB connection information
    • SB DB connection information
  • Genestack supplies Neutron with the NB and SB DB connection information
    • So you find the OVN components installed for Kubernetes via Kube-OVN as described above, rather than a conventional OVN installation installed for and servicing OpenStack as its sole CMS.
  • Neutron has the ability to automatically repair the OVN database to match its own database
    • the neutron_sync_mode setting in neutron.conf (the conf.neutron.ovn.neutron_sync_mode override in Genestack or OpenStack Helm overrides) controls whether Neutron does this; a minimal configuration sketch follows this list.
    • Genestack turns neutron_sync_mode off because it doesn't work when you use a second CMS on the same OVN installation
      • presumably because Neutron can't assume everything in the NB should belong to it to modify; in particular, entries that appear extraneous from the perspective of Neutron may belong to another CMS
        • In particular, a fresh Genestack installation already shows 6 unknown ACLs when Neutron runs this check
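
A minimal configuration sketch of the Neutron side, assuming the standard ML2/OVN option names; the endpoints are placeholders (Genestack wires in the actual Kube-OVN database endpoints), and 6641/6642 are the conventional NB/SB ports.

    [ovn]
    # Placeholders: real endpoints come from the Kube-OVN deployment
    ovn_nb_connection = tcp:<nb-db-endpoint>:6641
    ovn_sb_connection = tcp:<sb-db-endpoint>:6642
    # Disabled in Genestack because Kubernetes shares the same OVN installation
    neutron_sync_mode = off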