Install Kube-OVN
The Kube-OVN project is a Kubernetes network plugin (CNI) that uses OVN as its network provider, offering a lightweight, scalable, and easy-to-use network solution for Kubernetes.
Prerequisites
The override values file for Kube-OVN can be found in /etc/genestack/helm-configs/kube-ovn/kube-ovn-helm-overrides.yaml
and should be set up before running the deployment. In a common production-ready setup, the only values that will
likely need to be defined are the network interfaces that Kube-OVN will bind to.
Example Kube-OVN Helm Overrides
In the example below, `IFACE` and `VLAN_INTERFACE_NAME` are the only values that need to be defined, and both are set to `br-overlay`. If you intend to enable hardware offloading, you will need to set `IFACE` to a physical interface that supports hardware offloading.
For a full review of all the available options, see the Kube-OVN base helm overrides file.
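As a sketch, a minimal override file might contain nothing more than the interface settings. The bond interface name `bond0` below is a hypothetical example; substitute the interface appropriate to your environment:

```yaml
# Hypothetical minimal override: only the bind interfaces are changed,
# here to an assumed bond interface named "bond0".
networking:
  IFACE: "bond0"
  vlan:
    VLAN_INTERFACE_NAME: "bond0"
```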
Example Kube-OVN Helm Overrides
``` yaml
# Default values for kubeovn.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
global:
  registry:
    address: ghcr.io/rackerlabs
  imagePullSecrets: []
  images:
    kubeovn:
      repository: kube-ovn
      dpdkRepository: kube-ovn-dpdk
      vpcRepository: vpc-nat-gateway
      # Change "tag" when PR https://github.com/kubeovn/kube-ovn/pull/5005 is merged
      tag: v1.12.32-gc-disable
      support_arm: true
      thirdparty: true

image:
  pullPolicy: IfNotPresent

replicaCount: 3
namespace: kube-system
MASTER_NODES: ""
MASTER_NODES_LABEL: "kube-ovn/role=master"

networking:
  # NET_STACK could be dual_stack, ipv4, ipv6
  NET_STACK: ipv4
  ENABLE_SSL: false
  # network type could be geneve or vlan
  NETWORK_TYPE: geneve
  # tunnel type could be geneve, vxlan or stt
  TUNNEL_TYPE: geneve
  IFACE: "br-overlay"
  DPDK_TUNNEL_IFACE: "br-phy"
  EXCLUDE_IPS: ""
  POD_NIC_TYPE: "veth-pair"
  vlan:
    PROVIDER_NAME: "provider"
    VLAN_INTERFACE_NAME: "br-overlay"
    VLAN_NAME: "ovn-vlan"
    VLAN_ID: "100"
  EXCHANGE_LINK_NAME: false
  ENABLE_EIP_SNAT: true
  DEFAULT_SUBNET: "ovn-default"
  DEFAULT_VPC: "ovn-cluster"
  NODE_SUBNET: "join"
  ENABLE_ECMP: false
  ENABLE_METRICS: true
  # comma-separated string of nodelocal DNS ip addresses
  NODE_LOCAL_DNS_IP: ""
  PROBE_INTERVAL: 180000
  OVN_NORTHD_PROBE_INTERVAL: 5000
  OVN_LEADER_PROBE_INTERVAL: 5
  OVN_REMOTE_PROBE_INTERVAL: 10000
  OVN_REMOTE_OPENFLOW_INTERVAL: 180
  OVN_NORTHD_N_THREADS: 1
  ENABLE_COMPACT: false

func:
  ENABLE_LB: true
  ENABLE_NP: true
  ENABLE_EXTERNAL_VPC: true
  HW_OFFLOAD: false
  ENABLE_LB_SVC: false
  ENABLE_KEEP_VM_IP: true
  LS_DNAT_MOD_DL_DST: true
  LS_CT_SKIP_DST_LPORT_IPS: true
  CHECK_GATEWAY: true
  LOGICAL_GATEWAY: false
  ENABLE_BIND_LOCAL_IP: true
  SECURE_SERVING: false
  U2O_INTERCONNECTION: false
  ENABLE_TPROXY: false
  ENABLE_IC: false
  ENABLE_NAT_GW: true
  ENABLE_OVN_IPSEC: false
  ENABLE_ANP: false
  SET_VXLAN_TX_OFF: false
  OVSDB_CON_TIMEOUT: 3
  OVSDB_INACTIVITY_TIMEOUT: 10
  ENABLE_LIVE_MIGRATION_OPTIMIZE: true

ipv4:
  POD_CIDR: "10.236.0.0/14"
  POD_GATEWAY: "10.236.0.1"
  SVC_CIDR: "10.233.0.0/18"
  JOIN_CIDR: "100.64.0.0/16"
  PINGER_EXTERNAL_ADDRESS: "208.67.222.222"
  PINGER_EXTERNAL_DOMAIN: "opendns.com."

ipv6:
  POD_CIDR: "fd00:10:16::/112"
  POD_GATEWAY: "fd00:10:16::1"
  SVC_CIDR: "fd00:10:96::/112"
  JOIN_CIDR: "fd00:100:64::/112"
  PINGER_EXTERNAL_ADDRESS: "2620:119:35::35"
  PINGER_EXTERNAL_DOMAIN: "opendns.com."

dual_stack:
  POD_CIDR: "10.236.0.0/14,fd00:10:16::/112"
  POD_GATEWAY: "10.236.0.1,fd00:10:16::1"
  SVC_CIDR: "10.233.0.0/18,fd00:10:96::/112"
  JOIN_CIDR: "100.64.0.0/16,fd00:100:64::/112"
  PINGER_EXTERNAL_ADDRESS: "208.67.222.222,2620:119:35::35"
  PINGER_EXTERNAL_DOMAIN: "opendns.com."

performance:
  GC_INTERVAL: 0
  INSPECT_INTERVAL: 20
  OVS_VSCTL_CONCURRENCY: 100

debug:
  ENABLE_MIRROR: false
  MIRROR_IFACE: "mirror0"

cni_conf:
  CNI_CONFIG_PRIORITY: "01"
  CNI_CONF_DIR: "/etc/cni/net.d"
  CNI_BIN_DIR: "/opt/cni/bin"
  CNI_CONF_FILE: "/kube-ovn/01-kube-ovn.conflist"
  LOCAL_BIN_DIR: "/usr/local/bin"
  MOUNT_LOCAL_BIN_DIR: false

kubelet_conf:
  KUBELET_DIR: "/var/lib/kubelet"

log_conf:
  LOG_DIR: "/var/log"

OPENVSWITCH_DIR: "/etc/origin/openvswitch"
OVN_DIR: "/etc/origin/ovn"
DISABLE_MODULES_MANAGEMENT: false

nameOverride: ""
fullnameOverride: ""

# hybrid dpdk
HYBRID_DPDK: false
HUGEPAGE_SIZE_TYPE: hugepages-2Mi # Default
HUGEPAGES: 1Gi

# DPDK
DPDK: false
DPDK_VERSION: "19.11"
DPDK_CPU: "1000m" # Default CPU configuration
DPDK_MEMORY: "2Gi" # Default Memory configuration

ovn-central:
  requests:
    cpu: "300m"
    memory: "200Mi"
  limits:
    cpu: "3"
    memory: "4Gi"

ovs-ovn:
  requests:
    cpu: "200m"
    memory: "200Mi"
  limits:
    cpu: "2"
    memory: "1000Mi"

kube-ovn-controller:
  requests:
    cpu: "200m"
    memory: "200Mi"
  limits:
    cpu: "1000m"
    memory: "1Gi"

kube-ovn-cni:
  requests:
    cpu: "100m"
    memory: "100Mi"
  limits:
    cpu: "1000m"
    memory: "1Gi"

kube-ovn-pinger:
  requests:
    cpu: "100m"
    memory: "100Mi"
  limits:
    cpu: "200m"
    memory: "400Mi"

kube-ovn-monitor:
  requests:
    cpu: "200m"
    memory: "200Mi"
  limits:
    cpu: "200m"
    memory: "200Mi"
```
Label Kube-OVN nodes
| key | type | value | notes |
|---|---|---|---|
| kube-ovn/role | str | master | Defines where the Kube-OVN masters will reside |
| ovn.kubernetes.io/ovs_dp_type | str | kernel | (Optional) Defines OVS DPDK mode |
Label all controllers as Kube-OVN control plane nodes
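The labels above can be applied with `kubectl`. The sketch below assumes your control-plane nodes carry the standard `node-role.kubernetes.io/control-plane` label; adjust the selector to match your environment.

``` shell
# Hypothetical selector: assumes control-plane nodes carry the standard
# node-role.kubernetes.io/control-plane label.
selector="node-role.kubernetes.io/control-plane"

# Skip gracefully where kubectl is unavailable.
if command -v kubectl >/dev/null 2>&1; then
  # Mark every control-plane node as a Kube-OVN master.
  kubectl label nodes -l "${selector}" kube-ovn/role=master --overwrite
  # (Optional) pin the OVS datapath type to kernel mode.
  kubectl label nodes -l "${selector}" ovn.kubernetes.io/ovs_dp_type=kernel --overwrite
fi
```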
Deployment
To run the Kube-OVN deployment, run the following script.

Run the Kube-OVN deployment script `/opt/genestack/bin/install-kube-ovn.sh`.
``` shell
#!/bin/bash
# shellcheck disable=SC2124,SC2145,SC2294

GLOBAL_OVERRIDES_DIR="/etc/genestack/helm-configs/global_overrides"
SERVICE_CONFIG_DIR="/etc/genestack/helm-configs/kube-ovn"
BASE_OVERRIDES="/opt/genestack/base-helm-configs/kube-ovn/kube-ovn-helm-overrides.yaml"
KUBE_OVN_VERSION="v1.12.30"

MASTER_NODES=$(kubectl get nodes -l kube-ovn/role=master -o json | jq -r '[.items[].status.addresses[] | select(.type == "InternalIP") | .address] | join(",")' | sed 's/,/\\,/g')
MASTER_NODE_COUNT=$(kubectl get nodes -l kube-ovn/role=master -o json | jq -r '.items[].status.addresses[] | select(.type=="InternalIP") | .address' | wc -l)

if [ "${MASTER_NODE_COUNT}" -eq 0 ]; then
  echo "No master nodes found"
  echo "Be sure to label your master nodes with kube-ovn/role=master before running this script"
  echo "Exiting"
  exit 1
fi

helm repo add kubeovn https://kubeovn.github.io/kube-ovn
helm repo update

HELM_CMD="helm upgrade --install kube-ovn kubeovn/kube-ovn \
  --version ${KUBE_OVN_VERSION} \
  --namespace=kube-system \
  --set MASTER_NODES=\"${MASTER_NODES}\" \
  --set replicaCount=${MASTER_NODE_COUNT}"

HELM_CMD+=" -f ${BASE_OVERRIDES}"

for dir in "$GLOBAL_OVERRIDES_DIR" "$SERVICE_CONFIG_DIR"; do
  if compgen -G "${dir}/*.yaml" > /dev/null; then
    for yaml_file in "${dir}"/*.yaml; do
      # Avoid re-adding the base override file if present in the service directory
      if [ "${yaml_file}" != "${BASE_OVERRIDES}" ]; then
        HELM_CMD+=" -f ${yaml_file}"
      fi
    done
  fi
done

HELM_CMD+=" $@"

echo "Executing Helm command:"
echo "${HELM_CMD}"
eval "${HELM_CMD}"
```
Deployment Verification
Once the script has completed, you can verify that Kube-OVN is functional by running the following command.
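One way to check the deployment is to list the Kube-OVN subnets (a sketch; assumes `kubectl` access to the cluster):

``` shell
# The default subnets we expect to see after deployment.
expected_subnets="join ovn-default"

# Skip gracefully where kubectl is unavailable.
if command -v kubectl >/dev/null 2>&1; then
  # The default "join" and "ovn-default" subnets should be present
  # with the CIDRs defined in the overrides file.
  kubectl get subnets
fi
echo "Expecting subnets: ${expected_subnets}"
```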
Output
```
NAME          PROVIDER   VPC           PROTOCOL   CIDR            PRIVATE   NAT     DEFAULT   GATEWAYTYPE   V4USED   V4AVAILABLE   V6USED   V6AVAILABLE   EXCLUDEIPS       U2OINTERCONNECTIONIP
join          ovn        ovn-cluster   IPv4       100.64.0.0/16   false     false   false     distributed   3        65530         0        0             ["100.64.0.1"]
ovn-default   ovn        ovn-cluster   IPv4       10.236.0.0/14   false     true    true      distributed   111      262030        0        0             ["10.236.0.1"]
```
Tip
After the deployment, and before going into production, it is highly recommended to review the Kube-OVN backup documentation in the operator's guide to set up your backups.
Upon successful deployment, the Kubernetes nodes should transition into a Ready
state. Validate that the nodes are ready by running the following command.
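A sketch of the node check (assumes `kubectl` access to the cluster):

``` shell
# The status each node should report once the CNI is up.
expected_status="Ready"

# Skip gracefully where kubectl is unavailable.
if command -v kubectl >/dev/null 2>&1; then
  # Every node should report a STATUS of "Ready".
  kubectl get nodes -o wide
fi
echo "Expecting node status: ${expected_status}"
```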