Read this topic to understand how to install a vMX instance in the OpenStack environment.

With vMX 14.1R4 and later, the VCP (Virtual Control Plane) and the VFP (Virtual Forwarding Plane) run as separate VMs. This is owing to the new distributed forwarding nature of vMX.

Preparing the OpenStack Environment to Install vMX

Make sure the openstackrc file is sourced before you run any OpenStack commands.

To prepare the OpenStack environment to install vMX, perform these tasks:

Creating the neutron Networks

You must create the neutron networks used by vMX before you start the vMX instance. The public network is the neutron network used for the management (fxp0) network. The WAN network is the neutron network on which the WAN interface for vMX is added.

To display the neutron network names, use the neutron net-list command.

Note:

You must identify and create the type of networks you need in your OpenStack configuration.

You can use these commands as one way to create the public network:

  • For example:
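    One possible sketch, using the legacy neutron CLI; the network name (public), the subnet values, and the use of --router:external for external reachability are placeholders and assumptions to adapt to your deployment:

    neutron net-create public --router:external
    neutron subnet-create public 10.92.0.0/24 --name public-subnet --gateway 10.92.0.1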

  • For virtio, you can use these commands as one way to createthe WAN network:

    For example:
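      A minimal sketch, assuming a VLAN provider network on physnet1; the network name, VLAN ID, and subnet are placeholders:

      neutron net-create vmx-wan-net --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1100
      neutron subnet-create vmx-wan-net 10.10.10.0/24 --name vmx-wan-subnet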

  • For SR-IOV, you can use these commands as one way to createthe WAN network:

    For example:
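      A minimal sketch, assuming a VLAN provider network on the SR-IOV physical network physnet2; the network name, VLAN ID, and subnet are placeholders:

      neutron net-create vmx-sriov-net --provider:network_type vlan --provider:physical_network physnet2 --provider:segmentation_id 1200
      neutron subnet-create vmx-sriov-net 10.20.20.0/24 --name vmx-sriov-subnet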

Preparing the Controller Node

Preparing the Controller Node for vMX

  1. Configure the controller node to enable Huge Pages and CPU affinity by editing the scheduler_default_filters parameter in the /etc/nova/nova.conf file. Make sure the following filters are present:
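    For illustration, a scheduler_default_filters value that includes the filters commonly needed for Huge Pages and CPU pinning (NUMATopologyFilter and AggregateInstanceExtraSpecsFilter) might look like this; your deployment may require a different set:

    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter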

    Restart the scheduler service with the command for your platform:

    • For Red Hat: systemctl restart openstack-nova-scheduler.service

    • For Ubuntu (starting with Junos OS Release 17.2R1): service nova-scheduler restart

  2. Update the default quotas.
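    For example, one way to raise the default quotas is with the nova quota-class-update command; the values here are arbitrary placeholders:

    nova quota-class-update --instances 64 --cores 256 --ram 512000 default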

    Configuring the Controller Node for SR-IOV Interfaces

    Note:

    If you have more than one SR-IOV interface, you need one dedicated physical 10G interface for each additional SR-IOV interface.

    Note:

    In SR-IOV mode, the communication between the Routing Engine (RE) and the Packet Forwarding Engine is enabled using virtio interfaces on a VLAN-provider OVS network. Because of this, a given physical interface cannot be part of both virtio and SR-IOV networks.

    To configure the SR-IOV interfaces:

    1. Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file to add sriovnicswitch as a mechanism driver and the VLAN ranges used for the physical network.

      For example, use the following setting to configure the VLAN ranges used for the physical network physnet2.
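      A sketch of the relevant settings in /etc/neutron/plugins/ml2/ml2_conf.ini; the VLAN range 1100:1500 is a placeholder, and your existing mechanism drivers may differ:

      [ml2]
      mechanism_drivers = openvswitch,sriovnicswitch

      [ml2_type_vlan]
      network_vlan_ranges = physnet2:1100:1500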

      If you add more SR-IOV ports, you must add the VLAN range used for each physical network (separated by a comma). For example, use the following setting when configuring two SR-IOV ports.
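      For instance, with a second physical network physnet3 (the VLAN ranges are again placeholders):

      network_vlan_ranges = physnet2:1100:1500,physnet3:2100:2500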

    2. Edit the /etc/neutron/plugins/ml2/ml2_conf_sriov.ini file to add details about PCI devices.
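      One possible entry is shown below, where 8086:10ed is the PCI vendor:device ID of an Intel 82599 virtual function; substitute the ID that matches your NIC:

      [ml2_sriov]
      supported_pci_vendor_devs = 8086:10ed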
    3. Add the --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini option to the neutron server service file.
      • For Red Hat:

        Edit the /usr/lib/systemd/system/neutron-server.service file to add this option.

        Use the systemctl restart neutron-server commandto restart the service.

      • For Ubuntu (starting with Junos OS Release 17.2R1):

        Edit the /etc/init/neutron-server.conf file to add this option.

        Use the service neutron-server restart command to restart the service.
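      In both cases, the goal is for the neutron-server process to start with the additional configuration file. As an illustrative sketch only (the exact ExecStart or exec line differs by distribution and release), the list of options ends up looking something like:

      --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini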

    4. To allow proper scheduling of SR-IOV devices, the compute scheduler must use the FilterScheduler with the PciPassthroughFilter filter.

      Make sure the PciPassthroughFilter filter is configured in the /etc/nova/nova.conf file on the controller node.
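      For example, append PciPassthroughFilter to the existing scheduler_default_filters value; the rest of the list shown here is only illustrative:

      scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter,PciPassthroughFilter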

      Restart the scheduler service.

      • For Red Hat: systemctl restart openstack-nova-scheduler

      • For Ubuntu (starting with Junos OS Release 17.2R1): service nova-scheduler restart

    Preparing the Compute Nodes

    Preparing the Compute Node for vMX

    Note:

    You no longer need to configure the compute node to pass metadata to the vMX instances by including the config_drive_format=vfat parameter in the /etc/nova/nova.conf file.

    To prepare the compute node:

    1. Configure each compute node to support Huge Pages at boot time and reboot.
      • For Red Hat: Add the Huge Pages configuration.

        Use the mount | grep boot command to determine the boot device name.

      • For Ubuntu (starting with Junos OS Release 17.2R1): Add the Huge Pages configuration to /etc/default/grub under the GRUB_CMDLINE_LINUX_DEFAULT parameter.
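      On either platform, the result is that the kernel boots with Huge Pages parameters. A sketch of the Ubuntu form, using the 2M page size and the page count from the example below as placeholder values:

      GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=2M hugepagesz=2M hugepages=24576"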

      After the reboot, verify that Huge Pages are allocated.
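      For example, one way to check the allocation:

      cat /proc/meminfo | grep -i HugePages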

      The number of Huge Pages depends on the amount of memory for the VFP, the size of Huge Pages, and the number of VFP instances. To calculate the number of Huge Pages: (memory-for-vfp / huge-pages-size) * number-of-vfp

      For example, if you run four vMX instances (four VFPs) in performance mode using 12G of memory and a 2M Huge Pages size, then the number of Huge Pages as calculated by the formula is (12G/2M)*4, or 24576.

      Note:

      Ensure that you have enough physical memory on the compute node. It must be greater than the amount of memory allocated to Huge Pages, because any applications that do not use Huge Pages are limited to the amount of memory remaining after the Huge Pages allocation. For example, if you allocate 24576 Huge Pages with a 2M Huge Pages size, you need 24576*2M, or 48G, of memory for Huge Pages.

      You can use the vmstat -s command and look at the total memory and used memory values to verify how much memory is left for other applications that do not use Huge Pages.

    2. Enable IOMMU in the /etc/default/grub file. Append the intel_iommu=on string to any existing text for the GRUB_CMDLINE_LINUX parameter.
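      For example (the existing arguments shown here are placeholders; preserve whatever already appears on your system):

      GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet intel_iommu=on"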

      Regenerate the grub file.

      • For Red Hat: grub2-mkconfig -o /boot/grub2/grub.cfg

      • For Ubuntu (starting with Junos OS Release 17.2R1): update-grub

      Reboot the compute node.

    3. Add a bridge for the virtio network, and configure physnet1:

      For example, an OVS bridge named br-vlan is added. (This is the same br-vlan that was added in bridge_mappings in ml2_conf.ini on the controller; see Configuring the Controller Node for virtio Interfaces.) To this bridge, add the eth2 interface, which can be used for virtio communication between VMs.
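      A minimal sketch of adding the bridge and port with ovs-vsctl:

      ovs-vsctl add-br br-vlan
      ovs-vsctl add-port br-vlan eth2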

      In /etc/neutron/plugins/ml2/openvswitch_agent.ini, append the physnet1:br-vlan string to the bridge mappings:
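      For example, in the [ovs] section (append to any existing mappings, separated by a comma):

      [ovs]
      bridge_mappings = physnet1:br-vlan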

      Restart the neutron and nova compute services.

      • For Red Hat:

        systemctl restart neutron-openvswitch-agent.service

        systemctl restart openstack-nova-compute.service

      • For Ubuntu:

        service nova-compute restart

        service neutron-plugin-openvswitch-agent restart

    Configuring the Compute Node for SR-IOV Interfaces

    Note:

    If you have more than one SR-IOV interface, you need one physical 10G Ethernet NIC for each additional SR-IOV interface.

    To configure the SR-IOV interfaces:

    1. Load the modified IXGBE driver.

      Before compiling the driver, make sure gcc and make are installed.

      • For Red Hat:

      • For Ubuntu (starting with Junos OS Release 17.2R1):
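      For example, one way to install the build tools:

      yum install gcc make          # Red Hat
      apt-get install gcc make      # Ubuntu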

      Unload the default IXGBE driver, compile the modified Juniper Networks driver, and load the modified IXGBE driver.
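      A rough sketch of the sequence, assuming the modified driver source has been extracted to a directory represented here by the hypothetical path modified-ixgbe-driver/src; the driver is then loaded with the VF parameters shown in the next step:

      rmmod ixgbe                      # unload the default ixgbe driver
      cd modified-ixgbe-driver/src     # hypothetical path to the modified driver source
      make && make install             # compile and install the modified driver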

      Verify the driver version on the eth4 interface.

      For example, in the following sample, the command displays driver version (3.19.1):
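      One way to display the driver version is with ethtool; the output shown is illustrative:

      ethtool -i eth4
      driver: ixgbe
      version: 3.19.1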

    2. Create the virtual function (VF) on the physical device. vMX currently supports only one VF for each SR-IOV interface (for example, eth4).

      Specify the number of VFs on each NIC. The following line specifies that there is no VF for eth2 (the first NIC) and one VF for eth4 (the second NIC, with the SR-IOV interface).
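      As a hedged sketch, the per-NIC VF count is typically passed through the driver's max_vfs module parameter when the modified driver is loaded; the exact mechanism may differ in your setup:

      modprobe ixgbe max_vfs=0,1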

      To verify that the VF was created, the output of the ip link show eth4 command includes the following line:
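      An illustrative output line (the MAC address and flags will differ):

      vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto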

      To make sure that the interfaces are up and SR-IOV traffic can pass through them, execute these commands to complete the configuration.
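      A sketch of the kind of commands involved, assuming eth4 is the SR-IOV interface; the jumbo-frame MTU of 9192 is an example value:

      ip link set dev eth4 up
      ip link set dev eth4 promisc on
      ip link set dev eth4 mtu 9192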

    3. Install the SR-IOV agent.
      • For Red Hat: sudo yum install openstack-neutron-sriov-nic-agent

      • For Ubuntu (starting with Junos OS Release 17.2R1): sudo apt-get install neutron-plugin-sriov-agent

    4. Add the physical device mapping to the /etc/neutron/plugins/ml2/sriov_agent.ini file by adding the following line:

      For example, use the following setting to add a bridge mapping for the physical network physnet2 mapped to the SR-IOV interface eth4.
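      One way to express this setting, which lives in the [sriov_nic] section:

      [sriov_nic]
      physical_device_mappings = physnet2:eth4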

      If you add more SR-IOV ports, you must add the bridge mapping for each physical network (separated by a comma). For example, use the following setting when adding SR-IOV interface eth5 for physical network physnet3.
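      For instance:

      physical_device_mappings = physnet2:eth4,physnet3:eth5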

    5. Edit the SR-IOV agent service file to add the --config-file /etc/neutron/plugins/ml2/sriov_agent.ini option.
      • For Red Hat:

        Edit the /usr/lib/systemd/system/neutron-sriov-nic-agent.service file to add this option.

        Enable and start the SR-IOV agent.

        Use the systemctl status neutron-sriov-nic-agent.service command to verify that the agent has started successfully.

      • For Ubuntu (starting with Junos OS Release 17.2R1):

        Edit the /etc/init/neutron-plugin-sriov-agent.conf file to add this option.

        Make sure that /etc/neutron/plugins/ml2/sriov_agent.ini has the correct permissions and that neutron is the group owner of the file.

        Use the service neutron-plugin-sriov-agent start command to start the SR-IOV agent.

        Use the service neutron-plugin-sriov-agent status command to verify that the agent has started successfully.

    6. Edit the /etc/nova/nova.conf file to add the PCI passthrough allowlist entry for the SR-IOV device.

      For example, this setting adds an entry for the SR-IOV interface eth4 for the physical network physnet2.
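      A sketch of the entry; in these releases the nova.conf option is named pci_passthrough_whitelist:

      pci_passthrough_whitelist = {"devname": "eth4", "physical_network": "physnet2"}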

      If you add more SR-IOV ports, you must add the PCI passthrough allowlist entry for each SR-IOV interface (separated by a comma). For example, use the following setting when adding SR-IOV interface eth5 for physical network physnet3.
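      For instance, as a comma-separated list:

      pci_passthrough_whitelist = [{"devname": "eth4", "physical_network": "physnet2"}, {"devname": "eth5", "physical_network": "physnet3"}]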

      Restart the compute node service.

      • For Red Hat: systemctl restart openstack-nova-compute

      • For Ubuntu (starting with Junos OS Release 17.2R1): service nova-compute restart

    Installing vMX

    After preparing the OpenStack environment, you must create nova flavors and glance images for the VCP and VFP VMs. Scripts create the flavors and images based on information provided in the startup configuration file.

    Setting Up the vMX Configuration File

    The parameters required to configure vMX are defined in the startup configuration file.

    To set up the configuration file:

    1. Download the vMX KVM software package from the vMX page and uncompress the package.
    2. Change directory to the location of the files.

      cd package-location/openstack/scripts

    3. Edit the vmx.conf text file with a text editor to create the flavors for a single vMX instance.

      Based on your requirements, ensure the following parameters are set properly in the vMX configuration file:

      • re-flavor-name

      • pfe-flavor-name

      • vcpus

      • memory-mb

      See Specifying vMX Configuration File Parameters for information about the parameters.

      Sample vMX Startup Configuration File

      Here is a sample vMX startup configuration file for OpenStack:
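      A sketch of what such a file can look like; the compute node name, flavor names, and vCPU and memory values are placeholders, additional parameters may be present, and the exact layout may differ between releases:

      HOST:
          virtualization-type : "openstack"
          compute             : "compute1"

      CONTROL_PLANE:
          re-flavor-name      : "re-flavor"
          vcpus               : 1

      FORWARDING_PLANE:
          pfe-flavor-name     : "pfe-flavor"
          vcpus               : 7
          memory-mb           : 12288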

    Specifying vMX Configuration File Parameters

    The parameters required to configure vMX are defined in the startup configuration file (scripts/vmx.conf). The startup configuration file generates a file that is used to create flavors. To create new flavors with different vcpus or memory-mb parameters, you must change the corresponding re-flavor-name or pfe-flavor-name parameter before creating the new flavors.

    To customize the configuration, perform these tasks:

    Configuring the Host

    To configure the host, navigate to HOST and specify the following parameters:

    • virtualization-type—Mode of operation; must be openstack.

    • compute—(Optional) Names of the compute nodes on which to run vMX instances, in a comma-separated list. If this parameter is specified, each entry must be a valid compute node, and vMX instances launched with the flavors run only on the specified compute nodes.

      If this parameter is not specified, the output of the nova hypervisor-list command provides the list of compute nodes on which to run vMX instances.

    Configuring the VCP VM

    To configure the VCP VM, you must provide the flavor name.

    Note:

    We recommend unique values for the re-flavor-name parameter because OpenStack can create multiple entries with the same name.

    To configure the VCP VM, navigate to CONTROL_PLANE and specify the following parameters:

    • re-flavor-name—Name of the nova flavor.

    • vcpus—Number of vCPUs for the VCP; minimum is 1.

      Note:

      If you change this value, you must change the re-flavor-name value before running the script to create flavors.

    Configuring the VFP VM

    To configure the VFP VM, you must provide the flavor name. Based on your requirements, you might want to change the memory and number of vCPUs. See Minimum Hardware Requirements for minimum hardware requirements.

    To configure the VFP VM, navigate to FORWARDING_PLANE and specify the following parameters:

    • pfe-flavor-name—Name of the nova flavor.

    • memory-mb—Amount of memory for the VFP; minimum is 12 GB (performance mode) or 4 GB (lite mode).

      Note:

      If you specify fewer than 7 vCPUs, the VFP automatically switches to lite mode.

      Note:

      You must specify the following parameters, in this order, when you run the vmx_osp_images.sh script to install the vMX images:

      • vcp-image-name—Name of the glance image.

      • vcp-image-location—Absolute path to the junos-vmx-x86-64*.qcow2 file for launching VCP.

      • vfp-image-name—Name of the glance image.

      • vfp-image-location—Absolute path to the vFPC-*.img file for launching VFP.

For example, this command installs the VCP image as re-test from the /var/tmp/junos-vmx-x86-64-17.1R1.8.qcow2 file and the VFP image as fpc-test from the /var/tmp/vFPC-20170117.img file.

sh vmx_osp_images.sh re-test /var/tmp/junos-vmx-x86-64-17.1R1.8.qcow2 fpc-test /var/tmp/vFPC-20170117.img

To view the glance images, use the glance image-list command.

Starting a vMX Instance

Modifying Initial Junos OS Configuration

When you start the vMX instance, the Junos OS configuration file found in package-location/openstack/vmx-components/vms/vmx_baseline.conf is loaded. If you need to change this configuration, make any changes in this file before starting the vMX instance.

Note:

If you create your own vmx_baseline.conf file or move the file, make sure that the package-location/openstack/vmx-components/vms/re.yaml file references the correct path.

Launching the vMX Instance

  1. Modify these parameters in the package-location/openstack/1vmx.env environment file for your configuration. The environment file is in YAML format starting in Junos OS Release 17.4R1. (A sketch of such an environment file follows the parameter list.)
    • net_id1—Network ID of the existing neutron network used for the WAN port. Use the neutron net-list command to display the network ID.

    • public_network—Network ID of the existing neutron network used for the management (fxp0) port. Use the neutron net-list | grep public command to display the network ID.

    • fpc_img—Change this parameter to linux-img. Name of the glance image for the VFP; same as the vfp-image-name parameter specified when running the script to install the vMX images.

    • vfp_image—Name of the glance image for the VFP; same as the vfp-image-name parameter specified when running the script to install the vMX images (applicable for Junos OS Releases 17.3R1 and earlier).

    • fpc_flav—Change this parameter to linux-flav. Name of the nova flavor for the VFP; same as the pfe-flavor-name parameter specified in the vMX configuration file.

    • vfp_flavor—Name of the nova flavor for the VFP; same as the pfe-flavor-name parameter specified in the vMX configuration file (applicable for Junos OS Releases 17.3R1 and earlier).

    • junos_flav—Name of the nova flavor for the VCP; same as the re-flavor-name parameter specified in the vMX configuration file.

    • vcp_flavor—Name of the nova flavor for the VCP; same as the re-flavor-name parameter specified in the vMX configuration file (applicable for Junos OS Releases 17.3R1 and earlier).

    • junos_img—Name of the glance image for the VCP; same as the vcp-image-name parameter specified when running the script to install the vMX images.

    • vcp_image—Name of the glance image for the VCP; same as the vcp-image-name parameter specified when running the script to install the vMX images (applicable for Junos OS Releases 17.3R1 and earlier).

    • project_name—Any project name. All resources will use this name as the prefix.

    • gateway_ip—Gateway IP address.
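    As a rough sketch of a 17.4R1-style environment file using the parameter names above; the parameters: structure and all values shown are assumptions and placeholders to adapt:

    parameters:
      net_id1: <WAN-network-ID>
      public_network: <management-network-ID>
      fpc_img: <VFP-glance-image-name>
      fpc_flav: <VFP-nova-flavor-name>
      junos_img: <VCP-glance-image-name>
      junos_flav: <VCP-nova-flavor-name>
      project_name: <project-name>
      gateway_ip: <gateway-IP-address>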

  2. Start the vMX instance with the heat stack-create -f 1vmx.yaml -e 1vmx.env vmx-name command.

    This sample configuration starts a single vMX instance withone WAN port and one FPC.

  3. Verify that the vMX instance is created with the heat stack-list | grep vmx-name command.
  4. Verify that the VCP and VFP VMs exist with the nova list command.
  5. Access the VCP or the VFP VM with the nova get-vnc-console nova-id novnc command, where nova-id is the ID of the instance displayed in the nova list command output.
Note:

You must shut down the vMX instance by using the request system halt command before you reboot the host server.
