These instructions are DEPRECATED as of the Ocata release and will be removed in a future release.

Virtual Environment

TripleO can be used in a virtual environment using virtual machines instead of actual baremetal. However, one baremetal machine is still needed to act as the host for the virtual machines.


Virtual deployments with TripleO are for development and testing purposes only. This method cannot be used for production-ready deployments.

Minimum System Requirements

By default, this setup creates 3 virtual machines:

  • 1 Undercloud
  • 1 Overcloud Controller
  • 1 Overcloud Compute

Each virtual machine needs at least 8 GB of RAM and 40 GB of disk space [1].


The virtual machine disk files are thinly provisioned and will not take up the full 40 GB initially.

The baremetal machine should meet the following minimum system requirements:

  • Virtualization hardware extensions enabled (nested KVM is not supported)
  • 1 quad core CPU
  • 32 GB free memory
  • 240 GB disk space
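
A quick way to sanity-check these requirements on the host (a minimal sketch; a non-zero count from the first command means the CPU exposes the Intel/AMD virtualization extensions):

egrep -c '(vmx|svm)' /proc/cpuinfo   # hardware virtualization flags
nproc                                # CPU core count
free -g                              # memory in GiB
df -h /                              # disk space (also check the path used for libvirt images)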

TripleO currently supports the following operating systems:

  • RHEL 7.1 x86_64 or
  • CentOS 7 x86_64

Preparing the Virtual Environment (Automated)

  1. Install RHEL 7.1 Server x86_64 or CentOS 7 x86_64 on your host machine.

RHEL Portal Registration

Register the host machine using Subscription Management:

sudo subscription-manager register --username="[your username]" --password="[your password]"
# Find this with `subscription-manager list --available`
sudo subscription-manager attach --pool="[pool id]"
# Verify repositories are available
sudo subscription-manager repos --list
# Enable repositories needed
sudo subscription-manager repos --enable=rhel-7-server-rpms \
     --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms

RHEL Satellite Registration

To register the host machine to a Satellite, the following repos (the same ones enabled in the Portal registration above) must be synchronized on the Satellite:

  • rhel-7-server-rpms
  • rhel-7-server-optional-rpms
  • rhel-7-server-extras-rpms
See the Red Hat Satellite User Guide for how to configure the system to register with a Satellite server. It is suggested to use an activation key that automatically enables the above repos for registered systems.
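
For reference, a minimal sketch of registering a host against a Satellite with an activation key (the Satellite hostname, organization label and key name below are placeholders; the katello-ca-consumer RPM path is the conventional Satellite 6 location):

# Install the Satellite's CA consumer RPM so subscription-manager trusts the Satellite
sudo rpm -Uvh http://[satellite-hostname]/pub/katello-ca-consumer-latest.noarch.rpm
# Register using an activation key that enables the repos listed above
sudo subscription-manager register --org="[satellite org]" --activationkey="[activation key]"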

  2. Make sure the sshd service is installed and running.
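
    • Example commands to install and start sshd (a minimal sketch for a RHEL/CentOS 7 host, using the standard openssh-server package):

      sudo yum install -y openssh-server
      sudo systemctl start sshd
      sudo systemctl enable sshd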

  3. The user performing all of the installation steps on the virt host needs to have sudo enabled. You can use an existing user or use the following commands to create a new user called stack with password-less sudo enabled. Do not run the rest of the steps in this guide as root.

    • Example commands to create a user:

      sudo useradd stack
      sudo passwd stack  # specify a password
      echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
      sudo chmod 0440 /etc/sudoers.d/stack
  4. Make sure you are logged in as the non-root user you intend to use.

    • Example command to log in as the non-root user:

      su - stack
  5. Enable needed repositories:

Stable Branch

Enable the appropriate repos for the desired release, as indicated below. Do not enable any other repos not explicitly marked for that release.


Mitaka

Enable the latest RDO Mitaka Delorean repository for all packages:

sudo curl -L -o /etc/yum.repos.d/delorean-mitaka.repo https://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo

Enable the Mitaka Delorean Deps repository

sudo curl -L -o /etc/yum.repos.d/delorean-deps-mitaka.repo https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo


Enable the CentOS Storage SIG Ceph/Hammer repository if using Ceph

sudo yum -y install --enablerepo=extras centos-release-ceph-hammer
sudo sed -i -e 's%gpgcheck=.*%gpgcheck=0%' /etc/yum.repos.d/CentOS-Ceph-Hammer.repo


Newton

Enable the latest RDO Newton Delorean repository for all packages:

sudo curl -L -o /etc/yum.repos.d/delorean-newton.repo https://trunk.rdoproject.org/centos7-newton/current/delorean.repo

Enable the Newton Delorean Deps repository

sudo curl -L -o /etc/yum.repos.d/delorean-deps-newton.repo https://trunk.rdoproject.org/centos7-newton/delorean-deps.repo


Enable the CentOS Storage SIG Ceph/Jewel repository if using Ceph

sudo yum -y install --enablerepo=extras centos-release-ceph-jewel
sudo sed -i -e 's%gpgcheck=.*%gpgcheck=0%' /etc/yum.repos.d/CentOS-Ceph-Jewel.repo


Ocata

Enable the latest RDO Ocata Delorean repository for all packages:

sudo curl -L -o /etc/yum.repos.d/delorean-ocata.repo https://trunk.rdoproject.org/centos7-ocata/current/delorean.repo

Enable the Ocata Delorean Deps repository

sudo curl -L -o /etc/yum.repos.d/delorean-deps-ocata.repo https://trunk.rdoproject.org/centos7-ocata/delorean-deps.repo


Enable the CentOS Storage SIG Ceph/Jewel repository if using Ceph

sudo yum -y install --enablerepo=extras centos-release-ceph-jewel
sudo sed -i -e 's%gpgcheck=.*%gpgcheck=0%' /etc/yum.repos.d/CentOS-Ceph-Jewel.repo

Current Development

Enable the last known good RDO Trunk Delorean repository for core OpenStack packages:

sudo curl -L -o /etc/yum.repos.d/delorean.repo https://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/delorean.repo

Enable latest RDO Trunk Delorean repository only for the TripleO packages

sudo curl -L -o /etc/yum.repos.d/delorean-current.repo https://trunk.rdoproject.org/centos7/current/delorean.repo
sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
# Restrict this repo to the TripleO packages via an includepkgs line
# (the list below reproduces the one from the RDO/TripleO docs; adjust as needed)
sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo

includepkgs=diskimage-builder,instack,instack-undercloud,os-apply-config,os-collect-config,os-net-config,os-refresh-config,python-tripleoclient,openstack-tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tripleo,openstack-tripleo-puppet-elements,openstack-puppet-modules
EOF"


Enable the Delorean Deps repository

sudo curl -L -o /etc/yum.repos.d/delorean-deps.repo https://trunk.rdoproject.org/centos7/delorean-deps.repo


Enable the CentOS Storage SIG Ceph/Jewel repository if using Ceph

sudo yum -y install --enablerepo=extras centos-release-ceph-jewel
sudo sed -i -e 's%gpgcheck=.*%gpgcheck=0%' /etc/yum.repos.d/CentOS-Ceph-Jewel.repo
  6. Install instack-undercloud:

    sudo yum install -y instack-undercloud
  7. The virt setup automatically sets up a VM for the Undercloud, installed with the same base OS as the host. See the Note below to choose a different OS:


    To set up the undercloud VM with a base OS different from the host, set the $NODE_DIST environment variable prior to running instack-virt-setup:

    For CentOS:

    export NODE_DIST=centos7

    For RHEL:

    export NODE_DIST=rhel7
  8. Run the script to set up your virtual environment:

    instack-virt-setup

By default, 2 overcloud VMs will be created, each with 1 vCPU, 6144 MiB of RAM and 40 GB of disk. To adjust those values set the following:

export NODE_COUNT=2
export NODE_CPU=1
export NODE_MEM=6144
export NODE_DISK=40

The undercloud VM will be created with 4 vCPUs, 8192 MiB of RAM and 30 GB of disk by default. To adjust those values, set the corresponding environment variables, as in the sketch below.
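
A minimal sketch of the overrides, assuming instack-virt-setup reads UNDERCLOUD_NODE_CPU, UNDERCLOUD_NODE_MEM and UNDERCLOUD_NODE_DISK (variable names assumed by analogy with the NODE_* variables above; check the script if they differ):

export UNDERCLOUD_NODE_CPU=4     # vCPUs (assumed variable name)
export UNDERCLOUD_NODE_MEM=8192  # RAM in MiB (assumed variable name)
export UNDERCLOUD_NODE_DISK=30   # disk in GB (assumed variable name)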



Download the RHEL 7.1 cloud image or copy it over from a different location, for example: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.1/x86_64/product-downloads, and define the needed environment variables for RHEL 7.1 prior to running instack-virt-setup:

export DIB_LOCAL_IMAGE=rhel-guest-image-7.1-20150224.0.x86_64.qcow2

RHEL Portal Registration

To register the Undercloud VM to the Red Hat Portal, define the following variables:

export REG_METHOD=portal
export REG_USER="[your username]"
export REG_PASSWORD="[your password]"
# Find this with `sudo subscription-manager list --available`
export REG_POOL_ID="[pool id]"
export REG_REPOS="rhel-7-server-rpms rhel-7-server-extras-rpms rhel-ha-for-rhel-7-server-rpms \
       rhel-7-server-optional-rpms rhel-7-server-openstack-6.0-rpms"

RHEL Satellite Registration

To register the Undercloud VM to a Satellite, define the following variables. Only registration via an activation key is supported when registering to a Satellite; username/password registration is not supported, for security reasons. The activation key must enable the repos shown:

export REG_METHOD=satellite
# REG_SAT_URL should be in the format of:
# http://<satellite-hostname>
export REG_SAT_URL="[satellite url]"
export REG_ORG="[satellite org]"
# Activation key must enable these repos:
# rhel-7-server-rpms
# rhel-7-server-optional-rpms
# rhel-7-server-extras-rpms
# rhel-7-server-openstack-6.0-rpms
export REG_ACTIVATION_KEY="[activation key]"


To use Ceph you will need at least one additional virtual machine to be provisioned as a Ceph OSD. Set the NODE_COUNT variable to 3 (from the default of 2) so that the overcloud gets exactly one more node:

export NODE_COUNT=3


The TESTENV_ARGS environment variable can be used to customize the virtual environment configuration. For example, it could be used to enable additional networks as follows:

export TESTENV_ARGS="--baremetal-bridge-names 'brbm brbm1 brbm2'"


The LIBVIRT_VOL_POOL and LIBVIRT_VOL_POOL_TARGET environment variables govern the name and location respectively for the storage pool used by libvirt. The defaults are the ‘default’ pool with target /var/lib/libvirt/images/. These variables are useful if your current partitioning scheme results in insufficient space for running any useful number of vms (see the Minimum Requirements):

# you can check the space available to the default location like
df -h  /var/lib/libvirt/images

# If you wish to specify an alternative pool name:
export LIBVIRT_VOL_POOL=tripleo
# If you want to specify an alternative target
export LIBVIRT_VOL_POOL_TARGET=/home/vm_storage_pool

If you don’t have a ‘default’ pool defined at all, setting the target is sufficient as the default will be created with your specified target (and directories created as necessary). It isn’t possible to change the target for an existing volume pool with this method, so if you already have a ‘default’ pool and cannot remove it, you should also specify a new pool name to be created.
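
To see which storage pools libvirt already has defined (and where the 'default' pool points) before overriding these variables, the standard virsh pool commands can be used:

# List all storage pools known to the libvirt system instance
sudo virsh pool-list --all
# Show the definition (including the target path) of the 'default' pool, if it exists
sudo virsh pool-dumpxml default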


If the script encounters problems, see Troubleshooting instack-virt-setup Failures.

When the script has completed successfully it will output the IP address of the instack vm that has now been installed with a base OS.

Running sudo virsh list --all [2] will show that you now have one virtual machine called instack and two called baremetal[0-1].

You can ssh to the instack vm as the root user:

ssh root@<instack-vm-ip>

The vm contains a stack user to be used for installing the undercloud. You can su - stack to switch to the stack user account.

Continue with Undercloud Installation.


[1] Note that some default partitioning schemes may not provide enough space to the partition containing the default path for libvirt image storage (/var/lib/libvirt/images). The easiest fix is to export the LIBVIRT_VOL_POOL_TARGET and LIBVIRT_VOL_POOL parameters in your environment prior to running instack-virt-setup above (see note there). Alternatively you can just customize the partition layout at the time of install to provide at least 200 GB of space for that path.
[2] The libvirt virtual machines have been defined under the system instance (qemu:///system). The user account executing these instructions gets added to the libvirtd group, which grants passwordless access to the system instance. It does however require logging into a new shell (or desktop environment session if wanting to use virt-manager) before this change will be fully applied. To avoid having to re-login, you can use sudo virsh.