Get to know your cloud

Let’s take a look at the various resources in your OpenStack private cloud from Cisco Metapod.

As a cloud admin, we figure you are comfortable with the command-line clients. On Mac OS X, we recommend using brew to install Python and then installing the clients with pip inside a virtual environment to protect your system Python. Refer to Install the OpenStack command line clients on Windows or Install the OpenStack command line clients on Mac OS X for details.

The OpenStack CLI is a common client that many services are moving towards. Here are some commands to help you learn the shape of your cloud; they should also come in handy when you troubleshoot with the Metapod team.

Your credentials are a combination of username, password, and project (previously named tenant). You can extract these values from the openrc.sh file downloaded from the Project > Access & Security > API Access tab. Source those credentials in your environment before running these commands.
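
For example, assuming the downloaded file is named openrc.sh, you can source it and then confirm that your credentials work by requesting a token:

$ source openrc.sh
$ openstack token issue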

Look at your OpenStack service catalog:

$ openstack catalog list

You can see a list of the installed services, the type of service each provides, and the endpoints, including the publicURL, internalURL, and adminURL. The long ID appended to some of the endpoints is the project (tenant) ID.
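
If you only need the endpoints for one service, you can also show a single catalog entry; for example, for the Compute service (assuming it is registered under the name nova):

$ openstack catalog show nova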

How’s your control plane?

As an administrator, you have a few ways to discover what your OpenStack cloud looks like simply by using the OpenStack tools available. This section gives you an idea of how to get an overview of your cloud, its shape, size, and current state.

First, you can discover what servers belong to your OpenStack cloud by running:

# nova service-list

The output looks like the following:

    +----+------------------+------------------------------------+----------+---------+-------+----------------------------+-----------------+
    | Id | Binary           | Host                               | Zone     | Status  | State | Updated_at                 | Disabled Reason |
    +----+------------------+------------------------------------+----------+---------+-------+----------------------------+-----------------+
    | 1  | nova-conductor   | mcp3.exampl1.le.metacloud.in        | internal | enabled | up    | 2016-05-04T20:02:37.000000 | -               |
    | 2  | nova-conductor   | mcp2.exampl1.le.metacloud.in        | internal | enabled | up    | 2016-05-04T20:02:36.000000 | -               |
    | 3  | nova-conductor   | mcp1.exampl1.le.metacloud.in        | internal | enabled | up    | 2016-05-04T20:02:36.000000 | -               |
    | 6  | nova-consoleauth | mcp3.exampl1.le.metacloud.in        | internal | enabled | up    | 2016-05-04T20:02:36.000000 | -               |
    | 7  | nova-consoleauth | mcp1.exampl1.le.metacloud.in        | internal | enabled | up    | 2016-05-04T20:02:32.000000 | -               |
    | 8  | nova-scheduler   | mcp1.exampl1.le.metacloud.in        | internal | enabled | up    | 2016-05-04T20:02:34.000000 | -               |
    | 9  | nova-consoleauth | mcp2.exampl1.le.metacloud.in        | internal | enabled | up    | 2016-05-04T20:02:32.000000 | -               |
    | 10 | nova-scheduler   | mcp2.exampl1.le.metacloud.in        | internal | enabled | up    | 2016-05-04T20:02:35.000000 | -               |
    | 11 | nova-scheduler   | mcp3.exampl1.le.metacloud.in        | internal | enabled | up    | 2016-05-04T20:02:37.000000 | -               |
    | 12 | nova-compute     | mhv2.exampl1.le.metacloud.in        | trial6   | enabled | up    | 2016-05-04T20:02:37.000000 | None            |
    | 13 | nova-compute     | mhv1.exampl1.le.metacloud.in        | trial6   | enabled | up    | 2016-05-04T20:02:30.000000 | None            |
    | 16 | nova-compute     | mhv6.exampl1.le.metacloud.in        | trial6   | enabled | up    | 2016-05-04T20:02:38.000000 | None            |
    | 19 | nova-compute     | mhv3.exampl1.le.metacloud.in        | trial6   | enabled | up    | 2016-05-04T20:02:31.000000 | None            |
    | 22 | nova-compute     | mhv4.exampl1.le.metacloud.in        | trial6   | enabled | up    | 2016-05-04T20:02:31.000000 | None            |
    | 25 | nova-compute     | mhv7.exampl1.le.metacloud.in        | trial6   | enabled | up    | 2016-05-04T20:02:31.000000 | None            |
    | 28 | nova-compute     | mhv5.exampl1.le.metacloud.in        | trial6   | enabled | up    | 2016-05-04T20:02:35.000000 | None            |
    | 29 | nova-cert        | nova-cert-1.exampl1.le.metacloud.in | internal | enabled | up    | 2016-05-04T20:02:30.000000 | -               |
    +----+------------------+------------------------------------+----------+---------+-------+----------------------------+-----------------+

The output shows that this cloud has seven compute nodes. All of the services are in the up state, which indicates they are running. If a service is in the down state, it is no longer available, and you should troubleshoot why.
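
While you investigate, you can take a compute node out of scheduling and bring it back once it is healthy. For example, using the mhv2 node from the output above (the reason text is only an illustration):

# nova service-disable mhv2.exampl1.le.metacloud.in nova-compute --reason "investigating down state"
# nova service-enable mhv2.exampl1.le.metacloud.in nova-compute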

How is Identity set up?

You can also use the Identity service (keystone) to see what users, roles, and projects are set up, and to add projects, users, and roles with the OpenStack CLI.

The following commands require you to have your shell environment configured with the proper administrative variables:

$ openstack user list
$ openstack project list
$ openstack role list
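
As a sketch of adding a project and user (the project name Demo2, the user alice, and the _member_ role are only examples; use whatever names and role your deployment expects):

$ openstack project create --description "Second demo project" Demo2
$ openstack user create --project Demo2 --password-prompt alice
$ openstack role add --project Demo2 --user alice _member_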

What’s your compute capacity and setup?

By default, your cloud has a subscription ratio of 1:6 on cores. You can request to have that modified for your workloads by logging a support ticket.

You can inspect each host by name with the nova host-describe command, using a host name from the nova service-list output:

$ nova host-describe mhv2.exampl1.le.metacloud.in

    +------------------------------+----------------------------------+-----+-----------+---------+
    | HOST                         | PROJECT                          | cpu | memory_mb | disk_gb |
    +------------------------------+----------------------------------+-----+-----------+---------+
    | mhv2.exampl1.le.metacloud.in | (total)                          | 48  | 257700    | 51339   |
    | mhv2.exampl1.le.metacloud.in | (used_now)                       | 54  | 112640    | 1080    |
    | mhv2.exampl1.le.metacloud.in | (used_max)                       | 54  | 110592    | 1080    |
    | mhv2.exampl1.le.metacloud.in | ee61e9855f9645e68d9a6c6493f6f3af | 8   | 16384     | 160     |
    | mhv2.exampl1.le.metacloud.in | 9d1c4427eb6748ca859431007946af2c | 8   | 16384     | 160     |
    | mhv2.exampl1.le.metacloud.in | 537c2909455d4637bf3397205726ac8c | 4   | 8192      | 80      |
    | mhv2.exampl1.le.metacloud.in | 6163243e6d8c4a2f9398e24bc9f33efe | 7   | 14336     | 140     |
    | mhv2.exampl1.le.metacloud.in | 47a3145713f5407baef667d8fbbe21c7 | 7   | 14336     | 140     |
    | mhv2.exampl1.le.metacloud.in | 045cbfff96e042e4bfeef25c87ffda33 | 12  | 24576     | 240     |
    | mhv2.exampl1.le.metacloud.in | 1774129abbbf483fbf0ea2c8dea54e51 | 2   | 4096      | 40      |
    | mhv2.exampl1.le.metacloud.in | 7925bb88566c4b73838d2c8b3cbce5e8 | 6   | 12288     | 120     |
    +------------------------------+----------------------------------+-----+-----------+---------+

You also receive detailed compute capacity reports each month. In addition, your Dashboard provides drill-in information for each instance. Click Admin > Instances, and then click an instance name to see CPU utilization (%), disk read/write operations, and network usage in MB/second.
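
If you prefer the command line for a cloud-wide roll-up of the same capacity numbers, the hypervisor statistics command sums vCPUs, memory, and disk across all compute nodes:

$ nova hypervisor-stats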

What storage do you have?

With your admin credentials sourced in your environment, you can look at the Block Storage services and the volume back ends you have available with a cinder command.

$ cinder service-list

    +------------------+---------------------------------------------------+------+---------+-------+----------------------------+-----------------+
    |      Binary      |                        Host                       | Zone |  Status | State |         Updated_at         | Disabled Reason |
    +------------------+---------------------------------------------------+------+---------+-------+----------------------------+-----------------+
    | cinder-scheduler |            mcp1.exampl1.le.metacloud.in            | nova | enabled |   up  | 2016-05-05T13:54:12.000000 |        -        |
    | cinder-scheduler |            mcp2.exampl1.le.metacloud.in            | nova | enabled |   up  | 2016-05-05T13:54:14.000000 |        -        |
    | cinder-scheduler |            mcp3.exampl1.le.metacloud.in            | nova | enabled |   up  | 2016-05-05T13:54:12.000000 |        -        |
    |  cinder-volume   | cinder-volume.exampl1.le.metacloud.in@nova-images1 | nova | enabled |   up  | 2016-05-05T13:54:19.000000 |        -        |
    +------------------+---------------------------------------------------+------+---------+-------+----------------------------+-----------------+

In addition, your Dashboard has extra storage metrics available. Click Admin > Storage to see an Overview of total storage (GB), available (GB), raw used (GB), reported by hour, day, month, or year. You can also see graphs for read/write throughput, IOPS (op/sec), and Latency (ms). Your storage pools are also listed with the amount used and available.
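
From the command line, a rough equivalent of the pool listing is available if your cinder client supports it:

$ cinder get-pools --detail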

Next, you can inspect the default storage quotas for a project. Substitute your project name where Demos appears below. Remember that a value of -1 indicates no quota limit on that resource.

$ project_id=$(openstack project show -f value -c id Demos)

$ cinder quota-defaults $project_id

    +----------------+-------+
    |    Property    | Value |
    +----------------+-------+
    |   gigabytes    |  1000 |
    | gigabytes_Ceph |   -1  |
    | gigabytes_NFS  |   -1  |
    |   snapshots    |   10  |
    | snapshots_Ceph |   -1  |
    | snapshots_NFS  |   -1  |
    |    volumes     |   10  |
    |  volumes_Ceph  |   -1  |
    |  volumes_NFS   |   -1  |
    +----------------+-------+

If a project's quotas have been changed from these defaults, you can view its current values and compare them with this command:

$ cinder quota-show $project_id
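
If a project needs more block storage, you can raise its quotas; for example (the values here are only placeholders):

$ cinder quota-update --volumes 20 --gigabytes 2000 $project_id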

Allocating Resources with Quotas

For each project, you choose the maximum limits, or quotas, for the project as a whole. Set resource quotas so that a single user cannot exhaust the system’s compute or storage capacity. That said, some quotas can be set to unlimited, which is -1. To take a look at what’s currently set up, use the OpenStack CLI and the name of the project:

$ openstack quota show Demos

+--------------------+----------------------------------+
| Field              | Value                            |
+--------------------+----------------------------------+
| cores              | 20                               |
| fixed-ips          | -1                               |
| floating-ips       | 2                                |
| gigabytes          | 1000                             |
| gigabytes_Ceph     | -1                               |
| gigabytes_NFS      | -1                               |
| injected-file-size | 10240                            |
| injected-files     | 5                                |
| injected-path-size | 255                              |
| instances          | 10                               |
| key-pairs          | 100                              |
| network            | 5                                |
| port               | -1                               |
| project            | 7ed0ac8276354696b3b5b73033fd6157 |
| properties         | 128                              |
| ram                | 51200                            |
| router             | 2                                |
| secgroup-rules     | 20                               |
| secgroups          | 10                               |
| snapshots          | 10                               |
| snapshots_Ceph     | -1                               |
| snapshots_NFS      | -1                               |
| subnet             | 5                                |
| volumes            | 10                               |
| volumes_Ceph       | -1                               |
| volumes_NFS        | -1                               |
+--------------------+----------------------------------+

Each project starts with a default value for each quota. For workloads that need higher allocations, raise the quotas on the project and make sure the users who need them are members of that project. You can modify quotas in the dashboard, on the command line, or through the API.
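
For example, a sketch of raising compute quotas for the Demos project from the command line (the values here are only placeholders):

$ openstack quota set --instances 20 --cores 40 --ram 102400 Demos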

Why Use Aggregates?

Host aggregates are only visible to administrators. You can assign key-value pairs to groups of hypervisors, and the scheduler uses those values to make scheduling decisions. You can define logical groups for migration, or enable advanced scheduling that controls which hosts launch which VM instances. One example is providing a flavor that offers high I/O because of SSD storage, and ensuring that users know instances launched with that flavor land on an aggregate of SSD-equipped hosts. Refer to Using Host Aggregates for more Flexible Instance Management for more information.
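
As a sketch of that SSD example, assuming your scheduler has the AggregateInstanceExtraSpecsFilter enabled (the aggregate name ssd-hosts and the flavor m1.ssd are placeholders; mhv2 is one of the compute nodes from the earlier output):

$ nova aggregate-create ssd-hosts
$ nova aggregate-set-metadata ssd-hosts ssd=true
$ nova aggregate-add-host ssd-hosts mhv2.exampl1.le.metacloud.in
$ nova flavor-key m1.ssd set aggregate_instance_extra_specs:ssd=true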

Providing Images

You have default images already available. You may modify those as needed.
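
To see which images are currently available in your cloud:

$ openstack image list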

Your cloud uses flavors to describe how much compute (CPU) power, memory, and disk space an instance is allocated when it launches. Ask yourself and your team: what flavors do we want to create? Your cloud comes with a predefined set of flavors, and you can add more, or change an existing flavor by deleting it and adding another in its place.

You can assess your flavor requirements against the hardware you have by calculating (a worked example follows the list):

  • How many virtual machines (VMs) do you expect each hypervisor to run? Example: (overcommit ratio × physical cores) / virtual cores per instance
  • How much storage is required per VM? Example: flavor disk size × number of instances
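
For example, with the 48-core hypervisors shown earlier and the 1:6 core subscription, a four-vCPU flavor works out to roughly (6 × 48) / 4 = 72 instances per host, and an 80 GB root disk on each of those instances would need about 72 × 80 = 5760 GB of storage per host. Once you settle on a shape, you can create the flavor from the command line (the name m1.custom is only an example):

$ openstack flavor create --vcpus 4 --ram 8192 --disk 80 m1.custom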

Beyond the base images, find out what your users need. You can build Windows images, for example, or prepare images from existing VMware or AWS images. A fast way to bring over AWS images is to use CloudFormation templates.
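
Once an image is prepared, uploading it is a single command; for example (the file name and image name here are only placeholders):

$ openstack image create --disk-format qcow2 --container-format bare --file windows-server-2012.qcow2 windows-server-2012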