You can use Ceph for Block Storage in Metacloud. Ceph uses the RADOS protocol to offer basic storage array functionality such as data replication. Ceph provides administrators more control over data distribution and replication strategies. Using Ceph, you can provision a new volume very quickly. See Managing Volumes and Volume Types for more information.
Using an all-Ceph implementation can hinder performance for workloads that do not require the replication and redundancy that Ceph provides, such as databases that need high input/output (I/O) performance and already implement their own replication, or Hadoop Distributed File System (HDFS) workloads. In these cases, Ceph duplicates replication that the workload already performs, to the detriment of performance.
For environments that run a mix of workloads on the same cloud, you can combine ephemeral disks on each hypervisor with the Ceph-backed Block Storage service. You can run Ceph for `cinder` and leverage local server storage for `nova`. This enables VMs that need replication to use boot-from-volume workflows, while VMs that only need direct access to a disk on the server use standard `nova boot` workflows. By leveraging Host Aggregates, you can control which VMs land on servers with SSD-based storage and which on servers with HDD-based storage. In addition, booting from a volume or attaching volumes allows your VMs to leverage Ceph-based storage. See Creating and Managing Host Aggregates for more information.
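As a sketch of how Host Aggregates can steer VMs onto the right storage tier, the following uses the standard `openstack` CLI. The aggregate name, host names, and flavor name are illustrative, not part of this document:

```shell
# Create an aggregate for hypervisors with local SSD storage
# (names and hosts here are examples).
openstack aggregate create --property storage=ssd ssd-hosts
openstack aggregate add host ssd-hosts compute-01
openstack aggregate add host ssd-hosts compute-02

# Tie a flavor to that aggregate so the scheduler places
# VMs using this flavor only on SSD-backed hypervisors.
openstack flavor create --vcpus 4 --ram 8192 --disk 40 m1.ssd
openstack flavor set --property aggregate_instance_extra_specs:storage=ssd m1.ssd
```

This pattern relies on the `AggregateInstanceExtraSpecsFilter` being enabled in the Compute scheduler; verify your scheduler configuration before depending on it.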
Ceph storage must be configured and running on all servers in this configuration.
A Ceph-backed Block Storage configuration is only supported on Cisco UCS C240 servers running Firmware version 2.0(9f) or higher. The following hardware requirements must be met for this configuration:
- One Cisco 12G SAS Modular RAID (MRAID) controller
- One 480 GB SSD drive (Ceph journal)
- Four HDD drives (Ceph OSDs)
- Two to four SSD/HDD drives (Local Storage)
When running this configuration with all SSD drives, configure the disks in JBOD mode. Because RAID distributes writes evenly across member disks, SSDs in a RAID wear at the same rate and are therefore more likely to fail at the same time.
When running this configuration with HDD drives, you must configure the drives in JBOD or RAID 0, 1, 5, or 6. Note the following requirements:
- RAID 5 requires 3 drives.
- RAID 6 requires 4 drives.
Supported Image Types
| Backing File Type | Supported |
|-------------------|-----------|
| QCOW2 (KVM, Xen)  | No        |
Using Ephemeral Storage
Ephemeral storage includes a root ephemeral volume (which contains the bootloader and core operating system files) and an additional ephemeral volume. The root disk is associated with an instance and exists only for the life of that instance. In most cases, the instance's root file system resides in ephemeral storage, and the storage persists even after you reboot the guest operating system. However, when you delete the instance, the root ephemeral volume is also deleted. You define the size of the root ephemeral volume when you create the instance's flavor. See Working with Flavors for more information.
Unless a root disk is sourced from `cinder`, the disks associated with VMs are "ephemeral," meaning that (from the user's point of view) they effectively disappear when a VM is terminated.
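A boot-from-volume workflow, by contrast, sources the root disk from `cinder`, so it survives instance termination. A minimal sketch with the `openstack` CLI; the image, network, flavor, and volume names are hypothetical:

```shell
# Create a Ceph-backed bootable volume from an image
# (image and volume names are illustrative).
openstack volume create --image ubuntu-20.04 --size 20 --bootable boot-vol

# Boot a server from that volume; the root disk now persists
# independently of the instance's lifecycle.
openstack server create --volume boot-vol --flavor m1.small \
  --network private vm-ceph-root
```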
In addition to the root ephemeral volume, all default flavors except `m1.tiny` provide an additional ephemeral block device, sized between 20 and 160 GB. It is presented as a raw block device with no partition table or file system. A cloud-aware operating system can discover, format, and mount such a device. Metacloud Compute defines the default file system for different operating systems: Ext4 for Linux distributions, VFAT for non-Linux and non-Windows operating systems, and NTFS for Windows.
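The size of the secondary ephemeral device is set on the flavor. A hedged example with the `openstack` CLI; the flavor name and sizes are illustrative, within the 20 to 160 GB range mentioned above:

```shell
# 40 GB root disk plus a 60 GB secondary ephemeral device.
# The guest typically sees the extra device as a raw block
# device (for example, /dev/vdb) with no file system on it.
openstack flavor create --vcpus 2 --ram 4096 --disk 40 --ephemeral 60 m1.ephemeral
```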
To view your storage:
- Log in to the Dashboard.
- Choose a project from the drop-down list.
- On the Admin tab select Storage.
- Select the Local or Ceph tab to view storage data.
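If you prefer the command line, volume and backend status can also be inspected with the `openstack` CLI. These commands assume admin credentials, and output columns may vary by release:

```shell
# List volumes across all projects (admin only)
openstack volume list --all-projects

# Show the status of Block Storage services, including
# the cinder-volume backends serving Ceph
openstack volume service list
```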