Suyash Sambhare

Virtual Volumes

Summary

vVols is an integration and management framework that abstracts SAN/NAS arrays, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application rather than the infrastructure. vVols simplifies operations through policy-driven automation that enables more agile storage consumption for virtual machines and dynamic adjustments in real time when they are needed. It simplifies the delivery of storage service levels to individual applications by providing finer control of hardware resources and native array-based data services that can be instantiated with virtual-machine granularity.

With vVols, VMware offers a paradigm in which an individual virtual machine and its disks, rather than a LUN, become the unit of storage management for a storage system. vVols encapsulate virtual disks and other virtual machine files and natively store those files on the storage system.

Overview

vVols are VMDK-granular storage entities exported by storage arrays. vVols are exported to the ESXi host through a small set of protocol endpoints (PE). Protocol Endpoints are part of the physical storage fabric, and they establish a data path from virtual machines to their respective vVols on demand. Storage systems enable data services on vVols, and the results of these data services are new vVols (for example, snapshots and clones). Data services, configuration, and management of vVols systems are performed exclusively out-of-band, outside the data path. vVols can be grouped into logical entities called storage containers (SC) for management purposes. The existence of storage containers is limited to the out-of-band management channel.

vVols and Storage Containers (SC) form the virtual storage fabric. Protocol Endpoints (PE) are part of the physical storage fabric.

By using a special set of APIs called vSphere APIs for Storage Awareness (VASA), the storage system becomes aware of the vVols and their associations with the relevant virtual machines. Through VASA, vSphere and the underlying storage system establish two-way out-of-band communication to perform data services and offload certain virtual machine operations to the storage system. For example, operations such as snapshots and clones can be offloaded.

For in-band communication with vVols storage systems, vSphere continues to use standard SCSI and NFS protocols. As a result, vVols supports any storage type that uses iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE), or NFS.

  • vVols represent virtual disks of a virtual machine as abstract objects, identified by a 128-bit GUID and managed entirely by the storage hardware.
  • Model changes from managing space inside datastores to managing abstract storage objects handled by storage arrays.
  • Storage hardware gains complete control over virtual disk content, layout, and management.

Many storage partners have added vVols support in their arrays. For end-to-end vVols support, HBA drivers need to support vVols-based devices. This requires an API to retrieve the second-level LUN ID (SLLID) for use by the SCSI drivers.

When checking the I/O Devices section of the VMware Compatibility Guide, you will see a new entry called Secondary LUNID (Enables vVols). The ESXCLI infrastructure can display the SLLID capability of an HBA with the command esxcli storage core adapter list, as shown below.
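For example, from the ESXi shell you can list the host bus adapters and check the Capabilities column for a second-level LUN ID capability (the exact capability string varies by driver and ESXi release; the grep filter below is only an illustration):

  # List HBAs and their capabilities; vVols-capable HBAs report a second-level LUN ID capability
  esxcli storage core adapter list
  # Optionally filter for the capability string (illustrative)
  esxcli storage core adapter list | grep -i "second level"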


vVols Component Overview

vVols comprises five major components: the vVol device, Protocol Endpoint, Storage Container, VASA Provider, and the array. These components are managed and consumed by different parts of the vSphere stack, such as vCenter Server (VASA, SPBM) and ESXi (hostd, vvold, the vVol FDS driver), so a holistic view of the environment and configuration is necessary.

Characteristics of vVols:

  • No file system.
  • ESXi manages the array through VASA (vSphere APIs for Storage Awareness) APIs.
  • Arrays are logically partitioned into containers, called Storage Containers.
  • Virtual machine disks, called vVols, are stored natively on the Storage Containers.
  • I/O from the ESXi host to the storage array is addressed through an access point called the Protocol Endpoint (PE).
  • Data services such as snapshots, replication, and encryption are offloaded to the array.
  • Managed through the Storage Policy Based Management (SPBM) framework.

VASA Provider (VP)

A vVols storage provider, also called a VASA provider, is a software component that acts as a storage awareness service for vSphere. The provider mediates out-of-band communication between vCenter Server and ESXi hosts on one side and the storage system on the other.

  • Software component developed by Storage Array Vendors.
  • ESXi and vCenter Server connect to the VASA Provider.
  • Provides Storage awareness services.
  • A single VASA Provider can manage multiple arrays.
  • Supports VASA APIs exported by the ESXi host.
  • VASA Provider can be implemented within the array’s management server or firmware.
  • Responsible for creating vVols.
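As a quick host-side check (a sketch; the esxcli storage vvol sub-commands below exist on recent ESXi releases, but their names and output can vary by version), you can confirm which VASA Providers the host knows about and which vCenter Server established the VASA context:

  # Show the VASA Providers registered with this host and their status
  esxcli storage vvol vasaprovider list
  # Show the VASA context (the UUID of the vCenter Server managing vVols)
  esxcli storage vvol vasacontext get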

Storage Containers (SC)

Unlike traditional LUN and NFS-based vSphere storage, the vVols functionality does not require pre-configured volumes on the storage side. Instead, vVols uses a storage container, which is a pool of raw storage capacity or an aggregation of storage capabilities that a storage system can provide to vVols.

  • Logical storage constructs for grouping of vVols.
  • Logically partition or isolate virtual machines with diverse storage needs and requirements.
  • A single Storage Container can be simultaneously accessed via multiple Protocol Endpoints.
  • Desired capabilities are applied to the Storage Containers.
  • The VASA Provider discovers Storage Containers and reports them to the vCenter Server.
  • Any new virtual machines that are created are subsequently provisioned in the Storage Container.
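To see this from the host side, the Storage Containers reported to an ESXi host can be listed from the ESXi shell (a sketch; the command lives in the esxcli storage vvol namespace on recent ESXi releases):

  # List the Storage Containers that the VASA Provider has reported to this host
  esxcli storage vvol storagecontainer list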

Protocol Endpoint (PE)

Although storage systems manage all aspects of vVols, ESXi hosts have no direct access to vVols on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the Protocol Endpoint (PE), to communicate with vVols and virtual disk files that vVols encapsulate. ESXi uses Protocol Endpoints (PE) to establish a data path on demand from virtual machines to their respective vVols.

  • Separate the access points from the storage itself.
  • Far fewer access points are needed than with LUN-based storage, because a single PE can serve many vVols.
  • Existing multi-path policies and NFS topology requirements can be applied to the PE.
  • Access points that enable communication between ESXi hosts and storage array systems.
  • Compatible with all SAN and NAS Protocols: iSCSI, NFS v3, FC, FCoE.
  • A Protocol Endpoint can support any one of these protocols at a given time.
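As an illustration, the Protocol Endpoints that a host has discovered can be listed from the ESXi shell (a sketch; command names can vary by ESXi release). On many releases, SCSI-based PEs also appear in the core device list flagged as vVol PEs:

  # List the Protocol Endpoints visible to this host
  esxcli storage vvol protocolendpoint list
  # SCSI PEs are also reported as devices with a vVol PE flag (illustrative filter)
  esxcli storage core device list | grep -i "vvol"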

vVols Objects

A virtual datastore represents a storage container in the vCenter Server and the vSphere Web Client. vVols are encapsulations of virtual machine files, virtual disks, and their derivatives.

  • Virtual machine objects stored natively on the array storage containers
  • There are five different types of recognized vVols:
  1. Config-vVol - Metadata (VMX file, logs, NVRAM)
  2. Data-vVol - Virtual disks (VMDKs)
  3. Mem-vVol - Memory snapshots
  4. Swap-vVol - Swap files
  5. Other-vVol - Vendor-solution specific
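For instance, the Config-vVol is what appears as the virtual machine's folder when you browse a vVols datastore: it holds the .vmx file, logs, and small descriptor files, while the disk data itself lives in Data-vVols on the array. A hedged example from the ESXi shell, with placeholder datastore and VM names:

  # Browse a VM's Config-vVol (datastore and VM names are placeholders)
  ls /vmfs/volumes/<vvol-datastore>/<vm-name>/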

Guidelines when Using vVols

The vVols functionality offers several benefits. When you work with vVols, you must follow specific guidelines:

vVols has the following characteristics:

  • vVols supports offloading several operations to storage hardware. These operations include snapshotting, cloning, and Storage DRS.
  • With vVols, you can use advanced storage services that include replication, encryption, deduplication, and compression on individual virtual disks.
  • vVols supports vSphere features such as vMotion, Storage vMotion, snapshots, linked clones, Flash Read Cache, and DRS.
  • With vVols, storage vendors can use native snapshot facilities to improve the performance of vSphere snapshots.
  • You can use vVols with storage arrays that support vSphere APIs for Array Integration (VAAI).
  • vVols supports backup software that uses vSphere APIs for Data Protection (VADP).

vVols Guidelines and Limitations

Follow these guidelines when using vVols.

  • Because the vVols environment requires the vCenter Server, you cannot use vVols with a standalone ESXi host.
  • vVols does not support Raw Device Mappings (RDMs).
  • A vVols storage container cannot span across different physical arrays.
  • Host profiles that contain virtual datastores are vCenter Server specific. After you extract this type of host profile, you can attach it only to hosts and clusters managed by the same vCenter Server as the reference host.

Key benefits of vVols:

  • Operational transformation with vVols when data services are enabled at the application level.
  • Improved storage utilization with granular-level provisioning.
  • Common management using Policy Based Management.

Ref: https://kb.vmware.com/s/article/2113013
