
4 Essential vSphere Storage Types

by David Chapman
Published on June 9, 2021

VMware vSphere simply would not work without storage. Virtual machines are stateful constructs and require storage to persist that state. Storage is important not only for persisting the data of the VMs but also for persisting their metadata and configuration.

To choose the right storage, we need to understand a few essential storage types. Once they are understood, they can be matched to the storage needs of the VMs.

How Does vSphere Use Storage?

vSphere consumes storage in a few different ways. The main use is for VMDK files, which store the data within the VM. VMDKs are the virtual disks, or virtual hard drives, for each VM. They are also used in conjunction with other features to provide snapshots and Changed Block Tracking (CBT), which optimizes backup processing. vSphere also stores VM metadata containing configuration details such as the number of vCPUs and the amount of RAM; this is kept in the VMX file.
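To make the data-versus-metadata split concrete, here is a minimal sketch using the pyVmomi Python SDK that lists each VM's VMDK backings alongside its VMX path and the vCPU/RAM settings stored there. The vCenter hostname, credentials, and certificate handling are placeholders for illustration, not values from this article.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own environment.
ssl_ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl_ctx)
content = si.RetrieveContent()

# Walk every VM in the inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    cfg = vm.config
    if cfg is None:          # skip VMs whose config is not yet available
        continue
    print(f"{cfg.name}: {cfg.hardware.numCPU} vCPU, "
          f"{cfg.hardware.memoryMB} MB RAM")
    print(f"  VMX (metadata): {cfg.files.vmPathName}")
    for dev in cfg.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            # File-backed disks expose the datastore path of the VMDK.
            print(f"  VMDK (data):    {dev.backing.fileName} "
                  f"({dev.capacityInKB // (1024 * 1024)} GB)")
view.Destroy()
Disconnect(si)
```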

Logging is a use of storage that we think little about, whether it's ESXi hypervisor logging or the vmware.log file from each guest. Another important, but rarely considered, use of storage is file locking. VMDKs need to be locked so that multiple hosts do not open the same VMDK for writing unless that is expected, such as with shared disks for clustering. This locking helps prevent multiple hosts from booting the same VM. Each storage type has its own locking mechanism based on the features and functionality of that type.

Storage Type #1: Local Storage

Local storage is the most basic type of storage and where most administrators start out. This storage type uses a block-based protocol such as SCSI, SATA, or SAS. Simply put, it refers to the disks or media in the server itself. It can be raw disks, disks behind a RAID controller, or, often in the case of ESXi, SD cards or USB sticks.

The most common traditional use for this storage is the installation of vSphere itself. Many administrators opt for small local storage to install the hypervisor; this is where USB sticks or SD cards are used. Many vendors even allow for mirrored SD cards to decrease failure rates.

In non-clustered systems where SAN or shared storage is not economical or the business need doesn't drive it, VM storage may reside locally on each server as well. Many small vSphere deployments start out in this configuration.

This type of storage differs from the other options in that it is local and dedicated to the vSphere host it is allocated to. Unless you are using vSAN, it cannot be shared with other vSphere hosts. Its capabilities and features are the most limited because of its local nature.
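A quick way to see which datastores are host-local versus shared is to check the multipleHostAccess flag on each datastore summary. Here is a short pyVmomi sketch, again using the placeholder vCenter and credentials from the earlier example.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own environment.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    scope = "shared" if s.multipleHostAccess else "local/single-host"
    free_gb = s.freeSpace // (1024 ** 3)
    cap_gb = s.capacity // (1024 ** 3)
    print(f"{s.name}: type={s.type}, {scope}, {free_gb}/{cap_gb} GB free")
view.Destroy()
Disconnect(si)
```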

Best Use Case: The best use case for this is a tossup between hosting a local ESXi installation and environments where the budget is extremely tight and the lack of features is not a concern.

Storage Type #2: Fibre Channel

Fibre Channel (FC) is one of the older Storage Area Network (SAN) protocols. It is a block-based storage protocol and acts similarly to SCSI in that regard, except it does so over the FC protocol. It has kept up with the times, but FC equipment can be quite expensive, requiring dedicated FC switches, fiber-optic cabling, and expensive FC Host Bus Adapters (HBAs) to make it work. Administration of FC can be confusing to those not familiar with it because proper configuration requires zoning, or splitting up paths.
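Zoning is done against the HBA World Wide Names, so it helps to be able to pull those per host. The sketch below, using the same placeholder vCenter and credentials as before, lists any Fibre Channel HBAs on each host and formats their WWPN and WWNN.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def wwn_str(wwn):
    """Format the numeric WWN as colon-separated hex pairs."""
    h = format(wwn, "016x")
    return ":".join(h[i:i + 2] for i in range(0, len(h), 2))

# Placeholder connection details -- replace with your own environment.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.FibreChannelHba):
            print(f"{host.name} {hba.device} ({hba.model}): "
                  f"WWPN {wwn_str(hba.portWorldWideName)}, "
                  f"WWNN {wwn_str(hba.nodeWorldWideName)}")
view.Destroy()
Disconnect(si)
```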

Businesses that already have an FC infrastructure and need shared storage between vSphere hosts typically use this. Medium to large deployments wanting enterprise-class storage with a proven track record for performance and reliability will look to implement this storage type.

Large or automated deployments may even opt to boot the hosts from the FC SAN. This is very common in blade scenarios: when a blade fails and is replaced, the blade profile can be reapplied to another blade and the host boots back up without a hiccup.

It differs from the others in that it requires FC-specific equipment. This equipment can be fairly costly depending on the speeds required. It also tends to create the need for a storage administrator to manage the fiber equipment.

Best Use Case: The best use case for this is larger environments that already have a FC infrastructure.

Storage Type #3: iSCSI

iSCSI is a newer SAN protocol than FC and is becoming more common. Like FC, it is a block-based protocol; it allows SCSI communications to a SAN over traditional Ethernet. Many times it allows existing switching gear to be reused. It does require a bit of tuning to ensure optimal performance, but the SAN vendor often provides a custom Path Selection Plugin (PSP) to help optimize this.
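One useful sanity check after that tuning is confirming which PSP each device is actually using (for example, VMW_PSP_RR for round robin). Here is a pyVmomi sketch against the same placeholder vCenter that prints the path selection policy and path count for every LUN on each host.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own environment.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    mp = host.config.storageDevice.multipathInfo
    for lun in mp.lun:
        # lun.policy.policy is the PSP name, e.g. VMW_PSP_RR or VMW_PSP_MRU.
        print(f"{host.name} {lun.id}: PSP={lun.policy.policy}, "
              f"{len(lun.path)} path(s)")
view.Destroy()
Disconnect(si)
```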

vSphere administrators who need shared storage and want enterprise capabilities, but do not have the budget for FC-based SANs, tend to opt for this. iSCSI can be connected via a software initiator or a hardware HBA. Some administrators opt for the software option to avoid iSCSI-specific cards, while others prefer the hardware route so they can use SAN storage as the boot drive for the vSphere host.
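To see which route a host is taking, the sketch below (same placeholder connection as earlier) lists each iSCSI adapter, whether it is the software initiator or a hardware HBA, its IQN, and its configured send targets.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own environment.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba):
            kind = "software initiator" if hba.isSoftwareBased else "hardware HBA"
            print(f"{host.name} {hba.device}: {kind}, IQN {hba.iScsiName}")
            for tgt in hba.configuredSendTarget:
                print(f"  send target {tgt.address}:{tgt.port}")
view.Destroy()
Disconnect(si)
```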

It differs from the rest of the options as it is a good mix of cost, functionality, and performance. It is usually the most economical fully featured option.

Best Use Case: The best use case for iSCSI is environments where shared storage and performance are required but cost needs to be minimized by utilizing existing infrastructure where possible.

Storage Type #4: Network File System (NFS)

Network File System (NFS) is a file-based protocol. This means its basic unit is the file rather than the raw disk block. It has roots dating back to the 1980s and is traditionally used to serve files on UNIX platforms, much like SMB/CIFS does in the Microsoft Windows world.

NFS is great for proof-of-concept (POC) environments and even production workloads in some cases. Many administrators opt for this when other SAN technologies are not available but they already have NFS space or an NFS topology set up.
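As an illustration of how little plumbing NFS needs, here is a hedged pyVmomi sketch that mounts a hypothetical export (nfs01.example.com:/export/vmware) as a datastore on a single host. The NFS server, export path, datastore name, and vCenter details are all placeholders, not values from this article.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own environment.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Grab the first host in the inventory for this example.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
view.Destroy()

# Describe the NFS export to mount; all values here are hypothetical.
spec = vim.host.NasVolume.Specification(
    remoteHost="nfs01.example.com",   # NFS server
    remotePath="/export/vmware",      # exported path on the server
    localPath="nfs-datastore-01",     # datastore name as seen by vSphere
    accessMode="readWrite",
)
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print(f"Mounted {ds.summary.name} ({ds.summary.type}) on {host.name}")
Disconnect(si)
```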

This storage type differs from the rest of the options in that it is the only file-based option. This can be beneficial for admins because it is very easy to access the NFS volumes and check and clear lock files. On block-based volumes that use VMFS, the file locking mechanism is a feature of VMFS, whereas file-based backings like NFS use a lock file to indicate lock status. NFS also does not support Raw Device Mappings or VM clustering (shared disks).

Best Use Case: The best use case for NFS is environments that want to prove out shared storage and some of its features but need to use existing NFS servers or storage appliances that only speak NFS.

Final Thoughts

These essential types of storage all have their place and use case. In most environments it comes down to what storage type has already been deployed and whether that sufficiently meets the needs of the virtual environment. If it does not, the next decision point is typically budget.

Some environments are large and complex enough to utilize multiple types of storage. For example, developer clusters may use local storage only, whereas production may utilize iSCSI or FC. There is no wrong decision when choosing storage options as long as they meet the business needs.

