
4 Essential vSphere Storage Types

by David Chapman
Published on June 9, 2021

VMware vSphere would not work without storage. Virtual machines are stateful constructs that require storage to persist their state. Storage holds not only each VM's data but also its metadata and configuration.

To choose the right storage, we need to understand the essential storage types. Once we understand them, we can pair them with each VM's storage needs.

How Does vSphere Use Storage?

vSphere consumes storage in a few different ways. It mainly uses storage for VMDK files, the virtual disks (virtual hard drives) that hold each VM's data. VMDKs also work with other features, such as snapshots and Changed Block Tracking (CBT), to optimize backup processing. vSphere also stores each VM's metadata, containing configuration details such as the number of vCPUs and the amount of RAM, in the VMX file.
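To make the VMDK/VMX relationship concrete, here is a minimal sketch using pyVmomi (VMware's Python SDK) that prints a VM's VMX path, vCPU and RAM configuration, CBT status, and the VMDK backing each virtual disk. The vCenter address, credentials, and VM name are placeholders assumed for illustration, not values from this article.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details -- substitute your own vCenter/ESXi host.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Find a VM by name (the name is an assumption for this example).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")

print("VMX file:", vm.config.files.vmPathName)          # VM metadata/configuration
print("vCPUs:   ", vm.config.hardware.numCPU)
print("RAM (MB):", vm.config.hardware.memoryMB)
print("CBT on:  ", vm.config.changeTrackingEnabled)      # Changed Block Tracking

# Each VirtualDisk device is backed by a VMDK file on a datastore.
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        print("VMDK:", dev.backing.fileName, "-", dev.capacityInKB, "KB")

view.DestroyView()
Disconnect(si)
```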

Logging is a use of storage we rarely think about, whether it's ESXi hypervisor logging or the vmware.log file from each guest. Another important but rarely considered use of storage is file locking. VMDKs need to be locked so that multiple hosts do not try to open the same VMDK for writing unless that is expected, such as with shared disks for clustering. This locking helps prevent multiple hosts from booting the same VM. Each storage type has its own locking mechanism based on the features and functionality of that type.

Storage Type #1: Local Storage

Local storage is the most basic type of storage and where most administrators start. This storage type uses a block-based protocol such as SCSI, SATA, or SAS. Simply put, it refers to the disks or media in the server itself. That can be raw disks or disks behind a RAID controller; for ESXi itself, it is often SD cards or USB sticks.

The most common traditional use for this storage is installing vSphere. Many administrators opt for small local storage to install the hypervisor, which is where USB sticks or SD cards come in. Many vendors even offer mirrored SD cards to decrease failure rates.

In non-clustered systems where SAN or shared storage is not economical or the business need doesn't drive it, VM storage may also reside locally on each server. Many small vSphere deployments start in this configuration.

This type of storage differs from the other options in that it is dedicated to the vSphere host it is attached to. Unless you are using vSAN, it cannot be shared with other vSphere hosts. Because of its local nature, its capabilities and features are the most limited.
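One way to see this distinction is to list each datastore and whether more than one host can access it. The sketch below, using the same hypothetical pyVmomi connection as the earlier example, prints every datastore's type (VMFS, NFS, vsan, and so on) along with its multipleHostAccess flag; a local VMFS datastore will typically show False.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical credentials -- same placeholders as the earlier sketch.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk every datastore in the inventory and summarize it.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    print(f"{s.name:20} type={s.type:6} "
          f"capacity={s.capacity / 1024**3:8.1f} GiB shared={s.multipleHostAccess}")

view.DestroyView()
Disconnect(si)
```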

Best Use Case: The best use case is a toss-up between hosting a local install of ESXi and environments where the budget is extremely tight and the lack of features is not a concern.

Storage Type #2: Fibre Channel

Fibre Channel (FC) is one of the older Storage Area Network (SAN) protocols. It is a block-based storage protocol that carries SCSI commands over the FC transport.

It has kept up with the times, but FC equipment can be quite expensive. It requires dedicated FC switches, fiber-optic cabling, and costly FC Host Bus Adapters (HBAs) to make it work. FC administration can also confuse those unfamiliar with it, because proper configuration requires zoning, or splitting up paths.
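Zoning itself is done on the FC switches, but the WWNs you zone come from the host's HBAs. As a rough illustration, the pyVmomi sketch below (same placeholder connection and a hypothetical host name) enumerates a host's Fibre Channel HBAs and prints each node and port WWN.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")  # hypothetical host name

# FC HBAs appear alongside SCSI/iSCSI adapters in the host's storage device list.
for hba in host.config.storageDevice.hostBusAdapter:
    if isinstance(hba, vim.host.FibreChannelHba):
        print(hba.device,
              "node WWN:", format(hba.nodeWorldWideName, "016x"),
              "port WWN:", format(hba.portWorldWideName, "016x"))

view.DestroyView()
Disconnect(si)
```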

Businesses with an FC infrastructure that need shared storage between the vSphere hosts typically use this type. Medium to large deployments wanting enterprise-class storage with a proven track record for performance and reliability will look to implement this type.

Large or automated deployments may even boot ESXi from the FC SAN. This is very common in blade scenarios: when a blade fails and is replaced, the blade profile can be reapplied to the new blade and the host booted back up without a hiccup.

It differs from the others in that it requires FC-specific equipment. Depending on the speeds required, this equipment can be fairly costly. It also tends to create the need for a storage administrator to manage the FC equipment.

Best Use Case: The best use case is larger environments with an FC infrastructure.

Storage Type #3: iSCSI

iSCSI is a newer SAN protocol than FC and is becoming more common. Like FC, it is a block-based protocol. It allows SCSI commands to be sent to a SAN over a traditional Ethernet network, often reusing existing switching gear. It requires some tuning to ensure optimal performance, but the SAN vendor often provides a custom Path Selection Plugin (PSP) to help optimize this.

vSphere administrators who need shared storage and want enterprise capabilities but do not have the budget for FC-based SANs tend to opt for this. iSCSI can be connected via a software initiator or a hardware HBA. Some administrators choose the software option to avoid iSCSI-specific cards, while others prefer the hardware route so they can use SAN storage as the boot device for the vSphere host.
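As a rough sketch of the software-initiator route, the pyVmomi calls below enable the software iSCSI adapter on a host, point it at a dynamic-discovery (send) target, and rescan for new LUNs. The host name, adapter name (vmhba64), and SAN address are assumptions; check your host's actual software iSCSI adapter name before adding targets.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")  # hypothetical host
storage = host.configManager.storageSystem

storage.UpdateSoftwareInternetScsiEnabled(True)   # turn on the software iSCSI initiator

# Add a dynamic-discovery (send) target; adapter name and SAN address are assumptions.
target = vim.host.InternetScsiHba.SendTarget(address="192.168.50.10", port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba64", targets=[target])

storage.RescanAllHba()                            # pick up the newly presented LUNs
view.DestroyView()
Disconnect(si)
```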

It differs from the rest of the options as it is a good mix of cost, functionality, and performance. It is usually the most economical, fully featured option.

Best Use Case: iSCSI is best used in environments where shared storage and performance are required, but the cost needs to be minimized by utilizing existing infrastructure where possible.

Storage Type #4: Network File System (NFS)

Network File System (NFS) is a file-based protocol, which means its basic unit is the file rather than the raw disk block. Its roots date back to the 1980s. NFS is traditionally used to serve files on UNIX platforms, much as SMB/CIFS does in the Microsoft Windows world.

NFS is great for proof-of-concept (POC) environments and, in some cases, even production workloads. Many administrators opt for it when other SAN technologies are not available but they already have NFS capacity or an NFS topology in place.
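Mounting an NFS export as a datastore is a single call against the host's datastore system. The sketch below shows the idea; the NFS server, export path, datastore name, and host are placeholders assumed for illustration.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")  # hypothetical host

# NFS server, export, and datastore name are placeholders for illustration.
spec = vim.host.NasVolume.Specification(
    remoteHost="nfs01.example.com",
    remotePath="/export/vmware",
    localPath="nfs-datastore-01",
    accessMode="readWrite",
)
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted:", ds.summary.name, "type:", ds.summary.type)

view.DestroyView()
Disconnect(si)
```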

This storage type differs from the rest of the options in that it is the only file-based option. This can benefit admins, as it is effortless to access the NFS volumes to check and clear lock files. On block-based volumes that use VMFS, the locking mechanism is a feature of VMFS, whereas file-based backings like NFS use a lock file to indicate lock status. NFS also does not support Raw Device Mappings (RDMs) or VM clustering with shared disks.

Best Use Case: The best use case for NFS is environments that want shared storage for a POC and some features, but need to use existing NFS servers or storage appliances that only speak NFS.

Final Thoughts

These essential types of storage all have their place and use case. In most environments, the decision comes down to what storage type is already deployed and whether it sufficiently meets the needs of the virtual environment. If it does not, the next decision point is typically budget.

Some environments are large and complex enough to utilize multiple types of storage. For example, developer clusters may use local storage only, whereas production may utilize iSCSI or FC. There is no wrong decision when choosing storage options as long as they meet the business needs.

