NFS vs iSCSI for Accessing VM Data

NFS vs iSCSI – which protocol should you choose for storing VMware VM files? This question usually comes up when you need shared storage for virtual machines (VMs) that must be migrated between ESXi hosts, when you want to use clustering features, or when there are no free slots for attaching physical disks to the server.

Organizations deploying VMware vSphere in large datacenters usually prefer Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE), both of which are costly. NFS and iSCSI, on the other hand, are attractive for small and medium-sized vSphere datacenters because the hardware needed to configure shared storage with these protocols is more affordable. This blog post compares NFS vs iSCSI with a focus on using them in a VMware vSphere virtual infrastructure.

What Is NFS?

Network File System (NFS) is a network protocol that allows you to share files stored on a disk or disk array of a server with other computers on the network. NFS was developed by Sun Microsystems, and the first version was released in 1984. The most widely deployed version today is NFS v4.1; version 4.2 (defined in RFC 7862) is newer but is not yet commonly used in production.

New features and improvements were added with each new version of NFS, including features useful for virtualization storage. NFS v4.1 provides a mechanism (parallel NFS, or pNFS) that allows multiple clients to access the same files in parallel while ensuring data consistency, and multiple threads can be used for operations.

NFS operates at the application layer of the Open Systems Interconnection (OSI) model. Clients access files by sending Remote Procedure Call (RPC) requests to an NFS server to perform operations on files and directories residing on that server.

RPC requests are encoded with the XDR (eXternal Data Representation) protocol, which works at the presentation layer and is the standard for data abstraction between platforms. XDR describes a unified, canonical form of data representation that doesn't depend on the architecture of the computing system. When a client transmits data, the RPC client transforms the local data into this canonical form, and the server performs the inverse operation.

Once the data is in canonical form, the RPC service on the client side issues requests for remote procedures and ensures their execution on the server (providing the features of the session layer). These are the NFS-specific layers; below them, the data is encapsulated into standard TCP or UDP data units and passed down the underlying layers of the OSI model.

OSI layer      Protocol
Application    NFS
Presentation   XDR
Session        RPC
Transport      TCP
Network        IP
Data Link      Ethernet
Physical       –

NFS shares data at the file level. Standard Ethernet network adapters with RJ-45 ports can be used to implement NFS shared storage.

The earliest NFS implementations worked only over UDP on IP networks; NFS v2 and v3 can work over both TCP and UDP. NFS 4.0 and 4.1 use TCP as the standard. NFS v4 also works through firewalls and over the internet more easily because it uses a single well-known TCP port (2049).
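
For example, on a Linux client with the nfs-utils package installed, you can quickly check what an NFS server exposes. This is a minimal sketch; the server address 192.168.10.20 is an assumed example:

  # Verify that NFS services are registered with the server's portmapper
  rpcinfo -p 192.168.10.20
  # List the exports published by the server (uses the NFS v3 MOUNT protocol)
  showmount -e 192.168.10.20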

VMware vSphere ESXi 6.0 and later support NFS v3 and NFS v4.1. ESXi contains a built-in NFS client that connects to an NFS server over TCP/IP, and two different NFS clients are used for NFS v3 and NFS v4.1. You can select which NFS version to use when creating a new NFS datastore (see the command-line sketch after the list below). VMware doesn't support the following features when NFS v4.1 is used:

  • Storage DRS
  • Storage I/O Control
  • Site Recovery Manager
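
As a quick illustration of creating NFS datastores from the ESXi command line, the sketch below mounts an export with each of the two NFS clients. The server address 192.168.10.20, export path /mnt/nfs_share, and datastore names are assumptions:

  # Mount an export as an NFS v3 datastore
  esxcli storage nfs add --host=192.168.10.20 --share=/mnt/nfs_share --volume-name=nfs-datastore01
  # Mount an export as an NFS v4.1 datastore (several comma-separated IPs can be passed for session trunking)
  esxcli storage nfs41 add --hosts=192.168.10.20 --share=/mnt/nfs_share --volume-name=nfs41-datastore01
  # Verify the mounted NFS datastores
  esxcli storage nfs list
  esxcli storage nfs41 list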

Using NFS datastores is also convenient for storing VM templates and ISO images used to install operating systems on virtual machines.

What Is iSCSI?

Internet Small Computer Systems Interface (iSCSI) is a network protocol that defines how initiators and targets interact over a network to share data. An iSCSI initiator is configured on the client side, and an iSCSI target is configured on the server side.

iSCSI initiators can be software-based or hardware-based. Hardware-based initiators offload iSCSI processing from the central processing unit (CPU) of the client machine and require the installation of a hardware host bus adapter (HBA); a hardware iSCSI HBA is essentially a network interface controller (NIC) with an Ethernet interface and dedicated iSCSI processing logic. In this iSCSI vs NFS comparison, I consider the use of software-based iSCSI initiators. iSCSI was introduced in 2003 and is described in RFC 3720.

iSCSI is a session layer protocol (it works at layer 5 of the OSI model) that operates on top of the TCP/IP stack. Data is shared at the block level, unlike NFS but similarly to FC; this is an important point in the iSCSI vs NFS comparison. SCSI commands are encapsulated in TCP/IP data units and transferred over standard Ethernet networks. As a result, one computer can send SCSI commands to block storage devices located on another computer across the network.

Layer        Description
Application  File system, database, etc.
SCSI         SCSI data, SCSI commands, SCSI statuses
iSCSI        iSCSI protocol services: iSCSI Qualified Names (IQN), Internet Storage Name Service (iSNS), CHAP authentication, etc.
TCP          A protocol with an error-control mechanism (usually works in a TCP/IP stack)
IP           A protocol for network communication and routing
Ethernet     Switches, cables, ports (connectors), protocols
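
To make this concrete, here is a minimal sketch of enabling the software iSCSI initiator on an ESXi host and pointing it at a target. The adapter name vmhba65 and the target address 192.168.20.30 are assumptions (ESXi assigns the actual adapter name):

  # Enable the software iSCSI initiator
  esxcli iscsi software set --enabled=true
  # Find the name of the software iSCSI adapter (vmhba65 in this example)
  esxcli iscsi adapter list
  # Add a dynamic discovery (Send Targets) address, then rescan for LUNs
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.20.30:3260
  esxcli storage core adapter rescan --adapter=vmhba65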

Note: There is an interesting fact about how iSCSI can be used for VMware VM recovery. When you use Instant VM Recovery in NAKIVO Backup & Replication, for example, to run a VM on an ESXi host directly from a backup, the VM is created on the selected ESXi host, and virtual disks are mounted to the VM over the iSCSI protocol as RDM disks in virtual compatibility mode.

VMware NFS vs iSCSI – Key Differences

Both NFS and iSCSI can work in 1-gigabit and 10-gigabit Ethernet networks (1 GbE and 10 GbE) deployed over copper cabling; higher network speeds are better. When a shared datastore in VMware vSphere is used to store VM files, both implementations (NFS and iSCSI) can be used for VM live migration, load balancing, and VM migration between datastores. Both sharing protocols have significant overhead caused by the mechanism of multi-layered data encapsulation over TCP/IP networks.

NFS is supported on NAS devices from most vendors, for example, Synology and QNAP. However, finding a NAS with iSCSI support is also not difficult nowadays.

Let’s go over this VMware NFS vs iSCSI comparison in more detail.

Load balancing

Multipathing allows traffic between a server and storage to be balanced across multiple network paths and to fail over when one path fails or is overloaded.

NFS v4.1 supports multipathing through session trunking if the NFS server supports it (client ID trunking is not supported). As a result, you can access a single NFS volume through multiple IP addresses. If you use NFS v3, use DNS round-robin for basic network load balancing.

In VMware vSphere, iSCSI multipathing works at the level of a VMkernel network adapter. For iSCSI load balancing in vSphere, you can use port binding.
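
A minimal port binding sketch from the ESXi command line might look like this, assuming vmhba65 is the software iSCSI adapter and vmk1/vmk2 are VMkernel adapters, each backed by its own physical uplink:

  # Bind two VMkernel adapters to the software iSCSI adapter for multipathing
  esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1
  esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk2
  # Verify the bound ports and rescan for new paths
  esxcli iscsi networkportal list --adapter=vmhba65
  esxcli storage core adapter rescan --adapter=vmhba65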

Caching

When using NFS, the file system and its cache are located on the NFS server, and a client machine must regularly revalidate metadata with the server. NFS v3 and v4 support asynchronous data writes, but metadata updates are synchronous.

When using iSCSI, the file system is created by the client after it gets block-level access to the shared storage (in VMware vSphere, an ESXi host creates a VMFS file system on an iSCSI LUN). The caching policy is defined by that file system, and the file system cache resides on the client side. For example, if you use iSCSI as the sharing protocol and ext3 as the file system, you get a full write-back cache for data and metadata updates.

Most modern file systems use asynchronous metadata updates with log-based journaling for recovery. In general, asynchronous updates (as used with iSCSI) are less reliable in terms of data and metadata persistence than the synchronous updates used in NFS.
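
To illustrate how the caching policy belongs to the client-side file system with iSCSI, an ext4 file system created on an iSCSI LUN can be mounted in different journaling modes on a Linux client. The device /dev/sdb1 and the mount point are assumed examples, and the two commands are alternatives:

  # Write-back mode: full write-back cache for data; only metadata is journaled
  mount -t ext4 -o data=writeback /dev/sdb1 /mnt/iscsi_lun
  # Journal mode: both data and metadata are journaled (slower but more resilient)
  mount -t ext4 -o data=journal /dev/sdb1 /mnt/iscsi_lun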

Reliability

NFS. NIC teaming can be used to protect against network failures: if one NIC fails, another NIC continues to carry the traffic.
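
A sketch of configuring such a team on a standard vSwitch carrying NFS traffic (the vSwitch name vSwitch1 and the uplink names vmnic1 and vmnic2 are assumptions):

  # Set two active uplinks so that traffic fails over if one NIC dies
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic1,vmnic2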

iSCSI. VMware Pluggable Storage Architecture (PSA) uses Storage Array Type Plug-ins (SATPs) to implement failover when working with iSCSI arrays. Multipathing can also be configured by mapping multiple iSCSI targets, located on different subnets, to the iSCSI initiator.

As both iSCSI and NFS use TCP for encapsulation, data delivery is checked at the transport level.

VMFS on iSCSI storage can be fragile if you store thin-provisioned virtual disks for VMs: a power failure can make a volume unrecoverable. The behavior of NFS datastores in such situations is slightly more reliable. You can mitigate these risks by performing regular VMware backups.

Security

iSCSI traffic is not encrypted by default, but this doesn't mean that it cannot be protected. iSCSI supports name and password (secret) authentication: the Challenge-Handshake Authentication Protocol (CHAP) allows the initiator and target to verify that they trust each other.
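
As a sketch, unidirectional CHAP can be required on the ESXi software iSCSI adapter roughly as follows. The adapter name, user name, and secret are assumptions, and the exact option names can vary between ESXi versions (check esxcli iscsi adapter auth chap set --help):

  # Require unidirectional CHAP for all targets discovered by this adapter
  esxcli iscsi adapter auth chap set --adapter=vmhba65 --direction=uni --level=required --authname=iscsi_user --secret=MySecretPassphrase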

NFS uses host-based authentication. The default NFS security flavor (sec=sys) doesn't provide encryption, but when NFS v4 is used with Kerberos with privacy enabled (sec=krb5p), the connection is encrypted. In the NFS server configuration, you define the IP addresses of the hosts that are allowed to access the NFS share; you can also specify multiple hosts or an entire subnet. In contrast, the widely known SMB file sharing protocol relies on user-based authentication.
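
For example, host-based access control on a Linux NFS server is defined in /etc/exports. The export path, host address, and subnet below are assumptions (no_root_squash is typically required for ESXi, which mounts the share as root):

  # /etc/exports: allow one ESXi host and one subnet to access the export
  /mnt/nfs_share 192.168.10.11(rw,sync,no_root_squash) 192.168.10.0/24(rw,sync,no_root_squash)

  # Apply the changed exports without restarting the NFS server
  exportfs -ra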

Configuring a dedicated VLAN or using a separate (private) physical network is the recommended practice for shared iSCSI and NFS storage in VMware vSphere. This approach isolates storage traffic from other types of traffic. Note that NFS v3 doesn't have security features similar to those in NFS v4.1, and an ESXi server mounts an NFS share with root access when Kerberos is not used; keep this in mind when designing a secure configuration. Read also about VLAN and VXLAN.
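
A sketch of tagging the port group that carries storage traffic with a dedicated VLAN (the port group name iSCSI-PG and VLAN ID 20 are assumptions):

  # Assign VLAN ID 20 to the port group used for iSCSI/NFS traffic
  esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-PG --vlan-id=20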

NFS v4.1 supports Kerberos authentication with stronger cryptographic algorithms in addition to the legacy Data Encryption Standard (DES). These cryptographic mechanisms prevent unauthorized users from accessing NFS traffic. ESXi supports the krb5 (authentication only) and krb5i (authentication plus data integrity checking) implementations of Kerberos. ESXi 7.0 supports NFS 4.1 Kerberos encryption and the AUTH_SYS security mechanism, but not simultaneously.

VMware NFS vs iSCSI – Raw Device Mapping

When using iSCSI as shared storage, you can configure raw device mapping for a VM. Raw Device Mapping (RDM) is a feature that allows you to attach an entire physical disk or iSCSI LUN to a VM directly as a device (instead of attaching a virtual disk in the VM configuration). This differs from the traditional approach, where you create a datastore on a LUN (Logical Unit Number), format it with the VMFS file system, and store the virtual disks used by VMs on that datastore. RDM is possible with iSCSI because an iSCSI share works at the block level, so a VM can format an attached RDM disk with whatever file system its guest operating system uses.

As for NFS, attaching an NFS share as an RDM disk is not supported because NFS shares work at the file level, while RDM requires block devices to be attached to VMs. With NFS shares, you can only create NFS datastores and store VMDK virtual disk files on them. (You can still mount an NFS share or connect to an iSCSI share inside a guest operating system if the guest has an NFS client or iSCSI initiator.) Thus, in the raw device mapping category of the VMware iSCSI vs NFS comparison, the winner is iSCSI.
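
For illustration, an RDM mapping file for an iSCSI LUN can be created from the ESXi shell with vmkfstools. The device identifier and paths below are assumptions; -r creates a virtual compatibility RDM and -z a physical compatibility RDM:

  # Create a virtual compatibility RDM pointing to an iSCSI LUN
  vmkfstools -r /vmfs/devices/disks/naa.600508b4000971fa0000a00000270000 /vmfs/volumes/datastore1/vm01/vm01-rdm.vmdk
  # Alternatively, create a physical compatibility RDM (SCSI commands pass through to the LUN)
  vmkfstools -z /vmfs/devices/disks/naa.600508b4000971fa0000a00000270000 /vmfs/volumes/datastore1/vm01/vm01-rdmp.vmdk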

iSCSI vs NFS Performance

With a software iSCSI implementation, performance is slightly higher than with NFS, but so is the CPU load on the client host. iSCSI also generates more network traffic and network load, while NFS behaves more smoothly and predictably. That said, when a large number of write operations are performed, you may notice performance degradation with an NFS share.

When using NFS in vSphere, it is better to use storage with support for the vStorage APIs for Array Integration (VAAI) on the array side. VAAI allows you to create thick-provisioned virtual disks on NFS datastores; by default, only thin-provisioned disks are created there. Both NFS and iSCSI support jumbo frames to improve network performance.
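
Jumbo frames must be enabled end to end. A sketch for ESXi follows; vSwitch1, vmk1, and the target address are assumptions:

  # Raise the MTU on the vSwitch and the VMkernel adapter to 9000 bytes
  esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000
  # Test with an 8972-byte payload and the don't-fragment bit set (8972 + 28 bytes of headers = 9000)
  vmkping -I vmk1 -d -s 8972 192.168.20.30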

At the end of this iSCSI vs NFS speed comparison, it should be mentioned that performance also depends on the storage array vendor.

Concurrent access

iSCSI doesn’t support concurrent or parallel data access to a block device. Data needs to be shared between two hops. However, parallel access is allowed at the level of a file system that supports parallel access to files, for example, VMFS or GFS. When using iSCSI shares in VMware vSphere, concurrent access to the shares is ensured on the VMFS level.

NFS supports concurrent access to shared files by using locking and close-to-open consistency mechanisms to avoid conflicts and preserve data consistency. NFS v3 and NFS v4.1 use different mechanisms: NFS v3 can use the Network Lock Manager (NLM) protocol, while NFS v4.1 has locking built into the protocol itself. When NFS v3 is used on VMware ESXi, however, ESXi doesn't use NLM because VMware provides its own locking protocol: lock files named .lck-<file_id> are created on the file share (see the listing sketch below).
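
You can see these lock files directly on the datastore. A sketch from the ESXi shell, with an assumed datastore and VM name and hypothetical output:

  # List the files of a running VM on an NFS v3 datastore, including hidden lock files
  ls -a /vmfs/volumes/nfs-datastore01/vm01/
  # Hypothetical output: .  ..  vm01.vmx  vm01.vmdk  vm01-flat.vmdk  .lck-3f0e740400000000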

NFS v4.1 uses share reservations to lock files for concurrent access. If you create an NFS file share, all clients must use the same version of the NFS protocol (for example, all ESXi hosts connect to the share via NFS v4.1). If incompatible clients use different NFS versions to access files on the same NFS server, the result may be inconsistent behavior and data corruption.

Difficulty of configuration

NFS is easier to configure on both the server and the client. Configuring iSCSI shared storage is more difficult: you need to configure IQNs for storage and hosts; set up the iSCSI service, LUNs, and LUN masking; and configure VLANs to isolate the network segments used for iSCSI communication for a higher security level.

Read about VMware virtual volumes that can be used to store VM data.

NFS vs iSCSI in VMware vSphere – Summary Table

Let’s highlight the main features of each data sharing protocol in this iSCSI vs NFS VMware comparison in the summary table.

Feature                       iSCSI        NFS
Data sharing                  Block-level  File-level
Raw Device Mapping for VMs    Yes          No
Difficulty of configuration   Medium       Easy
Boot from SAN                 Yes          No
Error checking                Yes          Yes
Security features             CHAP         Kerberos
Storage vMotion               Yes          Yes
Storage DRS                   Yes          Yes (NFS v3 only, not NFS v4.1)

Conclusion

Both sharing protocols are mature enough to be used in VMware vSphere. The main difference between iSCSI and NFS is that iSCSI shares data on the block level, and NFS shares data on the file level. Performance is almost the same, but, in some situations, iSCSI can provide better results. RDM disks for VMs can be used with iSCSI but not with NFS.

Both network sharing protocols are reliable. However, you still need a third-party data protection solution to avoid data loss and downtime. Protect your shared storage against power outages and other hardware failures: use uninterruptible power supply (UPS) units and create regular backups.
