What is a Storage Area Network (SAN)?
A Storage Area Network (SAN) is a high-performance iSCSI or Fibre Channel block storage network that presents shared pools of storage resources to multiple servers, making it well suited to structured workloads.
Purpose-built for critical workloads with low latency tolerance and high performance requirements, SAN systems use Logical Unit Numbers (LUNs) to deliver a high-speed architecture. Their ability to support IOPS-intensive workloads makes SAN systems the industry standard for a diverse range of enterprise applications and use cases.
Use Cases of Storage Area Network (SAN)
Storage area network (SAN) systems are deployed in datacenters worldwide to support use cases such as:
Databases
Databases such as Oracle or Microsoft SQL Server are commonly used for key day-to-day operations, which makes low latency and high performance must-haves.
Server Virtualization
Often used to support thousands of Virtual Machines (VMs), storage area network systems are ideal for large-scale deployments of industry-standard hypervisors such as VMware, Microsoft Hyper-V, KVM, Citrix XenServer, and StoneFly Persepolis.
ERP and CRM Environments
Large deployments of enterprise applications such as SAP and other ERP or CRM applications require high availability, high performance, and zero downtime, making storage area network systems the most suitable host platforms.
Virtual Desktop Infrastructures (VDIs)
Storage area networks provide a high-performance, centralized storage location capable of supporting large numbers of virtual desktops, which simplifies the management of large-scale deployments.
Types of Storage Area Network (SAN)
Storage area networks use the following four types of block-level storage protocols:
Internet Small Computer System Interface (iSCSI)
iSCSI connects iSCSI hosts (initiators) to iSCSI storage (targets) using the Transmission Control Protocol (TCP) over standard TCP/IP networks.
Fibre Channel over Ethernet (FCoE)
Similar to iSCSI in that it runs over Ethernet, FCoE encapsulates Fibre Channel frames in Ethernet frames, connecting hosts and storage over a converged Ethernet network without requiring a dedicated FC fabric.
Fibre Channel (FC)
Fibre Channel, or the Fibre Channel Protocol (FCP), transports SCSI commands over a dedicated Fibre Channel network.
Non-Volatile Memory Express over Fabric (NVMe–oF)
The newest of the four block storage protocols, NVMe over Fabrics extends the fast NVMe storage protocol across Ethernet and Fibre Channel (FC) networks to reduce latency and increase IOPS between host and storage.
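As a concrete illustration of how a block protocol such as iSCSI encapsulates SCSI traffic on a TCP/IP network, the sketch below packs a simplified iSCSI Basic Header Segment (BHS), the fixed 48-byte header that precedes every iSCSI PDU on the TCP stream. Only a few fields are filled in; the real layout (RFC 7143) carries many more flags, LUN addressing, and command-specific fields, so treat this purely as an illustration.

```python
import struct

def build_iscsi_bhs(opcode: int, data_segment_length: int,
                    initiator_task_tag: int) -> bytes:
    """Pack a simplified 48-byte iSCSI Basic Header Segment (BHS)."""
    bhs = bytearray(48)
    bhs[0] = opcode & 0x3F                      # opcode, e.g. 0x01 = SCSI Command
    # DataSegmentLength is a 24-bit big-endian field at bytes 5..7
    bhs[5:8] = data_segment_length.to_bytes(3, "big")
    # Initiator Task Tag (bytes 16..19) ties the target's response to this request
    bhs[16:20] = struct.pack(">I", initiator_task_tag)
    return bytes(bhs)

# A SCSI Command PDU header announcing 512 bytes of data to follow on the TCP stream
header = build_iscsi_bhs(opcode=0x01, data_segment_length=512,
                         initiator_task_tag=0xCAFE)
assert len(header) == 48
```

In a real initiator this header, its data segment, and any digests would simply be written to the TCP socket connected to the target's listening port (conventionally 3260).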
Advantages of Storage Area Network (SAN)
Here are some advantages of implementing a SAN infrastructure:
Effective Storage Usage
The consolidated and centralized SAN architecture enables users to effectively leverage available storage capacity to its maximum potential.
Disaster Recovery (DR) for Mission-critical Data
SAN systems come with native support for enterprise DR applications. In the event of a disaster, natural or man-made, you can recover your critical workloads quickly and maintain business continuity.
High Availability Architecture
Equipped with redundant storage controllers and RAID, storage area networks do not have a single point of failure. In the event of a key component failure, high availability SANs make sure that the data remains available with minimum downtime.
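The RAID protection mentioned above can be shown with a minimal sketch: RAID-5-style parity is simply the XOR of the data blocks, which is enough to rebuild any single failed drive from the survivors. This toy example ignores striping, parity rotation, and real drive geometry.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together (RAID-5 style parity)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data "drives" plus one parity drive
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# If drive 1 fails, its contents are rebuilt from the survivors plus parity
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```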
Highly Scalable Block-Level Storage
Storage area networks can support thousands of drives, with RAID arrays and expansion units, to build petabyte-scale systems. Users can start small with terabytes of storage capacity and easily scale up as data grows.
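As a rough illustration of that scale-up arithmetic, the hypothetical helper below estimates usable capacity for a SAN built from identical RAID-6 shelves. The shelf sizes, drive capacities, and overhead model are illustrative assumptions, not vendor figures, and real systems lose additional capacity to metadata and formatting.

```python
def usable_capacity_tb(shelves: int, drives_per_shelf: int, drive_tb: float,
                       raid_parity_drives: int, hot_spares: int) -> float:
    """Rough usable capacity, assuming one RAID group per shelf and
    ignoring file-system/metadata overhead."""
    per_shelf = (drives_per_shelf - raid_parity_drives - hot_spares) * drive_tb
    return shelves * per_shelf

# Start small: one shelf of 12 x 8 TB drives in RAID-6 (2 parity) with 1 hot spare
small = usable_capacity_tb(1, 12, 8, 2, 1)    # 72 TB usable
# Scale out with 16 expansion shelves of the same configuration
large = usable_capacity_tb(16, 12, 8, 2, 1)   # 1152 TB, i.e. ~1.15 PB usable
assert small == 72.0 and large == 1152.0
```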
No more Bandwidth Bottlenecks
Storage area networks leverage dedicated high-speed networks to transfer block-level workloads. This ensures smoother IOPS and prevents bandwidth bottlenecks on the Local Area Network (LAN).
Data Loss Resistant Infrastructure
In addition to native DR support, SAN systems also support backup capabilities, such as immutable snapshots, to make sure that in the event of a disaster your critical block data is safe and recoverable.
Disadvantages of Storage Area Network (SAN)
Here are some of the disadvantages of storage area networks that you need to keep in mind:
Not Suitable for Small-Scale Deployments
For projects that need to run a few VMs or set up DR for one or two applications, SAN systems tend to be too expensive and are therefore not ideal.
Expensive & Long-term Return on Investments (ROIs)
SAN systems are built for large-scale deployments and high-performance workloads. Consequently, they tend to be more expensive than other storage types (file and object), and the ROI improves gradually over time.
Trained & Dedicated IT Staff
SAN systems require trained IT professionals who are familiar with the technology to set them up, maintain them, and manage them. The need for dedicated staff also adds to the Total Cost of Ownership (TCO) of storage area networks.
SAN Versus NAS – What’s the Difference?
Storage Area Network
- Block-level storage for structured workloads
- Standard protocols: iSCSI, FCP, FCoE, NVMe-oF
- Storage used for databases such as MySQL or NoSQL, and for applications like SAP or other large-scale CRM and EHR deployments
Network Attached Storage
- File-level storage for unstructured workloads
- Standard protocols: NFS or CIFS/SMB
- Storage for files, folders, images, email servers, and videos
How Storage Area Network (SAN) Works
Storage area networks pool available storage and present it as centrally managed target storage to preconfigured initiators. This is done using software such as StoneFly StoneFusion, which installs directly on bare metal, or applications such as StoneFly SCVM or VMware vSAN, which leverage virtualization.
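To make pooling and presentation concrete, here is a toy Python model of a pool that carves capacity into LUNs and masks each LUN to named initiators. The class, its methods, and the example IQN are all hypothetical illustrations; real SAN software also handles RAID, caching, multipathing, and authentication.

```python
class StoragePool:
    """Toy model of a SAN pooling raw capacity and carving it into LUNs."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.luns = {}   # lun_id -> size_gb
        self.acl = {}    # lun_id -> set of initiator IQNs allowed to see it

    def allocated_gb(self) -> int:
        return sum(self.luns.values())

    def create_lun(self, lun_id: int, size_gb: int) -> None:
        if self.allocated_gb() + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.luns[lun_id] = size_gb
        self.acl[lun_id] = set()

    def present(self, lun_id: int, initiator_iqn: str) -> None:
        """LUN masking: only named initiators are shown this LUN."""
        self.acl[lun_id].add(initiator_iqn)

pool = StoragePool(capacity_gb=1000)
pool.create_lun(0, 400)
pool.present(0, "iqn.2002-10.com.example:host-a")   # hypothetical initiator IQN
assert pool.allocated_gb() == 400
```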
In terms of architecture, storage area networks can be divided into the following three layers:
The host layer
The servers attached to the storage infrastructure make up the host layer. This layer facilitates structured workloads, such as databases and applications, that need access to block storage.
The fabric layer
This layer consists of the communication medium used for SANs, such as the network devices and cabling that interconnect the hosts with storage.
The storage layer
This layer, as the name suggests, represents the storage resources collected into various storage pools and tiers. Storage is available in different forms, such as integrated HDDs, flash (SSDs), and/or NVMe, along with RAID arrays, expansion units, and tape arrays.
Because a storage area network relies on the client/server model of communication, it offers universal connectivity between storage devices and computers. Consider an example. By deploying a number of separate servers, each with its own storage, an organization creates unconnected islands of information: each island is accessible to one server but cannot be reached by the others.
Under these circumstances, if server B needs data held by server A, it must obtain a copy of that data from server A, typically using one of three techniques: file transfer, inter-process communication, or backup and restore. Even once the transfer completes, server B may find itself working with out-of-date data simply because the copy was not made in a timely fashion. Copying data between two servers also adds operational complexity, which can lead to costly errors.
This is where the SAN architecture offers a solution. In a storage area network, all servers are physically connected to all storage devices. If server B needs data from server A, it can read that data directly from the devices on which server A stored it instead of requesting a copy. This works because the storage acts as a common access point for all servers rather than belonging to a single server. Because a SAN provides universal storage connectivity, it has powerful implications for information technology.
A SAN's universal connectivity eliminates the need to schedule data transfers between servers. It also eliminates the need to purchase and maintain temporary storage whose only purpose is to stage data on its way to another server. And if two servers run different applications against the same content, there is no need to worry about keeping copies in sync, because both work on the same data.
Since SAN storage offers only block-level operations, it does not provide a file abstraction by itself. If a file system is layered on top of a storage area network, however, it can also provide file access; such a system is known as a SAN file system or shared-disk file system.
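The split between block-level operations and a file abstraction layered on top can be sketched as follows. `BlockDevice` plays the role of a LUN (numbered fixed-size blocks and nothing more), while `TinyFS` is a deliberately minimal, hypothetical file layer; real shared-disk file systems add journaling, locking, and coordination between hosts.

```python
BLOCK = 16  # tiny block size, for illustration only

class BlockDevice:
    """What a SAN LUN exposes: numbered fixed-size blocks, nothing more."""
    def __init__(self, nblocks: int):
        self.blocks = [bytes(BLOCK)] * nblocks

    def write(self, n: int, data: bytes) -> None:
        self.blocks[n] = data.ljust(BLOCK, b"\0")[:BLOCK]

    def read(self, n: int) -> bytes:
        return self.blocks[n]

class TinyFS:
    """Minimal file abstraction layered on the raw block device."""
    def __init__(self, dev: BlockDevice):
        self.dev, self.table, self.next = dev, {}, 0   # name -> (start, length)

    def create(self, name: str, data: bytes) -> None:
        start = self.next
        for i in range(0, len(data), BLOCK):
            self.dev.write(self.next, data[i:i + BLOCK])
            self.next += 1
        self.table[name] = (start, len(data))

    def read(self, name: str) -> bytes:
        start, length = self.table[name]
        nblocks = -(-length // BLOCK)   # ceiling division
        raw = b"".join(self.dev.read(start + i) for i in range(nblocks))
        return raw[:length]

fs = TinyFS(BlockDevice(64))
fs.create("note.txt", b"files on top of raw blocks")
assert fs.read("note.txt") == b"files on top of raw blocks"
```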
In large enterprises, a storage area network acts as a storage pool for the servers connected to it over the network. This simplifies administration, since it is easier than managing separate storage media for each server; it also makes maintenance easier and allows scheduled backups to be managed simply and centrally. To support data continuity, SANs are often deployed in remote locations maintained under conditions that allow for immediate disaster recovery.
Self-Healing Benefits Offered by SAN Storage
Self-healing features restore storage and data after a failure before RAID has to rebuild the data of a failed disk. Moreover, end-to-end error detection and repair is offered without the need to replace hard drives throughout the system's warranty period.
Having a self-healing storage network lets users eliminate potential causes of drive failure. Reducing heat and vibration is key to cutting drive failures, because both affect a drive's reliability. Vibration can cause drive failures and can also cause adjacent drives to skip reads and writes. No matter how carefully drive manufacturers design their drives, vibration can cause failures that result in downtime. The problem can be overcome by building individual drive housings so that rigidity is uniform throughout the system, which can be done by installing drive shelves front to back, alternating throughout the array shelf (known as counter mounting).
Heat, on the other hand, can be minimized by increasing airflow in the drive enclosure. Placing the drives side by side at the front of the drive bay avoids tight packing of components. This practice not only improves airflow and reduces heat buildup, but also reduces vibration.
A storage system that takes measures to resolve spurious disk drive failures saves money and time and reduces the risk of downtime. Changes to the physical design of the hardware produce an observable reduction in drive failures. When a drive shows initial signs of failure, it can be reset or power cycled automatically, with no impact on regular system operations. If the drive appears healthy afterwards, it is restored to normal operation. If power cycling does not correct the problem, a self-healing system can perform a full remanufacturing process on the drive.
In most cases, roughly 70% of the drives deployed in a SAN architecture can be repaired. As a result, users save money, reduce data vulnerability, and ensure that mission-critical operations achieve high performance levels.
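The escalation path described above (automatic power cycle first, then full remanufacture, then replacement) can be sketched as a small decision routine. The function and its health-check callables are hypothetical illustrations, not part of any real SAN firmware.

```python
def self_heal(health_checks: list) -> str:
    """Escalate recovery steps until one succeeds or the drive is flagged.

    `health_checks` is a hypothetical list of callables, one per recovery
    step, each returning True if the drive is healthy after that step.
    """
    steps = ["power_cycle", "remanufacture"]
    for step, healthy_after in zip(steps, health_checks):
        if healthy_after():
            return f"restored after {step}"
    return "flag for replacement"

# Drive that recovers after a simple automatic power cycle
assert self_heal([lambda: True, lambda: False]) == "restored after power_cycle"
# Drive that only recovers after the full remanufacturing process
assert self_heal([lambda: False, lambda: True]) == "restored after remanufacture"
```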
StoneFly and Storage Area Networks
StoneFly pioneered the creation, development, and deployment of the iSCSI storage protocol and helped it become the standard block storage protocol used in datacenters worldwide. In 2002, we shipped our first IP SAN storage appliance.
Currently in their 10th generation, StoneFly's enterprise SAN storage appliances are available with a choice of multi-core single or dual Intel Xeon processor(s), high-speed 10/40/100Gb iSCSI network ports or 8/16Gb FC ports, and terabytes to petabytes of highly scalable storage capacity.
Preconfigured with StoneFly’s patented SAN operating system (StoneFusion), the storage area networks offer a range of enterprise data services, such as deduplication, thin provisioning, advanced encryption, etc. to optimize block data storage and facilitate mission-critical workloads.