SAN stands for Storage Area Network. It is a dedicated high-speed network that provides access to block-level storage devices. SAN is primarily used to make storage devices accessible to servers so that the devices appear as locally attached drives to the operating system.
What is a SAN?
SAN is a dedicated network of storage devices that enables multiple servers to access the storage devices. The key components of a SAN include:
- Storage devices such as disk arrays, tape libraries, and optical drives
- SAN fabric switches and directors that provide the connectivity between servers and storage devices
- Cables and connectors that link the various SAN components
- Host bus adapters (HBAs) in servers to provide access to the SAN
- SAN management software to monitor and manage the SAN fabric and storage devices
In a SAN, the storage devices themselves are not connected directly to the servers. Instead, each storage device is connected to the SAN fabric, which enables multiple servers to access the devices. Servers connect to the SAN through HBAs or converged network adapters that allow them to communicate with the storage devices.
The key advantage of a SAN is that it enables centralized management, scalability, flexibility, and higher utilization of storage resources. Multiple heterogeneous servers can share storage devices on a SAN. Administrators can allocate storage capacity from a common pool to servers as required, without having to physically attach storage to each server.
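The idea of carving capacity from a common pool can be sketched in a few lines of Python. This is a toy model, not any vendor's API; `StoragePool` and its methods are purely illustrative:

```python
# Hypothetical model of allocating LUNs to servers from a shared pool.
# All names here are illustrative, not a real SAN management API.

class StoragePool:
    """A shared pool of block storage capacity carved into LUNs."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0
        self.luns = {}  # lun_id -> (server, size_gb)

    def allocate_lun(self, lun_id, server, size_gb):
        """Carve a LUN out of the pool and assign it to a server."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.luns[lun_id] = (server, size_gb)
        self.allocated_gb += size_gb

    def utilization(self):
        """Fraction of pool capacity currently allocated."""
        return self.allocated_gb / self.capacity_gb


pool = StoragePool(capacity_gb=1000)
pool.allocate_lun("lun0", "web-server", 200)
pool.allocate_lun("lun1", "db-server", 500)
print(pool.utilization())  # 0.7
```

The point of the sketch is the decoupling: servers receive capacity from one pool, and no physical cabling changes when allocations change.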
SAN vs NAS
SAN provides block-level access whereas NAS provides file-level access to storage. This key difference stems from the way servers access storage in the two architectures:
SAN
- Provides block-level access to storage – SAN devices appear as locally attached drives to the OS
- Fibre Channel (FC) is the primary SAN protocol, with speeds of 8, 16, and 32 Gbps
- iSCSI runs SAN protocols over TCP/IP networks
- Allows shared storage across multiple heterogeneous servers
- Excellent performance, scalability and availability
- Supports advanced features like thin provisioning, snapshots, replication etc.
NAS
- Provides file-level access to storage – NAS devices appear as file servers
- NFS and CIFS/SMB are common NAS protocols
- Runs over standard Ethernet networks
- Enables centralized file sharing across multiple servers and clients
- Easy to deploy and manage
- Lower performance compared to SAN, limited scalability
While SAN provides the performance and advanced features required for mission-critical applications, NAS is easier to deploy for basic file sharing requirements. Many organizations use a combination of SAN and NAS to meet different storage needs.
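The block-level versus file-level distinction above can be made concrete with a short sketch. Here a local temp file stands in for a block device; the names and block size are illustrative:

```python
# Toy contrast of block-level (SAN-style) vs file-level (NAS-style) access.
# A local temp file stands in for a raw block device; names are illustrative.

import os
import tempfile

BLOCK_SIZE = 512

def write_block(fd, block_no, data):
    """Block-level style: address storage by block number and offset."""
    os.pwrite(fd, data, block_no * BLOCK_SIZE)

def read_block(fd, block_no):
    """Read one block back by its block number."""
    return os.pread(fd, BLOCK_SIZE, block_no * BLOCK_SIZE)


# "Device" backed by a temp file for the demo.
fd, path = tempfile.mkstemp()
write_block(fd, 3, b"A" * BLOCK_SIZE)          # raw offsets, no filenames
assert read_block(fd, 3) == b"A" * BLOCK_SIZE
os.close(fd)

# File-level style (what a NAS client does): named files via a filesystem.
with open(path, "rb") as f:
    f.seek(3 * BLOCK_SIZE)
    assert f.read(BLOCK_SIZE) == b"A" * BLOCK_SIZE
os.remove(path)
```

A SAN client sees only numbered blocks and runs its own filesystem on top of them; a NAS client asks a remote server for named files and never touches raw blocks.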
SAN Components
The key components that make up a SAN include:
1. SAN Switches and Directors
SAN switches and directors provide the connectivity fabric between servers and storage devices. They allow simultaneous and high-speed connections using technologies like Fibre Channel. Leading vendors include Cisco, Brocade, QLogic, and IBM.
2. Cables and Connectors
Fiber optic cables are predominantly used in SAN connections. Common options include multi-mode fiber for shorter distances and single-mode fiber for longer distances. Connectors include LC, SC, MPO and proprietary connectors. Copper cables like twinax are also sometimes used.
3. Host Bus Adapters (HBAs)
HBAs are adapter cards installed in servers to connect to the SAN fabric. They provide ports like FC, iSCSI, SAS, IB, and FCoE. Popular HBA vendors include Emulex, QLogic, Broadcom, and Intel.
4. Converged Network Adapters (CNAs)
CNAs integrate multiple network protocols like Ethernet, Fibre Channel and FCoE into a single adapter. This helps to consolidate server connections and reduces costs. Major vendors include Emulex, Intel and QLogic.
5. SAN Management Software
Management software is used to configure, monitor and manage SAN devices and connectivity. It provides centralized control and insight into the SAN. Examples include EMC ControlCenter, Hitachi Storage Command Suite, IBM Tivoli Storage Productivity Center, and NetApp SANtricity.
6. Backup Infrastructure
SAN environments typically include dedicated backup infrastructure like virtual tape libraries, tape drives and backup software to store backups and enable disaster recovery.
7. RAID Arrays and JBODs
Redundant arrays of independent disks (RAID) are used to provide different levels of performance, capacity and resilience. Just a Bunch of Disks (JBOD) refers to disks that are pooled but not configured in a RAID. Leading storage array vendors include EMC, NetApp, Hitachi, HP, IBM, Dell and Fujitsu.
SAN Architecture
The basic SAN architecture includes the following components:
- Servers – Connect to the SAN to access storage devices
- SAN switches – Provide connectivity between servers and storage devices
- Storage devices – RAID arrays, JBODs, tape libraries etc.
- Cables and connectors – To link SAN components
- HBAs – Adapter cards in servers to connect to SAN
| Component | Role |
|---|---|
| Servers | Access shared storage resources on the SAN |
| SAN switches | Provide connectivity between servers and storage |
| Storage devices | Shared block-level storage capacity |
| Cables and connectors | Link SAN components together |
| HBAs | Enable servers to connect to SAN |
Servers connect to SAN switches via HBAs and fiber optic cables. The switches are also connected to the storage devices and enable connectivity between them. This allows servers to access the shared storage devices on the SAN. Management software provides centralized monitoring and control.
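The dual-switch redundancy described above can be sketched as a small connectivity graph. This is a hypothetical model with made-up device names; real path management lives in switch firmware and multipath drivers:

```python
# Minimal model of a redundant SAN fabric as a connectivity graph.
# Device names are illustrative, not from any real deployment.

from collections import defaultdict

fabric = defaultdict(set)

def connect(a, b):
    """Cable two SAN components together (bidirectional link)."""
    fabric[a].add(b)
    fabric[b].add(a)

# Dual-fabric design: each server and the array cabled to both switches.
for switch in ("switch-A", "switch-B"):
    connect("server-1", switch)
    connect("server-2", switch)
    connect("array-1", switch)

def paths(server, storage):
    """Switches through which a server can reach a storage device."""
    return fabric[server] & fabric[storage]

print(sorted(paths("server-1", "array-1")))  # ['switch-A', 'switch-B']
```

Because every server reaches the array through two independent switches, the loss of either switch leaves one path intact, which is the availability argument for redundant fabrics.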
Benefits of SAN
SAN provides organizations with a high-performance shared storage infrastructure. The key benefits of SAN include:
1. Centralized Storage
SAN enables multiple heterogeneous servers to access consolidated, shared storage capacity. This eliminates silos of storage tied to individual servers.
2. Scalability
SANs make it easy to scale storage capacity and performance as needs grow by simply adding disk/array capacity or faster interfaces like 16 Gb or 32 Gb FC.
3. Efficiency
Resources can be allocated from a shared storage pool as required, driving higher utilization. Features like thin provisioning and data deduplication enhance efficiency.
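Deduplication, one of the efficiency features mentioned above, can be sketched as follows. This is a toy illustration; real arrays deduplicate inline or post-process with far more sophisticated chunking:

```python
# Toy sketch of block-level deduplication: identical blocks are stored
# once and referenced by hash. Illustrative only.

import hashlib

BLOCK = 4096

def dedup_store(data, store):
    """Split data into blocks; store each unique block once, keyed by hash."""
    refs = []
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # only new blocks consume space
        refs.append(digest)
    return refs


store = {}
refs = dedup_store(b"A" * BLOCK * 3 + b"B" * BLOCK, store)
print(len(refs), len(store))  # 4 blocks written, only 2 stored
```

Three identical blocks collapse to one stored copy, which is why dedup drives up effective utilization of the shared pool.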
4. Availability
SAN facilitates clustering between servers for continuous data availability in case of outages. Other features like snapshots and synchronous/asynchronous replication further boost availability.
5. Performance
Fibre Channel provides very low latency and high throughput, enabling SANs to support performance-intensive workloads. SSDs and all-flash arrays can significantly enhance performance further.
6. Security
SAN allows advanced security policies like zoning to restrict server access to storage resources. Data encryption can be leveraged for enhanced security.
SAN vs DAS
SAN differs from Direct Attached Storage (DAS) in the following ways:
| SAN | DAS |
|---|---|
| Uses dedicated network to access shared storage devices | Storage devices directly attached to individual servers |
| Storage resources are consolidated and centralized | No centralized view of storage capacity across servers |
| High scalability since expanding capacity is centralized | Expanding capacity requires upgrades at individual servers |
| Enables advanced features like thin provisioning, snapshots etc. | Limited feature set |
| Facilitates high availability | Availability limited to individual server resources |
| Higher cost and complexity to implement | Lower cost, easy to implement |
While DAS is simple to deploy, SAN provides centralized storage, scalability and advanced capabilities. SAN is better suited for environments with growing and more demanding storage needs.
SAN Protocols
The primary protocols used in SAN are:
Fibre Channel (FC)
FC is the predominant SAN protocol with support for 8, 16 and 32 Gbps bandwidth. It provides low latency, high throughput and reliable connectivity. However, deploying end-to-end FC SAN requires specialized components like HBAs and FC switches.
iSCSI
iSCSI allows a SAN to be deployed over standard Ethernet networks, which helps to reduce costs. However, performance is lower compared to FC and a high-bandwidth network is required. Deployments typically use dedicated Ethernet switches or VLANs to isolate storage traffic.
InfiniBand (IB)
InfiniBand offers very high throughput and low latency but requires deploying specialized IB switches and adapters which increases costs. It is used in high performance computing environments.
FCoE (Fibre Channel over Ethernet)
FCoE allows transmitting FC traffic over Ethernet while retaining the low latency, reliability and security of FC. However, specialized FCoE switches and converged network adapters are required.
SAN Topologies
Three common SAN topologies include:
1. Point-to-Point
A direct dedicated link between two devices, typically a server and a storage array. Simple and fast, but it does not scale beyond a single pair of devices.
2. Arbitrated Loop
Devices are arranged in a loop or ring with each device connected to the next. Simple to deploy but provides limited performance and scalability.
3. Switched Fabric
This uses Fibre Channel switches to provide dedicated bandwidth between devices. Provides high performance but requires FC switches.
Switched fabric is the most common topology in enterprise SAN deployments, providing robust connectivity and redundancy; servers typically have redundant links to core switches, which connect to edge switches for device connectivity. Arbitrated loop is largely legacy, historically used to attach disks and other peripherals.
SAN Management
To manage the shared infrastructure efficiently, SAN requires robust management software that provides:
- Centralized monitoring – Single interface to monitor storage devices, network activity, latency, IOPS, bandwidth etc.
- Device management – Configure, provision and tune storage devices from a unified platform.
- Performance management – Track usage trends to identify bottlenecks and maintain service levels.
- Resource allocation – Dynamically allocate storage to servers from a shared pool.
- Troubleshooting – Diagnose and isolate faults to restore operations quickly.
- Reporting – Detailed reporting on utilization, availability, performance, inventory etc.
Leading SAN management platforms include EMC ControlCenter, NetApp SANtricity and IBM Tivoli Storage Productivity Center.
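The performance-management task above amounts to watching metrics against service-level thresholds. A minimal sketch, with made-up device names and an illustrative latency limit:

```python
# Hypothetical threshold check of the kind a SAN monitoring tool runs.
# Device names, sample values, and the limit are illustrative.

LATENCY_LIMIT_MS = 10.0

samples = {
    "array-1": [2.1, 3.4, 2.8],
    "array-2": [12.5, 14.0, 11.9],   # struggling device
}

def over_threshold(samples, limit_ms):
    """Return devices whose average I/O latency exceeds the limit."""
    return [dev for dev, vals in samples.items()
            if sum(vals) / len(vals) > limit_ms]

print(over_threshold(samples, LATENCY_LIMIT_MS))  # ['array-2']
```

Real platforms track many more metrics (IOPS, queue depth, bandwidth) over time, but the core loop is the same: aggregate samples, compare against limits, flag outliers.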
SAN Security
Since SAN enables shared infrastructure, security is critical. Recommended practices include:
- Zoning – Logically segment devices to restrict server access to only authorized storage resources.
- LUN masking – Control which HBAs can access specific LUNs.
- Credential management – Use role-based access control and robust credential policies.
- Auditing – Track all access attempts and changes made to configurations.
- Encryption – Encrypt data-at-rest and data-in-transit to prevent unauthorized access.
- Antivirus – Run antivirus software on hosts attached to the SAN to detect and eliminate malware.
- Firewall rules – Define rules restricting unauthorized traffic between SAN components.
Proper security measures ensure only authorized servers can access SAN resources and data is protected.
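How zoning and LUN masking combine to gate access can be sketched as two independent checks. This is a hypothetical model; real enforcement happens in the switch fabric and the array controllers:

```python
# Toy model of zoning + LUN masking. WWNs, zone names, and LUN ids
# are all illustrative.

# Zoning: which server HBAs (by WWN) may even see which array ports.
zones = {
    "zone-db":  {"hba-wwn-1", "array-port-1"},
    "zone-web": {"hba-wwn-2", "array-port-1"},
}

# LUN masking: which HBAs the array presents each LUN to.
lun_masks = {
    "lun-db":  {"hba-wwn-1"},
    "lun-web": {"hba-wwn-2"},
}

def can_access(hba, array_port, lun):
    """Access requires both a shared zone and a matching LUN mask."""
    zoned = any(hba in members and array_port in members
                for members in zones.values())
    masked = hba in lun_masks.get(lun, set())
    return zoned and masked


assert can_access("hba-wwn-1", "array-port-1", "lun-db")
assert not can_access("hba-wwn-2", "array-port-1", "lun-db")  # masked out
```

The two layers are complementary: zoning limits visibility at the fabric level, while masking restricts which of the visible LUNs each HBA can actually use.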
Virtual SAN
A virtual SAN (vSAN) constructs shared storage from internal disks in hyperconverged infrastructure nodes. Features include:
- Combines computing and storage resources in a cluster
- Uses server-based flash devices and hard disks to create a distributed shared datastore
- Reduces need for external shared storage arrays
- Enabled natively by hypervisors like VMware vSphere and Microsoft Hyper-V
- Facilitates rapid deployment, scaling, and management
- Lower cost compared to traditional SAN
vSAN provides a simple, scalable software-defined storage architecture for virtualized environments. It reduces dependence on dedicated SAN storage.
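The distributed-datastore idea can be sketched as replica placement across cluster nodes. This is a toy scheme, not VMware vSAN's or Hyper-V's actual algorithm; names are illustrative:

```python
# Toy sketch of placing replicas of a storage object on distinct
# hyperconverged nodes. Not any real vSAN's placement algorithm.

def place_replicas(obj_id, nodes, copies=2):
    """Pick `copies` distinct nodes, spreading objects by a simple hash."""
    if copies > len(nodes):
        raise ValueError("not enough nodes for requested replica count")
    start = hash(obj_id) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(copies)]


nodes = ["node-1", "node-2", "node-3"]
replicas = place_replicas("vm-disk-7", nodes, copies=2)
assert len(set(replicas)) == 2   # two copies on two different hosts
```

Keeping each copy on a different node is what lets the datastore survive a host failure without external shared storage, which is the core vSAN value proposition.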
Conclusion
SAN provides centralized, high-performance shared storage to servers, enabling scalability, efficiency and reliability. The dedicated SAN network offers faster access and advanced capabilities compared to NAS and DAS. Fibre Channel is the primary protocol, though iSCSI and FCoE are also used. Careful security and robust management are critical for secure multi-tenant access. Virtual SANs provide software-defined shared storage using internal server disks for lower costs. Overall, SAN is a proven enterprise storage infrastructure for critical applications and workloads.