
A storage area network, or SAN, is a dedicated network that connects servers to shared block storage. Instead of each server owning only its internal disks, a SAN lets hosts access storage volumes presented by arrays, appliances, or software-defined storage systems. To the server, those remote volumes usually appear as local block devices that can host databases, virtual machines, filesystems, clusters, and application data.
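On a Linux host, one way to see this is that SAN LUNs show up alongside internal disks under /sys/block. The sketch below is a minimal illustration, assuming a Linux host; vendor and model attributes depend on the driver and are absent for some device types.

```python
"""List block devices the way a host sees them, SAN LUNs included."""
from pathlib import Path

def attr(dev: Path, name: str) -> str:
    # SCSI-style devices expose vendor/model under device/; others may not.
    p = dev / "device" / name
    return p.read_text().strip() if p.exists() else "?"

for dev in sorted(Path("/sys/block").iterdir()):
    size_path = dev / "size"
    if not size_path.exists():
        continue
    # sysfs reports size in 512-byte sectors regardless of the device's
    # logical block size.
    gb = int(size_path.read_text()) * 512 / 1e9
    print(f"{dev.name}: {gb:8.1f} GB  {attr(dev, 'vendor')} {attr(dev, 'model')}")
```

A remote Fibre Channel or iSCSI LUN and an internal SATA disk come out of this loop looking the same, which is exactly the point of block presentation.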
SANs became popular because they decoupled storage from individual servers, making it easier to consolidate capacity, improve utilization, centralize backup, replicate data, migrate workloads, and build high-availability systems. The basic idea remains the same in 2026, but protocols and performance expectations have changed enormously since the early 1, 2, and 4 Gbit/s Fibre Channel deployments.
Common SAN Protocols
- Fibre Channel: the classic enterprise SAN fabric. It is purpose-built for block storage, uses HBAs and Fibre Channel switches, and remains common for mission-critical databases, virtualization, mainframe-adjacent systems, and large storage arrays.
- iSCSI: carries SCSI commands over TCP/IP. It uses standard Ethernet networks, making it attractive for smaller environments, branch sites, and workloads where cost and operational simplicity matter more than the lowest possible latency (see the discovery sketch after this list).
- FCoE: Fibre Channel over Ethernet encapsulates Fibre Channel frames on lossless Ethernet. It saw adoption in some converged data-center designs, but it did not replace native Fibre Channel or ordinary Ethernet-based storage as broadly as once expected.
- NVMe over Fabrics: extends NVMe commands across a network fabric instead of only across local PCI Express. NVMe-oF can run over Fibre Channel, RDMA transports such as RoCE or InfiniBand, and TCP.
- SAS and direct-attached storage: not usually called a SAN at campus or data-center scale, but still important for shelves, JBODs, and smaller clustered storage designs.
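As a concrete example of the iSCSI bullet above, target discovery is usually a single command against the array's portal. The sketch below wraps the open-iscsi iscsiadm CLI from Python; the portal address is a placeholder, and iscsiadm must be installed and typically run as root.

```python
"""Sketch: iSCSI SendTargets discovery via the open-iscsi CLI."""
import subprocess

PORTAL = "192.0.2.10:3260"  # hypothetical storage portal, not a real array

# SendTargets discovery asks the portal which targets it offers.
result = subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    capture_output=True, text=True, check=True,
)

# Each output line is "portal,tpgt target-iqn"; keep just the IQNs.
for line in result.stdout.splitlines():
    parts = line.split()
    if len(parts) == 2:
        print("discovered target:", parts[1])
```

After a subsequent login, the target's LUNs appear as ordinary block devices on the host, just like the local disks in the earlier listing.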
Fibre Channel in 2026
Fibre Channel did not disappear when Ethernet became faster. It kept evolving because storage fabrics value predictable latency, mature multipathing, isolation, zoning, fabric services, and operational stability. The Fibre Channel Industry Association roadmap lists 64GFC, 128GFC, and 256GFC generations, with 256GFC projected for 2025 and its actual timing driven by market demand. INCITS has also reported completion of 128G Fibre Channel work and the start of 256GFC development.
Modern Fibre Channel is also a transport for NVMe. FC-NVMe lets organizations use existing Fibre Channel skills and fabric design while connecting to NVMe storage systems. That matters because NVMe SSDs and all-flash arrays expose far more parallelism and lower latency than older SCSI-era disks, and the storage network has to avoid becoming the bottleneck.
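On Linux hosts, the negotiated state of each FC port is visible in sysfs, which is useful when checking whether links actually came up at the expected generation. A small sketch follows, assuming an FC HBA driver is loaded; exact attributes vary slightly by driver and kernel version.

```python
"""Sketch: inventory Fibre Channel HBA ports from Linux sysfs."""
from pathlib import Path

FC_HOSTS = Path("/sys/class/fc_host")

def attr(host: Path, name: str) -> str:
    p = host / name
    return p.read_text().strip() if p.exists() else "n/a"

if not FC_HOSTS.exists():
    print("no fc_host entries: no FC HBA driver loaded on this host")
else:
    for host in sorted(FC_HOSTS.iterdir()):
        # port_name is the WWPN used in zoning; speed and port_state
        # show what the link actually negotiated.
        print(host.name,
              "wwpn:", attr(host, "port_name"),
              "state:", attr(host, "port_state"),
              "speed:", attr(host, "speed"))
```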
NVMe over Fabrics
SNIA describes NVMe over Fabrics as a way for NVMe commands to transfer data between a host and an SSD or storage subsystem over a networked fabric. It allows centrally managed NVMe storage to be shared by groups of hosts, reducing stranded capacity while preserving much of NVMe's low-latency design.
NVMe-oF is not one network. It is a family of transports. NVMe/FC uses Fibre Channel. NVMe/RDMA can use InfiniBand, RoCE, or iWARP. NVMe/TCP uses ordinary TCP/IP networks and is attractive because it can run over standard Ethernet without requiring a lossless RDMA fabric. The right choice depends on latency requirements, operational skill, host support, switch design, security model, and whether the environment already has Fibre Channel or Ethernet storage expertise.
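As an illustration of the NVMe/TCP path, the nvme-cli tool handles both discovery and connection; the sketch below drives it from Python. The address and NQN are placeholders, and the commands need root privileges.

```python
"""Sketch: discover and connect to an NVMe/TCP subsystem with nvme-cli."""
import subprocess

ADDR, SVC = "192.0.2.20", "4420"  # hypothetical target; 4420 is the IANA NVMe-oF port
NQN = "nqn.2014-08.org.example:subsystem1"  # hypothetical subsystem NQN

# Ask the discovery controller which subsystems are offered.
subprocess.run(["nvme", "discover", "-t", "tcp", "-a", ADDR, "-s", SVC], check=True)

# Connect; the subsystem's namespaces then appear as /dev/nvmeXnY
# block devices, just like a local NVMe drive.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", ADDR, "-s", SVC, "-n", NQN],
    check=True,
)
```

The appeal for many shops is operational: this runs over ordinary Ethernet switches with no lossless-fabric tuning, at the cost of somewhat higher latency than RDMA transports.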
Key SAN Components
- Host bus adapters: server adapters that connect hosts to Fibre Channel or other storage fabrics. HBAs offload protocol work and provide stable driver and multipathing behavior.
- Switches/directors: the fabric core that connects initiators and targets. Enterprise SAN directors may provide high port counts, redundant control planes, zoning, telemetry, and non-disruptive upgrades.
- Storage arrays: target systems that present LUNs, namespaces, or volumes to hosts, often with replication, snapshots, tiering, deduplication, compression, encryption, and quality-of-service controls.
- Optics and cabling: SFPs, QSFPs, multimode or single-mode fiber, copper twinax, patch panels, and cable management. Dirty or mismatched fiber connectors can cause problems that look like storage faults.
- Multipathing software: host software that uses redundant paths and handles failover, load balancing, and path health (a path-counting sketch follows this list).
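As a rough illustration of the multipathing bullet, the sketch below calls the device-mapper-multipath CLI and counts usable paths per map. The -ll output format is not a stable interface, so the line matching here is a heuristic, not production code.

```python
"""Sketch: count usable paths per multipath map (heuristic parse)."""
import subprocess
from collections import defaultdict

out = subprocess.run(
    ["multipath", "-ll"], capture_output=True, text=True, check=True
).stdout

paths: dict[str, int] = defaultdict(int)
current = None
for line in out.splitlines():
    if " dm-" in line:                        # map header, e.g. "mpatha (wwid) dm-0 ..."
        current = line.split()[0]
    elif current and "active ready" in line:  # an individual usable path
        paths[current] += 1

for name, count in paths.items():
    flag = "" if count >= 2 else "  <-- single path: no failover possible"
    print(f"{name}: {count} active path(s){flag}")
```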
Design Practices That Still Matter
A SAN is only as reliable as its fabric design. Most enterprise SANs use at least two independent fabrics so a host and storage array have redundant paths without a single switch, cable, HBA, or controller becoming a single point of failure. Zoning limits which initiators can talk to which targets. LUN masking or namespace access control on the array limits what storage a host can actually use.
- Use dual independent fabrics for critical workloads.
- Keep zoning simple and explicit, often single-initiator zoning for Fibre Channel (see the zoning sketch after this list).
- Align host multipathing policy with the storage array vendor's guidance.
- Monitor latency, queue depth, credits, errors, link resets, congestion, and path failovers.
- Document host-to-volume mappings carefully; SAN mistakes can expose or overwrite the wrong block device.
- Test failover under maintenance conditions, not for the first time during an outage.
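Single-initiator zoning is easy to check mechanically if you can export the zone set. The sketch below is a toy validator over made-up WWPNs; a real fabric would feed it the switch's actual zoning database instead.

```python
"""Sketch: flag zones that violate single-initiator zoning."""
INITIATORS = {"10:00:00:10:9b:aa:00:01", "10:00:00:10:9b:aa:00:02"}  # host WWPNs

zones = {
    "z_host1_arrayA": ["10:00:00:10:9b:aa:00:01", "50:00:09:72:00:11:22:33"],
    "z_bad_zone":     ["10:00:00:10:9b:aa:00:01", "10:00:00:10:9b:aa:00:02",
                       "50:00:09:72:00:11:22:33"],
}

for name, members in zones.items():
    n_init = sum(1 for wwpn in members if wwpn in INITIATORS)
    if n_init > 1:
        print(f"{name}: {n_init} initiators -- violates single-initiator zoning")
    elif n_init == 0:
        print(f"{name}: no initiator present -- likely a mistake")
    else:
        print(f"{name}: ok")
```

Checks like this are cheap to run before every fabric change, which is when zoning mistakes usually creep in.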
The 2005 QLogic and Agere Story
In 2005, QLogic chose Agere Systems as the system-chip supplier for its new 4 Gbit/s Fibre Channel ASIC and host bus adapter SAN products.
The 4 Gbit/s Fibre Channel SANblade 2400 series HBAs shipped with both PCI Express and PCI-X 1.0/2.0 host bus interfaces. That detail captures the server transition of the period: PCI-X was still common in enterprise servers, while PCI Express was becoming the dominant I/O architecture.
"QLogic selected Agere's chip technology because of its expertise in successfully integrating high-speed serializer/deserializer interface technology," said Roger Klein of QLogic.
"Agere's business relationship with QLogic demonstrates our intense focus on the highly attractive SAN market," said Samir Samhouri of Agere. "Agere's experience in integrating high-speed interface technology and advanced signal integrity technology made it possible for us to deliver high-performance system chips to QLogic the first time through the validation process."
The SANblade product line was designed so that QLogic channel partners around the world could move large enterprises and small and medium-sized businesses from direct-attached storage to SANs.
Agere, then a leading supplier of SoCs for storage applications, had one of the industry's broadest portfolios of high-speed SerDes interface technology. Its interfaces spanned 155 Mbit/s to 10 Gbit/s, including 1, 2, and 4 Gbit/s Fibre Channel; 1.5 and 3 Gbit/s SATA and Serial Attached SCSI; PCI, PCI-X, PCI-X 2.0, and PCI Express; SPI-3/4/5 and SFI-4; Serial RapidIO; and XAUI.
What Changed Since 2005
In 2005, 4G Fibre Channel was a significant step for SAN HBAs. Since then, Fibre Channel has moved through 8G, 16G, 32G, 64G, and 128G generations, while Ethernet storage has moved through 10G, 25G, 40G, 50G, 100G, and faster links. Storage media changed even more: spinning disks still serve capacity tiers, but all-flash and NVMe systems transformed latency and IOPS expectations.
The vendor landscape changed too. Cavium completed its acquisition of QLogic in 2016, and Marvell completed its acquisition of Cavium in 2018, so QLogic Fibre Channel adapters and their support live in the Marvell lineage today. Agere itself merged with LSI Logic in 2007, part of the same wave of semiconductor consolidation.
SAN Versus NAS and Object Storage
SANs provide block storage. Network-attached storage provides file access through protocols such as NFS and SMB. Object storage provides API-based access to objects and metadata, often through S3-compatible interfaces. None of these is universally better; they solve different problems.
Databases, virtualization clusters, and latency-sensitive applications often prefer block storage. Shared user files, home directories, media workflows, and general departmental data often fit NAS. Backups, archives, cloud-native data, analytics lakes, and large unstructured stores often fit object storage. Modern platforms sometimes combine all three.
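The difference is easiest to see at the API level. The sketch below reaches data three ways; every path, mount point, bucket, and key in it is hypothetical, and the object example assumes boto3 is installed.

```python
"""Sketch: the same data reached three ways -- block, file, object."""
import os
import boto3

# Block: the host addresses raw sectors; laying a filesystem on top is
# its own job. /dev/sdb is a placeholder SAN LUN.
fd = os.open("/dev/sdb", os.O_RDONLY)
first_4k = os.pread(fd, 4096, 0)  # read 4096 bytes at offset 0
os.close(fd)

# File: a NAS share exposes paths and POSIX semantics over NFS or SMB.
with open("/mnt/nfs_share/report.txt", "rb") as f:
    data = f.read()

# Object: an HTTP API addresses whole objects by bucket and key.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-bucket", Key="reports/report.txt")
body = obj["Body"].read()
```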
Planning Checklist
- Define workload requirements first: latency, IOPS, throughput, availability, replication, recovery point, and recovery time.
- Choose protocol based on operations as much as benchmarks. Fibre Channel, iSCSI, NVMe/TCP, NVMe/RDMA, and NVMe/FC each imply different skills and failure modes.
- Design redundant paths end to end: host adapters, switches, storage controllers, optics, cables, and power.
- Validate host drivers, firmware, switch code, storage array versions, and support matrices before upgrades.
- Segment and secure management planes. A SAN fabric is critical infrastructure, not a general-purpose user network.
- Measure queueing and tail latency, not only headline throughput (see the sketch after this list).
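To make the last point concrete, the sketch below computes percentiles over synthetic latency samples: a workload can show a healthy-looking mean while its p99 and p99.9 are dominated by outliers, which is exactly what congested fabrics produce. In practice the samples would come from fio output, array telemetry, or host-side tracing.

```python
"""Sketch: why tail latency matters more than averages."""
import random
random.seed(1)

# Mostly fast I/Os plus occasional slow outliers (synthetic data).
samples_ms = [random.gauss(0.5, 0.1) for _ in range(9900)] + \
             [random.uniform(5, 50) for _ in range(100)]

def percentile(values, p):
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(p / 100 * len(ordered)))]

mean = sum(samples_ms) / len(samples_ms)
print(f"mean  : {mean:6.2f} ms")   # looks healthy
for p in (50, 99, 99.9):
    print(f"p{p:<5}: {percentile(samples_ms, p):6.2f} ms")  # the tail tells the truth
```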
A SAN is still the quiet backbone behind many critical systems. The names have changed from 4G Fibre Channel HBAs to NVMe-oF fabrics and 128G/256G roadmaps, but the central job is the same: connect servers to shared block storage with predictable performance, strong isolation, and failure behavior that operators can trust.
References
- Fibre Channel Industry Association: Fibre Channel roadmap
- INCITS: 128G Fibre Channel completed and 256G development begun
- SNIA: What is NVMe over Fabrics?
- NVM Express: NVMe over TCP transport specification
- Cavium: acquisition of QLogic completed
- Marvell: acquisition of Cavium completed
- Marvell: QLogic Fibre Channel adapters and controllers support