
A storage area network (SAN) is a specialized high-speed network that gives servers access to shared block storage. In 2003, Alacritech announced that its Alacritech Accelerators, used with the Microsoft iSCSI initiator, had met Cisco interoperability testing criteria for the Cisco MDS 9000 family of multilayer SAN switches. The combination was designed to help customers connect Windows servers to storage arrays and tape libraries using iSCSI, native Fibre Channel, or Fibre Channel translation, all within the broader discipline of storage networking.
That announcement captured a key SAN transition. Fibre Channel was the established data-center SAN technology, while iSCSI promised block storage over familiar IP networks. Cisco's MDS 9000 platform brought multiprotocol SAN switching, Virtual SANs, security, debugging, and unified management into the same conversation. Alacritech's TCP offload adapters addressed a practical concern of the time: server CPU overhead when moving storage traffic over Ethernet.
What A SAN Provides
A SAN presents storage volumes to hosts as block devices. The application or operating system sees disks, LUNs, or namespaces, while the storage array and fabric provide shared access, redundancy, and management. SANs are common for databases, virtualization clusters, transaction systems, enterprise resource planning, and applications that require low latency and predictable I/O.
Unlike ordinary file sharing, SAN access usually happens below the file-system layer. That makes design discipline critical. Multiple hosts cannot safely write to the same block device unless the file system, cluster stack, or application is built for shared-block access.
The 2003 IP SAN Context
Microsoft released its iSCSI Software Initiator in 2003, helping Windows servers use block storage over IP networks. Cisco also introduced IP storage features for the MDS 9000 family, including ways to connect Fibre Channel SANs over local, metro, and wide-area IP networks using iSCSI and FCIP. Those developments gave smaller and midsize organizations more options than a pure Fibre Channel buildout.
At the same time, early IP SANs had to overcome skepticism. Storage teams worried about performance, congestion, reliability, and operational separation. Network teams worried about unfamiliar storage behavior. Products such as Alacritech's offload accelerators were part of that era's answer: make iSCSI efficient enough to be practical and interoperable enough to trust.
Fibre Channel SANs
Fibre Channel remains a core SAN technology because it was designed for storage fabrics. It provides deterministic behavior, fabric login services, zoning, multipathing, and isolation from general LAN traffic. Modern Fibre Channel environments commonly use dual independent fabrics so a host has separate paths to storage through redundant HBAs, switches, array ports, and power domains.
Good Fibre Channel design depends on careful zoning, consistent naming, firmware compatibility, path testing, and fabric health monitoring. Single-initiator zoning remains a common best practice because it limits blast radius and makes troubleshooting cleaner. Fabric aliases, WWPN documentation, and change control matter enormously when many hosts and arrays share the same directors.
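The single-initiator zoning practice above can be sketched in a few lines. This is a minimal illustration, not a vendor tool: the host names, WWPNs, and `z_<host>_fabricA` naming convention are invented for the example, and a real fabric would be configured through the switch CLI or API.

```python
# Sketch: generate single-initiator zones from a simple inventory.
# One zone per initiator HBA, containing that initiator plus the
# target ports it should reach. All names and WWPNs are illustrative.

def single_initiator_zones(initiators, targets):
    """Return a dict of zone name -> member WWPNs, one zone per initiator."""
    zones = {}
    for host, hba_wwpn in initiators.items():
        zone_name = f"z_{host}_fabricA"   # assumed naming convention
        zones[zone_name] = [hba_wwpn] + list(targets.values())
    return zones

initiators = {"dbhost01": "10:00:00:00:c9:aa:bb:01"}
targets = {"array1_ctlA_p0": "50:06:01:60:47:20:00:01"}

for name, members in single_initiator_zones(initiators, targets).items():
    print(name, members)
```

Generating zones from a documented inventory, rather than typing them by hand, is one way to keep WWPN records and the live fabric configuration from drifting apart.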
iSCSI SANs
iSCSI carries SCSI commands over TCP/IP, making block storage available across Ethernet networks. It can be economical and operationally attractive when organizations already have strong Ethernet skills and 10 Gigabit Ethernet switching infrastructure. RFC 7143 is the consolidated iSCSI protocol specification, defining how SCSI commands are transported across IP networks.
An iSCSI SAN should not be treated as just another VLAN. It needs stable latency, sufficient bandwidth, clean MTU configuration, sensible multipathing, and isolation from noisy traffic. Dedicated switches, physically separate fabrics, or carefully engineered Ethernet fabrics may be appropriate depending on risk and scale. Jumbo frames can help in some designs, but inconsistent MTU configuration can cause miserable intermittent failures.
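The MTU hazard described above is easy to check for mechanically. The sketch below assumes you can collect the configured MTU at each hop (on Linux hosts, for instance, from `/sys/class/net/<if>/mtu`); the hop names and values here are made up for illustration.

```python
# Sketch: flag hops on the iSCSI path whose MTU differs from the
# intended jumbo-frame size. Interface names and values are illustrative.

def mtu_mismatches(path_mtus, expected=9000):
    """Return {hop: mtu} for every hop not set to the expected MTU."""
    return {hop: mtu for hop, mtu in path_mtus.items() if mtu != expected}

path = {
    "host-eth2": 9000,
    "switch-port17": 9000,
    "array-ctlA-p1": 1500,   # left at the default: a classic source of
}                            # intermittent large-I/O failures

print(mtu_mismatches(path))
```

A mismatch like this often passes small-packet ping tests and only fails under real storage traffic, which is exactly why it is worth auditing rather than assuming.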
NVMe Over Fabrics
NVMe over Fabrics extends NVMe storage commands across a network fabric. In SAN environments, the important variants include NVMe over Fibre Channel, NVMe over RDMA, and NVMe over TCP. NVMe/FC lets organizations use Fibre Channel operations and reliability while taking advantage of NVMe's parallelism and lower protocol overhead. NVMe/TCP can be attractive where Ethernet reach and simpler deployment matter more than the lowest possible latency.
NVMe-oF is most valuable when the rest of the stack is ready: fast arrays, modern HBAs or NICs, capable switches, current drivers, tuned multipathing, and applications that can use lower latency. It should be validated under real workload patterns rather than assumed to improve every application.
Core SAN Design Practices
Reliable SANs are built around redundancy and containment:
- Dual fabrics: separate A and B fabrics with no shared switches for critical block storage paths.
- Multipathing: host software that recognizes redundant paths and handles failover and load distribution correctly.
- Zoning and masking: fabric zoning controls which initiators can see which targets, while array masking controls which volumes are presented to which hosts.
- Path diversity: separate HBAs or NICs, switch paths, array controllers, power supplies, and cables.
- Change control: planned maintenance for firmware, zoning, cabling, driver, and array changes.
- Monitoring: port errors, credit starvation, congestion, discards, queue depth, path state, latency, and array controller health.
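The monitoring items above only become actionable when counters are compared between polls, since absolute values on a long-running port mean little. This sketch assumes two polls of per-port counters have already been collected; the counter names mirror common Fibre Channel port statistics, and the port name and thresholds are illustrative.

```python
# Sketch: flag ports whose CRC-error or credit-zero counters rose
# between two polls. Data shapes and thresholds are assumptions.

def rising_errors(prev, curr, threshold=0):
    """Compare two polls of {port: {counter: value}} and return
    (port, counter, delta) for counters that increased past threshold."""
    alerts = []
    for port, counters in curr.items():
        for name in ("crc_errors", "credit_zero"):
            delta = counters[name] - prev[port][name]
            if delta > threshold:
                alerts.append((port, name, delta))
    return alerts

prev = {"fc1/7": {"crc_errors": 2, "credit_zero": 100}}
curr = {"fc1/7": {"crc_errors": 9, "credit_zero": 100}}
print(rising_errors(prev, curr))   # CRC errors climbing on fc1/7
```

Rising CRC errors on a single port frequently point at a marginal optic or cable, which is far cheaper to find this way than through application-level latency complaints.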
Security
SAN security is often underestimated because fabrics are private. That assumption is weak. A mistaken zoning change, compromised management interface, stolen credential, misconfigured iSCSI target, or exposed array API can cause data loss as surely as a public-facing application flaw.
Security controls should include isolated management networks, MFA for storage administration, role-based access, logging, zoning review, CHAP or stronger authentication for iSCSI where appropriate, encryption where risk requires it, secure firmware processes, and separation between backup, replication, management, and production access. Ransomware has made storage administration a privileged security function.
Operations And Troubleshooting
SAN outages are rarely polite. A marginal optic, bad cable, slow drain device, pathing error, queue-depth mismatch, firmware bug, or overloaded storage processor can look like an application problem. Effective operations require telemetry from hosts, switches, fabrics, and arrays in the same timeline.
Useful troubleshooting data includes host path state, HBA driver versions, queue depths, switch port counters, CRC errors, credit loss, iSCSI retransmits, array latency, controller ownership, cache state, and recent zoning or masking changes. Without that shared view, server, network, and storage teams can spend hours proving their own layer is innocent.
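The shared-timeline idea above can be as simple as merging timestamped events from each layer before anyone starts assigning blame. The event tuples and messages below are invented for illustration; real sources would be syslog, switch logs, and array event feeds normalized to a common clock.

```python
# Sketch: merge host, fabric, and array events into one timeline so
# server, network, and storage teams argue from the same data.

import datetime as dt

def merged_timeline(*sources):
    """Combine (timestamp, layer, message) tuples from several sources
    and return them sorted by time."""
    events = [e for src in sources for e in src]
    return sorted(events, key=lambda e: e[0])

host   = [(dt.datetime(2026, 1, 5, 3, 14, 9),  "host",   "path dm-3 failed")]
fabric = [(dt.datetime(2026, 1, 5, 3, 14, 7),  "fabric", "fc1/7 CRC errors rising")]
array  = [(dt.datetime(2026, 1, 5, 3, 14, 12), "array",  "controller A latency spike")]

for ts, layer, msg in merged_timeline(host, fabric, array):
    print(ts.isoformat(), layer, msg)
```

In this made-up sequence the fabric error precedes the host path failure, which is the kind of ordering evidence a single-layer view cannot provide. Synchronized clocks (NTP/PTP) across hosts, switches, and arrays are a prerequisite for this to be trustworthy.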
Design Guidance
For a SAN refresh or new deployment:
- Classify workloads by latency, throughput, availability, and recovery requirements.
- Choose Fibre Channel, iSCSI, NVMe/FC, NVMe/TCP, or another fabric based on workload and operational fit.
- Keep critical fabrics redundant and simple; clever topology is rarely worth a confusing failure mode.
- Document WWPNs, IQNs, aliases, zones, LUN IDs, masking views, and host ownership.
- Test path failover before production cutover and after major firmware or driver updates.
- Monitor latency at the host, switch, and array, because each layer can tell a different story.
- Protect storage management with strong identity controls and audited change processes.
- Integrate SAN planning with backup, snapshot, replication, and ransomware recovery design.
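The failover-testing point in the checklist above can be backed by a simple invariant: after any planned failover exercise, every host should still have a minimum number of active paths per fabric. The path records and the two-path minimum below are illustrative assumptions; real path state would come from the host multipathing layer.

```python
# Sketch: after a failover test, report (host, fabric) pairs with fewer
# active paths than required. Records and thresholds are illustrative.

def failover_gaps(paths, min_paths=2):
    """paths: iterable of (host, fabric, state). Return sorted
    (host, fabric) pairs with fewer than min_paths active paths."""
    live = {}
    for host, fabric, state in paths:
        if state == "active":
            key = (host, fabric)
            live[key] = live.get(key, 0) + 1
    hosts_fabrics = {(h, f) for h, f, _ in paths}
    return sorted(k for k in hosts_fabrics if live.get(k, 0) < min_paths)

paths = [
    ("dbhost01", "A", "active"), ("dbhost01", "A", "active"),
    ("dbhost01", "B", "active"), ("dbhost01", "B", "failed"),
]
print(failover_gaps(paths))   # fabric B is one fault from losing this host
```

Running a check like this before production cutover, and again after firmware or driver updates, turns "we think failover works" into something verifiable.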
The 2003 Alacritech, Microsoft, and Cisco interoperability story was about making IP SANs practical alongside Fibre Channel. In 2026, the same SAN principles still apply: isolate critical paths, validate interoperability, monitor every layer, and design storage fabrics so failures are contained rather than amplified.
References
- Microsoft: iSCSI initiator and qualified storage hardware vendors
- Cisco: IP storage networking products for MDS 9000
- Cisco MDS 9000 Series Multilayer Switches
- SNIA: What is a Storage Area Network?
- SNIA dictionary: storage area network
- RFC 7143: Internet Small Computer System Interface (iSCSI)
- SNIA: What is NVMe over Fabrics?
- NVM Express: NVMe 2.0 support for fabrics and multi-domain subsystems