iSCSI vs. Fibre Channel storage performance

It’s something that I have been discussing a lot with one of my customers.

At the moment, they are hesitant to invest in a potentially very expensive SAN solution involving Fibre Channel connections. Currently, there are eight vSphere servers with no centralised storage, so all virtual machines are running on local disk arrays. That means no DRS, no HA and no vMotion! The virtual machines are protected only by each server’s ability to recover from a physical disk failure, in this case onboard RAID 5 controllers.

My temporary answer was an open source iSCSI storage solution until something more permanent can be found.

However, the big question that I keep coming up against is, “What’s the performance hit I will take?”

It’s undeniable that Fibre Channel should be faster but, as the article below describes, this may not be that much of an issue.

If we look at the way a virtual machine (and, indirectly, a physical machine) runs, the most important resources are RAM and CPU; disk storage is (sort of) secondary, with streaming media and backup servers being the exceptions. For typical workloads, the performance difference between the two interconnects is minimal.

That assumes a properly implemented iSCSI solution, of course. I have looked at the security considerations previously (http://invurted.com/tutorial-iscsi-security/), and performance will be at its best when iSCSI is isolated on its own infrastructure, either with dedicated physical switching or with VLANs to separate the traffic.
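To make the isolation point concrete, here is a minimal Python sketch, with a made-up storage subnet and portal addresses purely for illustration, that checks whether an initiator’s iSCSI target portals actually sit inside the dedicated storage network rather than on the general-purpose LAN:

```python
import ipaddress

# Hypothetical values for illustration only: a dedicated storage VLAN subnet
# and the iSCSI target portals an initiator is configured to use.
STORAGE_SUBNET = ipaddress.ip_network("192.168.50.0/24")   # assumed isolated VLAN
ISCSI_PORTALS = ["192.168.50.10", "192.168.50.11", "10.0.0.25"]

def check_portal_isolation(portals, storage_subnet):
    """Flag any iSCSI portal outside the dedicated storage subnet, which
    would mean its traffic is crossing the ordinary data network."""
    for portal in portals:
        addr = ipaddress.ip_address(portal)
        if addr in storage_subnet:
            print(f"{portal}: OK - inside {storage_subnet}")
        else:
            print(f"{portal}: WARNING - outside the isolated storage network")

check_portal_isolation(ISCSI_PORTALS, STORAGE_SUBNET)
```

Any portal flagged as being outside the storage subnet is a sign that iSCSI traffic is sharing links with ordinary LAN traffic, which is exactly where performance and security problems creep in.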

In short, there is very little to stop small to medium enterprises from adopting iSCSI for shared storage. Its performance is comparable to Fibre Channel in most circumstances, and the cost is lower than most Fibre Channel solutions, for a minimal performance hit.

What weighs more: one pound of bricks or one pound of feathers? Which is faster: 2 Gb FC or 1 Gb Ethernet? Hint: Both questions have the same answer.

The area of iSCSI performance and how it compares to Fibre Channel is often misunderstood. Both of these SAN interconnects are typically measured by bandwidth with “2 Gb” FC SANs dominating the market today and “1 Gb” Ethernet used for the majority of iSCSI SANs.

Which would you say is faster: a 2 Gb FC connection or a 1 Gb Ethernet connection? It’s a trick question: they are equally fast. They both transfer data at the speed of light. Bandwidth is not an issue of speed but of size. Think about a four-lane highway versus a two-lane highway. If there are just a few automobiles traveling on either highway, drivers will be able to go the maximum speed. As more drivers travel on each road, the two-lane highway will experience a bottleneck before the four-lane highway does.

This is the same with FC and Ethernet. A 2 Gb FC interconnect has twice the bandwidth (double the number of lanes) of 1 Gb Ethernet. Bandwidth has an impact on performance when large requests are being processed; in that case, most of the work is spent transferring the data over the network, making bandwidth the critical path. For smaller read and write requests, however, the storage system spends more time accessing data, making the CPU, cache memory, bus speeds and hard drives more important to overall application performance.
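To put some rough numbers on the highway analogy, here is a back-of-the-envelope Python model. The 5 ms storage-side access latency is an assumed, illustrative figure rather than a measurement; it stands in for seek, rotation and controller overhead, while the second term is simply the time to move the data across the link:

```python
# Crude I/O service-time model: a fixed storage-side access latency plus the
# wire-transfer time, which depends on link bandwidth. The latency figure is
# an assumption for illustration, not a benchmark result.

def io_time_ms(io_size_bytes, link_gbps, access_latency_ms=5.0):
    """Service time for one I/O: fixed access latency + transfer time."""
    transfer_ms = (io_size_bytes * 8) / (link_gbps * 1e9) * 1000
    return access_latency_ms + transfer_ms

for label, size in [("4 KB (small, random)", 4 * 1024),
                    ("1 MB (large, streaming)", 1024 * 1024)]:
    fc = io_time_ms(size, link_gbps=2.0)   # "2 Gb" Fibre Channel
    eth = io_time_ms(size, link_gbps=1.0)  # "1 Gb" Ethernet / iSCSI
    print(f"{label}: FC ~{fc:.2f} ms, iSCSI ~{eth:.2f} ms "
          f"({(eth / fc - 1) * 100:.1f}% slower on the narrower link)")
```

With these assumptions, the 4 KB request comes out within a fraction of a percent on either link because access time dominates, while the 1 MB request is roughly 45% slower on the 1 Gb link because the transfer itself becomes the critical path. That is the whole bandwidth-versus-latency argument in two lines of output.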

Unless you have a bandwidth-intensive application (e.g., streaming media or backup data), the difference in performance will be minimal. Enterprise Strategy Group (ESG) Lab has tested storage systems that support iSCSI and FC and the performance difference is minimal — ranging between 5% and 15%.

In fact, an iSCSI storage system can actually outperform an FC-based product depending on factors more important than bandwidth, including the number of processors, host ports, the amount of cache memory, and the number of disk drives and how widely data can be striped across them.

The slowest component of the storage performance chain is the hard disk drive. It takes a hard drive far longer (sometimes thousands of percent longer) to access data than it takes the electronic components of the storage system, such as the processors, bus and memory. The timeline for an I/O starts with a read/write command being sent to the hard drive by the application. This is followed by a long mechanical delay while the drive moves the actuator to the correct track, referred to as the seek process.

The seek process is by far the slowest part of storage performance. The drive then has to wait for the platter to rotate the requested data under the head, another long mechanical delay known as rotational latency. Next, the data is transferred from the drive to the CPU and a status handshake is performed to terminate the request. The access time of a disk drive, which is seek plus rotational latency, is responsible for the majority of the “wait time.”
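To see how lopsided that breakdown is, here is a quick Python calculation using typical published figures for a 7,200 RPM drive. The seek time and media transfer rate are assumptions for illustration, not measurements from any particular drive:

```python
# Rough service-time breakdown for a single 4 KB random read from a
# 7,200 RPM drive. All figures are assumed, typical values, not measured.

RPM = 7200
AVG_SEEK_MS = 8.5          # assumed average seek time
MEDIA_RATE_MB_S = 150      # assumed sustained media transfer rate
IO_SIZE_KB = 4

rotational_latency_ms = (60 / RPM) * 1000 / 2               # half a revolution
transfer_ms = IO_SIZE_KB / 1024 / MEDIA_RATE_MB_S * 1000    # moving the data

mechanical_ms = AVG_SEEK_MS + rotational_latency_ms
total_ms = mechanical_ms + transfer_ms

print(f"Seek:               {AVG_SEEK_MS:.2f} ms")
print(f"Rotational latency: {rotational_latency_ms:.2f} ms")
print(f"Data transfer:      {transfer_ms:.3f} ms")
print(f"Mechanical share:   {mechanical_ms / total_ms * 100:.1f}% of the total")
```

With these figures, the mechanical portion (seek plus rotational latency) accounts for well over 99% of the service time of a small random read, which is why the drives themselves, not the interconnect, usually set the ceiling on storage performance.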

