A grueling couple of days spent running the MS Exchange 2010 Jetstress tool and watching it fail the read latency tests. This was for 4 mailbox servers and 12 databases, with one passive copy of each in the DAG. The active copies live on an EMC CX4 SAN and the passive copies on an EMC CX3 SAN. We were seeing failures on both sides, but only on the first three of the six databases on each server. Very odd.
DB SAN disk: RAID 5 (both active and passive)
Log SAN disk: RAID 10 (both active and passive)
We first thought we needed to break the 12 databases out to separate LUNs inside ESXi, rather than having one giant disk in ESXi and carving it up inside Windows 2008 R2.
This was NOT the case.
When creating disks for your database and log volumes inside ESX, make sure you use a new virtual SCSI controller to split the data traffic I/O. Meaning: when you create a new disk and hook it to the LUNs you already have, simply select a SCSI ID on a new controller. I alternated between controllers 1 and 2, keeping my OS on controller 0.
MBDB01 was put on SCSI controller: 1:0
MBDB01Log was put on SCSI controller: 1:1
MBDB02 was put on SCSI controller: 2:0
MBDB02Log was put on SCSI controller: 2:1
MBDB03 was put on SCSI controller: 1:2
MBDB03Log was put on SCSI controller: 1:3
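For reference, the bus/ID layout above corresponds to .vmx entries along these lines. This is only a sketch: the VMDK filenames and the lsilogicsas controller type are assumptions, and in practice you would make these changes through the vSphere client rather than by hand-editing the .vmx:

```
# Extra virtual SCSI controllers (scsi0 stays dedicated to the OS disk).
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogicsas"
scsi2.present = "TRUE"
scsi2.virtualDev = "lsilogicsas"

# Database and log disks spread across controllers 1 and 2.
scsi1:0.present = "TRUE"
scsi1:0.fileName = "MBDB01.vmdk"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "MBDB01Log.vmdk"
scsi2:0.present = "TRUE"
scsi2:0.fileName = "MBDB02.vmdk"
scsi2:1.present = "TRUE"
scsi2:1.fileName = "MBDB02Log.vmdk"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "MBDB03.vmdk"
scsi1:3.present = "TRUE"
scsi1:3.fileName = "MBDB03Log.vmdk"
```

Each `scsiX.present` line adds a controller the guest will enumerate as separate hardware, and each `scsiX:Y` pair is one virtual disk at that bus/ID.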
Even though physically everything is traveling over the same Fibre Channel links, the guest OS doesn't know that, and it actually builds new SCSI controller hardware for each new controller you set up.
Jetstress now passes with flying colors on all fronts.