
Slow performance



#1 teflux

    Newbie

  • Members
  • 6 posts

Posted 28 December 2011 - 12:51 PM

I am getting very bad performance from my iSCSI drive. Please help me. :( What is wrong with my setup?

Below are the results from Xbench.

FROM THE ISCSI DRIVE
Disk Test 9.42
Sequential 6.39
Uncached Write 96.26 59.10 MB/sec [4K blocks]
Uncached Write 89.97 50.90 MB/sec [256K blocks]
Uncached Read 25.78 7.55 MB/sec [4K blocks]
Uncached Read 1.77 0.89 MB/sec [256K blocks]
Random 17.89
Uncached Write 235.51 24.93 MB/sec [4K blocks]
Uncached Write 183.87 58.86 MB/sec [256K blocks]
Uncached Read 1030.64 7.30 MB/sec [4K blocks]
Uncached Read 4.70 0.87 MB/sec [256K blocks]

FROM A NETWORK SHARE
Same test, but run directly against a network share on the same NAS
Disk Test 58.82
Sequential 34.16
Uncached Write 13.98 8.58 MB/sec [4K blocks]
Uncached Write 112.36 63.57 MB/sec [256K blocks]
Uncached Read 34.36 10.06 MB/sec [4K blocks]
Uncached Read 132.15 66.42 MB/sec [256K blocks]
Random 211.55
Uncached Write 90.33 9.56 MB/sec [4K blocks]
Uncached Write 219.41 70.24 MB/sec [256K blocks]
Uncached Read 1265.98 8.97 MB/sec [4K blocks]
Uncached Read 401.50 74.50 MB/sec [256K blocks]

READ performance from the iSCSI drive is really bad :(


MY CONFIG

NETWORK: GIGABIT RJ45 Cat6

NAS: QNAP TS439 3.5.2 Build 1126T
globalSAN: 5.0.0.279
Mac OS X 10.6.8, i7 3.4 GHz, 12 GB DDR3

QNAP Config:
4 x 1 TB Western Digital Caviar Green WDC WD10EADS-00L5B1 01.0
-> RAID5 Volume 1234
Disk Management -> iSCSI
- Portal Management: Enable iSCSI Target Service: Checked -> Port 3260
- Enable iSNS: unchecked

-> iSCSI LUN: DISK1 from the RAID 5 volume 1234
Capacity 1024 GB

Target Management:
HOMESCSI (iqn.2004-04.com.qnap:ts-439:iscsi.homescsi.8cb34a) Connected

Advanced ACL: Default Policy Read/Write

LUN backup: None

globalSAN Config:
QNAP Target, connected persistent
-> Alias: QNAP Target
-> Error Detection: None
-> iSCSI Options: nothing
-> Authentication: None
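
For cross-checking the Xbench numbers outside the GUI, here is a minimal Python sketch of a sequential write/read throughput test against the mounted iSCSI volume. The mount point /Volumes/DISK1 is an assumption: adjust TEST_PATH to wherever the LUN is mounted, and keep the file larger than RAM so the read pass is not served from the page cache.

#!/usr/bin/env python
# Minimal sequential throughput cross-check for the mounted iSCSI volume.
# Assumption: the LUN is mounted at /Volumes/DISK1; adjust TEST_PATH.
# Writes a large file in 256K blocks (matching the Xbench runs above),
# then reads it back, reporting MB/s for each pass.
import os
import time

TEST_PATH = "/Volumes/DISK1/throughput_test.bin"   # hypothetical mount point
BLOCK_SIZE = 256 * 1024                            # 256K blocks
TOTAL_BYTES = 2 * 1024 ** 3                        # 2 GB of test data

def write_pass():
    block = os.urandom(BLOCK_SIZE)
    written = 0
    start = time.time()
    with open(TEST_PATH, "wb") as f:
        while written < TOTAL_BYTES:
            f.write(block)
            written += BLOCK_SIZE
        f.flush()
        os.fsync(f.fileno())      # make sure the data actually hit the target
    return written / (time.time() - start) / 1e6

def read_pass():
    read = 0
    start = time.time()
    with open(TEST_PATH, "rb") as f:
        while True:
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break
            read += len(chunk)
    return read / (time.time() - start) / 1e6

if __name__ == "__main__":
    print("sequential write: %.1f MB/s" % write_pass())
    print("sequential read:  %.1f MB/s" % read_pass())
    os.remove(TEST_PATH)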

#2 teflux

    Newbie

  • Members
  • 6 posts

Posted 28 December 2011 - 08:36 PM

New benchmark with version 5.0.0.288 -> still very slow performance.

Disk Test 9.05
Sequential 6.19
Uncached Write 119.41 73.31 MB/sec [4K blocks]
Uncached Write 102.50 58.00 MB/sec [256K blocks]
Uncached Read 26.79 7.84 MB/sec [4K blocks]
Uncached Read 1.69 0.85 MB/sec [256K blocks]
Random 16.84
Uncached Write 246.41 26.09 MB/sec [4K blocks]
Uncached Write 185.79 59.48 MB/sec [256K blocks]
Uncached Read 1056.23 7.48 MB/sec [4K blocks]
Uncached Read 4.40 0.82 MB/sec [256K blocks]


#3 teflux

    Newbie

  • Members
  • 6 posts

Posted 29 December 2011 - 06:16 PM

I sent a support request to the help desk and my problem is now fixed. Thanks again.

New benchmark with version 5.1.0.314 beta and Xbench... it's much faster!

BUT I don't understand why the uncached writes are slower than in the previous version... It seems there is still room for improvement ;)

Version 5.0.0.288
Uncached Write 96.26 59.10 MB/sec [4K blocks]
Uncached Write 89.97 50.90 MB/sec [256K blocks]
Random 17.89
Uncached Write 235.51 24.93 MB/sec [4K blocks]
Uncached Write 183.87 58.86 MB/sec [256K blocks]


Version 5.1.0.314
Disk Test 77.08
Sequential 47.39
Uncached Write 55.24 33.92 MB/sec [4K blocks]
Uncached Write 52.27 29.57 MB/sec [256K blocks]
Uncached Read 27.11 7.93 MB/sec [4K blocks]
Uncached Read 97.33 48.92 MB/sec [256K blocks]
Random 206.30
Uncached Write 239.50 25.35 MB/sec [4K blocks]
Uncached Write 95.01 30.42 MB/sec [256K blocks]
Uncached Read 1014.04 7.19 MB/sec [4K blocks]
Uncached Read 270.04 50.11 MB/sec [256K blocks]

#4 teflux

    Newbie

  • Members
  • 6 posts

Posted 23 January 2012 - 03:45 PM

Hi all

New benchmark with version 5.1.0.316. Improved again. Well done :)


Disk Test 90.60
Sequential 56.16
Uncached Write 100.66 61.81 MB/sec [4K blocks]
Uncached Write 91.95 52.02 MB/sec [256K blocks]
Uncached Read 24.24 7.09 MB/sec [4K blocks]
Uncached Read 109.26 54.91 MB/sec [256K blocks]
Random 234.29
Uncached Write 172.78 18.29 MB/sec [4K blocks]
Uncached Write 157.38 50.38 MB/sec [256K blocks]
Uncached Read 1027.76 7.28 MB/sec [4K blocks]
Uncached Read 252.63 46.88 MB/sec [256K blocks]

#5 ducksoft

    Newbie

  • Members
  • 5 posts

Posted 01 February 2012 - 03:51 PM

I installed 5.1.0.316 beta (upgrade from 5.0.0.286) and Disk Speed Test reported an increase as follows:
Write : 76.6 MB/s -> 87.2 MB/s
Read : 22.4 MB/s -> 50.4 MB/s
There is a significant increase in read performance. As I use network logins where the desktop is served by OS X Server 10.6.8 / globalSAN / QNAP 859, this translates into a real improvement in end-user experience.

Thank you.

Tim
iSCSI Initiator Version : 5.1.0.336
OS Version : OS X 10.8.2
NAS : QNAP TS-859U (firmware 3.8.1 Build 20121205)


#6 teflux

    Newbie

  • Members
  • 6 posts

Posted 18 March 2012 - 06:27 AM

Hi all

New benchmark with version 5.1.0.336. Improved again.



Disk Test 94.73
Sequential 56.47
Uncached Write 117.75 72.29 MB/sec [4K blocks]
Uncached Write 85.89 48.60 MB/sec [256K blocks]
Uncached Read 23.13 6.77 MB/sec [4K blocks]
Uncached Read 133.80 67.25 MB/sec [256K blocks]
Random 293.75
Uncached Write 320.64 33.94 MB/sec [4K blocks]
Uncached Write 151.22 48.41 MB/sec [256K blocks]
Uncached Read 915.22 6.49 MB/sec [4K blocks]
Uncached Read 358.05 66.44 MB/sec [256K blocks]

#7 ajs

    Newbie

  • Members
  • 1 post

Posted 19 April 2012 - 09:00 AM

Hi folks, I'm evaluating the initiator and thought I'd share my initial performance results.

Initiating Machine:
MacBook Pro (8,3 - Core i7 @ 2.5 GHz - 8 MB cache - 8 GB RAM), OS X 10.7.3
Gigabit Ethernet w/1500B MTU
globalSAN 5.1.0.336

Target Server:
Synology DS1010+ running DSM 4.x
2xGigabit Ethernet Bonded interface w/9000B MTU

LUN/Target:
File-Level 120GB "Time Machine" HFS+ target on DS1010+ volume 1 (RAID 1)


Network:
Connected via Gigabit switch over Cat6


Disk Test 117.41
Sequential 71.57
Uncached Write 145.32 89.22 MB/sec [4K blocks]
Uncached Write 89.62 50.71 MB/sec [256K blocks]
Uncached Read 32.04 9.38 MB/sec [4K blocks]
Uncached Read 150.65 75.72 MB/sec [256K blocks]
Random 326.69
Uncached Write 312.46 33.08 MB/sec [4K blocks]
Uncached Write 177.94 56.96 MB/sec [256K blocks]
Uncached Read 1154.29 8.18 MB/sec [4K blocks]
Uncached Read 391.05 72.56 MB/sec [256K blocks]

Not too shabby. I also did a fresh (first-time) Time Machine backup, excluding quite a bit of stuff but still resulting in 60 GB of data written to the target, and it blew the socks off the same backup over AFP to a sparsebundle. I didn't time the two, but the difference was obvious.

I'll probably repeat this with jumbo frames enabled on the initiating machine as well as on an older Mac Pro. In addition, I'm going to try to create a second volume on the DS1010+ and test with a block-level LUN.
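
Before that, here is a small sketch for scripting the MTU check on the Mac side, so the before/after jumbo-frame runs are comparable. It assumes the wired interface is en0 and simply parses the ifconfig output:

#!/usr/bin/env python
# Check the current MTU on the Mac before/after enabling jumbo frames.
# Assumption: the wired interface is en0; change IFACE if yours differs.
import re
import subprocess

IFACE = "en0"

def current_mtu(iface):
    out = subprocess.check_output(["ifconfig", iface]).decode()
    match = re.search(r"mtu (\d+)", out)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    print("%s MTU is %s" % (IFACE, current_mtu(IFACE)))
    # Raising it to 9000 is done in System Preferences > Network > Advanced >
    # Hardware, or temporarily with: sudo ifconfig en0 mtu 9000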

-A

#8 bmr2012

    Newbie

  • Members
  • 1 post

Posted 24 April 2012 - 05:41 AM

I have the same problem.
Do you have any ideas (ideally easy, step-by-step tutorials) on how to improve the performance?
Thanks

#9 flight553

    Newbie

  • Members
  • 3 posts

Posted 17 July 2012 - 09:46 AM

I have results similar to teflux's improved Xbench scores, but mine are to a target on a virtual machine on my local computer. I have a 6 Gb/s SSD that measures local disk speeds at 300 MB/s read and 200 MB/s write, but to the iSCSI target on the virtual machine, over a vNIC, the disk speed is no faster than 50 MB/s. This is actually reading from and writing to the same fast SSD that my local computer writes to.

Is there some hard limit in the iSCSI protocol, or some amount of iSCSI overhead that will always knock off 60% of the potential speed?

#10 Chapindad

    Newbie

  • Members
  • 1 post

Posted 24 July 2012 - 01:21 PM

@flight553 The protocol itself has no hard limit, but the Ethernet it runs on does. To get the best performance you need to enable jumbo frames on your Ethernet switches and on your NICs. The issue is that Ethernet normally runs with a 1500-byte MTU, while jumbo frames let you go up to a 9000-byte MTU; a rough calculation of what that overhead costs is sketched below.
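
To put rough numbers on that (a back-of-envelope sketch, not a measurement): gigabit Ethernet moves 125 MB/s on the wire, each frame carries about 38 bytes of Ethernet-level overhead (preamble, header, FCS, inter-frame gap) plus roughly 40 bytes of IP and TCP headers, and the 48-byte iSCSI PDU header is amortized over many frames.

# Back-of-envelope throughput ceiling for iSCSI over gigabit Ethernet.
# Approximations: 38 bytes of Ethernet framing overhead per frame
# (preamble 8 + header 14 + FCS 4 + inter-frame gap 12) and 40 bytes of
# IPv4 + TCP headers; the iSCSI PDU header is amortized and ignored here.
LINE_RATE_MBPS = 125.0      # 1 Gbit/s = 125 MB/s on the wire
ETHERNET_OVERHEAD = 38
IP_TCP_HEADERS = 40

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HEADERS
    efficiency = payload / float(mtu + ETHERNET_OVERHEAD)
    print("MTU %5d: ~%.1f%% efficient, ceiling ~%.1f MB/s"
          % (mtu, efficiency * 100, efficiency * LINE_RATE_MBPS))
# MTU  1500: ~94.9% efficient, ceiling ~118.7 MB/s
# MTU  9000: ~99.1% efficient, ceiling ~123.9 MB/s

So framing overhead alone only costs a few percent even at 1500 MTU; the much larger losses seen on the small 4K transfers in the benchmarks above come mostly from per-request latency rather than from the protocol itself.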