The drobo showed up yesterday. The unboxing was as cool as with an Apple product. And it is really easy to set up and start using... as long as you are plugging it into a Mac or PC. The instructions don't say anything about how to get it to work on Linux. Luckily I found this via google:
<pre>
[root@192.168.1.1]# lshw
[root@192.168.1.1]# /sbin/mke2fs -j -i 262144 -L Drobo -m 0 -O sparse_super,^resize_inode -q /dev/sdc
[root@192.168.1.1]# mkdir /drobo
[root@192.168.1.1]# mount -t ext3 /dev/sdc /drobo
update# ls -l /dev/disk/by-uuid    (copy the uuid for insertion into fstab)
[root@192.168.1.1]# vi /etc/fstab
UUID=7fcc72fe-0884-4d66-b4f3-962901875650 /drobo ext3 defaults 0 0
</pre>
That was all I needed to get me going.
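One thing worth double-checking before pointing mke2fs at anything is which device node the drobo actually grabbed. A quick look like this (lsscsi is optional and may need installing) confirms it really is /dev/sdc before you format it:

<pre>
dmesg | tail -20        # look for the newly attached USB mass-storage disk
cat /proc/partitions    # list the block devices the kernel knows about
lsscsi                  # optional: shows the vendor string next to each device node
</pre>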
I thought I had a bunch of old hard drives lying around that I would be able to wedge into drobo until I could afford to buy real drives, but none of them are SATA. Darn. Another bummer is that my server's firewire plug is the heart-shaped 6-pin kind (1394a, like a mac's), while the firewire plugs on drobo are the square 9-pin kind (1394b). So I think I'm stuck with USB. My first speed test writing to drobo shows 14.8MB/sec. That feels pretty lame.
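For reference, a quick-and-dirty write test of that sort looks something like this (the file name and size here are just placeholders, not necessarily what I ran):

<pre>
# crude write test: stream 1GB of zeros at the drobo and let dd report the rate
dd if=/dev/zero of=/drobo/speedtest bs=1M count=1024 conv=fdatasync
rm /drobo/speedtest
</pre>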
Another weird thing is that the drobo reports the raw capacity of the array, not the actual usable (protected) capacity. That is, I put two 1TB drives in it and it looks like this with df -h:
<pre>
/dev/sde              2.0T  155G  1.9T   8% /drobo
</pre>
I expected drobo to report the capacity of the array to be about 1TB instead of 2TB.
I added a few more drives to the drobo to give it the capacity to hold our vacation videos.
Interestingly, after adding two more 1TB drives, drobo still reports the same capacity to df. Presumably drobo always presents a fixed-size 2TB volume to the OS and manages the real free space internally. Even more reason not to trust df numbers for drobo.
=== drobo performance on usb 2.0 ===
But still the drobo is just too slow.
Drobo claims to have these throughput stats:
Max Sustained Transfer Rate:
* FireWire 800: Up to 52MB/s reads and 34MB/s writes
* USB 2.0: Up to 30MB/s reads and 24MB/s writes
How slow is it for me? Bonnie says this:
<pre>
rday@weasel:/drobo/bonnie$ bonnie
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
weasel           6G   249  74  1019   0   500   0   876  63  1111   0  58.3   2
Latency              1516ms   21811ms    1552ms   76164us     261ms     727ms
Version  1.96       ------Sequential Create------ --------Random Create--------
weasel              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1387   6 +++++ +++  1815   4  1615   4 +++++ +++  1811   4
Latency             63932us    1261us     725us    2761us    2696us     121us
</pre>
I care most about block reads and writes. Those come out at 1,111 and 1,019 K/sec.
Hmm, what units is bonnie reporting? The columns are labeled K/sec, and I'll assume that's kilobytes since bits would be silly. So I'm getting about 1MB/sec on drobo. Bleck. That's a lot worse than the 24MB/sec that drobo advertises for USB 2.0 writes. I wonder what I'm doing wrong? I'm getting slightly faster reads than writes, which seems reasonable.
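For the record, that output is bonnie++ 1.96 run with its defaults; an explicit invocation would look roughly like this (the -s size should be well above RAM so the page cache can't flatter the numbers):

<pre>
# explicit bonnie++ run: test directory on the drobo, 6GB of data, label the machine
bonnie++ -d /drobo/bonnie -s 6144 -m weasel
</pre>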
=== native internal disk performance on the same machine ===
Compare that to bonnie running on an internal drive on the same machine:
<pre>
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
weasel           6G   347  98 33362  24 17006  10  1311  95 46774  13 115.1   7
Latency             83296us    2552ms    1958ms   97333us     412ms    1399ms
Version  1.96       ------Sequential Create------ --------Random Create--------
weasel              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2238  86 +++++ +++ +++++ +++  2691  97 +++++ +++  6854  97
Latency             35917us     410us     319us   53088us     119us     874us
</pre>
That means block reads and writes at 46 and 33 MB/sec. About 30 to 40 times better than the drobo numbers. And curiously close to what drobo claims to be able to do. So I have to presume that the "block sequential read" that bonnie is measuring is a different metric than the "max sustained transfer rate" that drobo is measuring.
But if I just need to find the right firewire cable to switch off of USB, then I'm a dummy for using USB all this time.
Wikipedia says that USB 2.0 generally performs at about 240Mbit/sec (roughly 30MB/sec) and FireWire 800 (1394b) does 800Mbit/sec (roughly 100MB/sec) with less CPU load. I've got to find that cable.
Hopefully, I'll be able to post bonnie stats for drobo over firewire soon.
=== drobo over firewire 400 (1394a) ===
<pre>
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
weasel           6G   265  98 16951  14 10546   6   961  97 28571   8 123.6   7
Latency               110ms    6345ms     537ms   67886us     230ms     796ms
Version  1.96       ------Sequential Create------ --------Random Create--------
weasel              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  8777  28 +++++ +++ 16891  46 14137  43 +++++ +++ 15867  47
Latency             87774us   12285us   13658us   13962us   11168us   13427us
</pre>
So let's see, that means 17MB/sec block writes and 28MB/sec block reads. Wow, that's much better. And if I wanted to invest in a $25 1394b card, I could probably double that. Well, I'd have to buy another $37 cable too. So maybe I'll save up for a while.
Watching movies now seems much better. I only saw one stutter on a dvd served by the drobo and that was while the cache was still filling right after starting VLC. Now I just need to watch more movies and gather much more data.
=== drobo and the 2TB limit ===
I only just noticed that my second generation regular drobo has a 2TB limit on the LUNSIZE and that when I filled all four bays with 1TB drives, it didn't just give me (4TB - raid overhead) usable storage space. It gave me 2 x (2TB - raid overhead) of usable storage space. That is, I now have two devices, not one big pot to put all my files in. Bummer. And according to this, it is not advisable to use LVM to wrap the devices up into a single logical unit. I find that hard to believe, but I'm not immediately prepared to test my theory.
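If I ever do test that theory, the LVM sketch would be roughly this (untested, and the device names are just whatever the two drobo LUNs happen to show up as):

<pre>
# UNTESTED sketch: glue the two 2TB drobo LUNs into one logical volume
pvcreate /dev/sde /dev/sdf
vgcreate drobo_vg /dev/sde /dev/sdf
lvcreate -l 100%FREE -n drobo_lv drobo_vg
mke2fs -j -L Drobo /dev/drobo_vg/drobo_lv
mount /dev/drobo_vg/drobo_lv /drobo
</pre>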
Maybe I can live with two medium-sized buckets for my data.
=== iostat while copying from one side of drobo to the other ===
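These are 5-second interval snapshots; the invocation would have been something like:

<pre>
iostat -d 5       # plain per-device report every 5 seconds
iostat -d -x 5    # extended per-device report every 5 seconds
</pre>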
<pre>
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               2.00        20.80        22.40        104        112
dm-0              5.40        20.80        22.40        104        112
dm-1              0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0
sdc               0.00         0.00         0.00          0          0
dm-2              0.00         0.00         0.00          0          0
dm-3              0.00         0.00         0.00          0          0
dm-4              0.00         0.00         0.00          0          0
sdd              56.80     13633.60         0.00      68168          0
sde              18.80         0.00      9593.60          0      47968

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s    wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               1.80     6.00    2.40    1.00    33.60     56.00    26.35     0.05   14.71   4.71   1.60
dm-0              0.00     0.00    4.20    7.00    33.60     56.00     8.00     0.09    7.68   1.43   1.60
dm-1              0.00     0.00    0.00    0.00     0.00      0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00     0.00    0.00    0.00     0.00      0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00     0.00    0.00    0.00     0.00      0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    0.00     0.00      0.00     0.00     0.00    0.00   0.00   0.00
dm-3              0.00     0.00    0.00    0.00     0.00      0.00     0.00     0.00    0.00   0.00   0.00
dm-4              0.00     0.00    0.00    0.00     0.00      0.00     0.00     0.00    0.00   0.00   0.00
sdd               2.40     0.60   79.80    1.40 19126.40     16.00   235.74     2.00   24.70  12.29  99.80
sde               0.00  1633.20    0.00   13.80     0.00  13176.00   954.78    13.92 1008.41  30.00  41.40
</pre>
The near 100% utilization of sdd tells me that reads are the bottleneck. Blk_read/s is already a per-second figure, and iostat's blocks are 512 bytes on modern kernels, so 13,633.60 blocks/sec works out to roughly 7MB/sec (the extended report's 19,126 sectors/sec for sdd is about 9.8MB/sec). Either way, that's a bummer when you are trying to move several hundred gigabytes.
=== looking for a good test ===
==== scp read from drobo, write to /tmp ====
<pre>
rday@weasel:/drobo/dvds$ scp Zombieland.iso localhost:/tmp
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is be:7f:1e:a6:2a:e3:01:1e:5f:25:2d:9c:6a:0f:ef:10.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Enter passphrase for key '/home/rday/.ssh/id_dsa':
Zombieland.iso                                66% 5331MB  22.6MB/s   01:59 ETA
</pre>
==== timed cp from drobo to /tmp ====
<pre>
rday@weasel:/drobo/dvds$ /usr/bin/time -v cp Zombieland.iso /tmp
	Command being timed: "cp Zombieland.iso /tmp"
	User time (seconds): 0.11
	System time (seconds): 81.54
	Percent of CPU this job got: 23%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 5:51.06
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 992
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 1
	Minor (reclaiming a frame) page faults: 313
	Voluntary context switches: 36004
	Involuntary context switches: 2110
	Swaps: 0
	File system inputs: 16434904
	File system outputs: 16439552
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0
</pre>
<pre>
Total bytes of this file: 8417050624
Total seconds: 351.06
Bytes/sec: 23,976,102
</pre>
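That last number is just the file size divided by the wall-clock seconds:

<pre>
echo "8417050624 / 351.06" | bc -l    # ≈ 23,976,102 bytes/sec, or about 24MB/sec
</pre>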
==== timed cp from drobo to /dev/null ====
I guess the write to a spinning disk still has some cost and is not completely parallelized with the read.
<pre>
rday@weasel:/drobo/dvds$ /usr/bin/time -v cp Zombieland.iso /dev/null
	Command being timed: "cp Zombieland.iso /dev/null"
	User time (seconds): 0.10
	System time (seconds): 17.98
	Percent of CPU this job got: 5%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 5:02.58
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 992
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 1
	Minor (reclaiming a frame) page faults: 313
	Voluntary context switches: 34180
	Involuntary context switches: 292
	Swaps: 0
	File system inputs: 16434432
	File system outputs: 0
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0
</pre>
<pre>
Total bytes of this file: 8417050624
Total seconds: 302.58
Bytes/sec: 27,817,604
</pre>
==== dd from /dev/zero to drobo ====
<pre>
rday@weasel:/drobo/dvds$ /usr/bin/time -v dd if=/dev/zero of=./test bs=16k count=163840
163840+0 records in
163840+0 records out
2684354560 bytes (2.7 GB) copied, 213.715 s, 12.6 MB/s
	Command being timed: "dd if=/dev/zero of=./test bs=16k count=163840"
	User time (seconds): 0.08
	System time (seconds): 17.57
	Percent of CPU this job got: 8%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 3:33.88
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 924
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 2
	Minor (reclaiming a frame) page faults: 289
	Voluntary context switches: 12677
	Involuntary context switches: 243
	Swaps: 0
	File system inputs: 192
	File system outputs: 5242880
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0
</pre>
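The obvious companion test, which I haven't run yet, is reading that same file back, dropping the page cache first so the read actually hits the drobo:

<pre>
# untried: flush dirty data, drop the page cache (needs root), then time the read-back
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=./test of=/dev/null bs=16k
</pre>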