I have lately been curious about performance for low-end storage and asked myself questions like:
- Raspberry Pi or Banana Pi? Is the Banana Pi's native SATA a decisive advantage? Especially now that the Raspberry Pi has 4 cores, and I don't mind if one of them is mostly occupied with USB I/O overhead.
- For a Chromebook or a MacBook Air, where internal storage is fairly limited (or very expensive), how practical is it to use USB storage?
- Building OpenWRT buildroot requires a case sensitive filesystem (disqualifying the standard Mac OS X filesystem) – is it feasible to use a USB device?
- The journalling feature of HFS+ and ext4 is probably a good idea. How does it affect performance?
- For USB drives and memory cards, which filesystems are better?
- Theoretical maximum throughput is usually not that interesting. I am more interested in actual performance (time to accomplish tasks), and I believe this is often limited more by latency and overhead than by throughput. Is that so?
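One way to see the latency-versus-throughput distinction is a crude micro-benchmark (a sketch, not a substitute for a proper tool like fio or bonnie++): compare one large sequential write against many tiny files on the same filesystem. On a slow USB drive the second step is typically where the time goes, even though it writes far less data.

```shell
#!/bin/bash
# Crude comparison of sequential throughput vs. per-file overhead.
# Run it with the current directory on the device under test.
set -e
dir="./bench.$$"            # scratch directory on the filesystem being tested
mkdir "$dir"
cd "$dir"

echo "== one 64 MB sequential write =="
time sh -c 'dd if=/dev/zero of=big.bin bs=1M count=64 2>/dev/null; sync'

echo "== 2000 small files =="
time sh -c 'i=0; while [ $i -lt 2000 ]; do echo data > "f$i"; i=$((i+1)); done; sync'

cd ..
rm -rf "$dir"
```

The absolute numbers mean little; the interesting part is how the ratio between the two steps changes from an SSD to a cheap USB stick.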
Building OpenWRT on a MacBook Air
I tried building OpenWRT on a USB drive (with case sensitive HFS+), and it turned out to be very slow. I did some structured testing by checking out the code, putting it in a tarball, and repeating:
$ cd /external/disk
$ time cp ~/openwrt.tar . ; time sync
$ time tar -xf ~/openwrt.tar ; time sync    (total 17k files)
$ make menuconfig                           (not benchmarked)
$ time make tools/install                   (+38k files, +715MB)
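The sequence above can be wrapped in a small script for repeated runs. A sketch, where TARBALL and TARGET are placeholders for my actual paths, not literal values from the tests:

```shell
#!/bin/bash
# Sketch of the benchmark sequence; adjust TARBALL and TARGET.
TARBALL="$HOME/openwrt.tar"     # the checked-out tree, packed once up front
TARGET="/Volumes/EXTERNAL"      # mount point of the disk under test (placeholder)

run_benchmark() {
    cd "$TARGET" || return 1
    time cp "$TARBALL" . ; time sync
    time tar -xf "$(basename "$TARBALL")" ; time sync
    # 'make menuconfig' is interactive and was not benchmarked
    ( cd openwrt && time make tools/install )
}
```

The `sync` calls matter on external media: without them, `cp` and `tar` mostly measure how fast pages land in the buffer cache, not on the device.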
I did this on the internal SSD (this first step of the OpenWRT buildroot is not case-sensitivity dependent), on an old rotating external 2.5″ USB hard drive, and on a cheap USB drive. I tried a few different filesystem combinations:
$ diskutil eraseVolume hfsx NAME /dev/diskXsY    (non-journalled, case sensitive)
$ diskutil eraseVolume jhfsx NAME /dev/diskXsY   (journalled, case sensitive)
$ diskutil eraseVolume ExFAT NAME /dev/diskXsY   (Microsoft ExFAT)
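A quick way to verify that a freshly formatted volume really is case sensitive (any POSIX shell will do; on a case-insensitive filesystem such as default HFS+, the second write clobbers the first):

```shell
#!/bin/bash
# Check case sensitivity of the filesystem the current directory is on.
dir="./cstest.$$"
mkdir "$dir" && cd "$dir"
echo lower > foo
echo upper > FOO
if [ "$(cat foo)" = "lower" ]; then
    echo "case sensitive"       # foo and FOO are distinct files
else
    echo "case insensitive"     # writing FOO overwrote foo
fi
cd .. && rm -rf "$dir"
```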
The results were (usually just a single run):
| Drive and Interface | Filesystem | time cp | time tar | time make |
|---|---|---|---|---|
| Internal 128GB SSD | Journalled HFS+ | | 5.4s | 16m13s |
| 2.5″ 160GB USB2 | HFS+ | 3.1s | 7.0s | 17m44s |
| 2.5″ 160GB USB2 | Journalled HFS+ | 3.1s | 7.1s | 17m00s |
| 16GB USB Drive USB3 | | | | |
| 8GB USB Drive USB2 | | | | |
| 8GB USB Drive USB2 | | | | |
- Timings on the USB drives were quite inconsistent over several runs (while the internal SSD and the hard drive were consistent).
- Comparing the internal SSD to the external 2.5″ USB drive, the hard drive is clearly not the limiting factor in this scenario. Perhaps a restart between “tar xf” and “make” would have cleared the buffer caches and let the internal SSD come out ahead.
- When it comes to USB drives: wow, you get what you pay for! The Kingston turns out to be among the slowest USB drives that money can buy.
- ExFAT? I don’t think so!
- For HFS+ on OS X, journalling is not much of a problem.
Building OpenWRT in Linux
I decided to repeat the tests on a Linux (Ubuntu x64) machine, this time building with two parallel jobs (make -j 2) to stress the storage a little more. The results were:
| Drive and Interface | Filesystem | real time | user time | system time |
|---|---|---|---|---|
| 2.5″ 160GB USB2 | ext2 | 8m53s | 11m54s | 3m38s |
| 2.5″ 160GB USB2 (just after reboot) | ext2 | 9m24s | 11m56s | 3m31s |
| 8GB USB Drive USB2 | | | | |
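On Linux a full reboot is not needed to get cold-cache numbers like the second row: the page cache can be dropped between runs. A sketch (requires root; `drop_caches` is the standard kernel interface for this):

```shell
#!/bin/bash
# Drop the Linux page cache, dentries and inodes between benchmark runs.
# Without this, later runs are served partly from RAM rather than the device.
sync                                    # flush dirty pages to disk first
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches   # 1=pagecache, 2=dentries+inodes, 3=both
else
    echo "not root: caches not dropped"
fi
```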
- The Linux block device layer almost eliminates the performance differences of the underlying storage.
- The worse real time for the SSD is probably due to other processes taking CPU cycles.
My plan had been to also test the 160GB drive connected directly via SATA, but given these results I saw no point in doing so.
More reading on flash storage performance
I found this very interesting article (linked to by the Gentoo people, of course). I think it explains a lot of what I have measured. I think even the slowest USB drives and memory cards would often be fast enough, if the OS handled them properly.
The results were not exactly what I expected. Clearly the I/O load during a build is too low to affect performance in a significant way (except for Mac OS X with a slow USB drive). Either way, USB2 itself has not proved to be the weak link in my tests.