Using desktop SSD drives in HP DL360 G5 servers

November 8th, 2015 by bostjan

Disclaimer: the benchmarks of a used SSD drive below do not represent its real-world performance with modern hardware. This post analyzes the performance of a desktop SSD in specific older server hardware, and any modern desktop SSD would probably perform similarly in it.

HP DL360 servers seem like reliable hardware, judging by their track record. Older generations, like the G5 used for this post, may still be useful for various chores around the data center, such as running as an LXC host for non-mission-critical services. They come with SAS disk interfaces and RAID hot-swap functionality, so why not try to speed them up with some cheap desktop SSD disks? Reliability-wise, judging by the SSD endurance experiment and the average IO of the servers I currently manage, I am not concerned.

Server used for benchmarking: HP DL360 G5, with 2x Xeon X5450 CPUs, 32GB RAM, P400i controller
SSD Drives used: 2x Samsung 850 EVO 250GB drives (kindly provided by MR2 company)
SAS Drives used: 2x HP 146GB 10K
OS used: Linux with kernel 3.19.2, and Linux 3.14.50 (the default of System Rescue CD 4.6.0)

The P400i has a SATA 1 interface, which means it supports line-level transfer rates of at most 1.5 Gbit/s. Taking into account the 8b/10b encoding, this gives a maximum theoretical throughput of 1.5 × 8/10 = 1.2 Gbit/s, or 150 MB/s.

Single SSD drive performance

The P400i does not expose raw devices at all. Instead, you need to configure each disk as a single-disk RAID 0 array. This is done with the hpacucli command line controller management tool (which is FINALLY available as a 64-bit binary).
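
The original command listing did not survive, so here is a sketch of how this is typically done; the controller slot number and the drive's port:box:bay address (1I:1:1) are assumptions, check the output of the first command:

    # list controllers, arrays and physical drive addresses
    hpacucli ctrl all show config

    # create a single-disk RAID 0 logical drive on physical drive 1I:1:1
    hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0

    # verify the result
    hpacucli ctrl slot=0 ld all show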

Single SSD drive performance results (exposed to linux as RAID-0 logical drive with only one disk in array):
– sequential read: ~137 MB/s
– sequential write: ~107 MB/s (1GB of data) (125 MB/s with O_DIRECT)
– sequential write: ~74 MB/s (3GB of data) (80 MB/s with O_DIRECT)
– sequential write: ~64 MB/s (6GB of data) (69 MB/s with O_DIRECT)
– sequential write: ~62 MB/s (10GB of data)
– seeks: ~5800/s

This behaviour is quite funny. The initial write burst is quite fast, reaching the theoretical bus limit (the P400i write cache kicks in, I guess), but then writes settle at a constant rate that no longer depends on the amount of data transferred. The constant transfer rate I see in the output of “iostat 1 -m” while the test is running is 65 MB/s, after the initial burst settles.

I repeated the test, settling for 1GB test runs executed consecutively.
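
The exact command was lost from the post; a plausible reconstruction, assuming the logical drive shows up as /dev/sda (on this controller it may instead appear as /dev/cciss/c0dX; the name here is illustrative):

    # write the same 1GB region ten times in a row, bypassing the page cache
    for i in $(seq 1 10); do
        dd if=/dev/zero of=/dev/sda bs=1M count=1024 oflag=direct
    done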

The constant sequential write rate to a single SSD drive was 67 MB/s.
The writes were repeated at various disk offsets, and the results did not vary, as expected.

Disabling all caching

The BBWC module contains 512MB of cache, all of which I normally allocate to writes, as disk contents are already cached in main memory, which is far larger and cheaper. I used to use a 25%/75% cache split between reads and writes, but not anymore.
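
For this test, all caching was turned off. The exact commands were not preserved; these are roughly the knobs involved (slot and logical drive numbers are assumptions, and option names vary between hpacucli/hpssacli versions):

    # physical drive write cache on the controller
    hpacucli ctrl slot=0 modify dwc=disable

    # BBWC read/write split; 0/100 gives everything to writes
    hpacucli ctrl slot=0 modify cacheratio=0/100

    # per-logical-drive caching (this one turned out to make no difference)
    hpacucli ctrl slot=0 logicaldrive 1 modify caching=disable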

This yields a constant performance of 46 MB/s.
(It later turned out that caching=* had no influence on the result, but the other two settings had to be set properly to restore the performance and the initial write burst.)

Software RAID 10 with two SSD disks

With Linux, I always use RAID 10 instead of RAID 1; check the link for the reasoning. So, with the previous findings in mind, I started repeated writes to the same disk section, but now using software RAID 10.
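
A sketch of the array creation; device names are hypothetical, and the far (f2) layout is my guess based on the roughly doubled single-threaded read speed below:

    # Linux md supports RAID 10 with only two devices; the f2 (far) layout
    # lets a single-threaded sequential read use both drives
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda /dev/sdb

    # watch the initial sync
    cat /proc/mdstat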

Here are the results:
– sequential read: ~280 MB/s
– sequential write: ~68 MB/s (a bit better than a single drive, how? :)
– seeks: ~5500/s – LOWER than with a single drive; this seems to be an OS/controller limitation

Sequential read speed is about twice that of a single drive, even when using only one thread, consistent with expectations.
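
The test command was lost from the post; it would have looked something like this (device name hypothetical):

    # single-threaded sequential read from the array, bypassing the page cache
    dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct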

Software RAID – hot swap test

The P400i controller is very good at handling failures, and its RAID implementation hides away all the details of hot swapping: pull the dead disk out, put a new one in and watch the magic happen.

This is obviously not true with Linux software RAID. After replacing the disk, you need to manually remove the failed member and add the new drive for the system to rebuild the array and start using it.
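
The procedure listing was lost; the usual mdadm steps look roughly like this, assuming /dev/sdb is the replaced member (and remembering that on the P400i the new physical disk first needs its own single-disk RAID 0 logical drive, as shown earlier):

    # mark the dead disk as failed and pull it from the array,
    # if the kernel has not done so already
    mdadm /dev/md0 --fail /dev/sdb
    mdadm /dev/md0 --remove /dev/sdb

    # after the hot swap, add the new disk and let md rebuild
    mdadm /dev/md0 --add /dev/sdb

    # watch the rebuild progress
    watch cat /proc/mdstat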

Rebuild speed is around 60 MB/s, which is quite consistent with the single-drive and SW RAID 10 sequential write benchmark results.

Hardware RAID – hot swap test

RAID disk hot-swap test using HP’s integrated RAID controller:
– only the RAID 1 configuration was tested
– removing a drive (simulating failure) and adding it back again works as expected.

Hardware RAID 1 with SSD drives

Here are the results:
– sequential read: ~135 MB/s
– sequential write: ~68 MB/s
– seeks: ~4300/s

Hardware RAID 1 with SSD drives during rebuild

Benchmark results during RAID 1 rebuild:
– sequential read: ~70 MB/s
– sequential write: ~45 MB/s
– seeks: ~700/s

Hardware RAID 1 with SSD drives – MULTI THREADED ACCESS

I wanted to verify the expected behaviour with multiple clients, and it took me by surprise. Each client, up to four of them, was getting a 68 MB/s write speed. Going up to 6 clients, the speed dropped to 43 MB/s PER CLIENT, which gives an aggregate of 258 MB/s, not bad at all. Well, that holds until you start using real data; then the results drop again and align with single-threaded performance.
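
A sketch of how such a multi-client run could look, with six parallel writers to different offsets of a hypothetical logical drive; replacing if=/dev/zero with a file of real data reproduces the much lower numbers:

    # six parallel sequential writers, each to its own 1GB region
    for i in $(seq 0 5); do
        dd if=/dev/zero of=/dev/sdc bs=1M count=1024 seek=$((i * 1024)) oflag=direct &
    done
    wait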

Here are the results:
– sequential read, 6 clients: ~23 MB/s per client, 138 MB/s aggregate
– sequential write, 6 clients: ~42 MB/s per client, 252 MB/s aggregate, when using /dev/zero as a source
– sequential write, 6 clients: ~10 MB/s per client, 63 MB/s aggregate, when using real data, cached in memory
– seeks: ~4300/s

The reason for the misleading benchmark above was the use of /dev/zero as the data source, which, by definition, outputs only zeroes. The P400i is apparently able to compress that very efficiently; this is speculation, but it seems a very plausible one.

Hardware RAID 1 with two SAS drives

The OS was already installed on these two drives, so the initial sectors were not the ones tested. Instead, the space from the 60th GB up to the 100th GB of those 146GB SAS drives was used, via LVM, so the comparison is not entirely fair to the benchmarks above.

The initial write test was similar to the SSD HW RAID test, where 1GB was written again and again. But this time the P400i kept everything in cache and the results were around 270 MB/s, which was neither expected nor realistic. The first change was to write to sequential space, 500MB at a time, reading from /dev/zero. Again, it seems the controller compresses data in its cache, and the write results were around 300 MB/s. I then changed the test to read from the previously created software RAID 10 volume, and the benchmark finally stabilized at a “reasonable” result.
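
A sketch of that final variant, stepping through the 40GB LVM test window with real data from the md volume (the LV name and device paths are made up):

    # 500MB of real data per run, advancing through the test LV
    for i in $(seq 0 79); do
        dd if=/dev/md0 of=/dev/vg_sas/bench bs=1M count=500 \
           skip=$((i * 500)) seek=$((i * 500)) oflag=direct
    done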

Results:
– sequential read: ~76 MB/s
– sequential write: ~75 MB/s
– seeks: ~160/s

SSD Trim

Short version – TRIM is not supported by the P400i.
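
A quick way to confirm this on a live system, assuming a filesystem on the array is mounted at /var/lib/lxc:

    # fstrim fails with “the discard operation is not supported”
    # when the underlying device does not accept discards
    fstrim -v /var/lib/lxc

    # 0 here likewise means the block device advertises no discard support
    cat /sys/block/sda/queue/discard_max_bytes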

Verdict

So, what will I use in this case? Well, I like performance. A lot. But I also prefer my servers to boot every time, regardless of whether sda or sdb fails, so I usually balance configurations between those two “extremes”.
What did I do in this case? This server has 8 SAS slots available, but only 2 SSDs were needed. So I kept the two old SAS 146GB drives and created a hardware RAID 1 array on them. This array holds the host OS (boot, root and var partitions), for carefree booting.
The remaining disk slots were intended for SSD drives. I created a software RAID 10, put it in an LVM volume group (future expansion!) and mounted the LV(s) under /var/lib/lxc to boost the performance of the containers that do the actual useful work (running applications that people and businesses need). The whole layout is sketched below.
This seems like a good compromise: I got reliability, performance and expandability. Someone might ask what happens once I hit 6 SSDs and fill all the disk slots? Well, at that point this server will be well overdue for replacement :)
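
The SSD side of that layout, sketched end to end (volume group and LV names are made up):

    # the SSD md RAID 10 becomes an LVM physical volume, leaving room to grow
    pvcreate /dev/md0
    vgcreate vg_ssd /dev/md0

    # one LV for the containers; more can be carved out later
    lvcreate -L 100G -n lxc vg_ssd
    mkfs.ext4 /dev/vg_ssd/lxc

    mount /dev/vg_ssd/lxc /var/lib/lxc
    # plus the matching /etc/fstab entry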

