record
2008-01-09
The new year has barely begun and already one of our customers has shattered the record for peak traffic on a single stream. They offered up an episode of one of their series for download *before* broadcasting it live, and hit 1.5 Gbit/sec on that file alone.
Meanwhile, we've finally decided to get serious about storage and have bought three enclosure+server combos, each holding fifteen 1 TB disks. We'll use two on our main platform (with glusterfs providing transparent failover), and keep a third complete copy on a remote platform.
Matthias has been hard at work benchmarking the systems, and after running into a huge performance problem that neither Dell support nor the unofficial Dell Linux mailing list could help with, he kept poking at it until he found the cause and fixed it himself.
We still need to tweak our code a little to handle the error codes you get when reading from a file during a failover between gluster nodes, and then do some more thorough testing, but we're basically close to deploying.
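For the curious, the tweak boils down to not treating every failed read as fatal. Here's a rough sketch in C of the kind of thing I mean; the specific errno values (EIO, ENOTCONN, EAGAIN), the retry count and the one-second backoff are just assumptions, not something glusterfs promises:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Read len bytes from fd at offset off, retrying a few times when the
     * read fails in a way that looks like a temporary failover hiccup.
     * Which errno values actually show up during a gluster failover is an
     * assumption here; adjust to whatever you see in practice. */
    static ssize_t read_with_retry(int fd, void *buf, size_t len, off_t off)
    {
        int tries;

        for (tries = 0; tries < 5; tries++) {
            ssize_t n = pread(fd, buf, len, off);
            if (n >= 0)
                return n;                       /* success (or clean EOF) */
            if (errno != EIO && errno != ENOTCONN && errno != EAGAIN)
                return -1;                      /* a real error: give up */
            fprintf(stderr, "read: %s, retrying\n", strerror(errno));
            sleep(1);                           /* give the failover a moment */
        }
        errno = EIO;
        return -1;
    }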
And with the current line-up of new customers, it won't take long before we outgrow those measly 15 TB. Now if only I could find a good argument for needing the same solution at home, so I could start FLACcing my CDs...
At work, we have a few MD1000s loaded up with 750 GB SATA drives, connected to two Debian Etch servers. I also had issues with multiple simultaneous reads, especially when one of them was doing random seeks (like a long directory listing over tens of thousands of files, which we tend to do all the time).
Bumping up the read-ahead to 16 MB helped a good bit, but I found that switching to the deadline I/O scheduler gave a huge boost in performance for mixed workloads. You might want to give that a try too.
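In case it's useful, both of those settings are just sysfs writes. Here's a rough sketch of applying them from C; the device name (sda) and the exact values are only examples, and you'd normally do the same thing with a couple of echo's into /sys as root:

    #include <stdio.h>

    /* Write a single value to a sysfs file.  Needs root. */
    static int write_sysfs(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fprintf(f, "%s\n", value);
        return fclose(f);
    }

    int main(void)
    {
        /* 16 MB of read-ahead, expressed in KB as sysfs expects */
        write_sysfs("/sys/block/sda/queue/read_ahead_kb", "16384");
        /* switch the I/O scheduler to deadline */
        write_sysfs("/sys/block/sda/queue/scheduler", "deadline");
        return 0;
    }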
Comment by Ryan — 2008-01-09 @ 02:06
Congratulations on the record!
Comment by Fons — 2008-01-09 @ 11:24