pcowandre

ZFS Storage box build


Afternoon all!

 

I've been lurking around here a little lately, and I noticed there's the occasional bit of discussion about ZFS. Since I just built a ZFS storage box, I figured I'd post some of the details of the build in case they're useful to someone.

 

The challenge, of course, with building any low-cost storage platform is balancing RAS (reliability, availability, serviceability) and performance with cost.

 

The role - the workload - for this box is media storage and backups. My critical data lives on a Sun Fire V880 on FC-AL disks and I was previously wasting time changing LTO2 tapes for backups. Urgh, right?

 

Because the role of this host is media and backups, if it goes down - that sucks, but it isn't a weeping moment.

 

Anyway, having got that out of the way, here's the bill of materials.

 

* Case: Fractal Design Define XL.

 

It's a nice case with 4 x 5.25" bays and 10 x 3.5" bays inside. The fans are well positioned, including a pair of fan slots right in front of the disks - good. The fan at the back of the case really wasn't moving enough air for my liking, so I pulled a better fan from an old RS/6000 I was scrapping and installed that - speed control keeps the noise down, and the case runs cold now. Overall, if I were building another box, I'd buy another Fractal case without hesitation. It's easy to work in, plenty of screws and bits ship with it, and the drive sleds have rubber grommets to keep the vibration down.

 

* PSU: Thermaltake Toughpower XT 775W

 

Enough power, 80 Plus rated, modular, tidy, simple and not too expensive. Reading power supply reviews was putting me to sleep, and this one seems to be up to the job: it runs cold and quiet and delivers the goods.

 

* The "engine": Asus Crosshair IV Formula, Phenom 1090T and 4 x Kingston KVR1333D3E9S/4G

 

ECC memory is a must-have for a fileserver, especially with ZFS. The checksums in ZFS will ensure your data survives controller glitches, noise on the cables, drive defects, bus faults and everything between the processor and spinning media and back - but if your RAM can glitch without report, ZFS can't do that job properly. Building a nice system with non-ECC memory is like filling your Ferrari with shitty cheap fuel.

 

So, if we want ECC memory, we have two choices - AMD or Xeon. Given cost was a concern for this build, AMD wins.

 

The Crosshair IV Formula has a list of certified DIMMs and they include the Kingston KVR1333D3E9S/4G - an ECC part. Cheaper boards tend not to be tested for ECC memory. The Kingston parts weighed in at about $100 a stick.

 

The Phenom 1090T delivers the right mix of performance and price. And compiles run damn fast, so no complaints at all.

 

* The HBAs: 2 x LSI SAS3081E-R

 

Sun (now Oracle) use mpt cards for SAS, and LSI has a long history of Solaris support. The cards have an IOC to cut down on CPU overhead and pump the data. Each card has eight ports, and while I'm only running eight drives in the data pool, this means:

a) the load is split over two HBAs for performance, and

b) if I lose an HBA, I can move those four disks to ports on the other card - a bit of built-in sparing, considering replacement cards can take a few days to arrive.

 

* Drives: Hitachi 7200rpm 2TB, 32MB cache

 

At a common street price of $124 a drive, these are the cheapest 2TB 7200rpm drives going. They're very quiet, suitably fast (no problem sustaining 120MB/sec on sequential writes) and they "Just Work". Nice.

 

A mirrored pair for root and an 8-way RAIDZ for the data pool. No raidz2 - I don't need that level of redundancy, as this isn't a mission-critical box. Using the same spindles for the root and data pools means that in a pinch, I can scavenge the root mirror for a spare for the data pool.
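
For the curious, the data pool creation looks roughly like this - pool and device names are stand-ins, and the installer handles the mirrored root pool itself:

# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0

# zpool status tank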

 

* NIC: Intel Quad Gigabit server adapter.

 

I already had one in my parts box. e1000g is the right choice for a Solaris on x86 box. Configured as a four-way LACP aggregation back to the switch, I know that Ethernet isn't going to be a problem on this host. Can't recommend these cards enough; they just damn well work.
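
For reference, the aggregation setup on Solaris 10 goes something like this - assuming the quad's ports enumerate as e1000g0 through e1000g3; the address is just an example:

# dladm create-aggr -P L4 -l active -d e1000g0 -d e1000g1 -d e1000g2 -d e1000g3 1

# ifconfig aggr1 plumb 192.168.1.10 netmask 255.255.255.0 up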

 

* Other hardware:

 

I might want to add SSDs for a ZIL slog or L2ARC, so I've installed a four-slot 2.5" bay in a 5.25" bay, pre-cabled to spare LSI controller ports, so adding an SSD is just a matter of screwing it into a sled and ramming it home. The little fans in the bay were nasty screamers, and SSDs don't need that much cooling, so I recycled the little fan speed controller that shipped with the Fractal to wind them down a little and take the edge off. A quick job with a soldering iron.
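
When the SSDs do arrive, attaching them is a one-liner each - something like this, with made-up pool and device names:

# zpool add tank log c3t0d0

# zpool add tank cache c3t1d0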

 

As I mentioned before, I beefed up one of the fans with a fat Panasonic that I stripped from a vintage RS/6000.

 

The Crosshair motherboard won't boot without a damn PCIe video card, so the box got a cheap-arse passively-cooled Radeon. It has never been out of VGA text mode. I tried a classic PCI card in the legacy slots, but no joy. Bastards. I could have put a Fibre Channel card in that slot. Grrr.

 

Hassles:

 

While the Fractal case is nice and has a nifty cable management space, it was a bit of a pain to get all the cables routed.

 

The front fan bays only take thin fans; I'd like the option to bolt another couple of big chunky bastards in there if the temperatures get high. SMART tells me the hottest drive hits 50 degrees at times, which is a little toasty.
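
If you want to pull those numbers yourself, smartmontools can do it - a sketch, assuming a source build and that your HBA driver passes the commands through; the device name is an example:

# smartctl -d sat -A /dev/rdsk/c1t0d0s0 | grep -i temperature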

 

Software:

 

So far, I've run the machine up with Solaris 10 update 9. I'm using Samba for CIFS, mediatomb for DLNA, transmission-daemon, etc. Everything is compiled separately and is designed to be portable. Changing OS is a matter of exporting the data zpool, reinstalling, setting up the network, importing the pool, pushing some users back into /etc/passwd and re-importing the SMF manifests. Config files are all checked into a local git repo for version control and quick restoration.
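
The pool-shuffle part of that dance is short - roughly this, with stand-in names for the pool and the manifest path:

# zpool export tank

(reinstall, set up the network)

# zpool import tank

# svccfg import /opt/local/smf/mediatomb.xml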

 

Well, that's about it. I just thought someone might find the hardware loadout useful, or might have a few questions I can help with. I've been living in Solaris-land since 2.5.1 was hip 'n' trendy, so I've seen a lot of "fun" along the way.

Edited by pcowandre


You may find some benefit in my untested idea - currently I'm not floating in cash to test it myself.

 

Since you're running 16GB of RAM in that sucker and are considering an SSD for L2ARC, I think the biggest you want to go there is around 160GB. From what I can gather, an L2ARC paired with ZFS prefetch should keep your disks fairly idle most of the time, which could reduce power consumption during media playback and in turn keep the box a bit quieter.
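
One way to sanity-check the sizing: every L2ARC block also burns a chunk of ARC memory for its header, so a huge L2ARC on 16GB of RAM can backfire. Watching the arcstats kstat shows how much headroom the ARC has - a sketch using the standard statistic names:

# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max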

 

The Fractals are lovely cases.

The power supply is decent, but I'm a Seasonic man. My game box runs an older Toughpower 750 and I've not had a problem with it.

The motherboard is nice. Just out of interest, did the Marvell NIC work on Solaris 11 Express? Gigabyte use Realtek, and those either don't work or run like a bag of crap.

Two LSIs - I think your enterprise gear kind of poisoned you there; that's a little extreme. Yeah, I know you did it for redundancy, but damn. Those things are sucking down 10W apiece - not exactly economical.

The Fujitsus are an interesting choice. I know their enterprise stuff is good, but I'm not sure about their consumer-space gear. The winner around these forums, and my personal favourite, is Samsung (in the 2TB bulk-storage category, that is).

Also, most have found that a mirrored root pair is overkill for a home server, since virtually all of the config is in /etc and amounts to bugger all. Just back it up to your data pool and you're pretty safe.
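
Something as simple as this would cover it - pool and dataset names are just examples:

# zfs create tank/configbackup

# tar cf - /etc | gzip > /tank/configbackup/etc-`date +%Y%m%d`.tar.gz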

 

That's a good idea with the SSD cradle. I was considering something similar, since I don't think Solaris can run the SandForce software to perform a secure erase and get your performance back if you push your SSD too hard.

 

Welcome to the forums, Andre. Oh, and yeah, I think I know you in RL.

 

Guess I'll list my build for perspective. It's currently sitting on the floor waiting for power.

 

One solid-as-F AOpen H700 case.

2 x Lian Li 4-in-3 drive trays.

Seasonic MII Bronze 520W

Athlon II X2 240

8GB RAM

Radeon 3450, passive

LSI 3081

Gigabyte 790FXT-UD5P

Noctua NH-C12P SE14

12 x 1TB Hitachis

WD 320GB Blue 2.5" HDD for the OS

 

To be added

Intel gigabit NIC (e1000g)

Noctua 80mm fans

 

Running NexentaStor Community edition.


Hehehe...

 

Was wondering if I'd ever find any of the Purple Cow crew around here... ;)

 

Anyway - there is something I could add here. I think your build is OK for the most part, and you have some solid design decisions in there, from my standpoint - especially your ECC consideration. One thing I would add, however, on the "glue" side of things, if you're running Solaris 11 Express:

 

# pkg install smtp-notify

# svccfg setnotify -g from-online,to-maintenance mailto:you@someplace.com.au

# svccfg setnotify problem-diagnosed,problem-updated mailto:you@someplace.com.au
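
And, from memory, you can check what you've set with something like:

# svccfg listnotify -g to-maintenance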

 

Also - there's a lovely utility or two for the LSI SAS3081E-R, in the form of the MSM tools - useful for keeping a sharp eye on card health, if you're a proper monitoring and telemetry freak (like some of us here... >_>...).

 

I personally don't have an LSI SAS3081E-R, but I do have a decent (and ridiculously fast) STK RAID INT 256MB Adaptec/Intel controller, and I simply tell the card to treat each drive as a single unit and present it out as a single device, rather than using any of its RAID functionality. Silly waste of money... but eh, it was free.

 

There are some quite powerful and configurable card-monitoring tools available when you get near that end of the market, too.

 


 

Also - I'm on the lookout for a cheap LSI SAS 9211-8i, if anyone has seen one hanging about!

 

z

Edited by zebra


Oh, Team Cow turns up everywhere ;-)

 

I meant to type _Hitachi_ there, not Fujitsu. I was mid-conversation while writing that post and Fujitsu came up and out through the fingers - definitely Hitachi spindles. Had to grab a batch before WD killed 'em ;-)

 

The LSI cards are normally a bit spendy, but I spent some time hunting and managed to find 'em for about a hundred bucks. The ten watts isn't worrying me, given this box sits next to a big SPARC box o' doom.

 

The onboard Marvell Yukon didn't work in Solaris 10, and I'm not running 11 Express due to the hassles I had with link aggregation in my lab. Frankly, I already had the quaddie, and I don't trust much else in x86-land.

 

For those in Melbourne, I'll be walking through the config at the next MSOSUG.

 

Oh, and zebra, what is a cheap/fair price on an LSI SAS 9211-8i? I could try hitting up my source for such things and see what they've got.


For those in Melbourne, I'll be walking through the config at the next MSOSUG.

Oh, and zebra, what is a cheap/fair price on an LSI SAS 9211-8i? I could try hitting up my source for such things and see what they've got.

:(. I was head of QOSUG until it sort of self-imploded/broke itself into little bits with the Oracle acquisition. No reason I cannot bring it back to life, that said!

 

As for the 9211-8i, I've seen it as cheap as $308 shipped from the USA via eBay. I haven't tried hitting up the local Oracle guys for EDU pricing etc.; I suspect it won't be much different. Let me know what you find, eh?

 

Also... dude... you've got a V880 in your HOUSE?! :(. Do you plug all three PSUs in? :S

 

I have one turned off in the basement at work. I honestly cannot get rid of it. Fully populated with 73GB 15K FC-AL disks, maxed-out RAM and CPU boards. Darned if anyone wants it locally, however...

 

 

cheers.

 

z

Edited by zebra


Yeah, MSOSUG - that's where I know you from.

 

I also had the same issue as you, trying a PCI Radeon and it not working. Might give a PCI TNT2 Vanta a shot at the job :)


:(. I was head of QOSUG until it sort of self-imploded/broke itself into little bits with the Oracle acquisition. No reason I cannot bring it back to life, that said!

I reckon you'd get a few people interested in restarting QOSUG.

 

I have one turned off in the basement at work. I honestly cannot get rid of it. Fully populated with 73GB 15K FC-AL disks, maxed out RAM and CPU boards. Darned if anyone wants it locally, however...

I'm trying to kick my habit of collecting machines, otherwise I'd snaffle it for spares or something. No! NO MORE! MUST NOT ADD MORE MACHINES! Does it have 1.2GHz CPUs? Do I need more CPUs? Is it stuffed full of exciting cards? Do I need a sack of exciting cards?

 

http://mexico.purplecow.org/index.php/Taco_SATA_Drives


Your taco drives are kind of like what I might do to a V240: some double-sided tape and an SSD for L2ARC.


Your taco drives are kind of like what I might do to a V240: some double-sided tape and an SSD for L2ARC.

When I put a couple of 2.5" SATA drives in a V240, I screwed them to a blank PCI card - a slot filler from an SF6800 - and it worked rather well. Burns a PCI slot, of course, but stays tidy.


I was going to double-sided-tape them to the big purple shroud over the CPUs, if I ever get to it.


I was going to double-sided-tape them to the big purple shroud over the CPUs, if I ever get to it.

You can stick the world together with double-sided tape. LSI four-port PCI-X cards are pretty cheap now.


It feels good to be here again. I can feel the old-school UNIX love.

 

/hugs you guys.

 

z


Sorry, Zeb. I only came into the fray with Solaris 10 u4. I've known what I want to do with my life since then.


Sorry, Zeb. I only came into the fray with Solaris 10 u4. I've known what I want to do with my life since then.

Heh. Yeah, it's a bit of a religion, isn't it?

 

I know guys who have had full-scale holy wars when it comes to upholding the SPARC name...

 

z


Well, of course, SPARC is a holy CPU. Except, err, for some of those "less than fortunate" Exxxx modules with the packaging problem.

 


 

It feels good to be here again. I can feel the old-school UNIX love.

I reckon we need another kernel conference to feel the beer. Err, love. Err, beery love? Well, 4am in a pancake joint with bottles of vino love?


That kernel conference was great. I asked James what happened, and he said it was the acquisition that killed it last year.

 

 

We're ditching Solaris at work - for a small business, the cost of anything Oracle is hard to justify - but it's still nice to see this sort of stuff happening.


That kernel conference was great. I asked James what happened, and he said it was the acquisition that killed it last year.

 

 

We're ditching Solaris at work - for a small business, the cost of anything Oracle is hard to justify - but it's still nice to see this sort of stuff happening.

Ahem.

 

>_>

 

<_<...

 

Watch this (or some other) space, gentlemen. James and I actually had a chat, about 20 minutes ago, about the next kernel-insanity-love-in we're planning to unleash on the world in a few months' time...

 

 

z


Watch this (or some other) space, gentlemen. James and I actually had a chat, about 20 minutes ago, about the next kernel-insanity-love-in we're planning to unleash on the world in a few months' time...

Hmm, you really should drop into the usual IRC channel and we'll cook up a cunning plan!


Watch this (or some other) space, gentlemen. James and I actually had a chat, about 20 minutes ago, about the next kernel-insanity-love-in we're planning to unleash on the world in a few months' time...

Hmm, you really should drop into the usual IRC channel and we'll cook up a cunning plan!

 

Lynx?

