codemaster

Thecus N5500 NAS Server


One of my 2TB NAS boxes died yesterday, along with a lot of semi-important data. Of course, it was the one without a RAID, so it looks like I may have lost the lot. I'll pull the HDD out of it and see if I can recover anything using another system.

 

I'd been thinking about getting a Thecus N5500 NAS server for quite some time because they seemed to have been getting favourable reviews.

 

The crashed 2TB NAS box just made my mind up that little bit sooner.

 

The N5500 was overnighted and arrived here today.

 


 

Thecus N5500 NAS Server Overview

 

 

It's a heavy little bugger, especially with 5 x 2TB Hitachi Enterprise hard drives installed, but it's also surprisingly compact and quiet.

 

I'm still configuring the drives into a RAID5 array. The estimated time on the screen says approximately 13 hours before it's ready to go.

 

RAID5 is great for redundancy, but instead of 10TB, I only get about 7.4TB of usable storage space.

 

I'll be using this NAS box as both a media server (FLAC and 1080p HD content) and for file storage. I'm not too sure if it will work directly with my PS3s yet, but that can be fixed by simply sharing it through a PC running PS3 Media Server software.

 

Anyway, once the drives are ready, I'd like to run some speed tests on the new NAS box. Can anyone suggest any programs I can use to benchmark the data transfer performance? I've set it up with an XFS file system and would like to see if that is the fastest (or best) alternative to ZFS or ext3.
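Failing a proper benchmarking tool, I suppose even a plain dd run from a Linux box against the mounted share would give a rough sequential number. Something like this (the TARGET path is just an example; /tmp stands in as a default so it runs anywhere):

```shell
# Rough sequential write test - dd reports throughput when it finishes.
# Point TARGET at the NAS mount; /tmp here is only a stand-in default.
TARGET="${TARGET:-/tmp}"
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=64 conv=fsync
rm -f "$TARGET/ddtest.bin"
```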


Bugger, it would have been nice to see a benchmark of one of those Ultrastars by itself before you configured the whole thing.

 

I'm also going with 10TB, but a little differently.

 

That's a good little unit. I've heard some good things about the Thecus boxes.


I just ran a (very) simple test by copying 90GB of miscellaneous files (25,651 files, to be exact) from my PC to the N5500. It took approximately 23 minutes.


Looks like a nice unit, CM.

 

13+ hours to sync a RAID array - ouch!

90GB upload in 23 mins - sweet.

 

I'm taking a different road to you as I type. I also had a drive loss last week and was spurred to action.

OpenSolaris ZFS homebrew NAS. Inspired by Zebra's ZFS project from last year.

 

 

CPU : AMD Athlon II X3 400e, 2.2GHz, 1.5MB Cache, Triple Core, 45W Energy Efficient

PSU : Antec 650W TruePower ATX Power Supply

Motherboard: ASUS - M4A78LT-M-LE

Case : Fractal Design Define R2, Titanium Grey ATX Case,

NIC : Intel EXPI9301CTBLK PRO/1000 CT Desktop ADAPTER/FULL-HEIGHT/PCIe

RAM : Kingston KVR1066D3E7S/2G 2GB 1066MHz DDR3 ECC CL7 DIMM X 2

Data Drives : Samsung 1.5TB Spinpoint F2, SATAII, 5400rpm, 32MB Cache, NoiseGuard, Silentseek, EcoGreen X 4

Mirrored OS Drives : Samsung 500GB Spinpoint F3, SATAII, 7200rpm, 16MB Cache, NCQ, NoiseGuard, SilentSeek X 2

 

Got the parts Tuesday, just finished the build and OS install (looks good so far). But unfortunately it looks like one of my 1.5TB drives is DOA. I'll try to swap it at the supplier on Friday and continue the project then.

 

Creating the raidz1 pool and file system should only take a couple of minutes, but I reckon I'll spend 13+ hours on the project before I'm done. So maybe you're the smart one.
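For anyone curious, the pool creation itself really is just a couple of commands - something like the below (device names are placeholders for illustration; use whatever `format` reports on your box):

```shell
# Sketch only: build a raidz1 pool from four data drives
# (c10t1d0 etc. are hypothetical device IDs)
zpool create tank raidz1 c10t1d0 c10t3d0 c11d0 c12d0
zfs create tank/media      # a child filesystem for shares
zpool status tank          # confirm every vdev shows ONLINE
```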

 

I'll do a proper build thread in the open source forum once I get it all sorted.


CptnChrysler, a home-grown NAS is a great idea. Plus you get to decide how fast it will be, depending on your budget.

 

I did my own NAS box a while back using a TwelveHundred case, Q9400 CPU, 8GB RAM, a couple of PCI SATA RAID controllers, and a bunch of hard drives totalling 14.5TB. Its main role is as a media server for my home theatre. It just sits in a corner and is on 24/7. My whole house is wired with Cat6 outlets linked back to a data cabinet full of switches, a router, and UPSes, so streaming uncompressed audio and HD video across the home is fast and simple.

 

The new N5500 dedicated NAS box will be more for my network storage, as well as media that is not being used for the home theatre.

 

Good luck with finishing your own NAS. It's not a very nice feeling when you open a brand new package and find that the component is DOA. We look forward to hearing how well it works and if it does everything you need it to do.


I just can't enter into a full build without some hardware issue tripping me up.

 

The NAS portion of my build should actually be the easy part.

 

The real trick will be the IMAP & webmail server I'm planning to add.

I still haven't decided whether to go VirtualBox or use a Solaris zone for that. One thing at a time - I'll get the ZFS NAS portion up and running before I migrate my mail over. My old P1 233MHz mail server is still rock-solid reliable.

 

I need more practice with OpenSolaris before I get too adventurous.


Ahh, I see where you're going. I just use a server with MS Exchange for my mail. It's very handy with my mobile device and for centralising the various family members' mail accounts. Microsoft sent me Exchange Server 2007 and more recently 2010, but I still prefer 2003.

 

Can I ask why you're using ZFS for your NAS?


ZFS = sheer awesomeness on a stick. After reading ZEBRA's article last year I did some further reading, and nothing else comes close IMHO.

 

As for the mail server, I'm thinking a Fetchmail + Postfix + Dovecot + Horde/SquirrelMail based solution. Who knows, I may even just throw ClarkConnect into a VirtualBox VM. For the limited volume of mail it's handling, that would work very well.


Can I ask why you're using ZFS for your NAS?

 

 

Why would you use anything else?


Can I ask why you're using ZFS for your NAS?

 

 

Why would you use anything else?

 

 

The only reason I mention it is because I've read mixed reports about ZFS. Some people swear by it, whilst others avoid it like the plague. Initially, I set the file system on my new N5500 to XFS to see what speeds it can do. The next step is to try the same tests again using ZFS. I'll post results when done.


Do you mind if I ask how much that was with the drives and all?

I wouldn't mind getting something like this for home.


Do you mind if I ask how much that was with the drives and all?

I wouldn't mind getting something like this for home.

 

RRP is $1,899.00, but you can get it for less. That's cheap, considering the value of the data I lost using a cheaper single-disk NAS box.


ZFS = Sheer Awesomeness on a stick. After reading ZEBRA's article last year I did some further reading and there's nothing else comes close IMHO.

...and in terms of how far it's come since then, we can now do global block-level dedupe on the fly with ZFS, for even more storage-saving insanity...
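For anyone playing along, it's just a dataset property (the pool/dataset names below are examples only), though be warned the dedup table wants serious RAM:

```shell
zfs set dedup=on tank/archive   # enable on-the-fly block-level dedup (example dataset)
zfs get dedup tank/archive      # verify the property took
```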

 

z


Well, I'm afraid ZFS is anything BUT awesome on the NAS box.

 

When I transferred 90GB of miscellaneous files (25,651 files, to be exact) from my PC to the N5500:

 

Using XFS - It took approximately 23 minutes

 

Using ZFS - It took approximately 57 minutes

 

 

 

I'm going back to XFS.


It makes me question the ZFS implementation they have on the device, not to mention the CPU cycles that ZFS loves to chew.

 

A properly made ZFS array out of those drives should be able to flatten a gig pipe no worries, so, umm, yeah.

 

Also, if the NAS is actually running a form of Linux, it will most likely be passing ZFS through FUSE.

 

I'd rather run FAT32 in that case. I'm surprised they even bother offering it, if the above is true.


The only options available are: ext3, XFS, and ZFS.

 

I haven't tried ext3 yet... not sure if it will be any better.


Doesn't it work out that XFS is faster for small files than ext3? I don't know - I always get XFS and the other one (native Linux, but not ext*) confused. I would definitely stay with a simple, native filesystem on that device. Also, I think fast file deletion was an XFS strength as well, so you may like to try out deletion too.


I'm sticking with XFS for the moment.

 

Also, deletion of the same test batch of files mentioned previously took exactly 2 minutes.


WOOT! - It's Alive...

 

Last login: Sat Jan  5 12:31:20 2002 from 192.168.1.10
Sun Microsystems Inc.   SunOS 5.11	  snv_111b		November 2008

dan@OpenSolaris:~$ zpool list
NAME	 SIZE   USED  AVAIL	CAP  HEALTH  ALTROOT
mypool  5.44T   836M  5.44T	 0%  ONLINE  -
rpool	464G  5.41G   459G	 1%  ONLINE  -

dan@OpenSolaris:~$ df -h  /mypool
Filesystem			Size  Used Avail Use% Mounted on
mypool				4.1T   33K  4.1T   1% /mypool

dan@OpenSolaris:~$ zpool status mypool
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

		NAME		 STATE	 READ WRITE CKSUM
		mypool	   ONLINE	   0	 0	 0
		  raidz1	 ONLINE	   0	 0	 0
			c10t1d0  ONLINE	   0	 0	 0
			c10t3d0  ONLINE	   0	 0	 0
			c11d0	ONLINE	   0	 0	 0
			c12d0	ONLINE	   0	 0	 0

errors: No known data errors

dan@OpenSolaris:~$

Also, codemaster: I use XFS for the videos partition on my MythTV box and it seems to cope with big files better than ext3 does.


Just wait until you expand it a bit...

 

[root@mdc:/] $ zpool list
NAME		  SIZE   USED  AVAIL	CAP  HEALTH  ALTROOT
archive	   118T  19.7T  97.8T	16%  ONLINE  -
rpool		 136G  11.0G   125G	 8%  ONLINE  -

[root@mdc:/] $ df -h /archive 
Filesystem		   size   used  avail capacity  Mounted on
archive			116T   2.4T	96T	 3%	/archive

[root@mdc:/] $ zpool status archive
  pool: archive
 state: ONLINE
 scrub: none requested
config:

		NAME									  STATE	 READ WRITE CKSUM
		archive								   ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C0000000Ad0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C0000000Bd0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C0000000Cd0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C0000000Dd0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C0000000Ed0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C0000000Fd0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C00000006d0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C00000007d0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C00000008d0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C00000009d0  ONLINE	   0	 0	 0
		  c10t60060E80100549C0052FB14C00000010d0  ONLINE	   0	 0	 0

errors: No known data errors
Edited by brains


Those are all iSCSI JBODs, right? Looks like you've got a couple of boot environments going there too.

