p0is0n

File Servers & NAS


Wouldn't that be more to do with zfs not being on Windows?

 

Honestly, ZFS is native on Solaris which your gaming friends most likely wouldn't be using. If they are building their own servers and are even slightly Linux savvy, there is no excuse for not understanding how to work in bash.

 

Are they across Windows 8 Storage Spaces? Could be a better fit.


I won't argue with that for a second; in fact, it's one of my points.

On top of my usability concerns, the devs need to slow down on feature progression and speed up on compatibility.

 

I know it won't come to Windows; the open-source teams seem to have trouble making user-space apps to emulate these technologies. There have been a lot of cool things that run slowly, but fully featured, in user space (ZFS included). I suppose VMs are sort of an answer.

 

But until recently, even the major Linux distros didn't have ZFS incorporated (I know it mainly used FUSE), and even people on distros like Ubuntu need to learn bash to use it properly, which IMO is a flaw.

 

I can see your side: for an admin it's perfectly OK, you'll script and know bash. From any other angle (average power user, home user, etc.) it's too hard for most people.

 

It gets very lonely having the biggest and best toy if you're the only one able to play with it.


So I ran into a previous problem again after setting up samba.

ZFS doesn't mount the zpool on startup.

Running "zfs mount -a" mounts it fine, so will I need to run that as a launchdaemon to get around any auto restart/power failure issues?

 

Side note: I ordered a 140mm fan instead of a 120mm, and am waiting for the latter to ship.

I bought a BitFenix PWM fan; it's virtually silent to my ears from ~5 inches, and the air pressure felt much the same.

The new fan didn't affect the MicroServer's noise output at all, though. The 4x Deskstars I have in there put out more noise than the stock fan did, and there's also a slight electrical whine from the PSU.

Not worth the hassle!

Haven't been around here much recently; I started a new job, so I have much less time to post.

 

Something I might have forgotten to add to the guide: I edited /etc/rc.local and added this:

 

# /etc/rc.local - remount the pool before Samba comes up
service smbd stop
service nmbd stop
zfs mount -a
service smbd start
service nmbd start

I never had any issues with ZFS not mounting; apparently it can be an issue with Ubuntu 12.04+.

Hope that helps

 

Thanks that fixed the mounting issue!


Thanks that fixed the mounting issue!

NP! I think I read somewhere when I was setting it all up to do that and just forgot to include it.

 

So I was just reading through this thread, specifically your CentOS scripts, The Tick, thanks for sharing. I've been playing around with my server a fair bit; it's now also hosting a small private Minecraft server (just because, why not? single player with invites) and a few other things, nothing noteworthy. I was most of the way through setting up a private WoW server: built it, set up SQL, imported data etc., but then couldn't find a copy of the exact client I needed to export maps from, so that's TBC.

This got me thinking I'd like to reinstall and try to do a better job now that I know what I'm doing a little more. I no longer have any desire to use this as a HTPC, as my Raspberry Pi does a fantastic job at that. I'm still using Ubuntu 12.04 (desktop edition), which means it's wasting resources on a GUI that hasn't been used in months, Unity at that. I removed the 6450 from the machine (and stuck it in my girlfriend's machine to run a 3rd screen), so basically the UI runs like crap and was not fun to use last time I tried. I know it's probably possible to remove the GUI, but I'd rather reinstall and do it right.

 

A few things I never really did much of last time that I want to learn/do this time around:

- learning how to start/stop things on boot and generally keeping a tidy system; I have a script that runs and does everything, but I'm sure there's a better way, such as /etc/init.d?

- scheduling with cron, such as automating tasks like snapshotting or restarting ushare regularly to update the files

- install KVM

- probably more
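On the cron point, here is a minimal sketch of what automated snapshotting could look like (the script path, dataset name `fs`, and schedule are all my assumptions, not from the thread):

```shell
# Hypothetical daily snapshot script, e.g. /usr/local/bin/zfs-snap.sh
# "fs" is a placeholder for your actual pool/dataset name.
SNAP="fs@auto-$(date +%Y-%m-%d)"
echo "creating $SNAP"
# zfs snapshot "$SNAP"    # uncomment on the real server

# Schedule it with cron (crontab -e), e.g. daily at 3am:
#   0 3 * * * /usr/local/bin/zfs-snap.sh
```

For the start-on-boot side, sysvinit-era Ubuntu registers scripts placed in /etc/init.d with `update-rc.d scriptname defaults`.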

 

I'm pretty set on trying out CentOS, just to try something a little different and get used to the differences. I am wondering if I should just stick to what I know for now, though, and go with Ubuntu Server, or maybe a minimal Debian install. Any thoughts?

I'm also looking at installing the OS on my USB drive (/, /boot), creating my /home directory on my zpool, and possibly creating a 1GB RAM drive for swap. Not sure if this is just being silly or asking for problems.
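The /home-on-zpool part is a one-liner with a dedicated dataset (a sketch; the pool name `fs` is a placeholder):

```
# Hypothetical: give /home its own dataset on the pool
zfs create -o mountpoint=/home fs/home
zfs get mountpoint fs/home    # verify it mounted where expected
```

A RAM drive for swap arguably works against itself, though, since swap exists to free up RAM in the first place.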

 

I also saw someone on OCAU who has managed to fit 7x 3.5" drives and an SSD into one of these, so I was thinking of doing the same, as mine is actually filling up quicker than I thought now that I carelessly download in HD. It will involve installing a cheap SATA card, which should be no problem with no other cards installed.

Here is a pic

 

There is also a new version of the MicroServer coming out, along with the rest of HP's Gen8 range. It's looking pretty nice so far, with a more powerful CPU available (Intel), more accessible RAM/PCIe slots, an iLO port, dual gigabit LAN ports, USB3 (not bootable?) and a few other changes which aren't so good, like less room to pack it full of drives and a slim optical drive bay. It does look pimp though. There's a guy at OCAU blogging about them; lots of info here which I won't repost.

blog.themonsta.id.au

 

I will steal this pic though.

Posted Image

 

Anyway, when I decide which OS to reinstall with, I'll probably rewrite this guide and do a better job of it if I have time. I got a lot of positive feedback, so I think it has helped a few people; I've learned a lot more and will learn more still, so I'll keep sharing while it seems worth sharing.

Edited by p0is0n


I added a 7th 3.5" drive to the MicroServer last week. It required a little bit of modding, so I took a couple of pics to show the steps, as I couldn't find any good pics online.

 

First you need to strip the case down a bit so you can work. I took pretty much everything out, just so no metal dust/shavings got where they shouldn't. If you go with tin snips you could probably avoid this; I used a Dremel though.

 

Cut like so.

Posted Image

 

Then just grab some pliers and pull the pieces out; they're riveted in, but it doesn't take much force. Then clean up the rough edges if you care.

Posted Image

Posted Image

 

Make sure there are no metal filings left in the case before you put hardware back in it.

 

I had some thin adhesive foam, so I stuck it over the bare metal, then put an HDD in and cable-tied it so it doesn't move around. I also tucked the LED into a suitable spot; you can do whatever you want with it.

Posted Image

 

Then I put 2 more drives in the 5.25" bay like normal.

Posted Image

 

Power Cables:

Posted Image

 

Data Cables:

Posted Image

 

7x3.5" drives total:

Posted Image

 

My HBA isn't in the last pic; it was in my desktop PC having its firmware upgraded, and I'm also only using it to run 1 drive for now. Excuse my crappy phone pics.

 

Hope this is of some use to someone. Pretty easy mod, and 7 is a nice number for ZFS.
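On the "7 is a nice number" point: seven disks fit a single raidz vdev neatly, giving six disks' worth of capacity plus one of parity (a sketch only; `tank` and the /dev/sd* names are placeholders, and /dev/disk/by-id paths are safer in practice):

```
# Hypothetical 7-disk raidz1 pool
zpool create tank raidz \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
zpool status tank
```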

Edited by p0is0n


Well here are the temps after a few days uptime. Pretty happy with the results.

 

/dev/sda: WDC WD20EARX-00PASB0: 28°C
/dev/sdb: WDC WD20EARX-00PASB0: 29°C
/dev/sdc: WDC WD20EARX-00PASB0: 27°C
/dev/sdd: WDC WD20EARX-00PASB0: 27°C
/dev/sde: WDC WD20EARX-008FB0: 27°C
/dev/sdf: WDC WD20EARX-00PASB0: 30°C
/dev/sdg: Patriot Memory: S.M.A.R.T. not available
/dev/sdh: WDC WD20EZRX-00DC0B0: 28°C

Also found a new cover for my 5.25" bay; it's from a Prodigy build I did around a week ago. Fits in really nicely, secure, looks alright, and will let air in while keeping dust out. Hasn't increased temps at all.

Posted Image

Edited by p0is0n


Another small update to this thread, if anyone is still looking at it.

 

I replaced my shitty 3081E-R with an M1015 from eBay (LSI 9240-8i) and am now running all disks off this instead of the motherboard. Only a few ports on the 3081 were working, and I didn't really trust it.

Results are looking good so far; on average-sized files (10GB) I'm getting 300MB/s writes and 500MB/s reads.

 

Here are a few quick tests I did under normal conditions, meaning all the usual software is running, e.g. SAB and Deluge.

patrick@N40L:~$ dd if=/dev/zero of=/fs/ddtest bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 34.8552 s, 301 MB/s
patrick@N40L:~$ dd if=/fs/ddtest of=/dev/null bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 20.9997 s, 499 MB/s

patrick@N40L:~$ dd if=/dev/zero of=/fs/ddtest bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 33.738 s, 311 MB/s
patrick@N40L:~$ dd if=/fs/ddtest of=/dev/null bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 20.6503 s, 508 MB/s

patrick@N40L:~$ dd if=/dev/zero of=/fs/ddtest bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 33.7375 s, 311 MB/s
patrick@N40L:~$ dd if=/fs/ddtest of=/dev/null bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 20.8237 s, 504 MB/s

I also decided to test a 100GB file to see if this changed anything; the write speed was a little lower, but that's likely down to the other programs running.

patrick@N40L:~$ dd if=/dev/zero of=/fs/ddtest bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 381.441 s, 275 MB/s
patrick@N40L:~$ dd if=/fs/ddtest of=/dev/null bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 207.824 s, 505 MB/s

Not that I will ever really see any of this speed, given I only access it across the network :)

Cheers


Layer 2 switch with LAG enabled.

Dual NIC PCI-E card

 

While you could get away with using a single PCI-E NIC and bonding that with your onboard one, I kind of get OCD about mixing different NICs.

 

Sure, doing this may break the budget, but... you know... speed and all that. You may also want to explore setting up your desktop with bonding too.
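For anyone trying this, the server-side bonding half looks roughly like the below on Ubuntu with the ifenslave package (a sketch; the interface names and bond mode are assumptions, and 802.3ad/LACP requires the matching LAG configured on the switch):

```
# /etc/network/interfaces - hypothetical two-NIC bond
auto bond0
iface bond0 inet dhcp
    bond-slaves eth0 eth1
    bond-mode 802.3ad    # LACP; must match the switch LAG config
    bond-miimon 100      # link monitoring interval in ms
```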


So I found out you can now use arcstat with ZFS on Linux, as of a recent update. I believe there were Perl scripts available previously, but this is now included in zfs-0.6.2.

 

If you're using Ubuntu, then just do an apt-get upgrade to get the latest version of ZFS, which should include the following:

/usr/src/zfs-0.6.2/cmd/arcstat/arcstat.py

 

I've tried to get some data/testing on my performance, but didn't have a lot of luck. Based on the numbers below it would appear to be cached, but I couldn't get anything to show up in the arcstat columns. The ARC size did reduce after testing, though.

 

patrick@N40L:~$ python /usr/src/zfs-0.6.2/cmd/arcstat/arcstat.py
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
11:40:02	 0	 0	  0	 0	0	 0	0	 0	0   7.8G  7.8G

patrick@N40L:~$ dd if=/dev/zero of=/fs/ddtest bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 1.26052 s, 416 MB/s

patrick@N40L:~$ dd if=/fs/ddtest of=/dev/null bs=1M
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 0.325862 s, 1.6 GB/s

patrick@N40L:~$ dd if=/dev/zero of=/fs/ddtest bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 38.396 s, 137 MB/s

patrick@N40L:~$ dd if=/fs/ddtest of=/dev/null bs=1M
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 3.49754 s, 1.5 GB/s

patrick@N40L:~$ python /usr/src/zfs-0.6.2/cmd/arcstat/arcstat.py
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
11:46:12	 0	 0	  0	 0	0	 0	0	 0	0   7.6G  7.8G

patrick@N40L:~$ dd if=/dev/urandom of=/fs/ddtest bs=64M count=16
16+0 records in
16+0 records out
1073741824 bytes (1.1 GB) copied, 157.997 s, 6.8 MB/s

patrick@N40L:~$ dd if=/fs/ddtest of=/dev/null bs=64M
16+0 records in
16+0 records out
1073741824 bytes (1.1 GB) copied, 0.697052 s, 1.5 GB/s

patrick@N40L:~$ python /usr/src/zfs-0.6.2/cmd/arcstat/arcstat.py
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
12:03:06	 0	 0	  0	 0	0	 0	0	 0	0   3.9G  7.8G

EDIT: So I did a bit more testing; the data was definitely in the ARC, however I don't know why I don't get data in any of the other columns. Maybe I need to do some more reading.

 

patrick@N40L:~$ dd if=/dev/zero of=/fs/ddtest1 bs=32M count=2000
2000+0 records in
2000+0 records out
67108864000 bytes (67 GB) copied, 516.973 s, 130 MB/s

patrick@N40L:~$ python /usr/src/zfs-0.6.2/cmd/arcstat/arcstat.py
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
12:23:17	 0	 0	  0	 0	0	 0	0	 0	0   7.8G  7.8G

patrick@N40L:~$ dd if=/fs/ddtest1 of=/dev/null bs=32M
2000+0 records in
2000+0 records out
67108864000 bytes (67 GB) copied, 226.104 s, 297 MB/s

patrick@N40L:~$ python /usr/src/zfs-0.6.2/cmd/arcstat/arcstat.py
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
12:29:23	 0	 0	  0	 0	0	 0	0	 0	0   7.8G  7.8G

patrick@N40L:~$ dd if=/fs/ddtest1 of=/dev/null bs=32M
2000+0 records in
2000+0 records out
67108864000 bytes (67 GB) copied, 235.136 s, 285 MB/s

patrick@N40L:~$ python /usr/src/zfs-0.6.2/cmd/arcstat/arcstat.py
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
12:35:04	 0	 0	  0	 0	0	 0	0	 0	0   7.8G  7.8G

patrick@N40L:/fs$ rm ddtest ddtest1

patrick@N40L:/fs$ python /usr/src/zfs-0.6.2/cmd/arcstat/arcstat.py
	time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz	 c
12:35:29	 0	 0	  0	 0	0	 0	0	 0	0   1.8G  7.8G
Edited by p0is0n


Hey guys.

 

I noticed that Scorptec has the older model HP ProLiant N54L MicroServer on special at the moment. Is it still worth picking one of those up for a little home NAS/server, or are the newer models that much better to justify the step up in price?

 

I'm kind of inexperienced in the NAS department. I have set up a QNAP TS-469L at work and played around with the QNAP operating system, and got most things to work properly (server backups to iSCSI, shared folders etc.), but as far as the HP goes, what's the best software to use?

I was going to set up an old PC with a FreeNAS install; would that still be OK to use on the HP?

Are there any NOOB guides to setting up a HP ProLiant?

 

I take it all I would need to add to the HP MicroServer is hard drives?

 

My current home setup includes:

My Main PC - Gaming and Design Work

My SimRig - SimRacing FTW

XBMC machine in the Lounge

iMac in the study for my Lady to use

PS3 in Lounge

Plus an iPad, Surface RT and couple of phones...

 

Setting up something to use as backup storage for my main rig, and to store media to share across all the devices, is really all I'm after at the moment...

 

Any thoughts from you knowledgeable chaps?

Edited by millen


I personally think the older MicroServers are still the better choice for a NAS, compared to the newer Gen8 models. If you plan on using it as more of a server, the Gen8 might be better, as it has things like iLO and a more powerful processor; however, for a simple NAS there's nothing wrong with the N40L/N54L, and you can actually fit more disks in them with a little effort.

 

There are plenty of guides out there to set them up, kind of like this thread. I use mine for a lot of different things, but mostly it's a central point for storage and backups, and it does all my downloading so my PC doesn't need to be on, which saves power.

 

You are correct that all you need to do is install an OS and put in some disks. I'm running Ubuntu from a USB stick on the internal USB port, with 7x 3.5" HDDs.

 

It can stream to Xbox/PS3/XBMC/tablets/HTPC, so it's pretty flexible as far as media servers go.

 

Sounds like it would be ideal for your situation. Any specific questions, ask away; I'll check back later.


Thanks heaps, mate. The $50 discount has finished, but even at $300 it seems like a good deal compared to a dedicated 4-bay NAS device, plus it would give me a bit more flexibility I guess, since I can load whatever OS onto it I please.


This should come in very handy for me soon. I'll hopefully set something up myself in the new year if I get around to it.


For those wondering about the Gen8 vs the N54L: I got one in before Christmas.

 

I like the fact that they have made adding PCI-E devices and RAM so much easier. You no longer have to slide out the motherboard to do it.

 

Dual NICs as standard is great if you plan to use it as a router or to bond your NICs - I have two additional NICs installed in my N54L.

 

While the N54L sports an AMD Turion II at 2.2GHz and the Gen8, in its cheapest configuration, has the Intel Celeron G1610T at 2.3GHz, you'd have to delve into what each can really do to see if it matters to you.

http://www.cpu-world.com/Compare/470/AMD_T...ore_G1610T.html

 

I plan on having a look to see whether Plex gets any benefit from the CPU when transcoding, as for what I run mine for, I could see this as the only advantage. I have two Samsung TVs in the house now, both with the Plex app; one is plugged into the network and the other is on wireless. The secondary TV would run better leveraging the server to do the heavy lifting.

 

I like the fact that the CPU is upgradeable in the Gen8.

 

There is not a lot of space in the upper part of the Gen8. Some of the drive configurations people have done, as seen in p0is0n's posts, wouldn't be possible in it. You'd be able to fit a 2.5" drive, but a 3.5" seems out of the question without some bending or cutting.

 

From the basic tests I have done, I'm noticing about a 3W power difference between the two. With 3x 3TB Red drives and the 250GB HDD that came with the N54L, idle power usage is: N54L 47.8W, Gen8 51W.

 

Posted Image

 

Posted Image

 

Posted Image

 

Posted Image


How'd it go with Plex? It seems like a fantastic little program, and definitely something I'd like to incorporate into mine.


I avoided reading the contents of this for so long, but I'm finally looking to move to a NAS/file/media/backup server in the next six months, so thanks to The Tick and P0is0n for the detailed reading and suggestions.

 

And hey, mudg3, if you come back, still interested in reading about your setup.

 

Interested in how you went with the IP cams too, Tick.


I installed ... some program (I think it was zoneminder), ran into a couple of issues, got those sorted out but decided the old IP camera I had sucked and went about looking into POE switches and cameras to suit. I never got around to actually finishing that project though.


Hey guys, I finally got my Gen7 N54L up and running with FreeNAS in the last couple of weeks. Running 2x 4TB WD Reds in a mirror at the moment. I've got a 2GB dataset carved out for my media, and am going to carve out a couple more for Windows Backup and a Time Machine, but I'm thinking I might buy 2 more 4TB Reds first. It just seems easier to fill it up with drives beforehand, rather than dealing with expansion issues down the track.

 

Also, I didn't buy any RAM for it when I got it, so I've just replaced the original 2GB stick with some old DDR3 sticks, and it's currently running with 4GB of RAM. I know 8GB is the recommended minimum for ZFS, so that will be an upgrade in the near future, but to be honest I've not had any performance issues so far. The main question I wanted to ask is about ECC RAM: is it especially critical to get ECC? From what I understand, ECC RAM just catches some memory errors that may occur; excuse my very simple terminology, I'm not 100% up to speed with it obviously. :P

Otherwise I'll just grab some standard DDR3 stuff and whack that in.


I installed ... some program (I think it was zoneminder), ran into a couple of issues, got those sorted out but decided the old IP camera I had sucked and went about looking into POE switches and cameras to suit. I never got around to actually finishing that project though.

If you get interested in it again, let me know, keen to see how it all plays out.


I'm finally looking to jump into purchasing. Shoving extra drives into the older models/cases is appealing, but I'd also like the extra CPU grunt from the Gen8 (even if only a Celeron) so that transcoding and Usenet processing don't make everything grind to a standstill.

 

Not sure I have the time/energy to dedicate to setting up ZFS tho', so I'll probably stick with Windows and Storage Spaces or something. Just not sure how to go about it - Win10? I think I read somewhere that non-server Windows installs crap out due to unsupported drivers : \


I'd consider maybe a Gen9, but I haven't seen anything about them in MicroServer form.


Depending on how often your data changes, look into Ubuntu and SnapRAID.

 

It made me wet my pants with excitement and buy 3x 3TB Toshiba drives, and then proceed to wet myself again.

