p0is0n

File Servers & NAS

Recommended Posts

Any particular reason why Mac Dude ?

 

DASA, install WINE and see if that automation program plays nice with it.

It's horses for courses :)

 

I played around with it for a month, and it took a bit of dicking around with Samba, users & permissions to get it working. What I often found was that a 'simple' solution was never that simple - install package X to get functionality Y, but you find out along the way that X requires Z, which requires....

 

You get the idea.

 

Also, when things don't work, I find it much easier to find my way around Windows error messages and log files.

 

I think in the long term Ubuntu would ultimately be more configurable, but as a media server and centralised backup system, I want it to be as hands-off as possible.

 

Also given that the majority of client devices are Windows PCs, the integration is better with a Windows server.


This has been such an informative read.

 

I have the previous gen Microserver and have been meaning to move over to it for a while.

 

Many moons ago, I posted in the Apple section building a NAS with time machine support. That project didn't exactly die but rather, went through some bursts of evolution.

 

Based on this thread, I decided to rebuild with ZFS.

 

I went with CentOS a while back, mainly because I was more used to Red Hat, and I've stuck with it - that's where the massive pile of scripts I wrote ended up.

 

I went down the path of adding the repo and installing zfs-fuse, but it was slow and horrible. Thanks to your posts earlier, I had a way of checking and comparing. Looking at what you did, I realised I had created the pool without the ashift=12 option. When I destroyed the pool and tried to recreate it with that option, I got an error stating that the option wasn't available.

 

I found a how-to on building it from source and just finished that. The speed difference is astounding.

 

time dd if=/dev/zero of=/vol1/ddtest bs=1024000 count=20000
20000+0 records in
20000+0 records out
20480000000 bytes (20 GB) copied, 133.345 s, 154 MB/s

real	2m13.350s
user	0m0.111s
sys	0m33.435s

----------------------------
time dd if=/vol1/ddtest of=/dev/null bs=1024000
20000+0 records in
20000+0 records out
20480000000 bytes (20 GB) copied, 57.3812 s, 357 MB/s

real	0m57.384s
user	0m0.065s
sys	0m22.765s

Rsync transfers have lifted too. Writes were typically around the 20MB/s mark - they're now up around 60MB/s.

 

My hat is off to you, sir - you have given me a massive leg up with this.

Edited by The Tick


I have been saving up to build a file server in a Norco 4U 20 or 24 bay rack case, as I already have a rack and a couple of Dell servers.

 

I was going to use FreeNAS 8 but have decided to use Ubuntu, set up everything I need and strip everything I don't. I'm also going to build a low-power-draw server for a router.

 

So when I start this build I will post it up on here.


For any peeps using ZFS - how do you currently handle email alerts based on fault conditions?

 

With Linux RAID it's a one-liner that works pretty well. There doesn't seem to be an equivalent for ZFS.
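For context, the one-liner I mean for Linux RAID is along these lines (the email address is a placeholder):

```shell
# mdadm can watch all arrays and email on failure/degraded events:
#   mdadm --monitor --scan --daemonise --mail=user@email.com
# or set the address permanently in /etc/mdadm/mdadm.conf:
#   MAILADDR user@email.com
```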


Good to see some activity in this thread :)

 

Tick, I noticed in the post above you mentioned ZFS-fuse - is this what you installed, or did you go with ZFS on Linux? They are two separate implementations, and so far I have heard that ZFS-fuse is somewhat inferior to the native ZFS on Linux. Just curious which one you're using.

 

I admit I didn't bother setting up email alerts yet... but I have this saved for when I get around to it.

 

#!/bin/bash
# `zpool status -x` prints "all pools are healthy" on its first (and only)
# line when everything is fine - the 4th word is then "healthy".
zfsstat=$(zpool status -x | head -n1 | awk '{print $4}')

if [ "$zfsstat" != "healthy" ]; then
    /bin/date > /tmp/zfs.stat
    echo >> /tmp/zfs.stat
    /bin/hostname >> /tmp/zfs.stat
    echo >> /tmp/zfs.stat
    /sbin/zpool status -x >> /tmp/zfs.stat
    /bin/mail -s "Disk failure in server: $(hostname)" user@email.com < /tmp/zfs.stat
fi

Just make it executable (chmod +x <name>.sh) and set it as a cron job to run as often as you want.
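For example, an hourly entry in root's crontab (the script path here is just wherever you saved it):

```shell
# crontab -e (as root) - run the health check at the top of every hour
0 * * * * /root/zfs-check.sh
```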

Here is a link to the site I ninja'd this useful script from - full credit to the original author.

Edited by p0is0n



I just re-read my first post and in retrospect, some of it is quite poorly explained. I could probably do a much better job if I find time to re-write it.

 

I can't see that I ever explained what the ashift=12 switch does, so I will quickly go over it here. Basically, most if not all new/large-capacity HDDs, such as 2TB drives or greater, use Advanced Format, which is a 4kB sector size instead of the old 512B sector size. Without being told that the drives use 4kB sectors, ZFS will use the default of 512B, and this will result in poorly aligned data and poor performance. [source]
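For example - the switch goes on at pool creation time and can't be changed afterwards (the pool and device names here are made up):

```shell
# Create a raidz1 pool of four drives with 4kB alignment:
#   zpool create -o ashift=12 tank raidz1 sdb sdc sdd sde
# ashift is the base-2 logarithm of the sector size, so 12 gives:
echo $((1 << 12))   # 4096-byte sectors
# An existing pool's value can be checked with:
#   zdb -C tank | grep ashift
```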

 

In an attempt to keep this thread as informative as possible...

I also feel ZFS snapshots are under-appreciated, so I will attempt to describe the benefits here in the hope of converting a few others. Basically, a ZFS snapshot is a read-only copy of the entire file system at the time the snapshot is created. This is not to be confused with a clone, which is a writable copy of the file system. I believe you can create a clone from a snapshot, which has significant benefits in the enterprise world and possibly for normal users too.

 

When you create a snapshot, you basically create a map to every file as it exists on the HDD. Think of this as a pointer to the physical block on the HDD where that data exists, and not a reference to /home/user for example. Snapshots are created instantly, and consume no disk space upon creation, which might seem like magic, but upon further examination it's actually very clever.

If you delete a file after taking a snapshot, you will not free up disk space as you might expect; this is because the snapshot is still holding onto certain blocks of data which it might need in order to restore. If you eventually delete a snapshot and no further reference to that block of data exists, you will free up disk space. This means that a snapshot will only consume as much space as the difference between it and the current instance of the file system. If you instead simply modify a file, when it is saved it will be saved to a different location on the disk, so as not to overwrite data you may wish to restore in future.

You can take a snapshot as often as you want. Snapshots can be created manually, such as before or after some manipulation of the file system, or automatically, such as every hour or every night depending on your needs. You could even create a snapshot every minute, which may be useful if you run an online store or a bank (for example) and have a lot of logs or transactions you wish to preserve. The theoretical maximum number of snapshots is 2^64, so there is no risk of creating too many. Hopefully this all makes sense so far...

 

When you restore from a snapshot, you basically just point the file system at the location on the disk where things previously existed, this is why disk space is not freed up when something is deleted or modified. You can browse a snapshot and pick out individual files to restore, or you can just restore the entire file system to a specific point, depending on what or why you need to recover something, be it accidental deletion or whatever. This is in my opinion one of the most powerful features of ZFS and so I wanted to really emphasize it.
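To make that concrete, the day-to-day commands look roughly like this (the pool/dataset names and dates are just examples):

```shell
# Take an instant, zero-space snapshot of a dataset:
#   zfs snapshot tank/media@2013-01-15
# List snapshots and how much space each has diverged:
#   zfs list -t snapshot
# Pull a single file back out of the hidden snapshot directory:
#   cp /tank/media/.zfs/snapshot/2013-01-15/somefile /tank/media/
# Or roll the whole dataset back to that point in time:
#   zfs rollback tank/media@2013-01-15
```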

 

My explanation may not be the best, so here is a quick video I found that complements my description above quite well...

 

Lastly, I have found another useful program for when I am connected to my machine remotely via SSH. It's called screen and basically allows you to tab between multiple instances of your terminal if you're doing several different things. This is probably well known amongst experienced Linux users, and indeed an experienced Linux friend put me onto it. I installed it from the Ubuntu repository, but the tarball can also be found in the link above. Once installed you just have to type 'screen' in a terminal session to launch it. Commands are issued by pressing 'control+a' followed by a command key, such as 'c' to create a new window or 'n' to switch to the next window. A full list can be found here. I've found this very useful since discovering it :)
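The handful of bindings I actually use, for anyone curious (the session name is arbitrary):

```shell
# screen -S nas    -> start a session named "nas"
#   Ctrl-a c       -> create a new window
#   Ctrl-a n / p   -> next / previous window
#   Ctrl-a d       -> detach, leaving everything running
# screen -r nas    -> reattach later, even from a fresh SSH login
```

The detach/reattach part is the killer feature over SSH - a dropped connection no longer kills whatever was running.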

Edited by p0is0n


Appreciate your write-up.

I just bought a Microserver and I'm currently configuring all this before my HDDs arrive.

Didn't know any of these web UI programs existed!


There is a new model coming out from HP, the N54L. It doesn't look like a significant change, but it should mean that N40Ls will be available at clearance prices until stock runs dry. :)


Eventually worked out how to install the program with WINE, but it would crash when trying to paste login details, or, if they were typed in, it would crash when clicking continue.

So I tried patching the program, which is another step in its install. It took a while to work out how to find the hidden files so I could copy the patches over the installed files.

Now, after being patched, it gives run-time error 91 as soon as I try to run it.

After some googling, apparently this addon can help with that error, but I can't find it:

winetricks jet40 mdac28

winetricks is installed, but I have no idea how it works, so I just assume it automatically assists WINE.

Any ideas? If not, it may just go back to Windows, although this old program doesn't work on Vista either due to COM port problems; it apparently works on Windows 7.

Edited by Dasa



Not really sure about getting it going with WINE. You could always try setting it up in a VM with KVM?

What else are you running on your server?



At this stage nothing - it will just be a file server with a remote desktop for using an old automation program, which is the first thing I tried to install as I knew it would be the problem if there was one.

Ubuntu 12.04, Wine 1.4.

So with the VM I would still need to install Windows inside of it? In which case I may as well just buy another copy of Windows and not bother with Linux.

Edited by Dasa



 

Yes, if you go with the VM you would need to install Windows once you create it - if you have a copy of XP, that would probably work well for running the old program.

If you're not set on Linux for anything else though, it may be just as easy to go down the Windows path.
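If you do go the KVM route, creating the guest is roughly this - a sketch only, with made-up names, sizes and paths, assuming virt-install/libvirt is installed:

```shell
# Define a 20GB XP guest and boot it from the install ISO:
#   virt-install --name winxp --ram 1024 \
#     --disk path=/var/lib/libvirt/images/winxp.img,size=20 \
#     --cdrom /path/to/winxp.iso --os-variant winxp
```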


Great little write-up bud. Good to see someone not using Windows Server around here - looking forward to the next part. I've documented my setup from when I built and configured my 20TB all-in-one whitebox, but I don't know if Atomic really has the audience for it.

I run a 10TB that I've tried to set up on anything but Windows Server a few times (back on Windows *sad*). I'd be hella interested - post it up!

p0is0n, that's a Green drive. If you find ZFS starts reporting dead drives, or data loss, replace it with a Red drive.

When Greens head-park they often disappear from RAIDs. RAIDZ in my experience doesn't let them sleep in the conventional way (no idea why though) so they never seemed to park. But run a normal RAID under BSD or *nix and they'd vanish all day.

I got sick of it all, and since I get almost daily power outs for longer than a UPS can manage, I just went with no RAID and a scheduled SyncToy each night.

Would love to do it right though...



 

Green drives are fine for ZFS :) Low heat, low power, no noise. I might have considered Red drives or Hitachi drives, but I already had a few Greens in my desktop, and they are by far the cheapest, so I could afford to buy a few more plus a spare in case one ever dies. I do consider temps important - I keep an eye on them regularly and haven't seen any of my drives above 36C, which is acceptable for me. I haven't had a single checksum error since I got this NAS up and running, which has been a few months now, and it operates and downloads 24/7, so I'd say the Green drives are doing alright. As it's just for a NAS, as long as it can match gigabit in transfer speeds it's adequate for me. I might use 'normal' drives if it were for storage where performance was more important.

 

I think the head parking only occurs when the drives are idle/powered down, which Windows loves to do, but I don't think ZFS does this, or not as much, so they never stop spinning. I'm not certain about that though. I can say I haven't noticed the parking issues at all with these drives; it may be occurring all the same, just not causing my pool to shit its pants.

 

I'm aware of the issues using them with a RAID controller, and would never do it. ZFS is really designed for commodity drives, not enterprise drives. If you're familiar with NetApp, it's basically identical technology - the guys who started NetApp broke away from Sun when they were in the development stages of ZFS and took some ideas with them. I actually look after 3 large-ish NetApp libraries here in Perth, about 10 racks of disks total. The older ones use 300GB/450GB 10-15k RPM SCSI drives, but the newest one uses just 750GB/1TB Hitachi/Seagate desktop drives. I probably replace a drive every other week, but out of ~3 racks' worth of disk that's not too bad.

 

Servers & storage is my day job, I'm not an expert, but have a bit of experience with all sorts of configurations.

Edited by p0is0n


If you're familiar with NetApp it's basically identical technology, the guys who started NetApp broke away from Sun when they were in the development stages of ZFS and took some ideas with them.

Well, ZFS was cooked up by Bonwick and Moore. Not a big dev team and certainly way *after* NetApp had been in business for some time.


I can't believe I missed this thread until it popped up in POTM - grats p0is0n, good job, excellent write-up.

 

I'm still running my 'Happy Place' server after nearly 2 years. http://forums.atomicmpc.com.au/lofiversion...php?t41354.html

 

BTW the RDM howto is at http://www.vm-help.com/esx40i/SATA_RDMs.php for anyone interested.

 

It's basically an ESXi server using RDM passthrough to 4 x 1.5 TB Samsung drives as the ZFS storage pool.

The storage drives are running in RAIDZ and they are connected to an instance of OpenIndiana running as a VM in the ESXi Hypervisor.

Sounds strange I know, but the drives are ZFS on the bare metal which would enable me to connect them to any ZFS capable host either on a bare metal install or a VM under the ESXi Hypervisor. Simple migration of the intact storage pool to any new host I want to use.

 

Looks like ZFS on Linux has progressed to the point where it might now be a better option than OpenIndiana for me.

 

Now If I can just find the time to spool up another VM and have a tinker.

And add some more RAM..

And add an SSD to the zpool...


Looks like my guess of ~30W with a more appropriately sized PSU was spot on.

http://i3.photobucket.com/albums/y83/dasa0...zps95e7748f.jpg

Dropping to 28W isn't too shabby for a decent dual-core system with a high-end MB and a 400W PSU.

Still no HDDs at this stage - just finished installing Win7.

Transferred ~1TB of video over last night; it was cruising along at ~100MB/s over the network.

Edited by Dasa


Here is my current server setup.

 


 

File server is on the left, backup server is in the middle and currently playing with pfsense on the right.

 

The HP microserver is currently running CentOS 6.3 with one hard drive installed with the OS, 4 x 2TB Seagates for zfs pool and 1 x 2TB as a hot spare.

 


 

The backup server is a Gigabyte G31M-ES2L running an Intel E3300. I believe this one actually has 4 x 2TB drives running in one large LVM volume, giving it maximum space with no redundancy. There is a duplicate system which is identically spec'd (although I am not sure what mix of drives is in it) which gets swapped out every month or so. Its sibling lives at my brother's house. My internet is shitful, which means a swap-and-replace backup made more sense back in the day.

 

The HP server does a nightly rsync to the backup box. I have two Macs in the house which back up to it via Time Machine. My Windows PCs should probably get in on the action too, although any essential files tend to live on Dropbox these days.
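The nightly rsync itself is just a cron entry along these lines (paths and hostname are placeholders, not my actual layout):

```shell
# 2am push of the pool to the backup box; -a preserves permissions and
# timestamps, --delete keeps the mirror exact.
0 2 * * * rsync -a --delete /vol1/ backupserver:/backup/vol1/
```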

 

The HP Microserver is also fitted with 2 x 4GB ECC RAM. It handles file server duties for Windows and Mac (Samba and Netatalk), runs Transmission for torrents, Plex for media streaming and FreePBX for VoIP.

 

I currently have UPS monitoring working too, getting email alerts when the condition changes and having it shut down after about 5 mins. I really need to figure out how to get the backup server in on that too.

 

As mentioned in another thread, I have pretty much scripted the entire install process, as well as some of the management. Adding shares and users is all done via script, although webmin installs by default if needed (I don't use it for much these days). The scripts allow for remote access configuration via SSH and VNC. There is also a basic PPTP VPN setup included.

 

The installation steps are pretty simple (a more detailed guide is included in the PDF below).

 

Download the CentOS 6.3 ISO

Make a USB boot stick

Install CentOS to a single drive (or you could waste some space and go RAID1)

Do a yum update and reboot (kernel patch required for zfs to install properly)

Run the first script to prepare the system.

Use the scripts to install netatalk if required.

Use the script to build a zfs raidz1

Make some folders

Add some users (scripts)

Use the scripts to create the shares.

 


 

I have included them here if anyone is interested.

 

I have also created a how to for anyone wanting some hand holding through it.

 

Let me know if something isn't working.

 

This project started out a while ago, originally using Ubuntu. Many of the functional parts of the scripts would probably still work for Ubuntu, although the installer sections would need adjusting if anyone wanted to modify them.

 

Besides some downloading of packages though (which can make it longer), I can usually have a system up and running in about an hour.

Edited by The Tick


The HP Microserver N54Ls are out, although I was only able to find mine through a wholesaler.

 

Based on my supplier's pricing, there currently seems to be an $80 premium over the old model, and that was for one without the obligatory 250GB HDD.

 

I haven't really played with mine yet. My old system will be repurposed as a backup server though.


I received an email from my ZFS server yesterday. :-(

 

 

Date: Sat, 12 Jan 2013 15:08:56 +1100 (EST)

Error on openindiana from 10.01.2013 15:30

  pool: mypool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
	the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scan: scrub repaired 0 in 5h5m with 0 errors on Sun Jan  6 08:05:17 2013
config:

	NAME		STATE	 READ WRITE CKSUM
	mypool	  DEGRADED	 0	 0	 0
	  raidz1-0  DEGRADED	 0	 0	 0
		c3t3d0  ONLINE	   0	 0	 0
		c3t0d0  ONLINE	   0	 0	 0
		c3t2d0  ONLINE	   0	 0	 0
		c3t1d0  UNAVAIL	  0	 0	 0  cannot open

errors: No known data errors

  pool: rpool
 state: ONLINE
 scan: none requested
config:

	NAME		STATE	 READ WRITE CKSUM
	rpool	   ONLINE	   0	 0	 0
	  c2t0d0s0  ONLINE	   0	 0	 0

errors: No known data errors

I've just ordered a replacement drive. At least no data loss. :-)
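For anyone curious, once the new drive is in the bay the fix is a one-liner (device name taken from the status output above):

```shell
# Resilver onto the replacement disk in the same slot:
#   zpool replace mypool c3t1d0
# then watch the resilver progress with:
#   zpool status mypool
```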

