I've been running an OpenSolaris ZFS file server for some months now. On top of OpenSolaris I was running an instance of VirtualBox, and running my Linux mail server from a VM inside VirtualBox.

All was well, though not as elegant to administer as I would have liked.

 

So in the spirit of "If it ain't broke - fix it until it is!" I decided to have a play with VMware's ESXi and build my own version of the Atomic "Build your own Happy Place" ESXi tutorial.

 

So I exported the zpool with my data and installed ESXi on one of the mirrored OS drives.

ESXi went on smoothly, and I installed Nexenta into a VM for a change of flavour.

 

Now comes the 'fun' part.

 

I wanted Nexenta to be able to directly access my existing zpool of 4 SATA drives in a RAIDZ1 configuration.

I followed the ESXi SATA RDM howto link found in the Atomic article and created my RDMs for each of the drives in my zpool. Easier than I had expected, and it worked first time - woot!

I then attached the drives to my VM and imported the zpool. Again it all worked as advertised.
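
In case anyone wants to do the same, the export/import itself is only a couple of commands; 'tank' here is just a stand-in for your actual pool name:

zpool export tank     # run on the old OpenSolaris install before moving the disks across
zpool import          # run in the new VM with no arguments to list the pools it can see
zpool import tank     # then import by name (add -f if it complains the pool was last used elsewhere)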

 

Problem was, I'd created physical RDMs instead of virtual RDMs. This resulted in data corruption, which became evident upon doing a zpool scrub.
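
For anyone wanting to check their own pool, the scrub and the follow-up check were roughly the following ('tank' again being a placeholder name):

zpool scrub tank        # walks every block in the pool and verifies its checksum in the background
zpool status -v tank    # -v lists the individual files affected by any permanent errors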

 

Bugger.

 

I off-lined the array until I could research the source of my error and work out how to fix the issue.

Then it was a matter of deleting the physical RDMs and re-creating them as virtual ones instead, re-attaching the drives, re-importing the zpool into the VM and correcting the errors. That last step consisted of deleting the corrupt ISO file and the snapshots that referenced it.
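
The clean-up itself was only a handful of commands - roughly the below, though the dataset, snapshot and file names here are made up for illustration:

zfs destroy tank/isos@monthly-2009-09   # remove each snapshot still referencing the damaged blocks
rm /tank/isos/damaged.iso               # then delete the corrupt file itself
zpool clear tank                        # reset the pool's error counters
zpool scrub tank                        # re-scrub to confirm everything now checks out clean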

 

If I had been using some other file system, the data errors would likely have gone unnoticed until it was too late. With ZFS, I was alerted to the problem (which was entirely my fault, I might add) almost immediately, was given the details of the exact files that were damaged (only one non-critical file, luckily), and that enabled me to take remedial action which I have no doubt saved my data.

 

It's taken a lot more effort than I initially expected, but now I have my mail server and a ZFS-based file server with RDM passthrough to the SATA drives sitting on top of ESXi, and my LAN is a happy place again.

It's now a lot easier to administer too.

 

Lessons learned:

1: Back up before attempting any changes like this.

2: FULLY understand any CLI switches or options BEFORE running the command.

3: ZFS actually does save idiots from themselves sometimes.

4: Zpools are a breeze to work with when migrating between machines or OS versions.


Hi.

 

You:

 

1. Just made my day.

2. Made everything I've written about and encouraged, over time, here, worth it.

 

Thanks :).

 

z

 

 


Yep! You're the one I blame for getting me started on this entire project originally, Zeb - thanks!

You're almost as bad as Ashton with his Uber-Linux server series some years ago.

 

How about a follow-up article discussing ACLs on Solaris?

That's been one area I, for one, have had issues with.
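
For reference, it's the NFSv4-style ACL syntax that keeps tripping me up - things like the following (file and user names made up):

ls -v somefile                                          # show the full ACL entries on a file
chmod A+user:fred:read_data/write_data:allow somefile   # add an access control entry for user fred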


Is FreeBSD ZFS just as good?

 

I'd like to use FreeNAS.

 

EDIT: And can I convert my NTFS drives without losing data?

Also, can I expand a 'pool'? (I think that's the right term.)

 

Currently I have drive pairs, manually synchronised - like a manual, daily RAID 1.

 

So my options are:

-Convert both into the same pool, and make them software RAIDZ (RAID 1 style)

-Format each drive into RAIDZ one at a time, copying each over, then expand the pool to include the old drive.

 

Which one is easiest and actually possible? And how do I tell RAIDZ to mirror? Hopefully FreeNAS has a GUI for it.

Edited by Master_Scythe


My understanding is that it is just as good - just not the latest release of ZFS.

 

So you'll get the reliability, snapshots, clones etc., but dip out on features like being able to expand a RAIDZ1 pool by adding one disk at a time.

 

FreeBSD has the ability to recompile the Solaris code, as the licences are compatible. Linux doesn't have that luxury.


I'd be using FreeNAS 8... does this have the version of ZFS I want?

 

I wish to learn more about it, but right now the speed of getting my server up and running is more important than the 'how'.

 

I'll do full research once I'm up again.

 

EDIT: I have 6 disks I'm happy to play with, but I can't lose data. 3 contain duplicates of the other 3. How am I best to set this up?

 

3-disk RAIDZ?

Edited by Master_Scythe


I'm not sure what options FreeNAS offers through its interface.

 

However, the underlying ZFS allows for any of the following:

 

Striping (RAID 0) - no redundancy

Mirror (RAID 1) - will tolerate one disk failure with no loss of data

RAIDZ1 (RAID 5) - single parity, will tolerate one disk failure with no loss of data

RAIDZ2 (RAID 6) - double parity, will tolerate two disk failures with no loss of data

 

Pick your poison.

 

I use RAIDZ1 on my server at home (4 x 1.5TB drives).
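
If FreeNAS gives you a shell, the raw commands behind each of those layouts look something like the below - the pool name and device names are placeholders, so substitute your own:

zpool create tank ada0 ada1                     # striped, no redundancy
zpool create tank mirror ada0 ada1              # two-way mirror
zpool create tank raidz ada0 ada1 ada2          # RAIDZ1, single parity
zpool create tank raidz2 ada0 ada1 ada2 ada3    # RAIDZ2, double parity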

Edited by CptnChrysler


Probably worth pointing out... FreeNAS is not a direct descendant of FreeBSD, but rather is based on m0n0BSD, which is a highly stripped-down version of FreeBSD. That doesn't make it any less useful as a file server, and I dare say that if it isn't in there now, ZFS will eventually make its debut in FreeNAS.

 

Likewise with Linux; I think it'll take a few people some time to come up with a written spec of the ZFS file system, from which a GPLed implementation of ZFS can be written. The same should happen with EXT4 and BTRFS so that they can transition across to BSD.


ZFS has been in FreeNAS for a LONG time, and version 8 is a complete re-write; it's not at all related to m0n0BSD anymore.

 

Why FreeNAS? There is NexentaStor Community Edition.

 

This may make your day:

http://www.nexentastor.org/projects/site/w...ommunityEdition

Kinda, though its feature set is very limited compared to FreeNAS.

 

 

EDIT: Am I right in reading this: if I go with RAIDZ1, no matter how many disks I have, I can lose one? So I could have 50 disks in the one pool, and still only HAVE to have one 'wasted' disk?

 

If so, I may go a 9-disk RAIDZ2...

 

Also, if the disks in the pool are different sizes, and the largest fails, how could it recover? (250GB, 250GB, 1TB, and the 1TB falls over.)

 

Still googling for myself; just fill me in if you beat me to it :P

 

EDIT: Making my own thread.

Edited by Master_Scythe


This is digging up some history, but I stumbled across this forum post while trying to solve a somewhat related problem I'm having. I hope you knowledgeable folk can lend me a hand, some insight or an alternative.

 

I have a server running VMware ESXi 5.0 U1. Attached to that I have 3 new 3TB Seagate hard drives I am attempting to set up in RAIDZ. They are mapped using RDM; I was forced to create them using physical mode because I get an error about the files being too big (>2TB limit) when creating them in virtual mode. OK, fair enough.

 

Next I installed NAS4Free, which is my solution of choice, on a small VMFS drive. I have N4F already running on the system I am migrating from, using 2TB disks (virtual RDM, they work just fine).

The issue I ran into is that NAS4Free (and I also tried FreeNAS, various versions including betas) shows the drives as having 0MB free. The issue isn't there if I use Linux, so it seems to be something related to how the drives are recognized.

 

I have tried using NexentaStor. It is a fairly slick solution; however, I need something with encryption capabilities. It looks like I may end up rolling my own Linux NAS system - it's just hard to get a system as nicely pre-customized as the FreeNAS/NAS4Free solutions.

 

Any ideas?

 

Cheers.


Unfortunately, if you want to use ZFS + encryption, you're limited to Solaris, I believe. Most open-source OSes like *BSD and Linux are only using ZFS version 28, and encryption wasn't introduced until version 30 or 32. I could be wrong here, but at least this is my understanding.

 

I've read somewhere that it might be possible, with Linux, to encrypt the disks using something like LUKS and a key file on a USB stick, then build your zpool on top of the encrypted disks, but this doesn't seem like a great idea to me and would probably come at a cost in either features or performance. I did a quick Google and actually came up with this, which might help; it seems like a much more elegant solution, placing an encrypted FS on top of the zpool :)
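
In very rough terms, the LUKS-under-zpool approach (which, as I said, I'm not sold on) would look something like the below on each data disk - device names, key file path and pool name are all placeholders, and I haven't tested this myself:

cryptsetup luksFormat /dev/sdb /media/usbkey/zfs.key                   # create the encrypted container, keyed off the USB stick (repeat for sdc, sdd)
cryptsetup luksOpen --key-file /media/usbkey/zfs.key /dev/sdb crypt0   # unlock it as /dev/mapper/crypt0 (likewise crypt1, crypt2)
zpool create tank raidz /dev/mapper/crypt0 /dev/mapper/crypt1 /dev/mapper/crypt2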

 

Otherwise, I would look at SE11.

Edited by p0is0n

