
File-level encryption? I'm doing this the hard way...



#1 Master_Scythe

Master_Scythe

    Titan

  • Hero
  • 20,177 posts
  • Location:QLD

Posted 20 April 2017 - 01:55 PM

So I just set up a ZFS file server.

5x 3TB disks in a RAIDZ2. Yes, it should be 6 for 'max performance', but I'm one user; I wanted the double parity.

 

ANYWAY.

 

Right now at home, my PC is copying my 'data' drive into an encrypted VeraCrypt (TrueCrypt's successor) container.

These are fixed size (mine is 1TB), and I realised that I'm missing out on the features of ZFS by keeping my files in a container like that.

 

What was appealing was that I can 'mount' the VeraCrypt container and it's just "another HDD" to my Windows system...

This sounded great, until I realised I'd derped on something I do every day: mapping shared folders to drive letters... aren't I clever.

 

So I can make a 'Data' share on my ZFS box and map it, great... what this DOESN'T help with is encryption.

A cryptographic container still has the advantage that once it's mounted, it's "all unencrypted" to the end user... but the downside is that ZFS can only checksum the one file (the container), which isn't ideal.

 

So I got to thinking: what do those "CryptoLocker" viruses use? They encrypt on a per-file basis.

On top of that, is there a 'tool' I can run that lets me decrypt them on the fly?

 

File names aren't specifically 'secret'; I just don't want the data readable in the event of a breach. I have things like contact lists (in plain text) and such.

 

What would be nice would be something that encrypts all the files on a per-file basis, but has a system-tray-type tool that decrypts on the fly, passively, so the computer never knows.

 

For example:

JPEG is encrypted.

JPEG is stored on ZFS server (network share)

I open GIMP on my local machine, and want to edit the JPEG.

I navigate the network share to find the JPEG, and open it.

The 'tool' (system tray?) notices the file access, and feeds a decrypted stream to GIMP.

 

Meanwhile a hacker gets into my share WITHOUT the 'tool' and password.

Steals my JPEG.

And it's junk.
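Something like that per-file scheme can be sketched in a few lines of Python. This is a minimal sketch only, assuming the third-party 'cryptography' package; the file names are hypothetical, and a real on-the-fly tool would hook file access inside the OS rather than decrypting explicitly:

    # Per-file encryption sketch using the 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice, derived from a password and kept off the share
    f = Fernet(key)

    # Encrypt the JPEG before it lands on the network share.
    with open("photo.jpg", "rb") as src:
        token = f.encrypt(src.read())    # authenticated encryption: AES-CBC plus an HMAC
    with open("photo.jpg.enc", "wb") as dst:
        dst.write(token)

    # The 'tool' feeds GIMP plaintext on demand; without the key, the file is junk.
    with open("photo.jpg.enc", "rb") as src:
        plaintext = f.decrypt(src.read())   # raises InvalidToken if corrupted or tampered with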

 

 

Does anything like this even remotely exist?


Wherever you go in life, watch out for Scythe, the tackling IT support guy.

"I don't care what race you are, not one f*cking bit, if you want to be seen as a good people, you go in there and you f*ck up the people who (unofficially) represent you in a negative light!"


#2 Jeruselem

Jeruselem

    Guru

  • Atomican
  • 14,026 posts
  • Location:Not Trump-Land

Posted 20 April 2017 - 02:08 PM

So you want something like the Windows EFS system for Linux?

 

https://msdn.microso...y/cc875821.aspx


Having trouble with A [?]OS11.1?

 

2018 FIFA World Cup Russia - Australia in but Italy, Chile, Netherlands, USA = FAIL.


#3 Master_Scythe

Master_Scythe

    Titan

  • Hero
  • 20,177 posts
  • Location:QLD

Posted 20 April 2017 - 02:41 PM

Well, I don't think so, because Linux doesn't come into the equation at any point.

 

Thing is, I don't want to encrypt the entire drive. I want to encrypt a folder, but make it transparent over an SMB share.

Even if that means a password or some such.

 

NAS4Free is FreeBSD-based.


Hmm, what happens if I mount an SMB share as a drive letter and point VeraCrypt at it, selecting "encrypt a HDD"...?


Wherever you go in life, watch out for Scythe, the tackling IT support guy.

"I don't care what race you are, not one f*cking bit, if you want to be seen as a good people, you go in there and you f*ck up the people who (unofficially) represent you in a negative light!"


#4 Master_Scythe

Master_Scythe

    Titan

  • Hero
  • 20,177 posts
  • Location:QLD

Posted 20 April 2017 - 05:00 PM

So you want something like the Windows EFS system for Linux?

 

https://msdn.microso...y/cc875821.aspx

 

You know what, never mind, I've come up with a better solution.

I was trying to figure a way to have my 'data' available on the NAS, while being secure.

However, I've decided against that.

 

Considering VeraCrypt (and TrueCrypt before it) has been basically flawless for its entire lifetime, I'm going to trust it.

What I'm going to do is this:

 

Encrypt my "Entire Drive" for my data.

I'll also encrypt my Boot Drive.

 

If I use the same password for both and set the data drive as a 'favorite' in VeraCrypt, it'll auto-mount once the password for the boot drive has been given!

Neat! :)

 

I'll just forget about being able to access my 'private data' from the NAS; there's no need. I'll just make sure all my media is available to the NAS and go from there.

 

I'm going to use AOMEI Backupper to drop encrypted backups onto the ZFS share; nice and simple, and it makes me feel warm and fuzzy.

I've never had a good backup scheme before :P

 

I'm really excited!


Wherever you go in life, watch out for Scythe, the tackling IT support guy.

"I don't care what race you are, not one f*cking bit, if you want to be seen as a good people, you go in there and you f*ck up the people who (unofficially) represent you in a negative light!"


#5 @~thehung

@~thehung

    Guru

  • Hero
  • 8,638 posts

Posted 22 April 2017 - 05:58 AM


help me out here. i am totally new to ZFS, and also dont have a clear idea of what your original goal was.

So I just set up a ZFS file server.
5x 3TB disks in a RAIDZ2. Yes, it should be 6 for 'max performance', but I'm one user; I wanted the double parity.
 
ANYWAY.
 
Right now at home, my PC is copying my 'data' drive into an encrypted VeraCrypt (TrueCrypt's successor) container.
These are fixed size (mine is 1TB), and I realised that I'm missing out on the features of ZFS by keeping my files in a container like that.


so we are talking about a 1TB container file sitting on the 5x3TB RaidZ2 thing — amongst other stuff, right? like, what fraction of the total logical storage space is that — i have no idea?

and would you be using this as a secondary backup, or as THE place where you store your shit?

anyway, if ZFS is doing fancy redundancy voodoo in the background with that container file, what features would you be missing out on? looking at the 'Summary of key differentiating features' of ZFS in wiki, would it mainly be that you would forego some of the efficiencies of fancy caching (because the contained file system would be opaque)? but otherwise, even though any corrupted part of that container file data could represent one or more actual files in the unencrypted volume, ZFS shouldnt need to know or care about these boundaries — either the 1's and 0's are being stored faithfully or they are not. i guess checksumming might be less robust if the container file is HUGE...but it really depends how it works at low level and i dont know enough about it.

(of course, if some part of the container is fucked, the whole volume may be fucked, but thats a downside of encrypted volumes irrespective of ZFS).
no pung intended

#6 @~thehung

@~thehung

    Guru

  • Hero
  • 8,638 posts

Posted 22 April 2017 - 06:24 AM

I'll just forget about being able to access my 'private data' from the NAS; there's no need. I'll just make sure all my media is available to the NAS and go from there.
 
I'm going to use AOMEI Backupper to drop encrypted backups onto the ZFS share; nice and simple, and it makes me feel warm and fuzzy.


okay...still so confused! :)

do i have this right — your "boot drive" and "entire drive" are just local drives/partitions (for example, C: system drive, D: other shit)?

and now, instead of having a mapped remote share you would be securely reading and writing to routinely for backup, you are opting to use the NAS as a dumping place for cold dead backups of whole local drives?

if i have this right, just be aware that if you are ever tempted to store multiple container files (for example, backups of the same volume from different months), using the same password/keyfile for each would significantly reduce the overall encryption strength (assuming veracrypt is the same as truecrypt in this regard).
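for illustration, the usual mitigation is a per-container random salt, so one password still yields unrelated keys. a rough python sketch of the idea (not veracrypt's actual header format):

    # same password, different salts -> unrelated keys (illustrative only)
    import hashlib, os

    password = b"correct horse battery staple"
    salt_a = os.urandom(16)              # container A's random salt, stored in its header
    salt_b = os.urandom(16)              # container B's

    key_a = hashlib.pbkdf2_hmac("sha512", password, salt_a, 500_000)
    key_b = hashlib.pbkdf2_hmac("sha512", password, salt_b, 500_000)
    assert key_a != key_b                # reused password, but no reused key material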
no pung intended

#7 Master_Scythe

Master_Scythe

    Titan

  • Hero
  • 20,177 posts
  • Location:QLD

Posted 24 April 2017 - 10:50 AM

help me out here. i am totally new to ZFS, and also dont have a clear idea of what your original goal was.
 

So I just set up a ZFS file server.
5x 3TB disks in a RAIDZ2. Yes, it should be 6 for 'max performance', but I'm one user; I wanted the double parity.
 
ANYWAY.
 
Right now at home, my PC is copying my 'data' drive into an encrypted VeraCrypt (TrueCrypt's successor) container.
These are fixed size (mine is 1TB), and I realised that I'm missing out on the features of ZFS by keeping my files in a container like that.


so we are talking about a 1TB container file sitting on the 5x3TB RaidZ2 thing — amongst other stuff, right? like, what fraction of the total logical storage space is that — i have no idea?

and would you be using this as a secondary backup, or as THE place where you store your shit?

anyway, if ZFS is doing fancy redundancy voodoo in the background with that container file, what features would you be missing out on? looking at the 'Summary of key differentiating features' of ZFS in wiki, would it mainly be that you would forego some of the efficiencies of fancy caching (because the contained file system would be opaque)? but otherwise, even though any corrupted part of that container file data could represent one or more actual files in the unencrypted volume, ZFS shouldnt need to know or care about these boundaries — either the 1's and 0's are being stored faithfully or they are not. i guess checksumming might be less robust if the container file is HUGE...but it really depends how it works at low level and i dont know enough about it.

(of course, if some part of the container is fucked, the whole volume may be fucked, but thats a downside of encrypted volumes irrespective of ZFS).

 

 

It's the only 'backup', so yes, there is one other copy.

 

And yes, there is other shit, so it gets checksummed too. So it still has a use.

 

And you're right. It's just that I only have a single file to checksum, and if it faults, the whole container is dead. It's the difference between losing 100% of the container from a single failed checksum, versus 1% if it were individual files.

 

 

okay...still so confused! :)

do i have this right — your "boot drive" and "entire drive" are just local drives/partitions (for example, C: system drive, D: other shit)?

and now, instead of having a mapped remote share you would be securely reading and writing to routinely for backup, you are opting to use the NAS as a dumping place for cold dead backups of whole local drives?

if i have this right, just be aware that if you are ever tempted to store multiple container files (for example, backups of the same volume from different months), using the same password/keyfile for each would significantly reduce the overall encryption strength (assuming veracrypt is the same as truecrypt in this regard).

 

Yep, correct. And yeah, that's a point.

But I found a solution. My file names aren't too sensitive, just the content.

 

The solution is Encrypted ZIP files.

 

Each file has its own checksum, and if I go for RARs, each has its own 'repair' and recovery methods. Could be great.


Wherever you go in life, watch out for Scythe, the tackling IT support guy.

"I don't care what race you are, not one f*cking bit, if you want to be seen as a good people, you go in there and you f*ck up the people who (unofficially) represent you in a negative light!"


#8 Master_Scythe

Master_Scythe

    Titan

  • Hero
  • 20,177 posts
  • Location:QLD

Posted 24 April 2017 - 11:17 AM

Actually, I take that back: the solution is encrypted RAR files.

 

These RAR files allow BLAKE2 checksums for individual file recovery.

RAR5 allows a "recovery record", which is about 5% of the total data in size, allowing repair of the RAR archive itself.

 

It uses AES-256 encryption.

It can encrypt file names.

Plus, there is a tiny bit of security through obscurity, since RAR5 can only be opened by WinRAR; things like 7zip don't support it.

 

It even has some basic backup features, using the 'archive' flag to know when a file has changed, for differential backups.

This is all on top of the fact that you'd need to get access to a part of my drive that's not shared in the first place; kinda win-win :)


Wherever you go in life, watch out for Scythe, the tackling IT support guy.

"I don't care what race you are, not one f*cking bit, if you want to be seen as a good people, you go in there and you f*ck up the people who (unofficially) represent you in a negative light!"


#9 Jeruselem

Jeruselem

    Guru

  • Atomican
  • 14,026 posts
  • Location:Not Trump-Land

Posted 24 April 2017 - 11:23 AM

RAR does better compression than ZIP files too.


Having trouble with A [?]OS11.1?

 

2018 FIFA World Cup Russia - Australia in but Italy, Chile, Netherlands, USA = FAIL.


#10 @~thehung

@~thehung

    Guru

  • Hero
  • 8,638 posts

Posted 24 April 2017 - 03:41 PM

hmm... maybe i will have to revisit RAR. 

 

i dropped a passworded RAR on a server for someone just the other day — because it wasnt sensitive enough to warrant the PITA that is PGP — but i didnt feel comfortable with it.  why?  because not too long ago there were utils around that could easily bruteforce RAR passwords.  i just got used to thinking of it as flimsy.


no pung intended

#11 Rybags

Rybags

    Immortal

  • Super Hero
  • 35,058 posts

Posted 24 April 2017 - 03:46 PM

Off-topic a bit... WinZip has become a joke. Multi-file archiving is handled badly in comparison to WinRAR. The later versions of WinZip take ages just to open the interface, even though it has an always-running background service to supposedly speed it up. And it installs its stupid file-extension helper without even asking, and it's way worse than the default one. And the 1-click unzip is an utter joke.

 

WinRAR is now much better. It used to lag behind in the user interface, but not by a lot. And the drop-down of recent target folders in the right-click/context-launched UI is great. WinZip is now so bad that even the context-based interface is junk and takes several seconds to load.



#12 @~thehung

@~thehung

    Guru

  • Hero
  • 8,638 posts

Posted 24 April 2017 - 04:00 PM

so we are talking about a 1TB container file sitting on the 5x3TB RaidZ2 thing — amongst other stuff, right? like, what fraction of the total logical storage space is that — i have no idea?


just on this point. i am still trying to understand ZFS. how much actual storage space do you get out of your 15TB with this setup?

anyway, in the case of a container file, there's bound to be enough redundancy (parity) distributed across those 5 HDDs to rebuild any part of it. so thats already extremely robust!


and as far as checksums are concerned, my instinct is to assume they wouldnt work in the traditional way.

this quote from here suggests something like what i imagine:

"The recordsize is one of those properties of a given ZFS filesystem instance. ZFS files smaller than the recordsize are stored using a single filesystem block (FSB) of variable length in multiple of a disk sector (512 Bytes). Larger files are stored using multiple FSB, each of recordsize bytes, with default value of 128K.

The FSB is the basic file unit managed by ZFS and to which a checksum is applied.
"

 

which suggests to me that multiple checksums would be protecting integrity across a very large file.
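a rough python sketch of what that implies for a huge container file — one independent checksum per 128K record, so one bad record is isolated (illustrative only; zfs actually defaults to fletcher4, sha256 just stands in here):

    # simulate zfs-style per-record checksums over a big container file
    import hashlib

    RECORDSIZE = 128 * 1024              # zfs default recordsize (128K)

    def record_checksums(path):
        sums = []
        with open(path, "rb") as f:
            while True:
                record = f.read(RECORDSIZE)
                if not record:
                    break
                sums.append(hashlib.sha256(record).hexdigest())
        return sums

    # a 1TB container carries roughly 8 million independent record checksums,
    # so a single bad record is detected (and repaired from parity) in isolation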


Edited by @~thehung, 24 April 2017 - 04:01 PM.

no pung intended

#13 Master_Scythe

Master_Scythe

    Titan

  • Hero
  • 20,177 posts
  • Location:QLD

Posted 25 April 2017 - 10:05 PM

hmm... maybe i will have to revisit RAR. 

 

i dropped a passworded RAR on a server for someone just the other day — because it wasnt sensitive enough to warrant the PITA that is PGP — but i didnt feel comfortable with it.  why?  because not too long ago there were utils around that could easily bruteforce RAR passwords.  i just got used to thinking of it as flimsy.

That's odd; there's an inherent design 'flaw' (which is actually on purpose, and a feature) in RAR, both in the old 4 and even more so in 5.

I'm forgetting the complicated details, but there's a complex 'initial step' to decrypting a RAR archive that takes something significant, like a literal real-world one second, per decryption attempt.

It's actually part of the container somehow (I'll have to look it up again). But this makes sense.

 

I used to use RAR brute-force tools back in high school, and they'd guess... slowly. About 2 passwords a second.

Whereas doing the same thing to a ZIP would do hundreds a second.

 

The delay is there on purpose to hinder brute force.
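That 'initial step' is key stretching: the password is fed through a deliberately expensive derivation before a guess can even be tested. A rough Python sketch of why that caps guessing speed (RAR5 reportedly uses PBKDF2 too, but the parameters here are made up for illustration):

    # Key stretching: make every password guess deliberately expensive.
    import hashlib, os, time

    salt = os.urandom(16)
    ITERATIONS = 1_000_000               # tuned so one derivation costs about a second of CPU

    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", b"password guess", salt, ITERATIONS)
    print(f"one guess took {time.perf_counter() - start:.2f}s")

    # An attacker pays that cost per guess: a couple of tries per second,
    # versus the hundreds per second a fast hash (old ZIP crypto) allows.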


 

so we are talking about a 1TB container file sitting on the 5x3TB RaidZ2 thing — amongst other stuff, right? like, what fraction of the total logical storage space is that — i have no idea?


just on this point. i am still trying to understand ZFS. how much actual storage space do you get out of your 15TB with this setup?

anyway, in the case of a container file, there's bound to be enough redundancy (parity) distributed across those 5 HDDs to rebuild any part of it. so thats already extremely robust!

 

That's a good point; that's actually pretty robust considering it's distributed... But with this new RAR discovery, that initial question becomes moot: RAR archives are now better in every way.

 

How much usable?

When I had 5x3TB, I had 9.8TB usable.

Now with 6x3TB I have 12.3TB usable.

 

You 'sacrifice two drives' in Z2, just like RAID6; however, due to the variable block size, it seems you get a little more than just 'losing two full drives'.
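The back-of-the-envelope maths, as a sketch (actual usable space shifts with recordsize, allocation padding, and whether the tool reports TB or TiB, which is why the reported figures above don't match the nominal numbers exactly):

    # Nominal RAIDZ2 capacity: two drives' worth of parity, the rest holds data.
    def raidz2_usable_tb(drives, size_tb):
        return (drives - 2) * size_tb

    print(raidz2_usable_tb(5, 3))        # 9 TB nominal (9.8TB reported above)
    print(raidz2_usable_tb(6, 3))        # 12 TB nominal (12.3TB reported above)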


Edited by Master_Scythe, 25 April 2017 - 10:05 PM.

Wherever you go in life, watch out for Scythe, the tackling IT support guy.

"I don't care what race you are, not one f*cking bit, if you want to be seen as a good people, you go in there and you f*ck up the people who (unofficially) represent you in a negative light!"


#14 @~thehung

@~thehung

    Guru

  • Hero
  • 8,638 posts

Posted 26 April 2017 - 03:44 PM

and what about the ZFS checksums?  it seems that shouldnt have been a concern.  

 

ive backed up large container files before and would always do an MD5 hash on the backup.  (btw i have to look up BLAKE2 now...thanks to MS keeping me up to date).   necessary, but never had one fail, even on very old media.  if i was backing up to a good RAID solution, there would be far less need.  however, one downside is there is always the chance that you are backing up undetected write errors already produced within the source container.  if, however, the source and backup is a volume mounted locally from a remote container on ZFS — that problem would be solved, and this would be just about perfect.  almost.  there would still be the problem of all eggs being in the same physical basket (ie. a single NAS box, not necessarily off-site). 

 

i suppose your RAR solution kind of deals with that.  but how are you using it? 

 

- are you making multiple archives of folders as you go and dumping them to the NAS?

- how does the archive thing work — to make a new backup of a folder do you just somehow update the old .rar?

- any issues with the cryptographic strength of re-using (i assume) the same password?


no pung intended

#15 Master_Scythe

Master_Scythe

    Titan

  • Hero
  • 20,177 posts
  • Location:QLD

Posted 26 April 2017 - 04:48 PM

and what about the ZFS checksums?  it seems that shouldnt have been a concern.  

 

i suppose your RAR solution kind of deals with that.  but how are you using it? 

 

- are you making multiple archives of folders as you go and dumping them to the NAS?

- how does the archive thing work — to make a new backup of a folder do you just somehow update the old .rar?

- any issues with the cryptographic strength of re-using (i assume) the same password?

 

First of all; https://blake2.net/

There's a decent wiki on it too.
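BLAKE2 has been in Python's standard hashlib since 3.6, for anyone who wants to play with it. A small sketch of checksumming a file the way an archiver checksums each member (RAR5 actually uses the BLAKE2sp variant; plain blake2b stands in here):

    # Per-file BLAKE2 checksum, like an archiver stores for each member file.
    import hashlib

    def blake2_file(path):
        h = hashlib.blake2b(digest_size=32)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1MB chunks
                h.update(chunk)
        return h.hexdigest()

    # Compare against the stored digest to spot a corrupted member.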

 

I'm no expert, but from my research, the fact that it's one big file is still a concern; if the hash gets damaged and it 'fixes' my file, it's ALL toast, warm buttery toast.

Well, that 'slice' of it is, which might be on a single drive in the zpool (it was pointed out above that each 'piece' would get its own checksum).

However, the 'protect archive' feature of RAR5 does fix that somewhat.

Space is NOT something I'm shy on yet, meaning I can take my 400GB archive, apply a 50% "protect this archive" setting, and end up with a 600GB file that can handle 50% of the container being corrupted and still be repairable.

In my experience, RAR (the old RAR4) was ALREADY pretty good at repairing damaged archives; I've had 3 or 4 repairs 'work' (better than 0, in the case of ZIPs), so this new 'protect' feature is golden.

 

I'm not making multiple archives; it's a hassle.

It's probably more ideal; however, if I'm going to actually USE the backup server, I need it to be 'easy'.

Besides, it's mainly MY data that I'm worried about; 'media' can usually be replaced. So my personal data is already in duplicate on my original PC.

 

The archive thing? That's a Windows feature :)

IIRC, the archive flag is automatically set on ALL files (go into Properties, you might have to click Advanced; there's a checkbox, "file is ready for archiving").

That flag is set again by Windows whenever the file or its timestamp is edited (try it with a Notepad file!).

WinRAR now allows you to "clear the archive flag" as it compresses, meaning only new or changed files will be compressed next time, as it ignores the 'non-archive' files.

I'm yet to try, but it seems I could just 'add them' to the RAR.

Or, I plan to do a 'full backup' bi-monthly and a 'differential' backup using the archive flag monthly, so those will be separate.

Messy if I need to restore, yep, but a copy is a copy, and I'm a single end user.
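For the curious, the flag is just one attribute bit on the file. A hedged Python/ctypes sketch of checking and clearing it on Windows, the way a backup tool would (the path is hypothetical):

    # Check and clear the Windows 'archive' attribute (Win32 API via ctypes).
    import ctypes

    FILE_ATTRIBUTE_ARCHIVE = 0x20
    kernel32 = ctypes.windll.kernel32

    path = r"C:\data\diary.txt"          # hypothetical file
    attrs = kernel32.GetFileAttributesW(path)
    if attrs & FILE_ATTRIBUTE_ARCHIVE:
        print("changed since last backup - include it")
        kernel32.SetFileAttributesW(path, attrs & ~FILE_ATTRIBUTE_ARCHIVE)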

 

Cryptographic strength over this whole topic seems spot on... RAR5 has gone to 256-bit AES and, unlike ZIP, can encrypt filenames.

I mean, my tinfoil hat is still a little ruffled by the idea that some secret agency might have a magic bullet for AES, but I'm no criminal mastermind or anything; it's purely a 'paranoia over nothing' scenario that lets me sleep at night.

To reveal to the world that AES is broken would be a big step just to steal my data and fabricate a crime, lol.

So even IF AES is breakable, my 'diary', family photos, and CD keys just aren't worth revealing that :P


Edited by Master_Scythe, 26 April 2017 - 04:55 PM.

Wherever you go in life, watch out for Scythe, the tackling IT support guy.

"I don't care what race you are, not one f*cking bit, if you want to be seen as a good people, you go in there and you f*ck up the people who (unofficially) represent you in a negative light!"


#16 @~thehung

@~thehung

    Guru

  • Hero
  • 8,638 posts

Posted 26 April 2017 - 09:15 PM

dude, cmon, of course i know about the archive bit. it was very useful back in the day with xcopy in DOS, but ive rarely used it in windows.

i just thought maybe WinRAR is now doing some more involved syncing thing.  seems like you can make this work for you if you keep things orderly.
 
i probably couldnt, because i would probably archive part of something on one day and the whole of it on another, and the flags might get all messed up. or maybe restoring an old file would cause problems, i dont know. i do know that when i use Synkron to back up stuff theres occasional collisions, and once in a while reasons why i need to not overwrite the older version of a file. but thats me.

 

partly for those reasons, and partly for no good reason, ive always avoided any kind of update feature in archiving tools.


no pung intended



