Electr0

Learning How To Linux


Wow, thanks very much.

 

I've got to go through all this again with the downgrade, so ^ will help out.

 

Before, I just set the two folders on my RAID array (TVShows & Movies) to be owned by 'nobody', then turned on guest access and set writable to yes.

 

Probably not the best security model. But it makes it a lot easier for everyone at my place to browse the folders without having to enter a username and password every time.
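For anyone wanting to copy that, a minimal sketch of the setup would be something like the following (assuming the array is mounted under /mnt/raidarray and using Ubuntu's nobody:nogroup ownership; adjust paths to suit):

sudo chown -R nobody:nogroup '/mnt/raidarray/TV Shows' '/mnt/raidarray/Movies'

# smb.conf share for one of the folders
[TVShows]
path = /mnt/raidarray/TV Shows
guest ok = yes
writable = yes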


copypasta from here

 

Guest Access

 

As mentioned earlier, you can configure a share using guest ok = yes to allow access to guest users. This works only when using share-level security, which we will cover later in this chapter. When a user connects as a guest, authenticating with a username and password is unnecessary, but Samba still needs a way to map the connected client to a user on the local system. The guest account parameter can be used in the share to specify the Unix account that guest users should be assigned when connecting to the Samba server. The default value for this is set during compilation and is typically nobody, which works well with most Unix versions. However, on some systems the nobody account is not allowed to access some services (e.g., printing), and you might need to set the guest user to ftp or some other account instead.

 

If you wish to restrict access in a share only to guests—in other words, all clients connect as the guest account when accessing the share—you can use the guest only option in conjunction with the guest ok option, as shown in the following example:

 

[sales]

path = /home/sales

comment = Sedona Real Estate Sales Data

writable = yes

guest ok = yes

guest account = ftp

guest only = yes

Make sure you specify yes for both guest only and guest ok; otherwise, Samba will not use the guest account that you specify.


To compile code, you're best off installing the build-essential package; it includes gcc, make and many other common tools in one convenient meta-package.
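Installing it and checking the toolchain is there is just:

sudo apt-get install build-essential
gcc --version && make --version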

 

If you are looking for a remote web interface, I'm working on a Linux version of what FreeNAS does for BSD; there is a thread over in Unix, Linux & Open Source Software with the details. If there is anything specific you are after, drop in some suggestions and I can add them to my feature list.


Cheers SledgY, although I haven't had to compile much code of late.

 

I'm thinking of actually throwing in the towel with Zentyal. It's being a piece of shit. For some reason there is a problem with the zentyal-samba module - it's always stopped and I can't seem to get it to start.

 

I've done a bit more reading about Webmin and Ubuntu, and it seems that either the problem between the two only occurred with one version of Ubuntu (5, I think) or it has since been resolved. It seems like plenty of people run Webmin and Ubuntu together without any problems. So perhaps I'll do another clean wipe, go back to 13.04 and install Webmin instead. Third time lucky, hey?


If you are interested in giving CentOS a shot, I'd be happy to send over my current install/ management scripts.

 

They will pretty much download most of the stuff you'll need, including Samba and Webmin, and they let you set up ZFS if you choose. There is also a script that will get you up and running with Transmission.

 

If you are running Macs, they can handle Netatalk too, although that's optional. I wrote the scripts with the intention of having a very simple share-addition function that makes sure both Samba and Netatalk (if installed) are accommodated.

 

Let me know and I will zip them up and shoot you a link. I had posted them a while ago, but since I'm constantly tweaking them it didn't make sense to keep that up; there are a few gotchas when CentOS does a .x update, and a few things in the scripts occasionally need to be modified to accommodate that.


Thanks for the offer Tick.

Is there much benefit running CentOS over Ubuntu? What's the difference in a nutshell?


It depends on who you talk to I guess.

 

Most *nix guys who have invested heavily in a distro will praise the one they're most used to; they generally know it back to front and have weighed its strengths against the others, accepting a possible trade-off.

 

I find myself in the same basket (even though I'm a dabbler rather than a *nix guy). Red Hat was the first distro I learnt on, and for the most part, even as it's updated the backend stays much the same, which I like. Driver support for anything server-grade is also generally available for Red Hat, so CentOS gets it too.

 

I tried to standardise on Ubuntu a while ago but found it a battle when things changed, whereas CentOS seems to stay relatively the same. Of course, that isn't always the case.

 

I don't care about the GUI on a Linux server, although the one CentOS uses is more than enough - one of the scripts enables VNC access if needed. Ubuntu is pretty, but it's not what I need from a headless device.

 

I also like that for things usually outside the core of CentOS, someone has generally built a repo. ZFS, for example, used to be a pain when updating, as a kernel patch would take it offline; now that I install it from a repo, yum takes care of it. Same goes for Webmin - add the repo and updates come down when yum requests them.
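As a rough sketch of the Webmin half of that (the repo definition is the one from Webmin's own instructions; gpgcheck is turned off here just to keep the example short, you can import their key instead):

sudo tee /etc/yum.repos.d/webmin.repo <<'EOF'
[Webmin]
name=Webmin Distribution Neutral
baseurl=http://download.webmin.com/download/yum
enabled=1
gpgcheck=0
EOF
sudo yum install -y webmin

Once the repo file is in place, a normal yum update picks up new Webmin versions along with everything else.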

 

For the most part, the scripts are menu based and do the following.

 

By default, it downloads a bunch of stuff required by the other menus, adds a repo or two, and installs Webmin and Samba. It also nominates a sysadmin account, provides the option to put in an email address for system alerts, prompts whether you want to set up SSH access, and gives you the option to change the port.
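As a rough sketch of what the port change boils down to (2222 is just an example port, and the CentOS 6 style service command is assumed):

sudo sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sudo service sshd restart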

 

Disk management allows you to build a software RAID or a ZFS pool, amongst other things. It also lets you set mount points and format drives where needed.
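In rough terms, that comes down to the usual mdadm or zpool commands; the device names, array name and pool name below are placeholders, not what the script hard-codes:

# software RAID-5 across three disks, formatted and mounted
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raidarray
sudo mount /dev/md0 /mnt/raidarray

# or, with ZFS on Linux installed, a raidz pool instead
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd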

 

The Transmission menu installs it and allows you to set some basic parameters like access restrictions for the local LAN and the downloads directory ... It also gives you the web GUI for everything else.
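The underlying knobs are in transmission-daemon's settings.json (stop the daemon before editing it, or it rewrites the file on shutdown); the values here are only examples:

"download-dir": "/mnt/raidarray/Downloads",
"rpc-enabled": true,
"rpc-whitelist": "127.0.0.1,192.168.1.*",
"rpc-whitelist-enabled": true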

 

There is a Plex installer - pretty simple.

 

There is a netatalk installer too.

 

I have scripted a user manager menu which makes it simple to add, remove or modify a user account, and it takes Samba into consideration too.
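At its simplest, adding a user to both the system and Samba's password database is just this (the username is a placeholder):

sudo useradd -m alice
sudo passwd alice
sudo smbpasswd -a alice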

 

Share management is also scripted. It pretty much makes sure that everything is added correctly to both Samba and Netatalk.
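Roughly, per share that means adding matching entries in both places; the paths and names here are examples, and the Netatalk line assumes the 2.x AppleVolumes.default format:

# /etc/samba/smb.conf
[Shared]
path = /mnt/raidarray/Shared
writable = yes
valid users = alice

# /etc/netatalk/AppleVolumes.default
/mnt/raidarray/Shared "Shared" allow:alice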

 

VNC, SSH, rsync, even a basic PPTP script is there. Network management lets you drop DHCP and assign a static IP address. There is also a network bonding script I was playing with, as I have a switch capable of LAG, although you could play with the bonding mode on a standard switch too if you had additional NICs.
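For the static IP part, on CentOS that's basically writing an ifcfg file and bouncing the network service; all the addresses below are placeholders:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1

sudo service network restart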

 

I started scripting stuff as I kept forgetting how things worked. Having to script it either cemented it or gave me a reference to go back to. I haven't touched some things, like the RAID configuration, in ages, so there may be problems I'm not aware of. As of CentOS 6.4, what I use these days seems to work well.

 

Like I said, if you are keen, I'm happy to post them somewhere, with the caveat that there may be some bugs - although I'd be happy to know about them and fix them.

 

My build for the HP MicroServer has generally been to install CentOS onto one hard drive - usually the 250GB that comes with it - with ZFS on the other three drives. In two cases I have installed the primary drive in the CD-ROM bay and used four drives in a ZFS pool.

 

I generally run an rsync script which backs up the configs, files and home directories from the server either to another box (which is rotated) or to a USB drive, depending on where it is. On remote sites where there aren't huge amounts of data I use CrashPlan.
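A cut-down sketch of that sort of job (the host name and paths are placeholders):

# to a rotated backup box over SSH
sudo rsync -a --delete /etc /home /mnt/raidarray backupbox:/backups/mainserver/

# or to a locally attached USB drive
sudo rsync -a --delete /etc /home /mnt/raidarray /mnt/usbbackup/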


Wow.

 

That sounds like you have done an awful lot of work and put a lot of effort into it. Although I suppose it does make things easier in the long run - especially if you're deploying multiple servers.

 

I might give it one last shot with Ubuntu, since I now have a basic understanding of how that works, but if that fails, I may take you up on that offer.

 

Why do you have so many servers up and running, if you don't mind me asking?


I have one I use at home, plus two that I rotate as backup servers, mainly because the backup servers handle Time Machine backups from two Macs as well as backups from the main file/Plex server.

 

I set up a similar system for my brother.

 

I have about three clients using them as file servers/ Asterisk PBX systems.

 

I also have one as a backup NAS at another client site, and one running as redundant backup storage in a data centre for the same client.

 

Low powered and (so far) reliable boxes.

 

So it appears I can't count - that would be 11 running. The backup units are rotated so that backups are off site; while my intention was to do that weekly, it's more like every two months these days. I really need to get back onto that again.

 

That's cool if you want to stick to Ubuntu.

 

For samba, unless something has changed:

 

sudo apt-get install samba smbfs

 

After that, the stuff I posted earlier is all you really need.


After a bit of playing around (and a few typos) I managed to download the package using wget and extract it, but ran into a problem when I came to make the file. Apparently make wasn't installed, so I had to run

sudo apt-get install make

I again tried to make the file but was presented with "Error 127", which apparently meant that I didn't have the GCC compiler installed, which

sudo apt-get install gcc

soon fixed. I was then finally able to compile the installer, run it, set up the daemon and configure it - mostly by just following the aforementioned guides. I can now PuTTY into my server using a host name (e.g. xyz.no-ip.biz).
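For anyone following along, the whole sequence was roughly this (the download URL and version directory are from memory, so double-check them against No-IP's site):

wget http://www.no-ip.com/client/linux/noip-duc-linux.tar.gz
tar xzf noip-duc-linux.tar.gz
cd noip-2.1.9-1
make
sudo make install        # prompts for your No-IP account details
sudo /usr/local/bin/noip2  # start the daemon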

Rather than getting the oodles of packages for compiling (you end up with make, gcc, install, automake, glibc, kernel-headers, and truckloads more), just use this:

 

apt-get install build-essential


...that would be 11 running

Wow, talk about redundancy. That all must have cost a lot of $$$

 

Rather than getting the oodles of packages for compiling (you end up with make, gcc, install, automake, glibc, kernel-headers, and truckloads more), just use this:

apt-get install build-essential

Cool as, cheers mate.

 

I'm going to try and do the re-installation tonight if I get time. Otherwise it can be my little project over the weekend. I'll report back with how I end up going.

At least I don't have to reassemble the RAID array every time.


...that would be 11 running

Wow, talk about redundancy. That all must have cost a lot of $$$

 

 

lol - no.

 

Two sites have 3 boxes running - one main and two alternating as backups which get taken offsite.

 

The other ones are at different client sites.


Two sites have 3 boxes running - one main and two alternating as backups which get taken offsite.

 

The other ones are at different client sites.

Ah ok. I presume this is used for SOHO or business clients? As opposed to backing up TV Shows, Movies and Music?


For home, the scenario is a few people who want to share some storage; some areas where I like to have read-only guest access in case someone wants to mooch some files; Transmission with central access, in case I'm working from home and have to pause stuff that someone else put into the queue; and Plex to provide access to media players, tablets and phones. I also run FreePBX here.

 

For SOHO, it's usually a small business with minimal central storage requirements, often with a mix of Windows and Macs, who want common access. The PBX lets them present as bigger than they are, and they get cheaper calls via VoIP.

 

The other client has access to a managed IP VPN and some rack space in a colo. I put in a MicroServer because their current setup was mission-critical, and this one provided storage that wouldn't impact their main setup. I stuck another one in the colo and use it to back up the servers there, which in turn rsyncs down to the one at head office. They take photos of products at HO (VIC) and the guy who needs them is in Sydney. I plan on getting them to put what needs transferring onto the MicroServer and let it rsync up overnight, so he can download the photos the next day.

 

I love these little servers. Small footprint, small price tag, and so far (touch wood) really reliable and damn flexible. I generally up the RAM to 6 or 8GB and add drives, and with a standard install method that is mostly scripted, it's really quick to get one up and running.


Just thought I'd report back and give an update on how things are going.

 

So I ended up doing a complete reinstall over the weekend with Ubuntu 13.04. I updated all the packages (and Python dependencies) and installed Webmin.

 

OMFG.

 

It is the duck's nuts. Why did I not use Webmin in the first place? It's simple to use, intuitive, fast, and has all the configuration options you could want.
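In case it helps anyone going the same way, the usual manual route for Webmin on Ubuntu at the time was roughly this (the version number is just an example; the dependency list is from Webmin's own Debian instructions):

sudo apt-get install perl libnet-ssleay-perl openssl libauthen-pam-perl libpam-runtime libio-pty-perl apt-show-versions python
wget http://prdownloads.sourceforge.net/webadmin/webmin_1.630_all.deb
sudo dpkg -i webmin_1.630_all.deb
# then log in at https://<server>:10000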

 

I then mounted my RAID array and set up Samba and Deluge.
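If you're doing the same, reassembling an existing md array and making the mount permanent is roughly this (the device, mount point and filesystem are placeholders; blkid and mdadm will tell you the real ones):

sudo mdadm --assemble --scan
echo '/dev/md0  /mnt/raidarray  ext4  defaults  0  2' | sudo tee -a /etc/fstab
sudo mount -a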

 

My next major task was to transfer a heap of data from an external drive to my RAID array. If I had been using Windows I would have used TeraCopy, as that allows you to queue transfers. There is a program called Krusader, but that relies on a GUI. Instead I decided to write a script around the rsync command, which just runs multiple consecutive rsync commands. I also added a few extra bits to check the exit status of each rsync; at the end it reports which commands failed and which succeeded.

 

#!/bin/bash
clear
error=" "
ok=" "

#dir1
sudo rsync -v -r --progress '/media/external/TV Shows/ShowABC' '/mnt/raidarray/TV Shows'
if [ $? != 0 ]
then error+=" dir1"
else ok+=" dir1"
fi

#dir2
sudo rsync -v -r --progress '/media/external/TV Shows/ShowDEF' '/mnt/raidarray/TV Shows'
if [ $? != 0 ]
then error+=" dir2"
else ok+=" dir2"
fi

...

echo "* Copying Complete *"
echo " "
echo "FAILED:" $error
echo "OK:" $ok

I then ran each script using nohup, which allows the script to keep running in the background even after I log off.

 

sudo nohup ~/Scripts/copyexternal.sh &

Nohup appends its output to nohup.out, so I could check the status of the job by opening that file in nano.

