Creating multiple directories for users

I am using gatekeeper for access to pages on my server.
This is done by creating directories with an index file in each; the index file then directs whoever entered the password to a specific page.
I would like to be able to produce lots of directories, with either short random names or names assigned from, say, a database, since creating many of them manually is not practical.
Can someone tell me how to generate lots of directories on the fly?
It would be even better if users could create their own directory, but that's probably a separate question.
Thanks

If you have bash (shell) access on your server, you can execute a simple bash script to create directories with a file in each.
for f in foo/bar{00..50}; do mkdir -p "$f" && touch "$f/index.txt"; done
Replace:
foo/bar with your directory
50 with the number of directories
index.txt with the name of the file
If you want to additionally write text to each file, then do this instead:
for f in foo/bar{00..50}; do mkdir -p "$f" && printf "text\n goes\n here" > "$f/index.txt"; done
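Since the question also mentions short random names or names pulled from a database, here is a minimal sketch of both variants; the 8-character name length and the names.txt export file are assumptions:
for i in {1..50}; do
    # 8 random lowercase/digit characters; collisions are unlikely but possible
    name=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 8)
    mkdir -p "foo/$name" && touch "foo/$name/index.txt"
done
# Or, with names exported from a database into names.txt (one per line):
while IFS= read -r name; do
    mkdir -p "foo/$name" && touch "foo/$name/index.txt"
done < names.txt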

Related

Folderstructure with rsync in bash

I looked through the forum but didn't find a post which matches my problem. Maybe there is one, and you can help me out with it.
My problem is that I want to sync a folder with the command rsync -a -v. The point is that I have 5 different machines. On every machine there is a scratch folder I want to sync into the folder ~/work_dir/scratch_maschines, and inside the scratch_maschines folder there should be a folder for maschine_a, maschine_b and so on.
On the machines it is always the same path: /scratch/my_name. So when I use this command for the first two machines:
rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete sp02:/scratch/my_name ~/work_dir/scratch_maschine01; rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete maschine02:/scratch/my_name ~/work_dir/scratch_maschine02
I get folders scratch_maschine01 and scratch_maschine02 in my working directory, but inside these folders my data is not stored directly; there is first a folder named my_name, and that folder contains the data. So my question is: how can I use the rsync command to get the files from the scratch directories straight into the folder for each machine?
You might want to consider reformulating your commands similar to the following:
START=$(pwd)
EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')
{
  SOURCE="sp02:/scratch/my_name"
  REMOTE="${HOME}/work_dir/scratch_maschine01"
  # the trailing / on SOURCE tells rsync to copy its contents, not the directory itself
  rsync --recursive -v --delete "${EXCLUDES[@]}" "${SOURCE}/" "${REMOTE}/"
} > "${START}/job.log" 2> "${START}/job.err"
The key elements there are:
the --recursive option, which tells rsync to include all content and subdirectories of the SOURCE directory;
the / behind ${SOURCE}, which tells rsync to copy the contents of the SOURCE directory but not the directory itself (illustrated just below);
the / behind ${REMOTE}, which tells rsync to deposit the content into that directory and to expect it to already exist, failing specifically if it does not; this ensures that the remote side doesn't fall back to a default working directory and deposit files elsewhere than expected.
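To see the source trailing-slash difference at a glance (hypothetical src and dest directories):
rsync -a src  dest/    # produces dest/src/... (the directory itself is copied)
rsync -a src/ dest/    # produces dest/...     (only the contents are copied)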
The above approach lends itself to a function form that could be placed into a loop with pre-attempt condition checks, with the per-destination variable assignments grouped in a case statement; a sketch of that form follows.
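A minimal sketch of that loop-plus-case form; the machine names and paths are assumptions carried over from the question:
#!/bin/bash
EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')

sync_scratch() {
    local machine="$1" source remote
    case "${machine}" in
        maschine01) source="sp02:/scratch/my_name" ;;
        maschine02) source="maschine02:/scratch/my_name" ;;
        *) echo "unknown machine: ${machine}" >&2; return 1 ;;
    esac
    remote="${HOME}/work_dir/scratch_${machine}"
    mkdir -p "${remote}"    # pre-attempt condition check: the destination must exist
    rsync --recursive -v --delete "${EXCLUDES[@]}" "${source}/" "${remote}/"
}

for m in maschine01 maschine02; do
    sync_scratch "${m}"
done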
Using such an approach, with meaningful labels for the variables, gives you a kind of implicit documentation, making the code more meaningful to someone unfamiliar with it, as well as a refresher for yourself after a long period of not working with the code.
I try to avoid ~ because I prefer to always enclose variable definitions in double quotes (inside which ~ is not expanded), to avoid issues that might arise from paths that include unexpected characters or spaces. That way, you can be sure your defined paths are correctly interpreted by the commands in your scripts.
Lastly, I prefer to use the long form of the rsync options (and of almost every other command) so that I don't have to refer to the manual every time to translate the single-character options when trying to understand what is coded, should the need arise to troubleshoot unexpected errors (I have always had a poor memory).
In my own backup command, the only reason the
${PathMirror}${dirC}/
part is not encapsulated in single quotes within the double quotes for COM is that I know those variables all evaluate to non-complex strings which cannot be misinterpreted.

Compare two directory trees

I have a btrfs filesystem consisting of several hard drives, on which about 11 TB of data is stored. My backup is a NAS which exports one path via NFS. The path is mounted on the machine with the btrfs filesystem, and rsync is called to keep the NFS export synced with the main filesystem. I call rsync with one -v and send the results of each run to my email account to be sure everything is synchronized correctly. Now, by pure chance, I found out that some directories were not synchronized correctly: the directories existed on the NAS, but they were empty. It is most likely not a permissions issue, since rsync is run as root. So it seems that in my situation rsync is not entirely trustworthy, and I would like to compare the two directory trees to see if there are any files missing on the NAS, and/or files which no longer exist on the btrfs filesystem and which should have been deleted by rsync (I use the --delete option).
I am therefore looking for a program or a script which can help me check whether rsync is running correctly. I don't need anything complicated like checksums; all I want to know is whether the NAS contains all the files in the btrfs filesystem.
Any suggestions where to start looking?
Yours, Stefan
Run the following commands to list all files; running find from inside each root keeps the two lists comparable:
( cd /path/to/fs && find . -type f | sort ) > filesystem.txt
( cd /path/to/nfs && find . -type f | sort ) > nfs.txt
Then compare the lists:
diff -u filesystem.txt nfs.txt
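If you only want a one-sided report, comm(1) on the same sorted lists shows just the differences:
comm -23 filesystem.txt nfs.txt    # present on the btrfs filesystem but missing on the NAS
comm -13 filesystem.txt nfs.txt    # present on the NAS but gone from the btrfs filesystem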

How to delete Excel files from a Unix machine?

I have to delete a number of files with names like "test excel-27-03-2016.xls" from a directory on a Unix machine. Can you please suggest how? I tried using the command
rm -f test excel-27-03-2016.xls
but it is not deleting the file.
Does the name of the file contain a space? It seems so.
If this is the case, rm -f "test excel-27-03-2016.xls" (note the double quotes around the file name) ought to do it.
Running rm -f test excel-27-03-2016.xls means trying to erase two files, one named test and the other excel-27-03-2016.xls.
So if 'test excel-27-03-2016.xls' is one filename, you have to escape the space in the rm command.
rm test\ excel-27-03-2016.xls
or
rm 'test excel-27-03-2016.xls'
otherwise rm will think 'test' and 'excel-27-03-2016.xls' are two different files.
(Also you shouldn't need to use -f.)
For a single file, if the file name contains spaces, you have to protect those spaces. By default, the shell splits file names (arguments in general) at spaces. So, enclose the name in double quotes or single quotes:
rm -f "test excel-27-03-2016.xls"
or use a backslash if you prefer (but I don't prefer backslashes; I normally use quotes):
rm -f test\ excel-27-03-2016.xls
When a delete operation doesn't work, the -f option to rm becomes your enemy; it suppresses the error messages that rm would otherwise give. For example, without the -f, you might see:
$ rm test excel-27-03-2016.xls
rm: test: No such file or directory
rm: excel-27-03-2016.xls: No such file or directory
$
That tells you that rm was given two names, not one as you intended.
From a comment:
I have 20-30 files; do I have to give rm 'test excel-27-03-2016.xls' each time and provide "Yes" permission to delete each file?
Time to learn wild-cards. First thing to learn — Be Careful! Do not destroy files carelessly.
Run a command such as:
ls -ld *.xls
Does that list the files you want deleted — all the files you want deleted and nothing but the files you want deleted? If it doesn't contain any extra file names (and no directory names), then you can run:
rm -f *.xls
If it doesn't contain all the file names you need deleted, but it does contain only names that you need deleted, then run the rm to reduce the size of the problem, then devise an alternative pattern to delete the others:
ls -ld *.xlsx # Semi-plausible
If it contains too many names, you have a couple of options. One is to use rm interactively:
rm -i *.xls
and respond yes to those that should be deleted and no to those that should be kept. Another is to work out a more precise wildcard, perhaps *-27-03-2016.xls.
When using wild-cards, the shell keeps file names as single arguments, so the fact that the generated names have spaces in them isn't a problem. Be aware that many shell techniques, such as capturing that list of file names in a variable, do not preserve the spaces properly — a cause of much confusion.
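A small illustration of that point (hypothetical *.xls files, bash syntax):
files=(*.xls)                  # an array keeps each file name as a single element
printf '%s\n' "${files[@]}"    # list them safely, one per line
rm -i -- "${files[@]}"         # the quoted expansion preserves any spaces
names=$(ls *.xls)              # DON'T: a scalar capture like this loses the spaces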
And, with any mass file removal, be very careful. The Unix system will not stop you doing immense damage to your files. It will take you at your word — if you say 'remove everything', it will try to do so.
From another comment:
I have taken root access so I will have all permissions.
Don't run as root when you have problems working out what you are doing. Running as root means that any mistake has the potential to be dramatically more devastating than if you run as yourself.
If you are running as root, the -f option to rm really isn't needed (unless someone has attempted to protect you by creating an alias for the rm command).
When you're root, the system does what you tell it to do. root: Remove the kernel. system: Yes, sir! Right away, sir! root: Remove the complete root file system. system: Yes, sir! Right away, sir!
Be very, very careful when running as root. It is a bad idea to experiment when running as root. It is very important to know exactly what you plan to do as root, to gain root privileges and do what you plan to do, and then lose the root privileges as soon as possible. Use sudo (or su) to temporarily gain root privileges.
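For instance, rather than working in a root shell, a single elevated command is usually enough (a sketch, assuming sudo is configured):
sudo rm -i -- *.xls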

mkdir's "-p" option

So this doesn't seem like a terribly complicated question I have, but it's one I can't find the answer to. I'm confused about what the -p option does in Unix. I used it for a lab assignment while creating a subdirectory and then another subdirectory within that one. It looked like this:
mkdir -p cmps012m/lab1
This is in a private directory with normal rights (rlidwka). Oh, and would someone mind giving a little explanation of what rlidwka means? I'm not a total noob to Unix, but I'm not really familiar with what this means. Hopefully that's not too vague of a question.
The man page is the best source of information you can find... and it is at your fingertips: man mkdir yields this about the -p switch:
-p, --parents
no error if existing, make parent directories as needed
Use case example: assume I want to create the directories hello/goodbye but neither exists:
$ mkdir hello/goodbye
mkdir: cannot create directory 'hello/goodbye': No such file or directory
$ mkdir -p hello/goodbye
$
-p created both hello and goodbye.
This means that the command will create all the directories necessary to fulfill your request, without returning an error if a directory already exists.
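The "no error if existing" half is easy to see as well:
$ mkdir hello
$ mkdir hello
mkdir: cannot create directory 'hello': File exists
$ mkdir -p hello
$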
About rlidwka, Google has a very good memory for acronyms :). My search returned this for example: http://www.cs.cmu.edu/~help/afs/afs_acls.html
Directory permissions
l (lookup)
Allows one to list the contents of a directory. It does not allow the reading of files.
i (insert)
Allows one to create new files in a directory or copy new files to a directory.
d (delete)
Allows one to remove files and sub-directories from a directory.
a (administer)
Allows one to change a directory's ACL. The owner of a directory can always change the ACL of a directory that s/he owns, along with the ACLs of any subdirectories in that directory.
File permissions
r (read)
Allows one to read the contents of files in the directory.
w (write)
Allows one to modify the contents of files in a directory and use chmod on them.
k (lock)
Allows programs to lock files in a directory.
Hence rlidwka means: All permissions on.
It's worth mentioning, as @KeithThompson pointed out in the comments, that not all Unix systems support ACLs, so the rlidwka concept may not apply here.
-p (or --parents) is used when you are creating a directory tree top-down: it creates the parent directory, then the child, and so on, for any that do not already exist.
-p, --parents
no error if existing, make parent directories as needed
As for rlidwka, it means full or administrative access. I found it here: https://itservices.stanford.edu/service/afs/intro/permissions/unix.
mkdir [-switch] foldername
-p is an optional switch. It will create the subfolder along with the parent folder, even if the parent folder doesn't exist.
From the man page:
-p, --parents no error if existing, make parent directories as needed
Example:
mkdir -p storage/framework/{sessions,views,cache}
This will create the subfolders sessions, views, and cache inside the framework folder, regardless of whether framework existed beforehand.
PATH: Answered long ago; however, it may be more helpful to think of -p as "path" (easier to remember), since it causes mkdir to create every part of the path that isn't already there.
mkdir -p /usr/bin/comm/diff/er/fence
If /usr/bin/comm already exists, it acts like:
mkdir /usr/bin/comm/diff
mkdir /usr/bin/comm/diff/er
mkdir /usr/bin/comm/diff/er/fence
As you can see, it saves you a bit of typing, and thinking, since you don't have to figure out what's already there and what isn't.
Note that -p is an argument to the mkdir command specifically, not the whole of Unix. Every command can have whatever arguments it needs.
In this case it means "parents", meaning mkdir will create a directory and any parents that don't already exist.

Add last n lines of files to tar/zip

I need to regularly send a collection of log files that can grow quite large, so I would like to send only the last n lines of each of the files.
for example:
/usr/local/data_store1/file.txt (500 lines)
/usr/local/data_store2/file.txt (800 lines)
Given a file with a list of needed files named files.txt, I would like to create an archive (tar or zip) with the last 100 lines of each of those files.
I can do this by creating a separate directory structure with the tail-ed files, but that seems like a waste of resources when there's probably some piping magic that can accomplish it. The full directory structure must also be preserved, since files can have the same names in different directories.
I would like the solution to be a shell script if possible, but perl (without added modules) is also acceptable (this is for Solaris machines that don't have ruby/python/etc.. installed on them.)
You could try
tail -n 10 your_file.txt | while IFS= read -r line; do zip /tmp/a.zip "$line"; done
where a.zip is the zip file and 10 is n, or
tail -n 10 your_file.txt | xargs tar -czvf test.tar.gz --
for a tar.gz.
You are focusing on a specific implementation instead of looking at the bigger picture.
If the final goal is to have an exact copy of the files on the target machine while minimizing the amount of data transferred, what you should use is rsync, which automatically sends only the parts of the files that have changed, and can also compress while sending and decompress while receiving.
Running rsync doesn't need any more daemons on the target machine than the standard sshd, and to set up automatic transfers without passwords you just need to use public key authentication.
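A minimal sketch of that approach (the receiving host and paths are hypothetical, and key-based ssh authentication is assumed to be set up):
rsync -az /usr/local/data_store1/ user@receiver:/backup/data_store1/
rsync -az /usr/local/data_store2/ user@receiver:/backup/data_store2/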
There is no piping magic for that; you will have to create the folder structure you want and zip that.
mkdir tmp
for i in /usr/local/*/file.txt; do
    mkdir -p "$(dirname "tmp/${i:1}")"    # ${i:1} strips the leading / so the tree lands under tmp/
    tail -n 100 "$i" > "tmp/${i:1}"
done
zip -r zipfile tmp/*
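Since the question supplies the list in files.txt, here is a variant of the same idea that reads from it instead of a glob; it assumes one absolute path per line:
mkdir -p tmp
while IFS= read -r f; do
    rel="${f#/}"                        # strip the leading slash
    mkdir -p "tmp/$(dirname "$rel")"
    tail -n 100 "$f" > "tmp/$rel"
done < files.txt
tar -cf lastlines.tar -C tmp .          # or: zip -r zipfile tmp/*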
Use logrotate.
Have a look inside /etc/logrotate.d for examples.
Why not put your log files in SCM?
Your receiver creates a repository on their machine, from which they retrieve the files by checking them out.
You send the files just by committing them; only the diff will be transmitted.
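A rough sketch of that idea with git (the paths and host name are hypothetical):
# on the sender: one-time setup, then commit after each update
cd /usr/local/data_store1
git init && git add file.txt && git commit -m "log snapshot"
# on the receiver: clone once, then pull to fetch only the new deltas
git clone sender:/usr/local/data_store1 data_store1
cd data_store1 && git pull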
