Unix - Changing Permissions - Files turned green

I used this command on my folder in unix:
chmod -R go-rwx *
in order to change permissions for group and others.
Doing this, many files turned green, even simple data files.
Why did this happen? What does it mean?
Is it going to affect my files in a bad way?
They seem to work right now, but I'm concerned about their general functionality.
Thanks!

It is very unlikely that the command you mentioned would cause ls to print your files in green. When ls colors are enabled, executable files are printed in light green by default. Since chmod -R go-rwx only removes permissions, it cannot have caused any files to be marked as executable, and hence won't have made ls print them in green.
Instead, I believe the cause is a different command you must have entered, which accidentally marked all those files as executable. This is actually pretty common. Here is the typical scenario: you want to make a directory and all its subdirectories readable and enterable for all users, so you do chmod -R a+rx top_directory. This works, but as a side effect you have also set the executable flag on all the normal files in those directories, which makes ls print them in green if colors are enabled. It has happened to me several times. You can avoid this by doing chmod -R a+rX top_directory instead: the capital X only sets the executable bit on directories (and on files that are already executable).
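For example, on a tree whose data files are not meant to be executable, the two spellings behave quite differently (a minimal sketch):
chmod -R a+rx top_directory   # sets the execute bit on every file and directory
chmod -R a+rX top_directory   # capital X: execute bit only on directories and already-executable files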
To make your files stop being green, you must clear those executable bits. If none of the files in these directories are actually supposed to be executable, this is simple:
$ chmod -R a-x top_directory
$ chmod -R u+X top_directory
This will remove the executable flag for all files and directories, and then add it back for directories only (for the current user). But if some of the files are actually supposed to be executable, you will have to go through them and fix things manually, which can be tedious.
Having some files incorrectly marked as being executable is not a big problem. They will still function normally. It's just a bit messy, and they may show up in command tab completion if the current directory (.) is in your $PATH. So you can safely ignore this issue.

That is an ls feature:
--color[=WHEN]
colorize the output. WHEN defaults to 'always' or can be 'never' or 'auto'. More info below
Using color to distinguish file types is disabled both by default and with --color=never. With --color=auto, ls emits color codes only when standard output is connected to a terminal. The LS_COLORS environment variable can change the settings. Use the dircolors command to set it.
You can try with ls --color=never and you won't see the colors anymore.
You can see your color configuration with dircolors -p.
This is the line where the configuration for executable files resides:
# This is for files with execute permission:
EXEC 01;32
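If you just want to pick that entry out of the full listing, something like this works (assuming GNU coreutils):
dircolors -p | grep -B1 EXEC   # show the EXEC entry and the comment line above it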
That's just to help you identify file types, so it's not affecting your files in any bad way.

Related

Folder structure with rsync in bash

I looked through the forum but didn't find an article that matches my problem. Maybe there is one, and you can help me out with it.
My problem is that I want to sync a folder with the command rsync -a -v. The point is that I have 5 different machines. On every machine there is a scratch folder that I want to sync into the folder ~/work_dir/scratch_maschines, and inside the scratch_maschines folder there should be a folder for maschine_a, maschine_b and so on.
On the machines it is always the same path: /scratch/my_name. So when I use this command for the first two machines:
rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete sp02:/scratch/my_name ~/work_dir/scratch_maschine01; rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete maschine02:/scratch/my_name ~/work_dir/scratch_maschine02
I get folders scratch_maschine01 and scratch_maschine02 in my working directory, but inside these folders my data is not placed directly; there is first a folder called my_name, and that folder contains the data. So my question is: how can I use the rsync command so that the files from the scratch directories go straight into the folder for each machine?
You might want to consider reformulating your commands similar to the following:
START=$(pwd)
# keep the exclude patterns in an array so the quoting survives expansion
EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')
{
    SOURCE="sp02:/scratch/my_name"
    REMOTE="${HOME}/work_dir/scratch_maschine01"
    # trailing slash on SOURCE: copy its contents, not the my_name directory itself
    rsync --recursive -v --delete "${EXCLUDES[@]}" "${SOURCE}/" "${REMOTE}/"
} > "${START}/job.log" 2> "${START}/job.err"
The key elements there are:
the --recursive option, which tells rsync to include all content and subdirectories of the SOURCE directory;
the / behind ${SOURCE}, which tells rsync to transfer the contents of the SOURCE directory rather than the directory itself (as shown in the command below);
the / behind ${REMOTE}, which makes explicit that REMOTE is a directory into which the content is deposited, so files don't end up somewhere other than where you expect.
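In other words, the smallest change to your original command is probably just a trailing slash on the remote source (shown here for the first machine only):
rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete sp02:/scratch/my_name/ ~/work_dir/scratch_maschine01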
The above approach lends itself to a function form that can be placed in a loop with pre-attempt condition checks, with the variable assignments for each destination grouped together (for example in a case statement); see the sketch below.
Using such an approach with meaningful variable names provides a kind of implicit documentation, making the code clearer to someone unfamiliar with it, and serving as a refresher for yourself after a long period of not working with the code.
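As a rough sketch of that function-plus-loop idea (the host names are placeholders; adjust them to your own machines):
#!/usr/bin/env bash
# Sketch only: sync each machine's scratch directory into its own local folder.
EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')

sync_scratch() {
    local MACHINE="$1"
    local SOURCE="${MACHINE}:/scratch/my_name/"                   # trailing slash: contents only
    local TARGET="${HOME}/work_dir/scratch_maschines/${MACHINE}"  # one folder per machine
    mkdir -p "${TARGET}"
    rsync --archive --verbose --delete "${EXCLUDES[@]}" "${SOURCE}" "${TARGET}/"
}

for MACHINE in sp02 maschine02; do    # placeholder host names
    sync_scratch "${MACHINE}"
done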
I try to avoid "~" because I prefer to always enclose variable definitions in double quotes, to avoid issues with paths that contain unexpected characters or spaces. That way you can be sure your paths are interpreted correctly by commands in scripts.
Lastly, I prefer the long form of the rsync options (and of almost every other command) so that I don't have to consult the manual to translate single-character options when reading or troubleshooting the code later (I have always had a poor memory).
My own backup command follows the same pattern. The only reason the
${PathMirror}${dirC}/
part is not wrapped in single quotes inside the double quotes for COM is that I know those variables all evaluate to simple strings which cannot be misinterpreted.

Does chmod modify permissions of FILE... arguments in the order specified?

Given an invocation of chmod with multiple FILE arguments,
$ chmod 0xxx FILE-1 FILE-2 FILE-3 ...
is there a predictable order of processing them? This may matter when FILE-k and FILE-j are related, e.g. when one is a subdirectory of the other. Say FILE-1 is some directory d1 and FILE-2 is the subdirectory d1/d2, i.e. the second argument names a subdirectory of the first argument:
$ chmod 0000 d1 d1/d2
chmod: cannot access `d1/d2': Permission denied
O.K., this is what I had expected: reading from left to right reflects the order of processing the FILE... arguments; d1's permissions are cleared first and, therefore, chmod cannot then gain access to d1/d2. So the following invocation also works as expected:
$ chmod 0000 d1/d2 d1
This clears the permissions of both directories. But is this order dependence guaranteed? More generally, does POSIX say something about the matter? And does the -R option affect the reasoning in some way, as regards predictability across Unix systems?
It doesn't seem to be specified anywhere.
However, the Application Usage section of the POSIX manual page for chmod does describe the recursive behaviour of the command.
After some experimentation, I obtained the same result as yours on my x86_64 GNU/Linux machine running Ubuntu 14.04.5 LTS.
It seems to work out the permissions from left to right and stops when it cannot change permission.
If you are trying to revoke permissions, then it is better to have the parent directory last. (i.e. leaf to root in the file hierarchy)
If you are trying to grant permissions, then it is better to have the parent directory first. (i.e. root to leaf in the file hierarchy)
While it is not specified, implementations naturally behave the way you observed, but there is no guarantee. Beware: POSIX (IEEE Std 1003.1-2008, 2016 Edition) says in the chmod command manual, under Application Usage:
Some implementations of the chmod utility change the mode of a
directory before the files in the directory when performing a
recursive (-R option) change; others change the directory mode after
the files in the directory. If an application tries to remove read or
search permission for a file hierarchy, the removal attempt fails if
the directory is changed first; on the other hand, trying to re-enable
permissions to a restricted hierarchy fails if directories are changed
last. Users should not try to make a hierarchy inaccessible to
themselves.
Thus the arguments may be processed in any order; if you need a particular order, use separate commands to enforce it.
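For example, to be sure the subdirectory is handled before its parent, issue two separate invocations (using the directories from the question):
chmod 0000 d1/d2   # processed first, while d1 is still searchable
chmod 0000 d1      # parent last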

Grep command stopped working

Suddenly the grep command stopped working. When I run ls -l ~/grep, it shows one file in my home directory, but this file has been present for ages. If I run which grep, it points to /bin/grep, and running /bin/grep directly works fine. Can anyone please suggest what is going on?
Thanks,
Regards,
Shiv
You can delete the zero-byte file in your home directory. It's not doing anything. (I don't know how it got there.) The problem is that the first entry in PATH, ".", points to whatever directory you're in. So when you're in your home directory, the shell (bash, I assume) looks for grep in the current directory, and finds the file that's there, which can't do anything.
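Concretely, removing the stray file is just:
rm ~/grep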
I consider it a bad idea to have "." in your path. It's convenient, and natural if you're coming from the Windows world, but it means that what gets executed can change depending on what directory you're in (as you have now seen). It also means that if you're on a multiuser system, someone can put an executable in one of their directories, and then when you cd into their directory, all of a sudden you're executing their code, which might not be what you want, and could be dangerous.
Instead, remove ".:" (dot colon) from your PATH. When you need to run a script in the current directory, prefix its name with "./" to execute it. "/bin" and "/usr/bin" should usually be near the front of the list; some people prefer to put "/usr/local/bin" at the front, or something else.
You can change your PATH by editing .profile or .bash_profile or .bashrc. It depends on how you have your shell set up. Be careful to separate each directory path in PATH with one ":" character.
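For illustration only (your own list of directories will differ), a line like this in .profile or .bashrc gives a PATH without the current directory:
export PATH="/usr/local/bin:/usr/bin:/bin:$HOME/bin"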

How to delete Excel files from a Unix machine?

I have to delete a number of files with names like "test excel-27-03-2016.xls" from a directory on a Unix machine. Can you please suggest how? I tried using the command
rm -f test excel-27-03-2016.xls
but it is not deleting the file.
Does the name of the file contain a space? It seems so.
If this is the case, rm -f "test excel-27-03-2016.xls" (note double quotes around the file name) ought to do it.
Running rm -f test excel-27-03-2016.xls means trying to erase two files, one named test and the other excel-27-03-2016.xls.
So if 'test excel-27-03-2016.xls' is one filename, you have to escape the space in the rm command.
rm test\ excel-27-03-2016.xls
or
rm 'test excel-27-03-2016.xls'
otherwise rm will think 'test' and 'excel-27-03-2016.xls' are two different files.
(Also you shouldn't need to use -f.)
For a single file, if the file name contains spaces, you have to protect those spaces. By default, the shell splits file names (arguments in general) at spaces. So, enclose the name in double quotes or single quotes:
rm -f "test excel-27-03-2016.xls"
or use a backslash if you prefer (but I don't prefer backslashes; I normally use quotes):
rm -f test\ excel-27-03-2016.xls
When a delete operation doesn't work, the -f option to rm becomes your enemy; it suppresses the error messages that rm would otherwise give. For example, without the -f, you might see:
$ rm test excel-27-03-2016.xls
rm: test: No such file or directory
rm: excel-27-03-2016.xls: No such file or directory
$
That tells you that rm was given two names, not one as you intended.
From a comment:
I have 20-30 files; do I have to run rm 'test excel-27-03-2016.xls' each time and answer "Yes" to delete each file?
Time to learn wild-cards. First thing to learn — Be Careful! Do not destroy files carelessly.
Run a command such as:
ls -ld *.xls
Does that list the files you want deleted — all the files you want deleted and nothing but the files you want deleted? If it doesn't contain any extra file names (and no directory names), then you can run:
rm -f *.xls
If it doesn't contain all the file names you need deleted, but it does contain only names that you need deleted, then run the rm to reduce the size of the problem, then devise an alternative pattern to delete the others:
ls -ld *.xlsx # Semi-plausible
If it contains too many names, you have a couple of options. One is to use rm interactively:
rm -i *.xls
and respond yes to those that should be deleted and no to those that should be kept. Another is to work out a more precise wildcard, perhaps *-27-03-2016.xls.
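For instance, with that more precise pattern (again, check with ls first):
ls -ld *-27-03-2016.xls   # verify exactly what matches
rm *-27-03-2016.xls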
When using wild-cards, the shell keeps file names as single arguments, so the fact that the generated names have spaces in them isn't a problem. Be aware that many shell techniques, such as capturing that list of file names in a variable, do not preserve the spaces properly — a cause of much confusion.
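If you do need to hold such a list in a shell variable, a bash array keeps each name intact, spaces and all (a sketch, assuming bash):
files=( *.xls )            # each matching name stays a single element
rm -i -- "${files[@]}"     # quoting the expansion preserves the spaces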
And, with any mass file removal, be very careful. The Unix system will not stop you from doing immense damage to your files. It will take you at your word: if you say 'remove everything', it will try to do so.
From another comment:
I have taken root access so I will have all permissions.
Don't run as root when you have problems working out what you are doing. Running as root means that any mistake has the potential to be dramatically more devastating than if you run as yourself.
If you are running as root, the -f option to rm really isn't needed (unless someone has attempted to protect you by creating an alias for the rm command).
When you're root, the system does what you tell it to do.
root: Remove the kernel.
system: Yes, sir! Right away, sir!
root: Remove the complete root file system.
system: Yes, sir! Right away, sir!
Be very, very careful when running as root. It is a bad idea to experiment when running as root. It is very important to know exactly what you plan to do as root, to gain root privileges and do what you plan to do, and then lose the root privileges as soon as possible. Use sudo (or su) to temporarily gain root privileges.

How to process directory first, then files and directories under it?

On my Linux system, I've got into a situation where there are no write/execute permissions on directories on a mounted drive. As a result, I can't get into a directory before I open its permissions up. This happens every time I mount that drive. The mounting operation is done by a tool under the hood, so I doubt I could modify the mount parameters to address this problem.
As a workaround, I am using this find command to modify permissions on directories. I use it repeatedly, since it reaches one more level of directories on each run.
find . -type d -print0 | xargs -0 -n 1 chmod a+wrx
I am sure there is a better way to do this. I wonder if there is a find option that processes a directory first and then its contents, i.e. the opposite of the -depth/-d option.
Any tips?
Try:
chmod +wrx /path/to/mounted/drive/*
Another possibility is to investigate the mount options available for that particular file system type (I'm guessing FAT/VFAT here, but it might be something else). Some file systems have mount options for overriding the default permissions in some form or other. That would also avoid having to change all the permissions, which might matter when you put that file system back wherever its original source is (is this a memory card from a camera, a USB stick, or ....?)
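For example, if the drive really is VFAT, mount options like these set permissive defaults at mount time (the device name and mount point here are hypothetical):
sudo mount -t vfat -o uid=$(id -u),gid=$(id -g),umask=000 /dev/sdX1 /mnt/usb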
Thanks to StarNamer at unix.stackexchange.com, here's something that worked great:
Try:
find . -type d -exec chmod a+rwx {} ';'
This will cause find to execute the chmod before it tries to read the directory rather than trying to generate a list and feed it to xargs.
From: https://unix.stackexchange.com/q/45907/22323
