Suddenly the grep command stopped working. When I do ls -l ~/grep, it shows that there is a grep file in my home directory, but this file has been present for ages. If I run which grep, it points to /bin/grep, and running /bin/grep directly works fine. Can anyone please suggest what might be going on?
Thanks,
Regards,
Shiv
You can delete the zero-byte file in your home directory. It's not doing anything. (I don't know how it got there.) The problem is that the first entry in PATH, ".", points to whatever directory you're in. So when you're in your home directory, the shell (bash, I assume) looks for grep in the current directory, and finds the file that's there, which can't do anything.
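If you want to confirm that this is what's happening before deleting anything, a couple of quick checks (assuming bash) will show it:
type -a grep        # lists every 'grep' the shell can see, in PATH order
echo "$PATH"        # look for a "." entry, especially at the front
ls -l ~/grep        # the file in your home directory that is shadowing /bin/grep
hash -r             # afterwards, clear bash's cached command locations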
I consider it a bad idea to have "." in your path. It's convenient, and natural if you're coming from the Windows world, but it means that what gets executed can change depending on what directory you're in (as you have now seen). It also means that if you're on a multiuser system, someone can put an executable in one of their directories, and then when you cd into their directory, all of a sudden you're executing their code, which might not be what you want, and could be dangerous.
Instead, remove ".:" (dot colon) from your PATH. When you need to run a script in the current directory, prefix its name with "./" to execute it. "/bin" and "/usr/bin" should usually be at the front of the list; some people prefer to put "/usr/local/bin" at the front, or something else.
You can change your PATH by editing .profile or .bash_profile or .bashrc. It depends on how you have your shell set up. Be careful to separate each directory path in PATH with one ":" character.
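For example, the relevant line in ~/.bash_profile or ~/.bashrc might end up looking something like this (the exact list of directories is up to you; $HOME/bin is only an illustration):
# A PATH without ".": system directories first, then anything extra you need
export PATH="/bin:/usr/bin:/usr/local/bin:$HOME/bin"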
I searched the forum but didn't find a thread that matches my problem. Maybe there is one, and you can help me out with it.
My problem is that I want to sync a folder with the command rsync -a -v. The thing is, I have 5 different machines. On every machine there is a scratch folder that I want to sync into the folder ~/work_dir/scratch_maschines, and inside the /scratch_maschines folder there should be a folder for maschine_a, maschine_b, and so on.
On the machines it is always the same path: /scratch/my_name. So when I use this command for the first two machines:
rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete sp02:/scratch/my_name ~/work_dir/scratch_maschine01; rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete maschine02:/scratch/my_name ~/work_dir/scratch_maschine02
I get folders scratch_maschine01 and scratch_maschine02 in my working directory, but inside those folders my data is not there directly; there is first a my_name folder, and that folder contains the data. So my question is: how can I use the rsync command to get the files from the scratch directories straight into the folder for each machine?
You might want to consider reformulating your commands along the lines of the following:
START="$(pwd)"
EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')
{
  SOURCE="sp02:/scratch/my_name"
  REMOTE="${HOME}/work_dir/scratch_maschine01"
  rsync --recursive -v --delete "${EXCLUDES[@]}" "${SOURCE}/" "${REMOTE}/"
} > "${START}/job.log" 2> "${START}/job.err"
The key elements there are:
the --recursive option, which tells rsync to include all content and subdirectories of the SOURCE directory.
the / behind the ${SOURCE}, which tells rsync to transfer the contents of the SOURCE directory rather than the directory itself.
the / behind the ${REMOTE}, which makes it explicit that the destination is a directory into which the content is deposited; this helps ensure files don't end up somewhere other than expected.
The above approach lends itself to being wrapped in a function that can be placed in a loop with pre-attempt condition checks, with the variable assignments for each destination grouped in a complementary case statement.
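A rough sketch of that idea, reusing the EXCLUDES array from above (the host names, destination paths, and function name are placeholders taken from the question, not tested code):
sync_scratch() {
    local host="$1" dest
    # complementary case statement: one destination per machine
    case "${host}" in
        sp02)       dest="${HOME}/work_dir/scratch_maschine01" ;;
        maschine02) dest="${HOME}/work_dir/scratch_maschine02" ;;
        *)          echo "unknown host: ${host}" >&2; return 1 ;;
    esac
    mkdir -p "${dest}" || return 1      # pre-attempt condition check
    rsync --recursive --verbose --delete "${EXCLUDES[@]}" \
        "${host}:/scratch/my_name/" "${dest}/"
}
for host in sp02 maschine02; do
    sync_scratch "${host}"
done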
Using such an approach with meaningful variable names also gives you a kind of implicit documentation, making the code clearer to someone who is not familiar with it, and serving as a refresher for yourself after a long stretch of not working with it.
I try to avoid "~" because I prefer to always enclose variable definitions in double quotes, to avoid issues that can arise from paths containing unexpected characters or spaces (and ~ is not expanded inside double quotes, whereas ${HOME} is). That way you can be sure your paths are interpreted correctly by the commands in your scripts.
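As a small illustration of why the quoting matters, consider a made-up path containing a space:
DEST="${HOME}/work dir/scratch"     # hypothetical path with a space in it
mkdir -p "${DEST}"                  # quoted: one argument, creates "work dir/scratch"
# mkdir -p ${DEST}                  # unquoted: would try to create "work" and "dir/scratch"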
Lastly, I prefer to use the long form of the rsync options (and of almost every other command) so that I don't have to go back to the manual to translate single-character options when I need to understand the code later or troubleshoot unexpected errors (I have always had a poor memory).
My own backup command is as follows. The only reason the
${PathMirror}${dirC}/
is not encapsulated in single quotes within the double quotes for COM is that I know those variables all evaluate to simple strings which cannot be misinterpreted.
I am trying to use rsync to do backups. I have an include file called /etc/daily.rsync and it contains the following:
+ /home/demo
- *
Then I run the command below:
$ sudo rsync -acvv --delete --include-from=/etc/daily.rsync /mnt/offsite_backup/home/
sending incremental file list
delta-transmission disabled for local transfer or --whole-file
drwxrwxr-x 6 2021/02/22 14:09:13 .
total: matches=0 hash_hits=0 false_alarms=0 data=0
sent 52 bytes received 131 bytes 366.00 bytes/sec
total size is 0 speedup is 0.00
When I go look in the destination directory I see nothing. What I think is happening is that it is trying to rsync from the current directory, which by the way is empty. So this leads me to believe that it is not getting the data from the include file.
This command runs as expected:
sudo rsync -acvv --delete /home/demo /mnt/offsite_backup/home/
The different posts made many suggestions, and I have tried them. I am just stuck. Any thoughts would be very welcome.
I think you're misunderstanding what a filter file (like the one you specified with --include-from) does. It does not specify where to sync files from; it specifies which files within the source directory to sync.
You need to specify both the source and destination as part of the command line. In the command:
sudo rsync -acvv --delete --include-from=/etc/daily.rsync /mnt/offsite_backup/home/
You only specified one directory, /mnt/offsite_backup/home/, so rsync has assumed it's the source, and there is no destination. According to the rsync man page:
As a special case, if a single source arg is specified without a
destination, the files are listed in an output format similar to "ls -l".
So, basically, it's listing the contents of /mnt/offsite_backup/home/, and apparently that's empty.
The second command you gave specifies both the source and the destination, which is why it works correctly. If you want to add a filter file to it, be aware that the paths in the filter will be relative to the source. So if you used
sudo rsync -acvv --delete --include-from=/etc/daily.rsync /home/demo /mnt/offsite_backup/home/
...it's going to try to include the file/directory /home/demo/home/demo, which probably doesn't exist. Except it actually won't even get that far, because the - * line will exclude /home/demo/home, so even if it did exist, it and its contents would be excluded. You need to include the parent directories of anything you want to include in the sync operation. Again, from the man page:
The concept path exclusion is particularly important when using a
trailing '*' rule. For instance, this won't work:
+ /some/path/this-file-will-not-be-found
+ /file-is-included
- *
This fails because the parent directory "some" is excluded by the '*' rule, so rsync never visits any of the files in the "some" or
"some/path" directories. One solution is to ask for all directories in
the hierarchy to be included by using a single rule: "+ */" (put it
somewhere before the "- *" rule), and perhaps use the
--prune-empty-dirs option. Another solution is to add specific include rules for all the parent dirs that need to be visited. For instance,
this set of rules works fine:
+ /some/
+ /some/path/
+ /some/path/this-file-is-found
+ /file-also-included
- *
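Applied to your case, a filter file along these lines should work, assuming you give /home/ (with the trailing slash) as the source rather than /home/demo; the paths here are the ones from your original post:
+ /demo/
+ /demo/**
- *
and then run something like:
sudo rsync -acvv --delete --include-from=/etc/daily.rsync /home/ /mnt/offsite_backup/home/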
OK, so after walking away from the problem I realized that I never specified which directory I actually wanted to sync. The include file can't work from thin air. So the command is:
sudo rsync -acv --delete /home/ --include-from=/etc/weekly.rsync /mnt/offline_backup/home/
The include file had to change as well.
+ demo/***
+ truenorth/***
- *
To have it descend into the directory structure, I needed the ***. I hope this can help someone else out.
I recently attempted to move some files by running an -exec mv command with find (command shown below). When I did this, I mistyped the destination directory path (so the directory did not yet exist), and mv created what appears to be an executable instead of a directory?
When I run "Get Info" one image renders and the file size is about the correct size for an image, but hundreds of files were supposed to be copied. Have I lost this data for good? Is there any way to get macOS to recognize this "executable" as a directory?
This is the command I used:
find . -type f -name "*.JPG" -exec mv {} ../../DestinationFolderName \;
Here's an image showing a successful mv into an existing directory, and what happened when I put a path to a directory that did not yet exist.
Unfortunately "mv" to a name that doesn't exist is interpreted as a filename rather than a directory. So the OS has, one-by-one, copied your JPG file on top of each other. The resulting file is most likely whatever JPG happened to be the one it moved last (if you rename it to JPG extension you can check which one).
So, very unfortunately, you probably need to investigate a data recovery tool for MacOS quickly (and do so before you've done things that make more files on your disk, as much a possible). The "ghosts" of the files are for now at least mostly still present on your hard drive (as deallocated segments), but are back in the pool to be overwritten as you create new files (even when your browser creates temporary cache files, and things like that). It's a conundrum.
If you don't have a backup/Time Machine copy of the files, the best thing to do is get a macOS data recovery program QUICKLY.
VERY sorry not to have a happier answer.
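For next time, one way to guard against this kind of accident (a sketch, using the example path from the question) is to create the destination first and pass -n so mv refuses to overwrite anything:
mkdir -p ../../DestinationFolderName
find . -type f -name "*.JPG" -exec mv -n {} ../../DestinationFolderName/ \;
The trailing "/" on the destination also makes mv fail outright instead of silently creating a file if the directory is missing.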
I am having a problem with something seemingly very simple in Unix. I used the following code to move a file to another directory:
mv genes.gtf ./ ../..
The file is no longer in the original directory, but it has not shown up in the destination directory either! Has anyone experienced a similar thing before? What is causing the problem? Is it possible for it to take a while for a file to be moved, so it shows up in the destination directory with a big delay?
When 3 arguments are passed to mv, the first two are considered sources, and the last one is considered the destination. It seems you moved both genes.gtf and the current directory (./) to ../..
I think what you meant to write was mv genes.gtf ../..
As far as what happened to your file, I have no idea; I've never attempted to move ./ anywhere in unix/linux before.
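For what it's worth, if that reading is right, the file should have ended up two levels up; you could check with something like this from the original directory:
ls -l ../../genes.gtf
find ../.. -maxdepth 2 -name genes.gtf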
I used this command on my folder in unix:
chmod -R go-rwx *
in order to change permissions for group and others.
After doing this, many files turned green in the directory listing, even simple data files.
Why did this happen? What does it mean?
Is it going to affect my files in a bad way?
They seem to work right now, but I'm concerned about their general functionality.
Thanks!
It is very unlikely that the command you mentioned caused ls to print your files in green. When ls colors are enabled, executable files are printed in light green by default. Since chmod -R go-rwx only removes permissions, it cannot have caused any files to be marked as executable, and hence won't have made ls print them in green.
Instead, I believe the cause of this is a different command you must have entered, which accidentally marked all those files as executable. This is actually pretty common. Here is the typical scenario: You want to make a directory and all subdirectories readable and possible to enter for all users. So you do chmod -R a+rx top_directory. This works, but as a side effect you have also set the executable flag for all the normal files in all those directories too. This will make ls print them in green if colors are enabled, and it has happened to me several times. You can avoid this by doing chmod -R a+rX top_directory instead, which will only set the executable bit for directories.
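If you want to see the x versus X difference for yourself, a quick experiment in a throwaway directory (all names here are made up) shows it:
mkdir -p demo_lower/sub demo_upper/sub
touch demo_lower/file.txt demo_upper/file.txt
chmod -R a+rx demo_lower    # lower-case x: file.txt gets the execute bit (and turns green in ls)
chmod -R a+rX demo_upper    # upper-case X: only the directories get the execute bit
ls -l demo_lower/file.txt demo_upper/file.txt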
To make your files stop being green, you must clear those executable bits. If none of the files in these directories are actually supposed to be executable, this is simple:
$ chmod -R a-x top_directory
$ chmod -R u+X top_directory
This will remove the executable flag for all files and directories, and then add it back for directories only (for the current user). But if some of the files are actually supposed to be executable, you will have to go through them and fix things manually, which can be tedious.
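As an alternative sketch, if you would rather do it in one pass and leave directories untouched entirely, you can let find pick out the regular files (with the same caveat about files that really should stay executable):
find top_directory -type f -exec chmod a-x {} +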
Having some files incorrectly marked as being executable is not a big problem. They will still function normally. It's just a bit messy, and they may show up in command tab completion if the current directory (.) is in your $PATH. So you can safely ignore this issue.
That is an ls feature:
--color[=WHEN]
colorize the output. WHEN defaults to 'always' or can be 'never' or 'auto'. More info below
Using color to distinguish file types is disabled both by default and with --color=never. With --color=auto, ls emits color codes only when standard output is connected to a terminal. The LS_COLORS environment variable can change the settings. Use the dircolors command to set it.
You can try with ls --color=never and you won't see the colors anymore.
You can see your color configuration with dircolors -p.
This is the line where the configuration for executable files resides:
# This is for files with execute permission:
EXEC 01;32
That's just to help you identify file types, so it's not affecting your files in any bad way.