When I start tmux, my ~/.config/fish/config.fish seems to be sourced again. This means any set PATH foo $PATH statements in my config get executed again, which leads to my PATH variable having duplicate entries. This isn't drastic, but it is annoying to echo $PATH when it is so long.
How can I prevent this problem?
EDIT: the only fish-related entries in my tmux config are
#fix vim
set -g default-shell $SHELL
set -g default-command "reattach-to-user-namespace -l ${SHELL}"
set -g default-command 'reattach-to-user-namespace $SHELL --login'
The ~/.config/fish/config.fish config file is read by every new fish instance. There are several ways to achieve what you're asking. One option is to always set PATH from scratch. That is, don't modify the existing path by appending or prepending to it but instead set it to exactly what you want for a given machine. Something along the lines of
set -gx PATH $HOME/bin /usr/local/bin /usr/bin /bin
test -d /opt/X11/bin
and set PATH $PATH /opt/X11/bin
Another option is to add directories only if they aren't already in the path:
contains /usr/local/bin $PATH
or set PATH /usr/local/bin $PATH
Or only do the modification if not inside a tmux session:
if not set -q TMUX
set PATH /argle/bargle $PATH
end
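These approaches can also be combined into one idempotent loop; here is a minimal sketch (the directory list is just an example):
# Add each directory only if it exists and isn't already in $PATH,
# so re-sourcing config.fish inside tmux adds nothing twice.
for dir in $HOME/bin /usr/local/bin /opt/X11/bin
    if test -d $dir; and not contains $dir $PATH
        set -gx PATH $dir $PATH
    end
end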
I'm getting an error when copying multiple files. The command below copies only the first file and gives an error for the rest. Can someone please help me out?
Command:
scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Result:
user@host:~/scripts/OTA$ scp $host:$(ssh -n $host "find /incoming -mmin -120 -name 2018*") /incoming/
Password:
Password:
2018084session_event 100% |**********************************************************************************************************| 9765 KB 00:00
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9
cp: cannot access /incoming/2018084session_event_log.195-10.45.40.9_2_3
Your command uses command substitution to generate a list of files. Your assumption is that there is some magic in the "source" notation for scp that causes every member of the list generated by your find command to be treated as living on $host. In fact, your command might expand into something like:
scp remotehost:/incoming/someoldfile anotheroldfile /incoming
Only the first file is being copied from $host, because none of the rest include $host: at the beginning of the path. They're not found in your local /incoming directory, hence the error.
Oh, and in addition, you haven't escaped the asterisk in the find command, so 2018* may be expanded against files in the login directory for the user in question. I can't tell from here; it depends on your OS and shell configuration.
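For example, quoting the pattern keeps the remote shell from expanding it before find sees it (a sketch of just the remote half of your command):
ssh -n "$host" "find /incoming -mmin -120 -name '2018*'"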
I should point out that you are providing yet another example of the classic parsing ls problem: special characters WILL break your command. The "better" solution usually offered for this problem tends to be a for loop, but that's not really what you're looking for. Instead, I'd recommend making a tar stream of the files you're looking for. Something like this might do:
ssh "$host" "find /incoming -mmin -120 -name 2018\* -exec tar -cf - {} \+" |
tar -xvf - -C /incoming
What does this do?
ssh runs a remote find command with your criteria.
find feeds the list of filenames (regardless of special characters) to a tar command as options.
The tar command sends its result to stdout (-f -).
That output is then piped into another tar running on your local machine, which extracts the stream.
If your tar doesn't support -C, you can either remove it and run a cd /incoming before the ssh, or you might be able to replace that pipe segment with a curly-braced command: { cd /incoming && tar -xvf -; }
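Spelled out, that curly-braced variant might look like this (an untested sketch, same warranty as below):
ssh "$host" "find /incoming -mmin -120 -name 2018\* -exec tar -cf - {} \+" |
{ cd /incoming && tar -xvf -; }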
The curly brace notation assumes a POSIX-like shell (bash, zsh, etc). The rest of this should probably work equally well in csh if that's what you're stuck with.
Limited warranty: Best Effort Only. Untested on animals or computers. Your mileage may vary. May contain nuts.
If this doesn't work for you, poke at it until it does.
I have about 50 or so files in various sub-directories that I'd like to push to a remote server. I figured rsync would be able to do this for me using the --include-from option. Without the --exclude="*" option, all the files in the directory are synced; with it, no files are.
rsync -avP -e ssh --include-from=deploy/rsync_include.txt --exclude=* ./ root@0.0.0.0:/var/www/ --dry-run
I'm running it as dry initially and 0.0.0.0 is obviously replaced by the IP of the remote server. The contents of rsync_include.txt is a new line separated list of relative paths to the files I want to upload.
Is there a better way of doing this that is escaping me on a Monday morning?
There is a flag --files-from that does exactly what you want. From man rsync:
--files-from=FILE
Using this option allows you to specify the exact list of files to transfer (as read from the specified FILE or - for standard input). It also tweaks the default behavior of rsync to make transferring just the specified files and directories easier:
The --relative (-R) option is implied, which preserves the path information that is specified for each item in the file (use --no-relative or --no-R if you want to turn that off).
The --dirs (-d) option is implied, which will create directories specified in the list on the destination rather than noisily skipping them (use --no-dirs or --no-d if you want to turn that off).
The --archive (-a) option’s behavior does not imply --recursive (-r), so specify it explicitly, if you want it.
These side-effects change the default state of rsync, so the position of the --files-from option on the command-line has no bearing on how other options are parsed (e.g. -a works the same before or after --files-from, as does --no-R and all other options).
The filenames that are read from the FILE are all relative to the source dir -- any leading slashes are removed and no ".." references are allowed to go higher than the source dir. For example, take this command:
rsync -a --files-from=/tmp/foo /usr remote:/backup
If /tmp/foo contains the string "bin" (or even "/bin"), the /usr/bin directory will be created as /backup/bin on the remote host. If it contains "bin/" (note the trailing slash), the immediate contents of the directory would also be sent (without needing to be explicitly mentioned in the file -- this began in version 2.6.4). In both cases, if the -r option was enabled, that dir's entire hierarchy would also be transferred (keep in mind that -r needs to be specified explicitly with --files-from, since it is not implied by -a). Also note that the effect of the (enabled by default) --relative option is to duplicate only the path info that is read from the file -- it does not force the duplication of the source-spec path (/usr in this case).
In addition, the --files-from file can be read from the remote host instead of the local host if you specify a "host:" in front of the file (the host must match one end of the transfer). As a short-cut, you can specify just a prefix of ":" to mean "use the remote end of the transfer". For example:
rsync -a --files-from=:/path/file-list src:/ /tmp/copy
This would copy all the files specified in the /path/file-list file that was located on the remote "src" host.
If the --iconv and --protect-args options are specified and the --files-from filenames are being sent from one host to another, the filenames will be translated from the sending host’s charset to the receiving host’s charset.
NOTE: sorting the list of files in the --files-from input helps rsync to be more efficient, as it will avoid re-visiting the path elements that are shared between adjacent entries. If the input is not sorted, some path elements (implied directories) may end up being scanned multiple times, and rsync will eventually unduplicate them after they get turned into file-list elements.
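Applied to the question's command, the fix might look like this (a sketch, assuming the paths in deploy/rsync_include.txt are relative to the source directory ./):
rsync -avP -e ssh --files-from=deploy/rsync_include.txt ./ root@0.0.0.0:/var/www/ --dry-run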
For the record, none of the answers above helped except for one. To summarize, you can do the backup operation with --files-from= using either:
rsync -aSvuc `cat rsync-src-files` /mnt/d/rsync_test/
OR
rsync -aSvuc --recursive --files-from=rsync-src-files . /mnt/d/rsync_test/
The former command is self-explanatory, apart from the content of the file rsync-src-files, which I will elaborate on below. Now, if you want to use the latter version, you need to keep in mind the following four remarks:
Notice one needs to specify both --files-from and the source directory
One needs to explicitly specify --recursive.
The file rsync-src-files is a user created file and it was placed within the src directory for this test
The rsync-src-files file contains the files and folders to copy, and they are taken relative to the source directory. IMPORTANT: Make sure there are no trailing spaces or blank lines in the file (a quick check for this is sketched after the listing). In the example below, there are only two lines, not three (I figured that out by chance). The content of rsync-src-files is:
folderName1
folderName2
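A quick way to spot the stray whitespace mentioned above (a diagnostic sketch, not part of the original answer):
cat -A rsync-src-files            # GNU cat: shows line ends as $ and tabs as ^I
grep -nE ' $|^$' rsync-src-files  # flags trailing spaces and blank lines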
The --files-from= parameter needs a trailing slash if you want to keep the absolute path intact. So your command would become something like this:
rsync -av --files-from=/path/to/file / /tmp/
This is useful when there are a large number of files and you want to copy them all to a given path. You would find the files and write the output to a list file, like below:
find /var/* -name '*.log' > file
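Then feed that list to rsync (a sketch; /tmp/logs/ is a hypothetical destination). With / as the source dir, the leading slashes are stripped and the var/... paths are recreated under the destination:
rsync -av --files-from=file / /tmp/logs/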
$ date
Wed 24 Apr 2019 09:54:53 AM PDT
$ rsync --version
rsync version 3.1.3 protocol version 31
...
Syntax: rsync <args> <file_and_or_folder_list> <source_dir> <destination_dir/>
Folder names, WITH a trailing / (e.g. Cancer - Evolution/), are provided in a file (e.g. my_folder_list):
# comment: /mnt/Vancouver/my_folder_list
# comment: 2019-04-24
some_file
another_file
Cancer/
Cancer - Evolution/
Cancer - Genomic Variants/
Cancer - Metastasis (EMT Transition ...)/
Cancer Pathways, Networks/
Catabolism - Autophagy; Phagosomes; Mitophagy/
so those are the "source" files and/or folders to be rsync'd.
Note that if you don't include the trailing / shown above, rsync creates the target folders, but they are empty.
Those folder names provided in the <file_and_or_folder_list> are appended to the rest of their path: <src_dir> = /home/victoria/RESEARCH - NEWS (here, on a different partition), thus providing the complete folder path to rsync; e.g.: ... /home/victoria/RESEARCH - NEWS/Cancer - Evolution/ ...
[ I'm editing this answer some time later (2022-07), and I can't recall if the path provided to <src_dir> is /home/victoria/RESEARCH - NEWS or /home/victoria/RESEARCH - NEWS/ - providing the correct concatenated path. I believe it's the former; if it doesn't work, use the latter. ]
Note that you also need to use --files-from= ..., NOT --include-from= ...
Again the rsync syntax is:
rsync <args> <file_and_or_folder_list> <source_dir> <destination_dir/>
so,
rsync -aqP --delete --files-from=/mnt/Vancouver/my_folder_list "/home/victoria/RESEARCH - NEWS" $DEST_DIR/
where
<args> is -aqP --delete
<file_and_or_folder_list> is --files-from=/mnt/Vancouver/my_folder_list
<source_dir> is "/home/victoria/RESEARCH - NEWS"
<destination_dir/> is $DEST_DIR/ (note the trailing / added to the variable name)
In my BASH script, for coding flexibility I defined variable $DEST_DIR in two parts as follows.
BASEDIR="/mnt/Vancouver"
DEST_DIR=$BASEDIR/data
echo $DEST_DIR ## /mnt/Vancouver/data
## To clarify, here is $DEST_DIR with / appended to the variable name:
echo $DEST_DIR/ ## /mnt/Vancouver/data/
echo $DEST_DIR/apple/banana ## /mnt/Vancouver/data/apple/banana
However, you can more simply specify the destination path:
via a BASH variable: DEST_DIR=/mnt/Vancouver/data (note: no $ prefix when assigning)
note that in the rsync expression above, / is appended to $DEST_DIR (i.e. $DEST_DIR/ is actually $DEST_DIR + /), giving the destination directory path /mnt/Vancouver/data/
explicitly state the destination path: /mnt/Vancouver/data/
rsync options used: ## man rsync or rsync -h
-a : archive: equals -rlptgoD (no -H,-A,-X)
-r : recursive
-l : copy symlinks as symlinks
-p : preserve permissions
-t : preserve modification times
-g : preserve group
-o : preserve owner (super-user only)
-D : same as --devices --specials
-P : same as --partial --progress
-q : quiet (https://serverfault.com/questions/547106/run-totally-silent-rsync)
--delete
This tells rsync to delete extraneous files from the RECEIVING SIDE (ones
that AREN’T ON THE SENDING SIDE), but only for the directories that are
being synchronized. You must have asked rsync to send the whole directory
(e.g. "dir" or "dir/") without using a wildcard for the directory’s contents
(e.g. "dir/*") since the wildcard is expanded by the shell and rsync thus
gets a request to transfer individual files, not the files’ parent directory.
Files that are excluded from the transfer are also excluded from being
deleted unless you use the --delete-excluded option or mark the rules as
only matching on the sending side (see the include/exclude modifiers in the
FILTER RULES section). ...
Edit: atp's answer below is better. Please use that one!
You might have an easier time, if you're looking for a specific list of files, putting them directly on the command line instead:
# rsync -avP -e ssh `cat deploy/rsync_include.txt` root@0.0.0.0:/var/www/
This is assuming, however, that your list isn't so long that the command line length will be a problem and that the rsync_include.txt file contains just real paths (i.e. no comments, and no regexps).
None of these answers worked for me when all I had was a list of directories. Then I stumbled upon the solution! You have to add -r to --files-from because -a will not be recursive in this scenario (who knew?!).
rsync -aruRP --files-from=directory.list . ../new/location
I had a similar task: to rsync all files modified after a given date, but excluding some directories. It was difficult to build an all-in-one one-liner, so I divided the problem into smaller pieces.
Final solution:
find ~/sourceDIR -type f -newermt "DD MMM YYYY HH:MM:SS" | egrep -v "/\..|Downloads|FOO" > FileList.txt
rsync -v --files-from=FileList.txt ~/sourceDIR /Destination
First I use find -L ~/sourceDIR -type f -newermt "DD MMM YYYY HH:MM:SS". I tried adding a regex to the find line to exclude name patterns, but my flavor of Linux (Mint) seems not to understand negated regexes in find. I tried a number of regex flavors; none worked as desired.
So I ended up with egrep -v, an option that excludes patterns the easy way. My rsync does not copy directories like /.cache or /.config, plus some others I explicitly named.
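For reference, GNU find can often do the exclusion itself with -not -path, which avoids the separate egrep (a sketch; the patterns are illustrative):
find -L ~/sourceDIR -type f -newermt "DD MMM YYYY HH:MM:SS" -not -path '*/.*' -not -path '*/Downloads/*' -not -path '*/FOO/*' > FileList.txt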
This answer is not the direct answer for the question.
But it should help you figure out which solution fits best for your problem.
When analysing the problem you should activate the debug option -vv
Then rsync will output which files are included or excluded by which pattern:
building file list ...
[sender] hiding file FILE1 because of pattern FILE1*
[sender] showing file FILE2 because of pattern *
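A typical way to run that inspection without transferring anything might be (a sketch based on the question's command; -n makes it a dry run, /tmp/preview/ is a hypothetical destination):
rsync -avvn --include-from=deploy/rsync_include.txt --exclude='*' ./ /tmp/preview/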
My .htaccess file in my htdocs folder does not work. I tried to redirect to Google when accessing a filename. I want to find out where the settings for my httpd.conf are, so I can enable mod_rewrite. I did the following UNIX command to find out if a httpd.conf file existed on my hard drive:
find * -name "httpd.conf"
The file does not exist. I am thinking that maybe there is another file that controls mod_rewrite. I want to see if "AllowOverride" exists in any directory. I entered the following UNIX command:
grep -r "AllowOverride" *
But it's hard to read because it prints out so many folders. The messages that accompany them are "Permission denied" or "No such file or directory". How do I get only the file paths of files that contain AllowOverride?
Many Unix and similar systems provide a locate(1) command that uses a database to speed finding individual files. Try this:
locate httpd.conf
Note, of course, that Apache configurations are stored in files of all sorts of names; I've seen apache.conf, httpd.conf, httpd2.conf, and then there's the giant pile of /etc/apache2/conf.d/ -- entire directory structures set aside for configuring Apache. Your distribution may vary.
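If locate finds nothing, its database may be stale or missing; on most systems you can refresh it first (command names vary by distribution):
sudo updatedb      # rebuild the locate database
locate httpd.conf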
Perhaps apachectl configtest will show the paths? (currently not installed on my machine, so I can't easily test.)
Try this command:
find / -name "httpd.conf" 2>&1 | grep -v "Permission denied"
The 2>&1 funnels stderr into stdout so that both can be piped into the grep utility. grep in turn will print any line that doesn't contain the string "Permission denied" (the -v negates/inverts the matching of the search string).
If you don't redirect stderr to stdout, the error output would bypass the rest of the command line and land directly on the console.
You could extend the above command line by appending this:
| grep -v "No such file or directory"
if that string was coming up and you wanted to suppress it too.
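Alternatively, if you don't care about any of the error messages, you can discard stderr entirely instead of filtering it:
find / -name "httpd.conf" 2>/dev/null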
Use the following:
find / -type f -exec grep -n "AllowOverride" {} \; -print 2>/dev/null
To scan files containing the "AllowOverride" string from the root, if you want to run the search in a particular directory, use the following instead:
find /path/to/directory -type f -exec grep -n "AllowOverride" {} \; -print 2>/dev/null
The output should print only the files containing the specified string, along with the number of each matching line.
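If your grep supports recursion (GNU grep does), a shorter equivalent that prints only the matching file paths is (a sketch; swap /etc for whatever starting directory you want):
grep -rl "AllowOverride" /etc 2>/dev/null
Use -rn instead of -rl if you want the matching lines and their numbers rather than just the paths.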
I have two unix machines, both running AIX 5.3
My $HOME is mounted on machine1.
Using NFS, logging in to machine2 goes to the same $HOME.
I log in to machine2 first, then machine1.
Both using telnet.
The 2 sessions will share the same .sh_history file.
I found the fc -l behavior very strange.
In machine2, I issue the commands in telnet:
fc -l
ksh fc -l
Both give the same output.
In machine1,
fc -l
ksh fc -l
give DIFFERENT results
The result for ksh fc -l
is the same as /usr/bin/fc -l
Also, when I run a script like this:
#!/usr/bin/ksh
fc -l
The result is same as /usr/bin/fc -l
Could anyone tell me what happened?
Alvin SIU
Ah, wisdom of the ancients... (Since this post is over a year old.)
Anyway, I just encountered this problem in Solaris 10. Issue seems to be this: When you define a function in /etc/profile, or in any file called by /etc/profile, your HISTFILE variable gets ignored by the Korn shell, and the shell instead uses ".sh_history" when accessing its history. Not sure why this is.
Result is that you see other root shell's commands. You can test it with :
lsof -p $$
or
cat /proc/$$/fd/63
It's possible that the login shell is not ksh or that $HISTFILE is being reset. One thing you can do is echo $HISTFILE in the various situations and see if it's different. Another thing to check is which shell you're actually in, using ps.
Bash (default $HOME/.bash_history), for example, will have a different $HISTFILE than ksh (default $HOME/.sh_history).
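For instance, running these in each session makes the difference visible (a quick diagnostic, nothing more):
echo $HISTFILE    # which history file this shell thinks it uses
ps -p $$          # which shell is actually running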
Another possible reason for the difference is that the builtin fc may be able to see in-memory history that hasn't been written to disk yet (which the external /usr/bin/fc wouldn't be able to see). If this is true, it may be version dependent. Bash, for example, doesn't write history to the file until the shell exits. Ksh (at least the version I'm using) writes it immediately.