How can I execute a script with xargs after a find command? [closed]

find . -name "recovery_script" | xargs
I try to execute the script, but xargs only prints its path. How can I run it in parallel?

find . -name "recovery_script" | xargs -n1 -P8 sh
for 8 processes in parallel.
Provided there are at least 8 places where "recovery_script" can be found.
The -n1 argument is necessary to feed one argument at a time to sh. Otherwise, xargs will pass as many arguments as fit on a command line to a single sh, meaning it tries to execute something like
sh dir1/recovery_script dir2/recovery_script dir3/recovery_script ...
instead of
sh dir1/recovery_script
sh dir2/recovery_script
sh dir3/recovery_script
...
in parallel.
Bonus: the command you give to xargs can be longer than a single command name, and may include its own options. I often use nice so that other processes can still continue without problems:
find . -name "recovery_script" | xargs -n1 -P8 nice -n19
where -n19 is an option to nice, not to xargs.
(Aside: if you ever use wildcards with -name in find, use find's -print0 option together with xargs's -0 option: these separate output and input with the null character instead of whitespace, since whitespace may be part of a filename. Because you search for the full name here, that is not a problem.)
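For example, if you were matching by wildcard instead, the null-delimited variant of the same pipeline would look like this (the pattern here is hypothetical):
find . -name "recovery*" -print0 | xargs -0 -n1 -P8 sh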
From the xargs manual page:
SYNOPSIS: xargs ... [command [initial-arguments]]
and
... and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input.
The default behaviour is thus to echo whatever arguments you give to xargs. Supplying a command such as sh (depending on what executable you are trying to run) then actually executes the scripts.
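You can see that default with a throwaway input:
$ printf '%s\n' one two three | xargs
one two three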

This solution does not use xargs but a simple shell script. Maybe it can help:
#!/bin/sh
# Start every recovery_script in the background, then wait for all of them.
# Note: word splitting on $(find ...) breaks if a path contains whitespace.
for i in $(find . -name recovery_script)
do
{
echo "Started $i"
"$i"
echo "Ended $i"
} &
done
wait
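If the paths may contain whitespace, a null-delimited xargs variant of the same idea is safer; a sketch, assuming GNU find and xargs:
find . -name recovery_script -print0 | xargs -0 -n1 -P8 sh -c 'echo "Started $1"; "$1"; echo "Ended $1"' _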

Related

Unix command to find all files [closed]

I need a Unix command to list all files that contain 'foo' in their name. There seem to be two commands that could do this: grep and find. Which is best?
Thanks
The find command by itself suffices (unless you want to include files in directories whose name includes "foo"):
find / -type f -name '*foo*'
That checks the leaf name (last part) of the pathnames. If you piped the result of find through grep in a similar way:
find / -type f | grep foo
it would match those files, as well as all files (and directories) inside directories whose name includes "foo".
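To illustrate with a hypothetical file /data/foo_reports/notes.txt:
find / -type f -name '*foo*'   # does not list it: the leaf name contains no "foo"
find / -type f | grep foo      # lists it: the directory name matches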
To filter the list in a more interesting way, you can use grep, which supports regular expressions and other features. For example, you could do
find / -type f | grep -i foo
to match "foo" ignoring case.
But if you want to look at the contents of files, that is grep-specific:
find / -type f -exec grep foo {} +
Further reading:
find
grep
Use find to list all files on the system and pair it up with grep to filter on the name, like this:
find / -type f | grep -i foo

tail multiple files and grep the output [closed]

I would like to grep a pattern from multiple log files that are constantly being updated by some processes, and tail the output of this grep continuously.
The command below doesn't work, and I get
tail: warning: following standard input indefinitely is ineffective
tail -f | grep --line-buffered "Search this: " /var/links/proc2/id/myprocess*/Daily/myprocess*.log
Can someone help sort this out?
You should have a look at the multitail tool (install it with sudo apt-get install multitail).
In short, with multitail you use the --mergeall flag to view the output of all files in one place:
multitail --mergeall /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep --line-buffered "Search this: "
You can do the same without using grep
multitail -E "Search this: " --mergeall /var/links/proc2/id/myprocess*/Daily/myprocess*.log
To view the output of each file individually with multitail (this gives the filename as well):
multitail -E "Search this: " /var/links/proc2/id/myprocess*/Daily/myprocess*.log
The mistake is that you give the files to the grep command and not to tail.
tail -f needs to get the files as input. Try:
tail -f /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep --line-buffered "Search this: "
To also get the file names (although not in grep's usual filename:match format; tail prints ==> file <== headers between chunks instead):
tail -f /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep --line-buffered -e '^==> .* <==$' -e 'Search this: '
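If you want grep-like filename prefixes instead of those headers, one option is to post-process them with awk; a sketch, assuming an awk with fflush() (GNU awk and mawk have it) to keep the output unbuffered:
tail -f /var/links/proc2/id/myprocess*/Daily/myprocess*.log |
awk '/^==> .* <==$/ { f = substr($0, 5, length($0) - 8); next }
/Search this: / { print f ": " $0; fflush() }'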
This is an interesting question, and the simple answer should be: use a prefix switch with tail. Unfortunately, this is currently not implemented in most versions of tail.
As I see it, you have two options: adapt the standard tools to the task (see Udy's answer) or write your own tool in your favorite scripting/programming language.
Below is one way you could do it with the File::Tail::Multi module for Perl. Note that you may need to install the module from CPAN (cpan -i File::Tail::Multi).
Save the following script as e.g. mtail somewhere in your PATH and make it executable.
#!/usr/bin/env perl
use strict;
use warnings;
use File::Tail::Multi;

$| = 1; # enable autoflush so the output reaches grep immediately
my $tail = File::Tail::Multi->new(
    RemoveDuplicate => 0,
    OutputPrefix    => 'f',    # prefix each line with the file name
    Files           => \@ARGV, # tail every file named on the command line
);
while (1) { $tail->read; $tail->print; sleep 2 }
Change OutputPrefix to 'p' if you prefer full path prefixes.
Run it like this:
mtail /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep --line-buffered "Search this: "
You do not need to specify --line-buffered when grep is the last command, so this is sufficient:
mtail /var/links/proc2/id/myprocess*/Daily/myprocess*.log | grep "Search this: "

How to copy recursive directory structure while creating symbolic file links only? [closed]

What's the best way to copy a whole recursive directory structure where all files are just copied as symbolic links?
In other words, the copy should mirror the whole directory (sub-)structure of the original directory but each file should just be a symbolic link.
I guess ... first you want to make your directories...
cd "$source"
find . -type d -exec mkdir -p "$target/{}" \;
Next, make your symlinks...
cd "$source"
find . -type f -print | (
cd "$target"
while read -r one; do
# $source must be an absolute path here, or the links will dangle;
# with GNU ln, adding -r would make them relative links instead
ln -s "$source/${one#./}" "${one#./}"
done
)
Note that this will fail if you have linefeeds or possibly other odd characters in your filenames, since read consumes one line at a time. Also, this is untested, and I'm not planning to test it. If it gives you inspiration, that's great. :)
This is not a solution that makes symbolic links, but it makes hard links instead:
cp -rl "$src" "$dst"
Cons:
it is harder to see whether a file in the tree has been replaced
both trees have to be on the same filesystem
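If GNU cp is available, it can also build the symlink tree in one step; a sketch, where the source must be passed as an absolute path (hence the readlink) or the resulting links will dangle:
cp -as "$(readlink -f "$source")/." "$target/"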

How to convert relative path to absolute path in Unix [closed]

I want to convert
Relative Path - /home/stevin/data/APP_SERVICE/../datafile.txt
to
Absolute Path - /home/stevin/data/datafile.txt
Is there a built-in tool in Unix to do this, or any good ideas as to how I can implement it?
readlink -f /home/stevin/data/APP_SERVICE/../datafile.txt should do what you're looking for, assuming your Unix/Linux has readlink.
Something like this can help for directories (for files, append the basename):
echo "$(cd ../dir1/dir2/; pwd)"
For files:
filepath=../dir1/dir2/file3
echo "$(cd "$(dirname "$filepath")"; pwd)/$(basename "$filepath")"
I'm surprised nobody mentions realpath yet. Pass your paths to realpath and it will canonicalize them.
$ ls
Changes
dist.ini
$ ls | xargs realpath
/home/steven/repos/perl-Alt-Module-Path-SHARYANTO/Changes
/home/steven/repos/perl-Alt-Module-Path-SHARYANTO/dist.ini
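Applied to the question's example (the -m flag of GNU realpath canonicalizes even components that do not exist):
$ realpath -m /home/stevin/data/APP_SERVICE/../datafile.txt
/home/stevin/data/datafile.txt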
Based on Thrustmaster's answer but with pure bash:
THING="/home/stevin/data/APP_SERVICE/../datafile.txt"
echo "$(cd ${THING%/*}; pwd)/${THING##*/}"
Of course the cd requires the path to actually exist, which may not always be the case; if it doesn't, you'll probably have a simpler life writing a small Python script instead...
This is my helper function
# get an absolute path from a relative path
function realPath() {
    if [[ -d "$1" ]]; then
        # resolve directories in a subshell so the caller's cwd is untouched
        (cd "$1" && pwd)
    else
        echo "$(cd "$(dirname "$1")" && pwd -P)/$(basename "$1")"
    fi
}
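A hypothetical invocation, with the function sourced into the shell and using the question's layout:
$ cd /home/stevin/data/APP_SERVICE
$ realPath ../datafile.txt
/home/stevin/data/datafile.txt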

In Unix, which user has the highest UID? [closed]

Can someone please tell me how I can find the following.
List from /etc/passwd the UID and the user having the highest UID.
awk -F: '{print $3,$1}' /etc/passwd | sort -n | tail -n 1
Instead of reading /etc/passwd directly, it would be better to get the output from
getent passwd
as you could be using another source of UIDs via nsswitch, such as LDAP.
/etc/passwd contains user information separated by colons. The user id is in the third column.
The sort command-line tool can be used to sort the lines of a file. It has options to choose the column separator, the column to sort by, and whether to sort numerically or alphabetically.
So you can use sort to sort /etc/passwd by user id and then use tail to get the last line, which will contain the user with the highest id.
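A minimal sketch of that recipe, where cut then reduces the winning line to the login name and UID fields:
sort -t: -k3,3n /etc/passwd | tail -n 1 | cut -d: -f1,3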
getent passwd | awk -F : '$3>h{h=$3;u=$1}END{print h " " u}'
For the following variant, the getent output needs to be sorted first, so that awk ends up holding the runner-up as well.
In addition, I found that nfsnobody (on Linux) can be ignored, and the next-highest UID is often what is actually needed. So this worked well:
getent passwd | sort -t: -k3 -n | awk -F: '$3>h{ph=h;pu=u;h=$3;u=$1}END{print h,u"\n"ph,pu}'
65534 nfsnobody
1002 user2
