Multiple find exec options: synology index command line - unix

I want to index some files on my NAS after moving them in the correct folder.
My command would be something like:
find *.$ext -exec mv "{}" $path \; -exec synoindex -a $(echo $path)$(basename "{}") \;
The first part is working. All files with $ext extension are moved to destination $path.
But the second part, which is supposed to index these files in their new $path folder, does not work.
This is strange because:
- {} contains the right value: the complete old path of each file processed. To make sure of that, I added a third part which only does: -exec echo {} \;
- Executing $(echo $path)$(basename "{}") separately works, after replacing {} with one real value taken as an example => it gives the complete new path => the syntax is correct.
- Executing synoindex -a $(echo $path)$(basename "{}") separately works, after replacing {} with one real value taken as an example => the command is correct.
Thanks for any idea.
Regards,

Your command substitutions $(echo $path) and $(basename "{}") are expanded by your shell before find even runs, so at that point {} is still a literal pair of braces.
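You can see the early expansion with a quick demonstration (the path and extension here are hypothetical; the {} substitution inside an argument is GNU find behavior):

path=/volume1/music
find . -name '*.mp3' -exec echo "$(echo $path)$(basename "{}")" \;
# basename "{}" runs once, before find starts, and just prints {},
# so every file echoes something like /volume1/music./subdir/song.mp3:
# the complete old path gets appended, not the basename you wanted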
And you don't need echo just to use the $path variable. You could execute a small shell script instead:
find . -type f -name "*.$ext" -exec sh -c '
    targetpath=$1; shift   # the first argument is the value of $path
    for file; do           # iterate over the remaining arguments: the found files
        mv -i "$file" "$targetpath"
        synoindex -a "$targetpath/${file##*/}"
    done
' sh "$path" {} +
This starts find in the current directory (.), searching for regular files (-type f) whose names end in the extension $ext (-name "*.$ext"), and executes a small shell script, passing the value of $path as the script's first argument. The following arguments are the file paths found by find; the {} + form passes as many of them as possible per script invocation. The parameter expansion ${file##*/} removes the longest prefix matching */ from $file, and the result is the basename. If your $path variable already contains a trailing slash /, omit the / after $targetpath.
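To test this safely first, you could prefix the two commands with echo so the script only prints what it would do (the values of ext and path below are made up):

ext=mp3
path=/volume1/music
find . -type f -name "*.$ext" -exec sh -c '
    targetpath=$1; shift
    for file; do
        echo mv -i "$file" "$targetpath"
        echo synoindex -a "$targetpath/${file##*/}"
    done
' sh "$path" {} +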


BASH: performing a regex replace on a path from find command

AIM: to find all JS|TS files, excluding *.spec.js files, in a directory, but replace the base path with ./
I have this command
find src/app/directives -name '*.[j|t]s' ! -name '*.spec.js' -exec printf "import \"%s\";\n" {} \;
which, in said directory, prints the marked JS files. However, I want to replace src/app with ./
I've tried playing with [[ ]] and this command, but they don't work:
find src/app/components -name '*.[j|t]s' ! -name '*.spec.js' -exec printf "import \"%s\";\n" ${{}/src/hi} \;
zsh: bad substitution
Given your "AIM", all you really need is:
find src/app/directives -type f -name "*.[jt]s" ! -name "*.spec.js" -printf "./%f\n"
A note on your pattern: inside a character class, the '|' is just a literal character, so it isn't matching what you intend, though it isn't hurting anything either. Your ! -name "*.spec.js" test is fine. You don't need -exec at all and can simply use -printf "./%f\n" (where "%f" expands to just the filename of the current file), prepending the "./" as part of the -printf format string.
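With a hypothetical tree containing src/app/directives/focus.directive.ts and src/app/directives/util.js, that command would print:

./focus.directive.ts
./util.js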
Let me know if I misunderstood your AIM or if you have further questions.
Removing src/app/directives While Preserving Remaining Path
If you want to preserve the remainder of the path after src/app/directives (essentially replacing that prefix with '.'), you can use a short helper script: a POSIX parameter expansion trims src/app/directives from the front of the string, and printf prepends the '.' (a \n is included so each result prints on its own line). For example, the helper could be:
#!/bin/zsh
printf ".%s\n" "${1#./src/app/directives}"
(note: the leading "./" removed along with src/app/directives is the one find prepends when its starting point is written as ./src/app/directives; the '.' added by the printf format string then makes the returned filename ./rest/of/path/to/filename)
Call the script whatever you like, helper.sh below. Make it executable with chmod +x helper.sh.
The find call would then be:
find ./src/app/directives -type f -name "*.[jt]s" ! -name "*.spec.js" -exec path/to/helper.sh '{}' \;
(note the ./ in the starting point, so the paths handed to the helper really do begin with ./src/app/directives, matching the prefix the helper strips)
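A quick hedged check against a made-up tree:

$ find ./src/app/directives -type f -name "*.[jt]s" ! -name "*.spec.js" -exec ./helper.sh '{}' \;
./focus.directive.ts
./nested/scroll.directive.ts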
Give that a go and let me know if it does what you need.

Recursively remove portion of filename that matches a pattern

I'm on a UNIX system. Within a directory (and any of its subdirectories), I'm trying to rename all files that match a certain pattern:
change hello (1).pdf
to hello.pdf
Based on the top response from this question, I wrote the following command:
find . -name '* (1)*' -exec rename -ns 's/ (1)//' {} \;
The find works on its own and the rename also works on its own, but the above command only outputs Reading filenames from STDIN and does nothing. How can I make this work?
Figured this out! It only works when you use the Perl version of rename, like this:
find . -name '* (1)*' -exec rename -f -s ' (1)' '' {} \;
The likely reason the original command printed Reading filenames from STDIN: with that rename, -s takes separate from and to arguments, so 's/ (1)//' was consumed as the from string and the path substituted for {} as the to string, leaving no file operands, and rename fell back to reading names from standard input.
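If your rename is instead the s///-style Perl rename (File::Rename, which the question's first attempt assumed), note that ( and ) are regex metacharacters there; a hedged equivalent would escape them:

find . -name '* (1)*' -exec rename -n 's/ \(1\)//' {} \;
# -n previews the renames; drop it to actually apply them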

Append "/" to end of directory

Completely noob question but: using ls piped to grep, I need to find files or directories that have all capitals in their name, and directories need to have "/" appended to indicate that they are directories. Appending the "/" is the only part I am stuck on. Again, I apologize for the amateur question. I currently have ls | grep [A-Z] and the example output should be: BIRD, DOG, DOGDIR/
It's an interesting question because it's a somewhat difficult thing to accomplish with a bash one-liner.
Here's what I came up with. It doesn't seem very elegant, but I'm not sure how to improve it.
find /animals -type d -or -type f \
| grep '/[A-Z]*$' \
| xargs -I + bash -c 'echo -n $(basename +)$( test -d + && echo -n /),\\ ' \
| sed -e 's/, *$//'; echo
I'll break that down for you
find /animals -type d -or -type f writes out, once per line, the directories and files it found in /animals (see below for my test-environment Dockerfile; I created /animals to match your desired output). find can't do a regex match on the name as far as I know, so...
grep '/[A-Z]*$' filters find's output so that only paths are shown where the last part of the file or directory name, after the final /, is all uppercase
xargs -I + bash -c '...' when you're in a shell and you want to use a "for" loop, chances are what you should be using is xargs. Learn it, know it, love it. xargs takes its input, separated by default by $IFS, and runs the command you give it once for each piece of input. So this is going to run a bash shell for each path that passed the grep filter. In my case, -I + makes xargs replace the literal '+' character with its current input filename; -I also makes xargs pass inputs through one at a time. For more information, see the xargs manual page.
'echo -n $(basename +)$( test -d + && echo -n /),\\ ' this is the inner bash script that will be run by xargs for each path that got through grep.
basename + cuts the directory component off the path; from your example output you don't want e.g. /animals/DOGDIR/, you want DOGDIR/. basename is the program that trims the directories for us.
test -d + && echo -n / checks whether + (remember, xargs replaces it with the filename) is a directory, and if so runs echo -n /. The -n argument to echo suppresses the newline, which is important to keep the output in the CSV format you specified.
Now we can put it all together and see that we echo -n the output of basename +, with / appended if it's a directory, and then , appended to that. All the echos run with -n to suppress newlines, keeping the output CSV-looking.
| sed -e 's/, *$//'; echo is purely for formatting. Adding , to each individual output was an easy way to get the CSV, but it leaves a final , at the end of the list. The sed invocation removes a , followed by any number of spaces at the end of the output so far, i.e. the entire output from all the xargs invocations. And since we never output a newline at the end of that output, the final echo adds one.
Usually in unix shells you probably wouldn't want CSV-style output; in most cases you'd want newline-separated output, one matching file per line, which would be somewhat simpler because you wouldn't need all that faffing with -n and , to make it CSV-style. But it's a valid requirement if the need is there.
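For reference, a minimal sketch of that simpler newline-separated variant (same find and grep as above; a while loop replaces the xargs and sed juggling):

find /animals -type d -or -type f \
| grep '/[A-Z]*$' \
| while IFS= read -r path; do
    name=$(basename "$path")            # trim the directory component
    test -d "$path" && name="$name/"    # mark directories with a trailing /
    printf '%s\n' "$name"
done

And here is the test-environment Dockerfile mentioned above: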
FROM debian
RUN mkdir -p /animals
WORKDIR /animals
RUN mkdir -p DOGDIR lowerdir && touch DOGDIR/DOG DOGDIR/lowerDOG2 lowerdir/BIRD
ENTRYPOINT [ "/bin/bash" ]
CMD [ "-c" , "find /animals -type d -or -type f | grep '/[A-Z]*$'| xargs -I + bash -c 'echo -n $(basename +)$( test -d + && echo -n /),\\ ' | sed -e 's/, *$//'; echo"]
$ docker run --rm test
BIRD, DOGDIR/, DOG
You can start by looking at
ls -F | grep -v "[[:lower:]]"
I did not add anything for a comma-separated line, because that is the wrong method: parsing ls should be avoided! It will go wrong for filenames like
I am a terribble filename,
with newlines inside me,
and the ls command combined with grep
will only show the last line
BECAUSE THIS LINE HAS NO LOWERCASE CHARACTERS
To get the files without a pipe, you can use
shopt -s extglob
ls -dp +([[:upper:]])
shopt -u extglob
An explanation of the extglob and uppercase can be found at https://unix.stackexchange.com/a/389071/57293
When you want the output on one line, you can get in trouble with filenames that have newlines or commas in their names. You might want something like
# parsing ls, yes wrong and failing for some files
ls -dp +([[:upper:]]) | tr "\n" "," | sed 's/,$/\n/'
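A hedged sketch that avoids parsing ls altogether, using the same extglob pattern with shell builtins (assumes bash; nullglob keeps the loop from running on the literal pattern when nothing matches):

shopt -s extglob nullglob
for f in +([[:upper:]]); do
    if test -d "$f"; then
        printf '%s/\n' "$f"    # directory: append the /
    else
        printf '%s\n' "$f"
    fi
done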

How can I recursively insert two lines in all files of my directory where they are not present?

I have a directory customer, with many customers inside it.
Now I want to add two lines to each process_config file within the customer directory where they are not already present.
For example:
/home/sam/customer/a1/na/process_config.txt
/home/sam/customer/p1/emea/process_config.txt
and so on.
Is this possible with a single command, like find & sed?
With a simple for loop:
for file in /home/sam/customer/*/*/process_config.txt; do
    printf "one line\nanother line\n" >> "$file"
done
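Since the question asks to add the lines only where they are not already present, a hedged extension of the same loop with a grep guard (it checks for the first line as an exact, literal match; adjust to your real content):

for file in /home/sam/customer/*/*/process_config.txt; do
    grep -qxF 'one line' "$file" ||
        printf "one line\nanother line\n" >> "$file"
done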
find /home/sam/customer -name 'process_config.txt' -exec DoYourAddWithSedAwkEchoOrWhatever {} \;
find gives you the possibility to select each wanted file.
The -exec option runs your command on each selected file.
{} is the file name (the full path) in this case.
Use \; to terminate the command. Be careful with shell redirections here: something like -exec echo 'line1' >> {} \; does not do what you want, because the >> is handled by your interactive shell, not by find; to append to each file, wrap the command in sh -c, as sketched below.
sed, awk, or echo, as in the sample above, can then modify the file.
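A hedged, runnable version of that idea (the two inserted lines are made up):

find /home/sam/customer -name 'process_config.txt' \
    -exec sh -c 'printf "one line\nanother line\n" >> "$1"' sh {} \;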

How to print the longest line number for each file in a directory?

I'm trying to list the max line length for files in the current directory, but I'm having trouble getting my command to work. I believe it's an issue with escaping the curly brackets {} in my exec command. After googling through a ton of find exec escape answers, I wasn't able to find anything about how to escape the brackets {} in the exec command. What am I missing?
find . -iname *.page -exec awk '{if(length($0) > L) { LINE=$0;L = length($0)}}
END {print LINE"|"L}' {}\; | sort
There are multiple issues with the original command, none of which are escaping {}. The first issue is that there needs to be a space between {} and \;. The second issue is related to how the shell expands the wildcard in the find -iname parameter *.page.
From the FreeBSD forums:
"*" is expanded by the shell before the command-line is passed to find(1). If there's only 1 item in the directory, then it works. If
there's more than one item in the directory, then it fails as the
command-line options are no longer correct.
Wrapping the *.page in quotes solves the issue. The final version is:
find . -iname '*.page' -exec awk '{if (length($0) > L) {LINE=NR; L=length($0)}}
    END {print L "|" FILENAME ":" LINE}' {} \; | sort -n
which outputs a sorted list of the longest line for each file, with its line number:
220|./Example1.page:157
206|./Example2.page:203
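For readability, here is the same awk program written out with comments (behavior unchanged; file.page stands in for whatever find passes):

awk '
length($0) > L {      # this line is longer than the longest seen so far
    LINE = NR         # remember its line number
    L    = length($0) # and its length
}
END { print L "|" FILENAME ":" LINE }  # prints length|file:lineno
' file.page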
You want to run awk on each file, right?
create a script t.sh in your home directory and make it executable with chmod +x ~/t.sh:
awk '{if(length($0) > L) { LINE=$0;L = length($0)}}
END {print LINE"|"L}' "$1"
command line:
find . -iname '*.page' -exec ~/t.sh {} \; | sort
I'm not too sure about your awk script, but since you think it's what you need, let's pass on that for now.
