Error in shell script - qt

I'm using a shell script to help me resolve library paths so I can send out my app bundle. I don't know much about shell scripts and was hacking something together from other pieces, so I really don't know how to resolve the issue. The issue revolves around lines like done << ...
Here's some code! Note: this is based on a Qt project.
echo "Below is the list of install_name_tools that need to be added:"
while IFS= read -r -d '' file; do
    baseName=`basename "$file"`
    #echo "otool -L \"$file\" | grep -e \"*$baseName\""
    hasUsrLocal=`otool -L "$file" | grep -v -e "*$baseName" | grep -v libgcc_s.1.dylib | grep -v libstdc++.6.dylib | grep "/usr/local\|/Users"`
    if [ -n "$hasUsrLocal" ]; then
        #echo "WARNING: $file has /usr/local dependencies"
        #echo "\"$hasUsrLocal\""
        #echo "To Fix:"
        while read line; do
            #Remove extra info
            library=`echo "$line" | perl -pe 's/(.*?)\s\(compatibility version.*/\1/'`
            libraryBaseName=`basename "$library"`
            frameworkNameBase="$libraryBaseName.framework"
            isframework=`echo "$library" | grep "$frameworkNameBase"`
            unset fixCommand;
            if [ -n "$isframework" ]; then
                #Print out how to fix the framework
                frameworkName=`echo $library | perl -pe "s/.*?($frameworkNameBase\/.+)/\1/"`
                fixCommand=`echo "install_name_tool -change \"$library\" \"@executable_path/../Frameworks/$frameworkName\" \"$file\""`
            else
                #Print out how to fix the regular dylib
                if [ "$baseName" != "$libraryBaseName" ]; then
                    fixCommand=`echo "install_name_tool -change \"$library\" \"@executable_path/../Frameworks/$libraryBaseName\" \"$file\""`
                fi
            fi
            echo "$fixCommand"
        done << (echo "$hasUsrLocal")
        #echo "---------------------------------------------------------"
    fi
done << (find MyProgram.app -type f -print0)
The error this prints refers to the line done << (echo "$hasUsrLocal"):
./deploy.sh: line 563: syntax error near unexpected token `('
./deploy.sh: line 563: ` done << (echo "$hasUsrLocal")'
I get a similar error for done << (find MyProgram.app -type f -print0) too if I comment out parts of the script. Thank you!

I believe the author intended to use a process substitution:
done < <( find ...
You could also try piping the find into the while loop:
find MyProgram ... | while IFS= read -r -d '' file; do ... done

I fixed it! Thanks to William Pursell for the keyword "process substitution"; that gave me something to Google.
It turned out to be a spacing issue. It needs to be spaced like done < <(echo "$hasUsrLocal"), for example!

Answer
The << symbol is for here-documents. What you probably want is redirected process substitution:
< <( : )
The <(command_list) construct is for process substitution, while the first less-than symbol redirects standard input of your loop to the file descriptor created by the process substitution that follows it on the line.
Pro Tip
This is a handy but confusing syntax. It really helps to read it from right to left.
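To make the difference concrete, here is a minimal sketch of both forms (the find command is the one from the question; the loop body is just an echo):

# redirected process substitution: the loop's stdin is find's output
while IFS= read -r -d '' file; do
    echo "found: $file"
done < <(find MyProgram.app -type f -print0)

# a here-document (<<) is something else entirely: it feeds literal text,
# up to the delimiter word, to the command's stdin
cat <<EOF
this literal text becomes cat's standard input
EOF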

Related

Append "/" to end of directory

Completely noob question, but: using ls piped to grep, I need to find files or directories that have all-capital names, and directories need to have "/" appended to indicate that they are directories. Trying to append the "/" is the only part I am stuck on. Again, I apologize for the amateur question. I currently have ls | grep [A-Z] and the expected output is: BIRD, DOG, DOGDIR/
It's an interesting question because it's a somewhat difficult thing to accomplish with a bash one-liner.
Here's what I came up with. It doesn't seem very elegant, but I'm not sure how to improve it.
find /animals -type d -or -type f \
| grep '/[A-Z]*$' \
| xargs -I + bash -c 'echo -n $(basename +)$( test -d + && echo -n /),\\ ' \
| sed -e 's/, *$//'; echo
I'll break that down for you:
find /animals -type d -or -type f writes out, one per line, the directories and files it found in /animals (see below for my test environment's Dockerfile; I created /animals to match your desired output). find can't do a regex match on the name as far as I know, so...
grep '/[A-Z]*$' filters find's output so that only paths are shown where the last part of the file or directory name, after the final /, is all uppercase.
xargs -I + bash -c '...': when you're in a shell and you want a "for" loop, chances are what you should be using is xargs. Learn it, know it, love it. xargs takes its input, separated by default by $IFS, and runs the command you give it for each piece of input. So this is going to run a bash shell for each path that passed the grep filter. In my case, -I + makes xargs replace the literal + character with its current input filename; -I also makes it pass inputs through one at a time. For more information, see the xargs manual page.
'echo -n $(basename +)$( test -d + && echo -n /),\\ ' is the inner bash script that xargs runs for each path that got through grep.
basename + cuts the directory component off the path; from your example output you don't want e.g. /animals/DOGDIR/, you want DOGDIR/. basename is the program that trims the directories for us.
test -d + && echo -n / checks whether + (remember, xargs replaces it with the filename) is a directory, and if so runs echo -n /. The -n argument to echo suppresses the newline, which is important for keeping the output in the CSV format you specified.
Now we can put it all together and see that we echo -n the output of basename +, with / appended if it's a directory, and then , appended to that. All the echos run with -n to suppress newlines and keep the output looking like CSV.
| sed -e 's/, *$//'; echo is purely for formatting. Adding , to each individual output was an easy way to get the CSV, but it leaves a final , at the end of the list. The sed invocation removes a , followed by any number of spaces at the end of the output so far, i.e. the entire output from all the xargs invocations. And since we never output a newline at the end of that output, the final echo adds one.
Usually in unix shells you probably wouldn't want CSV-style output; you'd want newline-separated output in most cases, one matching file per line, and that would be somewhat simpler to produce because you wouldn't need all the faffing with -n and , to make it CSV-style. But it's a valid requirement if the need is there.
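For reference, a minimal sketch of that simpler newline-separated variant (same find and grep stages, still assuming the /animals layout below):

find /animals -type d -or -type f \
  | grep '/[A-Z]*$' \
  | xargs -I + bash -c 'echo "$(basename +)$(test -d + && echo /)"'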
FROM debian
RUN mkdir -p /animals
WORKDIR /animals
RUN mkdir -p DOGDIR lowerdir && touch DOGDIR/DOG DOGDIR/lowerDOG2 lowerdir/BIRD
ENTRYPOINT [ "/bin/bash" ]
CMD [ "-c" , "find /animals -type d -or -type f | grep '/[A-Z]*$'| xargs -I + bash -c 'echo -n $(basename +)$( test -d + && echo -n /),\\ ' | sed -e 's/, *$//'; echo"]
$ docker run --rm test
BIRD, DOGDIR/, DOG
You can start looking at
ls -F | grep -v "[[:lower:]]"
I did not add anything for producing a comma-separated line, because that is the wrong method: parsing ls should be avoided! It will go wrong for filenames like
I am a terribble filename,
with newlines inside me,
and the ls command combined with grep
will only show the last line
BECAUSE THIS LINE HAS NO LOWERCASE CHARACTERS
To get the files without a pipe, you can use
shopt -s extglob
ls -dp +([[:upper:]])
shopt -u extglob
An explanation of the extglob and uppercase can be found at https://unix.stackexchange.com/a/389071/57293
When you want the output on one line, you can get into trouble with filenames that have newlines or commas in their names. You might want something like
# parsing ls, yes wrong and failing for some files
ls -dp +([[:upper:]]) | tr "\n" "," | sed 's/,$/\n/'
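If you do need the one-line output, a safer sketch is to build the comma-separated string in the shell itself instead of parsing ls (still using the extglob pattern from above; filenames with embedded newlines will still look odd on screen, but nothing gets mis-parsed):

shopt -s extglob nullglob
out=""
for f in +([[:upper:]]); do
    [[ -d $f ]] && f+="/"   # mark directories with a trailing slash
    out+="$f, "
done
printf '%s\n' "${out%, }"   # strip the trailing ", "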

What is the fastest way to copy a large number of file paths mentioned in a text file, from one directory to another

I have a text file that lists a large number of file paths, one per line. I need to copy all these files from the source directory (given by the path on each line) to a destination directory.
Currently, the command line I tried is
while read line; do cp "$line" dest_dir; done < my_file.txt
This seems to be a bit slow. Is there a way to parallelise this whole thing or speed it up?
You could try GNU Parallel as follows:
parallel --dry-run -a fileList.txt cp {} destinationDirectory
If you like what it says, remove the --dry-run.
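If GNU Parallel isn't available, xargs can give a similar effect (a sketch assuming GNU xargs; -a reads the list from the file and -P sets how many cp processes run concurrently):

xargs -a fileList.txt -I {} -P 8 cp {} destinationDirectory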
You could do something like the following (in your chosen shell)
#!/bin/bash
BATCHSIZE=2
# **NOTE**: check that it exists with -f and points at the right place; you might not need this, depends on your own taste for risk.
ln -s `which cp` /tmp/myuniquecpname
# **NOTE**: this sort of thing can have limits in some shells
for i in `cat test.txt`
do
    BASENAME="`basename $i`"
    echo doing /tmp/myuniquecpname $i test2/$BASENAME &
    /tmp/myuniquecpname $i test2/$BASENAME &
    COUNT=`ps -ef | grep /tmp/myuniquecpname | grep -v grep | wc -l`
    # **NOTE**: maybe need to put a timeout on this loop
    until [ $COUNT -lt $BATCHSIZE ]; do
        COUNT=`ps -ef | grep /tmp/myuniquecpname | grep -v grep | wc -l`
        echo waiting...
        sleep 1
    done
done
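For comparison, a sketch of the same batching idea using bash's own job control instead of grepping ps (assumes bash 4.3 or newer for wait -n; test.txt and test2/ are the same hypothetical names as above):

#!/bin/bash
BATCHSIZE=2
while IFS= read -r path; do
    cp "$path" test2/ &
    # block until one of the background cp jobs finishes
    while [ "$(jobs -rp | wc -l)" -ge "$BATCHSIZE" ]; do
        wait -n
    done
done < test.txt
wait   # let the final batch finish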

Handling file permissions in UNIX using awk

I want to know which permissions are set on a file using a shell script. I used the code below to test a file, but it shows nothing in the output. I just want to know where I have made a mistake. Please help me.
The file "1.py" has read, write and execute enabled for everyone.
ls -l 1.py | awk ' {if($1 -eq "-rwxrwxrwx")print 'True'; }'
The single quotes (') around True should be double quotes ("), and awk uses == for string comparison.
However, depending on what you're trying to do, it might be cleaner to use the Bash builtin tests:
if [ -r 1.py -a -x 1.py ]; then
echo "Yes, we can read (-r) and (-a) execute (-x) the file"
else
echo "No, we can't."
fi
This avoids having to parse ls output. For a longer list of checks, see tldp.org.
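If you really need the exact permission string rather than individual capability tests, stat can report it without parsing ls (a sketch; -c '%A' is GNU coreutils syntax, and on macOS/BSD it would be stat -f '%Sp' instead):

# assuming GNU coreutils stat
perms=$(stat -c '%A' 1.py)
if [ "$perms" = "-rwxrwxrwx" ]; then
    echo "True"
fi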
In awk, you shouldn't write shell-style tests, e.g. [[ ... -eq ... ]]; you should do it the awk way:
if($1=="whatever")...
you could use
ls -l 1.py | awk '{if ($1 == "-rwxrwxrwx") print "True" }'

Grep files containing two or more occurrence of a specific string

I need to find files where a specific string appears twice or more.
For example, for three files:
File 1:
Hello World!
File 2:
Hello World!
Hello !
File 3:
Hello World!
Hello
Hello Again.
--
I want to grep Hello and only get files 2 & 3.
What about this:
grep -o -c Hello * | awk -F: '{if ($2 > 1){print $1}}'
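One caveat: once -c is given, grep counts matching lines rather than individual matches (so the -o has no effect here), and a file whose only content is "Hello Hello" on a single line would be missed. A sketch that counts every occurrence instead (assuming more than one file so grep prefixes each match with its filename):

grep -o Hello * | cut -d: -f1 | sort | uniq -c | awk '$1 >= 2 {print $2}'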
Since the question is tagged grep, here is a solution using only that utility and bash (no awk required):
#!/bin/bash
for file in *
do
if [ "$(grep -c "Hello" "${file}")" -gt 1 ]
then
echo "${file}"
fi
done
Can be a one-liner:
for file in *; do if [ "$(grep -c "Hello" "${file}")" -gt 1 ]; then echo "${file}"; fi; done
Explanation
You can modify the for file in * statement with whatever shell expansion you want to get all the data files.
grep -c returns the number of lines that match the pattern, with multiple matches on a line still counting for just one matched line.
if [ ... -gt 1 ] tests whether more than one line matched in the file. If so:
echo "${file}" prints the file name.
This awk will print the file names of all files with two or more lines containing Hello:
awk 'FNR==1 {if (a>1) print f;a=0} /Hello/ {a++} {f=FILENAME} END {if (a>1) print f}' *
file2
file3
What you need is a grep that can recognise patterns across line endings ("hello" followed by anything (possibly even line endings), followed by "hello")
As grep processes your files line by line, it is (by itself) not the right tool for the job - unless you manage to cram the whole file into one single line.
Now, that is easy, for example using the tr command to replace line endings with spaces:
if cat "$file" | tr '\n' ' ' | grep -q 'hello.*hello'
then
echo "$file matches"
fi
This is quite efficient, even on large files with many (say 100000) lines, and can be made even more efficient by calling grep with --max-count=1, making it stop the search after a match has been found. It doesn't matter whether the two hellos are on the same line or not.
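With GNU grep you can even skip the tr step: -z makes grep read the whole file as one NUL-separated record, and a PCRE dotall pattern spans the line endings (a sketch, assuming a reasonably recent GNU grep built with -P support):

# -l lists matching files; (?s) lets . match newlines
grep -Plz '(?s)hello.*hello' *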
After reading your question, I think you also want to find the case hello hello on a single line (find files where a specific string appears twice or more), so I came up with this one-liner:
awk -v p="hello" 'FNR==1{x=0}{x+=gsub(p,p);if(x>1){print FILENAME;nextfile}}' *
in the above line, p is the pattern you want to search for
it will print the filename if the file contains the pattern two or more times, no matter whether the matches are on the same line or on different lines
during processing, if after checking some line we have already found two or more matches, print the filename and stop processing the current file, moving on to the next input file if there is one. This is helpful if you have big files.
A little test:
kent$ head f*
==> f <==
hello hello world
==> f2 <==
hello
==> f3 <==
hello
hello
SK-Arch 22:27:00 /tmp/test
kent$ awk -v p="hello" 'FNR==1{x=0}{x+=gsub(p,p);if(x>1){print FILENAME;nextfile}}' f*
f
f3
Another way:
grep Hello * | cut -d: -f1 | uniq -d
Grep for lines containing 'Hello'; keep only the file names; print only the duplicates.
grep -c Hello * | egrep -v ':[01]$' | sed 's/:[0-9]*$//'
Piping to a scripting language might be overkill, but it's oftentimes much easier than just using awk
grep -rnc "Hello" . | ruby -ne 'file, count = $_.split(":"); puts "#{file}: #{count}" if count&.to_i >= 2'
So for your input, we get
$ grep -rnc "Hello" . | ruby -ne 'file, count = $_.split(":"); puts "#{file}: #{count}" if count&.to_i >= 2'
./2: 2
./3: 3
Or to omit the count
grep -rnc "Hello" . | ruby -ne 'file, count = $_.split(":"); puts file if count&.to_i >= 2'

for i in `ls |grep` question

This is the code I'm using to untar a file, grep the contents of the files that come out of the tar, and then delete the untarred files. I don't have enough space to untar all the files at once.
The issue I'm having is with the for f in `ls | grep -v *.gz` line. It is supposed to find the files that have come out of the tar, which can be identified by not having a .tar.gz extension, but it doesn't seem to pick them up.
Any help would be much appreciated
M
for i in *.tar.gz; do
    echo $i >> outtput1
    tar -xvvzf $i
    mv $i ./processed/
    for f in `ls | grep -v *.gz`; do    # ----- this is the line that isn't working
        echo $f >> outtput1
        grep 93149249194 $f >> outtput1
        grep 788 $f >> outtput1
        rm -f $f
    done
done
Try ls -1 | grep -v "\\.gz$". The -1 will make ls output one result per line. I've also fixed your regex for you in a few ways.
Although a better way to solve this whole thing is with find -exec.
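For example, a rough sketch of that find -exec idea (hypothetical; adjust -maxdepth and the name patterns to your layout, and note it excludes the outtput1 log so it doesn't grep itself):

find . -maxdepth 1 -type f ! -name '*.gz' ! -name outtput1 \
    -exec grep -H -e 93149249194 -e 788 {} + >> outtput1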
Change it to ls | grep -v "*.gz"; you have to quote *.gz because otherwise it will just glob the files in the working directory and grep them.
Never use ls in scripts, and don't use grep to match file patterns. Use globbing and tests instead:
for f in *
do
if [[ $f != *.gz ]]
then
...
fi
done
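Putting that together, a minimal sketch of the original loop rewritten with globbing and tests (keeping the question's own outtput1 and processed/ names):

for i in *.tar.gz; do
    echo "$i" >> outtput1
    tar -xzf "$i" && mv "$i" ./processed/
    for f in *; do
        [[ $f == *.gz ]] && continue      # skip the remaining archives
        [[ $f == outtput1 ]] && continue  # don't grep/delete the log itself
        [[ -f $f ]] || continue           # skip directories such as processed/
        echo "$f" >> outtput1
        grep 93149249194 "$f" >> outtput1
        grep 788 "$f" >> outtput1
        rm -f "$f"
    done
done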
