Convert ImageMagick’s output from scientific to decimal? - unix

I have a small one-liner in the terminal that is supposed to write the pixel counts of many JPEG files to a text file:
find . -name *.jpg -exec convert {} -format "%[fx:w*h]" info: \; > sizes.txt
It actually does, but some of the numbers come out in scientific notation, like here:
949200
960000
1.098e+06
1.038e+06
1.1664e+06
1.0824e+06
831600
What is the most robust / elegant way to have the command output everything in decimal notation, like in the following lines?
949200
960000
109806
103806
1166406
1082406
831600
I was wondering if you would do this within the ImageMagick fx part or rather pipe the output to another command for conversion. Thanks!

According to http://www.imagemagick.org/script/escape.php
there is no obvious way to get other number formats with the %[fx:...]
escape, so a command-line solution is necessary.
Converting the w*h scientific-notation output will lose significant
digits, so better output w and h separately and multiply.
Using bc (with a \n added to the format so each image's product lands on its own line) this would be:
find . -name '*.jpg' -exec convert {} -format "%w*%h\n" info: \; |bc
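For illustration, bc evaluates each input line as a separate expression, so one exact product per image comes out (made-up dimensions):
$ printf '1234*913\n800*600\n' | bc
1126642
480000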

To have ImageMagick print numbers larger than one million without truncating them to exponential (scientific) notation, use the -precision n option. The default precision is 6. I suggest setting it to 12, which allows numbers up to 1e+12 (a 1 followed by twelve zeros).
$ convert xc: -precision 12 -format '%[fx:1000*1000]\n' info:-
1000000
$ convert xc: -precision 6 -format '%[fx:1000*1000]\n' info:-
1e+06
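Applied to the one-liner from the question, that would look something like this (the glob is quoted so the shell doesn't expand it, and a \n is added so each value lands on its own line):
find . -name '*.jpg' -exec convert {} -precision 12 -format '%[fx:w*h]\n' info: \; > sizes.txt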

This modification to the shell command seems to work:
find . -name '*.jpg' -exec convert {} -format "%[fx:w*h]" info: \; | xargs printf "%0.0f\n" > sizes.txt
Adjust the formatting directives printf "%0.0f\n" to suit your needs.
Just for demonstration purposes (this works the same with find etc. and the .jpg files found on my system):
$ cat data.txt
949200
960000
1.098e+06
1.038e+06
1.1664e+06
1.0824e+06
831600
$ cat data.txt | xargs printf "%0.0f\n"
949200
960000
1098000
1038000
1166400
1082400
831600

Related

How to use find in unix excluding folders and files with size condition?

Working on Solaris 10 in ksh:
I'm trying to track down big files across my whole root disk, but I need to exclude some folders and files.
Currently the ksh find command doesn't succeed:
find / -type d \( ! -name NFS* ! -name proc ! -name devices \) -type f \( ! -name /backup_DB0/databases/data/ems_data.dat ! -name /backup_DB1/databases/log/ems_log.dat \) size +1000000 -exec ls -lah {} \;
This example is shortened; the real list is about 20 files. Is there a limit to the command?
As @peterh hinted at, your question is about usage of the find command, and the only impact your choice of shell has is likely on how you escape potentially special characters like ( and !.
Your find command has a grammar that you're not quite using properly. You're close, though.
Just as with English, if you want to express multiple conditions, you need a logical construct that joins them, like "or" or "and". (See what I did there?) The find command concatenates conditions with an implicit "and" by default, and uses -o to designate a logical "or". For example:
find $path condition1 ! \( condition2 -o condition3 \)
specifies that condition1 must be true, but the entire expression is false if either condition2 or condition3 is true.
In your case, if I'm understanding your conditions properly, I would suggest constructing your expression more like this:
find / \( -type d ! \( -name NFS\* -o -name proc -o -name devices \) \) \
-o \( -type f ! \( -name ems_data.dat -o -name ems_log.dat \) \) \
-size +1000000 \
-ls
With this, I've separated your "ANDed" expressions onto separate lines, and the "or" expressions are shown inside nested parentheses.
Note: remember that -size is measured in 512-byte blocks. Check the man page for how to specify a size in bytes.
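For example, POSIX find accepts a c suffix on -size to mean bytes instead of blocks, so files over roughly 500 MB could be matched with something like:
find / -type f -size +500000000c -ls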
As to your last question about a limit, you certainly shouldn't have a problem with 20 files. If you were dealing with thousands of files, I'd be worried that you'd reach a limit set by your operating system, noted in ARG_MAX from /usr/include/limits.h. To determine your particular limit, if you have a C compiler installed, you may be able to run the following:
$ cpp <<HERE | tail -1
#include <limits.h>
ARG_MAX
HERE
My systems all tell me that 262144 characters is the limit. Note that this limit is imposed by the OS's attempt at POSIX compliance, so it should be the same regardless of which shell you're using.
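If you don't have a compiler handy, getconf (a standard POSIX utility) reports the same limit directly:
$ getconf ARG_MAX
262144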
When the list of files to exclude can change often, you might want to exclude them with a sort of config file.
# cat /dontfind
/backup_DB0/databases/data/ems_data.dat
/backup_DB1/databases/log/ems_log.dat
# find ... | grep -vf /dontfind | xargs ls -l
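Putting it together, a minimal sketch of the whole pipeline (substitute whatever type and size tests you settle on for the elided part):
# find / -type f -size +1000000 | grep -vf /dontfind | xargs ls -lah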

How to cat all files with filename with certain words in unix

I have a bunch of files in one directory, and what I want to do is:
cat a-12-08.json b-12-08_others.json b-12-08-mian.json >> new.json
But there are too many files. Is there any command I can use to cat all files with "12-08" in their filename?
I found the solution below.
Here is the answer:
cat *12-08* >> new.json
You can use find to achieve what you want:
find . -type f -name '*12-08*' -exec sh -c 'grep -q "one" {} && cat {} >> /tmp/output.txt' \;
This way you only cat the files that contain the word you are looking for (here, the example word "one").
Use a wildcard name:
cat *12-08* >>new.json
This will work as long as there aren't so many files that you exceed the maximum length of a command line, ARG_MAX (2MB on the Linux systems I checked).
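If you ever do hit that limit, a find-based variant of the same idea avoids it, because -exec ... + batches the filenames into as many cat invocations as needed (note that -maxdepth, which restricts the search to the current directory, is a GNU/BSD extension):
find . -maxdepth 1 -type f -name '*12-08*' -exec cat {} + >> new.json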

How do I perform a recursive directory search for strings within files in a UNIX TRU64 environment?

Unfortunately, due to the limitations of our Unix Tru64 environment, I am unable to use grep's -r switch to perform my search for strings within files across multiple directories and subdirectories.
Ideally, I would like to pass two parameters. The first will be the directory my search is to start in. The second is a file containing a list of all the strings to be searched for. This list will consist of various directory path names and will include special characters:
ie:
/aaa/bbb/ccc
/eee/dddd/ggggggg/
etc..
The purpose of this exercise is to identify all shell scripts that may have specific hard coded path names identified in my list.
There was one example I found during my investigations that perhaps comes close, but I am not sure how to customize this to accept a file of string arguments:
eg: find etb -exec grep test {} \;
where 'etb' is the directory and 'test', a hard coded string to be searched.
This should do it:
find dir -type f -exec grep -F -f strings.txt {} \;
dir is the directory from which searching will commence
strings.txt is the file of strings to match, one per line
-F means treat search strings as literal rather than regular expressions
-f strings.txt means use the strings in strings.txt for matching
You can add -l to the grep switches if you just want filenames that match.
Footnote:
Some people prefer a solution involving xargs, e.g.
find dir -type f -print0 | xargs -0 grep -F -f strings.txt
which is perhaps a little more robust/efficient in some cases.
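For instance, with the example paths from the question in strings.txt, and a hypothetical /scripts directory as the starting point, adding -l lists just the offending scripts:
$ cat strings.txt
/aaa/bbb/ccc
/eee/dddd/ggggggg/
$ find /scripts -type f -exec grep -l -F -f strings.txt {} \;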
From the question, I assume we cannot use the GNU coreutils, and that egrep is not available.
I assume (for some reason) the system is broken, and escapes do not work as expected.
Under normal circumstances, grep -rf patternfile.txt /some/dir/ is the way to go.
a file containing a list of all the strings to be searched
Assumptions: GNU coreutils not available; grep -r does not work; handling of special characters is broken.
Now, do you have a working awk? No? It would make life so much easier, but let's stay on the safe side.
Assume: a working sed, and one of od OR hexdump OR xxd (from the vim package) is available.
Let's call this list patternfile.txt
1. Convert list into a regexp that grep likes
Example patternfile.txt contains
/foo/
/bar/doe/
/root/
(The example does not show the special character, but it's there.) We must turn it into something like
(/foo/|/bar/doe/|/root/)
Assuming the echo -en command is not broken, and that xxd, od, or hexdump is available,
Using hexdump
cat patternfile.txt |hexdump -ve '1/1 "%02x \n"' |tr -d '\n'
Using od
cat patternfile.txt |od -A none -t x1|tr -d '\n'
and pipe it into (common for both hexdump and od)
|sed 's:[ ]*0a[ ]*$::g'|sed 's: 0a:\\|:g' |sed 's:^[ ]*::g'|sed 's:^: :g' |sed 's: :\\x:g'
then pipe result into
|sed 's:^:\\(:g' |sed 's:$:\\):g'
and you have a regexp pattern that is escaped.
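For the example patternfile.txt above, the final result would look something like this (each byte rendered as a \xNN escape, the alternatives joined with \|):
\(\x2f\x66\x6f\x6f\x2f\|\x2f\x62\x61\x72\x2f\x64\x6f\x65\x2f\|\x2f\x72\x6f\x6f\x74\x2f\)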
2. Feed the escaped pattern to the broken regexp engine
Assuming the bare minimum of shell escaping is available,
we use grep "$(echo -en "ESCAPED_PATTERN")" to do our job.
3. To sum it up
Building an escaped regexp pattern (using hexdump as the example):
grep "$(echo -en "$( cat patternfile.txt |hexdump -ve '1/1 "%02x \n"' |tr -d '\n' |sed 's:[ ]*0a[ ]*$::g'|sed 's: 0a:\\|:g' |sed 's:^[ ]*::g'|sed 's:^: :g' |sed 's: :\\x:g'|sed 's:^:\\(:g' |sed 's:$:\\):g')")"
This will escape all characters and enclose them in \( \| \) so a regexp OR match is performed.
4. Recursive directory lookup
Under normal circumstances, even when grep -r is broken, find /dir/ -type f -exec grep 'PATTERN' {} \; should work.
Some may prefer xargs instead (unless you happen to have a buggy xargs).
We would prefer the find /somedir/ -type f -print0 |xargs -0 grep -f 'patternfile.txt' approach, but since
this is not available (for whatever valid reason),
we need to exec grep once per file, and this is normally the wrong way.
But let's do it.
Assume: find -type f works.
Assume: xargs is broken OR not available.
First, if you have a buggy pipe, it might not handle a large number of files.
So we avoid xargs on such systems (I know, I know, let's just pretend it is broken).
find /whatever/dir/to/start/looking/ -type f > list-of-all-file-to-search-for.txt
If your shell handles large lists nicely,
for file in $(cat list-of-all-file-to-search-for.txt) ; do grep REGEXP_PATTERN "$file" ;
done ; is a nice way to get by. Unfortunately, some systems do not like that,
and in that case you may need
cat list-of-all-file-to-search-for.txt | split -a 4 -d -l 2000 - file-smaller-chunk.part.
to turn it into smaller chunks. Now this is for a seriously broken system.
Then a for file in file-smaller-chunk.part.* ; do for single_line in $(cat "$file") ; do grep REGEXP_PATTERN "$single_line" ; done ; done ;
should work.
A
cat filelist.txt | while read file ; do grep REGEXP_PATTERN "$file" ; done ;
may be used as a workaround on some systems.
What if my shell does not handle quotes?
You may have to escape the file list beforehand.
This can be done much more nicely in awk, perl, whatever, but since we are restricting ourselves to
sed, let's do it.
We assume 0x27, the ' character, will actually work.
cat list-of-all-file-to-search-for.txt |sed 's#['\'']#'\''\\'\'\''#g'|sed 's:^:'\'':g'|sed 's:$:'\'':g'
The only time I had to use this was when feeding output into bash again.
What if my shell does not handle that?
xargs fails, grep -r fails, the shell's for loop fails.
Do we have other options? YES.
Escape all input suitably for your shell, and generate a script.
But you know what, I got bored, and writing automated scripts for csh just seems
wrong. So I am going to stop here.
Take-home note
Use the right tool for the job. Writing an interpreter in bc is perfectly
possible, but it is just plain wrong. Install coreutils, perl, or a better grep,
whatever; it makes life better.

Generate a random filename in unix shell

I would like to generate a random filename in a unix shell (say tcsh). The filename should consist of 32 random hex characters, e.g.:
c7fdfc8f409c548a10a0a89a791417c5
(to which I will add whatever is necessary). The point is being able to do it only in the shell, without resorting to a program.
Assuming you are on Linux, the following should work:
cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32
This is only pseudo-random if your system runs low on entropy, but is (on linux) guaranteed to terminate. If you require genuinely random data, cat /dev/random instead of /dev/urandom. This change will make your code block until enough entropy is available to produce truly random output, so it might slow down your code. For most uses, the output of /dev/urandom is sufficiently random.
If you are on OS X or another BSD, you need to modify it to the following:
cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-f0-9' | head -c 32
Why not use the unix mktemp command:
$ TMPFILE=`mktemp tmp.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX` && echo $TMPFILE
tmp.MnxEsPDsNUjrzDIiPhnWZKmlAXAO8983
One command, no pipe, no loop:
hexdump -n 16 -v -e '/1 "%02X"' -e '/16 "\n"' /dev/urandom
If you don't need the newline, for example when you're using it in a variable:
hexdump -n 16 -v -e '/1 "%02X"' /dev/urandom
Using "16" generates 32 hex digits.
uuidgen generates exactly this, except you have to remove hyphens. So I found this to be the most elegant (at least to me) way of achieving this. It should work on linux and OS X out of the box.
uuidgen | tr -d '-'
As you probably noticed from each of the answers, you generally have to "resort to a program".
However, without using any external executables, in Bash and ksh:
string=''; for i in {0..31}; do string+=$(printf "%x" $(($RANDOM%16)) ); done; echo $string
in zsh:
string=''; for i in {0..31}; do string+=$(printf "%x" $(($RANDOM%16)) ); dummy=$RANDOM; done; echo $string
Change the lower case x in the format string to an upper case X to make the alphabetic hex characters upper case.
Here's another way to do it in Bash but without an explicit loop:
printf -v string '%X' $(printf '%.2s ' $((RANDOM%16))' '{00..31})
In the following, "first" and "second" printf refer to the order in which they're executed rather than the order in which they appear in the line.
This technique uses brace expansion to produce a list of 32 random numbers mod 16 each followed by a space and one of the numbers in the range in braces followed by another space (e.g. 11 00). For each element of that list, the first printf strips off all but the first two characters using its format string (%.2) leaving either single digits followed by a space each or two digits. The space in the format string ensures that there is then at least one space between each output number.
The command substitution containing the first printf is not quoted so that word splitting is performed and each number goes to the second printf as a separate argument. There, the numbers are converted to hex by the %X format string and they are appended to each other without spaces (since there aren't any in the format string) and the result is stored in the variable named string.
When printf receives more arguments than its format string accounts for, the format is applied to each argument in turn until they are all consumed. If there are fewer arguments, the unmatched format string (portion) is ignored, but that doesn't apply in this case.
I tested it in Bash 3.2, 4.4 and 5.0-alpha. But it doesn't work in zsh (5.2) or ksh (93u+) because RANDOM only gets evaluated once in the brace expansion in those shells.
Note that because of using the mod operator on a value that ranges from 0 to 32767 the distribution of digits using the snippets could be skewed (not to mention the fact that the numbers are pseudo random in the first place). However, since we're using mod 16 and 32768 is divisible by 16, that won't be a problem here.
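As a quick empirical check (Bash; with a uniform distribution each hex digit should show up roughly 6250 times):
for i in {1..100000}; do printf '%x\n' $((RANDOM%16)); done | sort | uniq -c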
In any case, the correct way to do this is using mktemp as in Oleg Razgulyaev's answer.
Tested in zsh, should work with any BASH compatible shell!
#!/bin/zsh
SUM=`md5sum <<EOF
$RANDOM
EOF`
FN=`echo $SUM | awk '// { print $1 }'`
echo "Your new filename: $FN"
Example:
$ zsh ranhash.sh
Your new filename: 2485938240bf200c26bb356bbbb0fa32
$ zsh ranhash.sh
Your new filename: ad25cb21bea35eba879bf3fc12581cc9
Yet another way[tm].
R=$(echo $RANDOM $RANDOM $RANDOM $RANDOM $RANDOM | md5 | cut -c -8)
FILENAME="abcdef-$R"
This answer is very similar to fmark's, so I cannot really take credit for it, but I found the cat and tr command combinations quite slow, and I found this version quite a bit faster. You need hexdump.
hexdump -e '/1 "%02x"' -n32 < /dev/urandom
Another thing you can do is run the date command as follows:
date +%S%N
This reads the nanosecond-resolution time, which adds a lot of randomness to the result.
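For example (a sketch; note that %N is a GNU date extension and is not available on all unixes):
FN="file-$(date +%S%N)"
echo $FN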
The first answer is good, but why fork cat if it's not required?
tr -dc 'a-f0-9' < /dev/urandom | head -c32
Grab 16 bytes from /dev/random, convert them to hex, take the first line, remove the address, remove the spaces.
head /dev/random -c16 | od -tx1 -w16 | head -n1 | cut -d' ' -f2- | tr -d ' '
Assuming that "without resorting to a program" means "using only programs that are readily available", of course.
If you have openssl on your system, you can use it to generate random hex strings (it can also do -base64) of a defined length. I found it pretty simple and usable in one-line cron jobs.
openssl rand -hex 32
8c5a7515837d7f0b19e7e6fa4c448400e70ffec88ecd811a3dce3272947cb452
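Note that the argument is the number of random bytes, and each byte becomes two hex digits, so the 32-hex-character string asked for in the question would be:
openssl rand -hex 16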
Hoping to add a (maybe) better solution to this topic.
Notice: this only works with bash 4 and some implementations of mktemp (for example, the GNU one).
Try this
fn=$(mktemp -u -t 'XXXXXX')
echo ${fn/\/tmp\//}
This one is twice as fast as head /dev/urandom | tr -cd 'a-f0-9' | head -c 32, and eight times as fast as cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32.
Benchmark:
With mktemp:
#!/bin/bash
# a.sh
for (( i = 0; i < 1000; i++ ))
do
fn=$(mktemp -u -t 'XXXXXX')
echo ${fn/\/tmp\//} > /dev/null
done
time ./a.sh
./a.sh 0.36s user 1.97s system 99% cpu 2.333 total
And the other:
#!/bin/bash
# b.sh
for (( i = 0; i < 1000; i++ ))
do
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 32 > /dev/null
done
time ./b.sh
./b.sh 0.52s user 20.61s system 113% cpu 18.653 total
If you are on Linux, then Python usually comes pre-installed. So you can go for something like the following (the parenthesized print works in both Python 2 and 3):
python -c "import uuid; print(str(uuid.uuid1()))"
If you don't like the dashes, then use the replace function as shown below:
python -c "import uuid; print(str(uuid.uuid1()).replace('-',''))"

unix - how to deal with too many args for cat

I have a bunch of files in a directory, each with one line of text. I want to cat all of these files together (all the one liners) into a single, large file. However, when I use cat there are too many arguments. How can I get around this?
bash$ (ls | xargs cat) > /tmp/some_big_file
Try using -n with xargs to limit the number of arguments passed to each invocation of cat:
find . -type f | xargs -n 100 cat >> out
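Alternatively, POSIX find can batch the arguments itself with -exec ... +, which invokes cat as few times as possible while staying under the system's argument-length limit:
find . -type f -exec cat {} + > out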
look into xargs
find . <whatever> | xargs cat > outfile.txt
Replace the find . <whatever> bit with your own way of getting all the files
Replace outfile.txt with your output file.
