I have a file that contains the package name and version number. The contents of the first file look like this:
File1:
<Line contains the name of product1>
package_name0_9_8 >= 1.2.3x-4.5.6
package_name0_9_8-32bit >= 3.6.1g-3.5.1
package_name0_9_8-xx >= 6.3.2v-3.0.4
<Line contains the name of product2>
anotherpackage_name0_9_8 >= 3.5.6u-3.6.5
And,
File2.xml:
<package name="package_name0_9_8" version="1.2.3x-4.4.4"/>
<package name="package_name0_9_8-32bit" version="3.6.1g-3.4.0"/>
.
.
Is there a way to check whether each package_name present in File1 also exists in File2, and whether the corresponding version of that package in File1 matches the version in File2?
Frankly, I am pretty weak at stringing together the grep and awk commands and the options needed here. Please help out.
for a in $(sed -n '/>=/p' File1.txt | grep -o '^[^ ]*'); do for b in $(sed -n "/^$a /{s/.*>=\(.*\)$/\1/p}" File1.txt); do ((! $(grep -c "$a.*$b" File2.txt))) && (echo "$a $b" >> missing_pkgs.txt); done; done;
This is a quick one-liner; you could print the output a bit prettier.
The way this works: a nested for loop grabs both pieces into separate variables (you could do that with read and put them in one loop if you want), then counts the occurrences in the second file with grep. Whenever the count is zero, the ! reverses the value, making the (( )) test true, and the missing package is echoed to missing_pkgs.txt.
Here is another quick one-liner that does the same thing, except more efficiently, with one loop and variables loaded via read:
while read each; do read a b < <(echo $each) && ((! $(grep -c "$a.*$b" File2.txt))) && (echo "$a $b" >> missing_pkgs.txt); done < <(awk '/>=/{ print $1" "$3 }' File1.txt)
more simplified:
while read a b; do ((! $(grep -c "$a.*$b" File2.txt))) && (echo "$a $b" >> missing_pkgs.txt); done < <(awk '/>=/{ print $1" "$3 }' File1.txt)
sed -n 's².*²s#<package name="\(&"/>#\1 Present#p²;s/ *>= */\)" *version="/p' File1 > /tmp/File1.sed
sed -n -f /tmp/File1.sed File2
rm /tmp/File1.sed
It's not one instruction like awk could manage, but it does the job (POSIX version, so use --posix on GNU sed).
You could change the output message, which is the \1 Present text, where \1 will be the package name (with a few modifications, the version could also be used).
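For the sample File1 above, the generated /tmp/File1.sed would contain one match-and-report command per ">=" line, roughly like this (shown here only to illustrate the mechanism):
s#<package name="\(package_name0_9_8\)" *version="1.2.3x-4.5.6"/>#\1 Present#p
s#<package name="\(package_name0_9_8-32bit\)" *version="3.6.1g-3.5.1"/>#\1 Present#p
s#<package name="\(package_name0_9_8-xx\)" *version="6.3.2v-3.0.4"/>#\1 Present#p
s#<package name="\(anotherpackage_name0_9_8\)" *version="3.5.6u-3.6.5"/>#\1 Present#p
Running sed -n -f /tmp/File1.sed File2 then prints e.g. package_name0_9_8 Present only for exact name-and-version matches (so with the sample File2 above, where the versions differ, nothing is printed).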
It looks like you already got a much shorter solution in a format closer to what you desired. However, since I asked if a Python solution would work, and you said yes, check out the code here:
http://pastebin.com/F5LYrmea
(I haven't debugged it more than a little, but it seems to work on at least a little more than your example files. I released the code to the public domain. CC-BY-SA isn't a software license, according to the makers of CC; so, that's why I didn't post it here, as posting it here would give it that license. Plus, you get syntax highlighting specific to Python at the link provided.)
Basically, it's a lot of complicated text parsing. Not much of an algorithm to explain. It gets the contents of both files, strips out the packages, their versions and the operands (puts all those in a dictionary for use later), and loops through lines of the other file and compares versions; then it tells you which ones match and which ones don't.
I need to create new files with extensions like file.1, file.2, file.3, and then check whether files with certain numbers exist and create file.(n+1), where n is the number of the highest existing file. I was trying to get the extensions using basename, but it doesn't want to handle more than one file:
file=`basename $file.*`
ext=${file##*.}
It only works when I input the whole file name, like $file.3.
If the filenames are guaranteed not to have newline characters in them, you can, for example, use standard unix text processing tools:
printf '%s\n' file.* | #full list
sed 's/.*\.//' | #extensions
grep '^[0-9][0-9]*$' | #numerical extensions
awk '{ if($0>m) m=$0} END{ print m }' #get maximum
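For example (a quick check, relying on awk comparing these numeric strings numerically, so 10 beats 2):
$ ls
file.1  file.10  file.2  notes.txt
$ printf '%s\n' file.* | sed 's/.*\.//' | grep '^[0-9][0-9]*$' | awk '{ if($0>m) m=$0} END{ print m }'
10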
Here's my take on this.
You can do this entirely in standard awk.
$ awk '{ext=FILENAME;sub(/.*\./,"",ext)} ext>n&&ext~/^[0-9]+$/{n=ext}{nextfile} END {print n}' *.*
Broken out for easier reading:
$ awk '
{
# Capture the extension...
ext=FILENAME
sub(/.*\./,"",ext)
}
# Then, if we have a numeric extension that is bigger than "n"...
ext > n && ext ~ /^[0-9]+$/ {
# let "n" be that extension.
n=ext
}
{
# We aren't actually interested in the contents of this file, so move on.
nextfile
}
# No more files? Print our result.
END {print n}
' *.*
The idea here is that we'll step through the list of filenames and let awk do ALL the processing to capture and "sort" the extensions. (We're not really sorting, we're just recording the highest number as we pass through the files.)
There are a few provisos with this solution:
This only works if all the files have a non-zero length. Technically awk conditions are being compared on "lines of the file", so if there are no lines, awk will pass right by that file.
You don't really need to use the ext variable, you can modify FILENAME directly. I included it for improved readability.
The nextfile command is fairly standard, but not universal. If you have a very old machine, or are running an esoteric variety of unix, nextfile may not be included. (I don't expect this to be a problem.)
Another alternative, which might be easier for you, would be to implement the same logic directly in POSIX shell:
$ n=0; for f in *.*; do ext=${f##*.}; if expr "$ext" : '[0-9][0-9]*$' >/dev/null && [ "$ext" -gt "$n" ]; then n="$ext"; fi; done; echo "$n"
Or, again broken out for easier reading (or scripting):
n=0
for f in *.*; do
ext=${f##*.}
if expr "$ext" : '[0-9][0-9]*$' >/dev/null && [ "$ext" -gt "$n" ]; then
n="$ext"
fi
done
echo "$n"
This steps through all files using a for loop, captures the extension, makes sure it's numeric, determines whether it's greater than "n" and records it if it is, then prints the result.
It requires no pipes and no external tools except expr, which is a POSIX.1 tool available on every system.
One proviso for this solution is that if you have NO files with extensions (i.e. *.* returns no files), this script will erroneously report that the highest numbered extension is 0. You can of course handle that easily enough, but I thought I should mention it.
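For instance, a minimal guard using bash's nullglob (a sketch; nullglob is bash-specific, so this departs from pure POSIX sh):
shopt -s nullglob   # make *.* expand to an empty list instead of the literal pattern
n=0 found=0
for f in *.*; do
    found=1
    ext=${f##*.}
    if expr "$ext" : '[0-9][0-9]*$' >/dev/null && [ "$ext" -gt "$n" ]; then
        n="$ext"
    fi
done
if [ "$found" -eq 1 ]; then echo "$n"; else echo "no matching files" >&2; fi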
Thanks for all the answers. I've come up with a quite similar and a bit simpler idea, which I'd like to present:
n=0
for i in file.*; do
    #reading the extensions
    ext=${i##*.}
    if [ "$ext" -gt "$n" ]; then
        #increasing n
        n=$((n+1))
    fi
done
Then, if we want to get the number exceeding n by one:
until [[ $a -gt "$n" ]]; do
a=$((a+1))
done
and finally a is one number bigger than the number of file extensions. So if there are three files (file.1, file.2, file.3), the returned value will be 4.
I have spent some time considering how to tackle this, but I'm not sure how, and my use of unix is fairly limited so far.
I have a text file, lets give it a name "Text.txt", which contains lots of information. Let's say it contains:
SomethingA: aValue
SomethingB: bValue
SomethingC: cValue
SomethingD: dValue
SomethingD: anotherDValueThisTime
SomethingA: aValueToIgnore
I want to search through "Text.txt", and find some values, then put the values in a new file, output.txt.
This gets a little bit more tricky, though, as what I am trying to do is get the first SomethingA value, and then every SomethingD value which occurs.
So the output in "output.txt" should be:
aValue
dValue
anotherDValueThisTime
The second "SomethingA" value wants to be ignored as this is not the first "SomethingA" value.
I imagine the logic to be something like:
Find SomethingA > output.txt
Find ALL SomethingD's >> output.txt
But I just can't quite get it.
Any help is much appreciated!
awk is ideal for this:
awk '/^SomethingA/ && ! a++ || /^SomethingD/ { print $2 }' FS=: text.txt > output.txt
This is a little sloppy, but you can be more precise with:
awk '$1 == "SomethingA" && ! a++ || $1 == "SomethingD" { print $2 }' FS=: text.txt > output.txt
Unfortunately, that requires a fixed string for the keys. If you want a regex, you can do:
awk 'match($1, "pattern") && ...
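For instance, an untested sketch of the full regex form (the anchors here are just an assumption about what you'd want to match):
awk 'match($1, "^SomethingA") && ! a++ || match($1, "^SomethingD") { print $2 }' FS=: text.txt > output.txt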
grep -m 1 somethingA inputfile.txt >outputfile.txt
grep somethingD inputfile.txt >>outputfile.txt
grep option -m sets the maximum number of matches you want to get.
>> appends to a file rather than overwriting it like > does.
I think what you need is the grep command with the -o option. It works like this:
grep -o -e 'pattern1' -e 'pattern2' /tmp/1 >> /tmp/2
For grep there's a fixed string option, -F (fgrep) to turn off regex interpretation of the search string.
Is there a similar facility for sed? I couldn't find anything in the man page. A recommendation of another gnu/linux tool would also be fine.
I'm using sed for the find and replace functionality: sed -i "s/abc/def/g"
Do you have to use sed? If you're writing a bash script, you can do
#!/bin/bash
pattern='abc'
replace='def'
file=/path/to/file
tmpfile="${TMPDIR:-/tmp}/$( basename "$file" ).$$"
while IFS= read -r line
do
echo "${line//$pattern/$replace}"
done < "$file" > "$tmpfile" && mv "$tmpfile" "$file"
With an older Bourne shell (such as ksh88 or POSIX sh), you may not have that cool ${var/pattern/replace} structure, but you do have ${var#pattern} and ${var%pattern}, which can be used to split the string up and then reassemble it. If you need to do that, you're in for a lot more code - but it's really not too bad.
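For instance, here is a minimal sketch of a replace-all built from only those POSIX expansions (the function name replace_all is my own invention; treat it as untested):
replace_all() {
    in=$1 pattern=$2 replace=$3 out=''
    while [ -n "$in" ]; do
        case $in in
            *"$pattern"*)
                out=$out${in%%"$pattern"*}$replace   # keep text before the match, append replacement
                in=${in#*"$pattern"}                 # continue after the match
                ;;
            *)
                out=$out$in
                in=''
                ;;
        esac
    done
    printf '%s\n' "$out"
}
replace_all 'hello abc world abc' 'abc' 'def'   # prints: hello def world def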
If you're not in a shell script already, you could pretty easily make the pattern, replace, and filename parameters and just call this. :)
PS: The ${TMPDIR:-/tmp} structure uses $TMPDIR if that's set in your environment, or uses /tmp if the variable isn't set. I like to stick the PID of the current process on the end of the filename in the hopes that it'll be slightly more unique. You should probably use mktemp or similar in the "real world", but this is ok for a quick example, and the mktemp binary isn't always available.
Option 1) Escape regexp characters. E.g. sed 's/\$0\.0/0/g' will replace all occurrences of $0.0 with 0.
Option 2) Use perl -p -e in conjunction with quotemeta. E.g. perl -p -e 's/\./,/gi' will replace all occurrences of . with ,.
You can use option 2 in scripts like this:
SEARCH="C++"
REPLACE="C#"
cat $FILELIST | perl -p -e "s/\\Q$SEARCH\\E/$REPLACE/g" > $NEWLIST
If you're not opposed to Ruby or long lines, you could use this:
alias replace='ruby -e "File.write(ARGV[0], File.read(ARGV[0]).gsub(ARGV[1]) { ARGV[2] })"'
replace test3.txt abc def
This loads the whole file into memory, performs the replacements and saves it back to disk. Should probably not be used for massive files.
If you don't want to escape your string, you can reach your goal in 2 steps:
fgrep the line (getting the line number) you want to replace, and
afterwards use sed for replacing this line.
E.g.
#!/bin/sh
PATTERN='foo*[)*abc' # we need it literal
LINENUMBER="$( fgrep -n "$PATTERN" "$FILE" | cut -d':' -f1 )"
NEWSTRING='my new string'
sed -i "${LINENUMBER}s/.*/$NEWSTRING/" "$FILE"
You can do this in two lines of bash code if you're OK with reading the whole file into memory. This is quite flexible -- the pattern and replacement can contain newlines to match across lines if needed. It also preserves any trailing newline or lack thereof, which a simple loop with read does not.
mapfile -d '' < file
printf '%s' "${MAPFILE//"$pat"/"$rep"}" > file
For completeness, if the file can contain null bytes (\0), we need to extend the above, and it becomes
mapfile -d '' < <(cat file; printf '\0')
last=${MAPFILE[-1]}; unset "MAPFILE[-1]"
printf '%s\0' "${MAPFILE[#]//"$pat"/"$rep"}" > file
printf '%s' "${last//"$pat"/"$rep"}" >> file
perl -i.orig -pse 'while (($i = index($_,$s)) >= 0) { substr($_,$i,length($s), $r)}' -- \
-s='$_REQUEST['\'old\'']' -r='$_REQUEST['\'new\'']' sample.txt
-i.orig in-place modification with backup.
-p print lines from the input file by default
-s enable rudimentary parsing of command line arguments
-e run this script
index($_,$s) search for the $s string
substr($_,$i,length($s), $r) replace the string
while (($i = index($_,$s)) >= 0) repeat as long as $s is still found
-- end of perl parameters
-s='$_REQUEST['\'old\'']', -r='$_REQUEST['\'new\'']' - set $s,$r
You still need to "escape" ' chars but the rest should be straight forward.
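To see the -s switch parsing in isolation, a tiny demo:
perl -se 'print "$s\n"' -- -s=hello   # prints: hello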
Note: this started as an answer to How to pass special character string to sed hence the $_REQUEST['old'] strings, however this question is a bit more appropriately formulated.
You should be using replace instead of sed.
From the man page:
The replace utility program changes strings in place in files or on the
standard input.
Invoke replace in one of the following ways:
shell> replace from to [from to] ... -- file_name [file_name] ...
shell> replace from to [from to] ... < file_name
from represents a string to look for and to represents its replacement.
There can be one or more pairs of strings.
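So for the sed example above (sed -i "s/abc/def/g"), the equivalent would be something like this (the file name is just a placeholder):
replace abc def -- myfile.txt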
Unfortunately, due to the limitations of our Unix Tru64 environment, I am unable to use the grep -r switch to perform my search for strings within files across multiple directories and subdirectories.
Ideally, I would like to pass two parameters. The first will be the directory where I want my search to start; the second is a file containing the list of all the strings to be searched for. This list will consist of various directory path names and will include special characters:
ie:
/aaa/bbb/ccc
/eee/dddd/ggggggg/
etc..
The purpose of this exercise is to identify all shell scripts that may have specific hard coded path names identified in my list.
There was one example I found during my investigations that perhaps comes close, but I am not sure how to customize this to accept a file of string arguments:
eg: find etb -exec grep test {} \;
where 'etb' is the directory and 'test', a hard coded string to be searched.
This should do it:
find dir -type f -exec grep -F -f strings.txt {} \;
dir is the directory from which searching will commence
strings.txt is the file of strings to match, one per line
-F means treat search strings as literal rather than regular expressions
-f strings.txt means use the strings in strings.txt for matching
You can add -l to the grep switches if you just want filenames that match.
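For example, to list just the names of the files that match:
find dir -type f -exec grep -lF -f strings.txt {} \;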
Footnote:
Some people prefer a solution involving xargs, e.g.
find dir -type f -print0 | xargs -0 grep -F -f strings.txt
which is perhaps a little more robust/efficient in some cases.
From the question, I assume we cannot use the GNU coreutils, and egrep is not available.
I assume (for some reason) the system is broken, and escapes do not work as expected.
Under normal circumstances, grep -rf patternfile.txt /some/dir/ is the way to go.
a file containing a list of all the strings to be searched
Assumptions: GNU coreutils not available; grep -r does not work; handling of special characters is broken.
Now, do you have a working awk? No? It would make life so much easier, but let's stay on the safe side.
Assume: a working sed, and that one of od OR hexdump OR xxd (from the vim package) is available.
Let's call the string list patternfile.txt.
1. Convert list into a regexp that grep likes
Example patternfile.txt contains
/foo/
/bar/doe/
/root/
(The example does not show the special characters, but they are there.) We must turn it into something like:
(/foo/|/bar/doe/|/root/)
Assuming the echo -en command is not broken, and xxd, od, or hexdump is available:
Using hexdump
cat patternfile.txt |hexdump -ve '1/1 "%02x \n"' |tr -d '\n'
Using od
cat patternfile.txt |od -A none -t x1|tr -d '\n'
and pipe it into (common for both hexdump and od)
|sed 's:[ ]*0a[ ]*$::g'|sed 's: 0a:\\|:g' |sed 's:^[ ]*::g'|sed 's:^: :g' |sed 's: :\\x:g'
then pipe the result into
|sed 's:^:\\(:g' |sed 's:$:\\):g'
and you have a regexp pattern that is escaped.
2. Feed the escaped pattern into broken regexp
Assuming the bare minimum shell escape is available,
we use grep "$(echo -en "ESCAPED_PATTERN" )" to do our job.
3. To sum it up
Building an escaped regexp pattern (using hexdump as an example):
grep "$(echo -en "$( cat patternfile.txt |hexdump -ve '1/1 "%02x \n"' |tr -d '\n' |sed 's:[ ]*0a[ ]*$::g'|sed 's: 0a:\\|:g' |sed 's:^[ ]*::g'|sed 's:^: :g' |sed 's: :\\x:g'|sed 's:^:\\(:g' |sed 's:$:\\):g')")"
will escape all characters and enclose it with (|) brackets so a regexp OR match will be performed.
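To illustrate with the example patternfile.txt above, the pipeline emits hex escapes along the lines of
\(\x2f\x66\x6f\x6f\x2f\|\x2f\x62\x61\x72\x2f\x64\x6f\x65\x2f\|\x2f\x72\x6f\x6f\x74\x2f\)
which echo -en expands back to the literal alternation
\(/foo/\|/bar/doe/\|/root/\)
before grep ever sees it.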
4. Recursive directory lookup
Under normal circumstances, even when grep -r is broken, find /dir/ -type f -exec grep 'pattern' {} \; should work.
Some may prefer xargs instead (unless you happen to have a buggy xargs).
We would prefer the find /somedir/ -type f -print0 | xargs -0 grep -f 'patternfile.txt' approach, but since
this is not available (for whatever valid reason),
we need to exec grep for each file, and this is normally the wrong way.
But let's do it.
Assume: find -type f works.
Assume: xargs is broken OR not available.
First, if you have a buggy pipe, it might not handle a large number of files,
so we avoid xargs on such systems (I know, I know, just let's pretend it is broken).
find /whatever/dir/to/start/looking/ -type f > list-of-all-file-to-search-for.txt
If your shell handles large lists nicely,
for file in $(cat list-of-all-file-to-search-for.txt) ; do grep REGEXP_PATTERN "$file" ;
done ; is a nice way to get by. Unfortunately, some systems do not like that,
and in that case, you may require
cat list-of-all-file-to-search-for.txt | split -a 4 -d -l 2000 - file-smaller-chunk.part.
to turn it into smaller chunks. Now this is for a seriously broken system.
Then a for file in file-smaller-chunk.part.* ; do for single_line in $(cat "$file") ; do grep REGEXP_PATTERN "$single_line" ; done ; done ;
should work.
A
cat filelist.txt | while read file ; do grep REGEXP_PATTERN "$file" ; done ;
may be used as a workaround on some systems.
What if my shell does not handle quotes?
You may have to escape the file list beforehand.
It can be done much more nicely in awk, perl, whatever, but since we restrict ourselves to sed, let's do it.
We assume 0x27, the code for the ' character, will actually work.
cat list-of-all-file-to-search-for.txt |sed 's#['\'']#'\''\\'\'\''#g'|sed 's:^:'\'':g'|sed 's:$:'\'':g'
The only time I had to use this was when feeding output into bash again.
What if my shell does not handle that?
xargs fails, grep -r fails, the shell's for loop fails.
Do we have other options? YES.
Escape all input suitably for your shell, and make a script.
But you know what, I got bored, and writing automated scripts for csh just seems
wrong. So I am going to stop here.
Take home note
Use the right tool for the job. Writing an interpreter in bc is perfectly
possible, but it is just plain wrong. Install coreutils, perl, or a better grep;
whatever makes life a better thing.
I would like to generate a random filename in a unix shell (say tcsh). The filename should consist of 32 random hex characters, e.g.:
c7fdfc8f409c548a10a0a89a791417c5
(to which I will add whatever is necessary). The point is being able to do it only in the shell, without resorting to a program.
Assuming you are on a linux, the following should work:
cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32
This is only pseudo-random if your system runs low on entropy, but is (on linux) guaranteed to terminate. If you require genuinely random data, cat /dev/random instead of /dev/urandom. This change will make your code block until enough entropy is available to produce truly random output, so it might slow down your code. For most uses, the output of /dev/urandom is sufficiently random.
If you on OS X or another BSD, you need to modify it to the following:
cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-f0-9' | head -c 32
Why not use the unix mktemp command?
$ TMPFILE=`mktemp tmp.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX` && echo $TMPFILE
tmp.MnxEsPDsNUjrzDIiPhnWZKmlAXAO8983
One command, no pipe, no loop:
hexdump -n 16 -v -e '/1 "%02X"' -e '/16 "\n"' /dev/urandom
If you don't need the newline, for example when you're using it in a variable:
hexdump -n 16 -v -e '/1 "%02X"' /dev/urandom
Using "16" generates 32 hex digits.
uuidgen generates exactly this, except you have to remove hyphens. So I found this to be the most elegant (at least to me) way of achieving this. It should work on linux and OS X out of the box.
uuidgen | tr -d '-'
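For example, to use it directly as a file name (a minimal sketch; the path and extension are just placeholders):
fname="/tmp/$(uuidgen | tr -d '-').tmp"
touch "$fname"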
As you probably noticed from each of the answers, you generally have to "resort to a program".
However, without using any external executables, in Bash and ksh:
string=''; for i in {0..31}; do string+=$(printf "%x" $(($RANDOM%16)) ); done; echo $string
in zsh:
string=''; for i in {0..31}; do string+=$(printf "%x" $(($RANDOM%16)) ); dummy=$RANDOM; done; echo $string
Change the lower case x in the format string to an upper case X to make the alphabetic hex characters upper case.
Here's another way to do it in Bash but without an explicit loop:
printf -v string '%X' $(printf '%.2s ' $((RANDOM%16))' '{00..31})
In the following, "first" and "second" printf refers to the order in which they're executed rather than the order in which they appear in the line.
This technique uses brace expansion to produce a list of 32 random numbers mod 16, each followed by a space and one of the numbers in the range in braces, followed by another space (e.g. 11 00). For each element of that list, the first printf strips off all but the first two characters using its format string (%.2s), leaving either single digits followed by a space each, or two digits. The space in the format string ensures that there is then at least one space between each output number.
The command substitution containing the first printf is not quoted so that word splitting is performed and each number goes to the second printf as a separate argument. There, the numbers are converted to hex by the %X format string and they are appended to each other without spaces (since there aren't any in the format string) and the result is stored in the variable named string.
When printf receives more arguments than its format string accounts for, the format is applied to each argument in turn until they are all consumed. If there are fewer arguments, the unmatched format string (portion) is ignored, but that doesn't apply in this case.
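You can see that argument recycling in isolation:
$ printf '%X' 10 11 12; echo
ABC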
I tested it in Bash 3.2, 4.4 and 5.0-alpha. But it doesn't work in zsh (5.2) or ksh (93u+) because RANDOM only gets evaluated once in the brace expansion in those shells.
Note that because of using the mod operator on a value that ranges from 0 to 32767, the distribution of digits using these snippets could be skewed (not to mention the fact that the numbers are pseudo-random in the first place). However, since we're using mod 16 and 32768 is divisible by 16 (32768 = 16 × 2048, so each digit 0-15 corresponds to exactly 2048 values), that won't be a problem here.
In any case, the correct way to do this is using mktemp as in Oleg Razgulyaev's answer.
Tested in zsh, should work with any BASH compatible shell!
#!/bin/zsh
SUM=`md5sum <<EOF
$RANDOM
EOF`
FN=`echo $SUM | awk '// { print $1 }'`
echo "Your new filename: $FN"
Example:
$ zsh ranhash.sh
Your new filename: 2485938240bf200c26bb356bbbb0fa32
$ zsh ranhash.sh
Your new filename: ad25cb21bea35eba879bf3fc12581cc9
Yet another way[tm].
R=$(echo $RANDOM $RANDOM $RANDOM $RANDOM $RANDOM | md5 | cut -c -8)
FILENAME="abcdef-$R"
This answer is very similar to fmark's, so I cannot really take credit for it, but I found the cat and tr command combinations quite slow, and this version is quite a bit faster. You need hexdump.
hexdump -e '/1 "%02x"' -n32 < /dev/urandom
Another thing you can do is run the date command as follows:
date +%S%N
This reads the nanosecond time, and the result adds a lot of randomness.
The first answer is good, but why fork cat if it's not required?
tr -dc 'a-f0-9' < /dev/urandom | head -c32
Grab 16 bytes from /dev/random, convert them to hex, take the first line, remove the address, remove the spaces.
head /dev/random -c16 | od -tx1 -w16 | head -n1 | cut -d' ' -f2- | tr -d ' '
Assuming that "without resorting to a program" means "using only programs that are readily available", of course.
If you have openssl on your system, you can use it to generate random hex (it can also do -base64) strings of a defined length. Note that -hex takes a byte count, so -hex 32 yields 64 hex digits; use -hex 16 if you want exactly 32. I found it pretty simple and usable in one-line cron jobs.
openssl rand -hex 32
8c5a7515837d7f0b19e7e6fa4c448400e70ffec88ecd811a3dce3272947cb452
Hoping to add a (maybe) better solution to this topic.
Notice: this only works with bash 4 and some implementations of mktemp (for example, the GNU one).
Try this
fn=$(mktemp -u -t 'XXXXXX')
echo ${fn/\/tmp\//}
This one is twice as fast as head /dev/urandom | tr -cd 'a-f0-9' | head -c 32, and eight times as fast as cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32.
Benchmark:
With mktemp:
#!/bin/bash
# a.sh
for (( i = 0; i < 1000; i++ ))
do
fn=$(mktemp -u -t 'XXXXXX')
echo ${fn/\/tmp\//} > /dev/null
done
time ./a.sh
./a.sh 0.36s user 1.97s system 99% cpu 2.333 total
And the other:
#!/bin/bash
# b.sh
for (( i = 0; i < 1000; i++ ))
do
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 32 > /dev/null
done
time ./b.sh
./b.sh 0.52s user 20.61s system 113% cpu 18.653 total
If you are on Linux, then Python will come pre-installed. So you can go for something similar to the below:
python -c "import uuid; print str(uuid.uuid1())"
If you don't like the dashes, then use the replace function as shown below:
python -c "import uuid; print str(uuid.uuid1()).replace('-','')"