Unix: Using filename from another file

A basic Unix question.
I have a script which counts the number of records in a delta file.
awk '{
n++
} END {
if(n >= 1000) print "${completeFile}"; else print "${deltaFile}";
}' <${deltaFile} >${fileToUse}
Then, depending on the IF condition, I want to process the appropriate file:
cut -c2-11 < ${fileToUse}
But how do I use the contents of the file as the filename itself?
And if there are any tweaks to be made, feel free.
Thanks in advance
Cheers
Simon

To use as a filename the contents of a file which is itself identified by a variable (as asked):
cut -c2-11 <"$( cat "$fileToUse" )"
# or in zsh just
cut -c2-11 <"$( < "$fileToUse" )"
unless the filename stored in the file ends with one or more newline character(s) (which people rarely do, because such names are quite awkward and inconvenient), in which case use something like:
read -rdX var <"$fileToUse"; cut -c2-11 < "${var%?}"
# where X is a character that doesn't occur in the filename
# maybe something like $'\x1f'
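For context, the reason trailing newlines are special is that command substitution strips all of them. A small illustration (the temporary path is just an example):
printf 'weird\n\n' > /tmp/namefile     # a hypothetical stored filename that ends in a newline
v="$(cat /tmp/namefile)"               # $v is now just "weird"; the trailing newlines are gone
printf '%s' "$v" | od -c               # od -c shows exactly what survived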
Tweaks: your awk prints the literal text ${completeFile} or ${deltaFile} (because they are inside the single-quoted awk script), not the value of either variable. If you actually want the value, as I'd expect from your description, pass the shell variables to awk variables like this:
awk -vf="$completeFile" -vd="$deltaFile" '{n++} END{if(n>=1000)print f; else print d}' <"$deltaFile"
# the " around $var can be omitted if the value contains no whitespace and no glob chars
# people _often_ but not always choose filenames that satisfy this
# and they must not contain backslash in any case
or export the shell vars as env vars (if they aren't already) and access them like
awk '{n++} END{if(n>=1000) print ENVIRON["completeFile"]; else print ENVIRON["deltaFile"]}' <"$deltaFile"
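The variables do need to be exported for ENVIRON to see them; a quick way to check that awk really gets the value (just a sketch):
export completeFile deltaFile
awk 'BEGIN{print ENVIRON["completeFile"]}'   # prints the filename if the export worked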
Also you don't need your own counter, awk already counts input records
awk -vf=... -vd=... 'END{if(NR>=1000)print f;else print d}' <...
or more briefly
awk -vf=... -vd=... 'END{print (NR>=1000?f:d)}' <...
or using a file argument instead of redirection so the name is available to the script
awk -vf="$completeFile" 'END{print (NR>=1000?f:FILENAME)}' "$deltaFile" # no <
and barring trailing newlines as above you don't need an intermediate file at all, just
cut -c2-11 <"$( awk -vf="$completeFile" 'END{print (NR>=1000?f:FILENAME)}' "$deltaFile" )"
Or you don't really need awk at all; wc can do the counting, and any POSIX or classic shell can do the comparison:
if [ $(wc -l <"$deltaFile") -ge 1000 ]; then c="$completeFile"; else c="$deltaFile"; fi
cut -c2-11 <"$c"

Related

Adding previous lines to current line unless pattern is found in unix shell script

I am facing an issue while adding previous lines to current line for a pattern. I have a 43 MB file in unix. The snippet is shown below:
AAA7034 new value and a old value
A
78698 new line and old value
BCA0987 old value and new value
new value
What I want is :
AAA7034 new value and a old value A 78698 new line and old value
BCA0987 old value and new value new value
That means I have to keep appending lines until the next pattern is found (the first pattern is AAA and the next pattern is BCA).
Because of the large file size I'm not sure whether awk/sed will work. Any bash script is appreciated.
You can combine all the patterns and perform a regex match. Try something like this (it is just a rough sketch; you may need to trim the output):
#!/bin/bash
patterns="^(AAA|BCS|BABA|BCA)"
file="$1"
while IFS= read -r line; do
if [[ "$line" =~ $patterns ]] ; then
echo # prints new line
fi
echo -n "$line " # prints the line itself plus a space as a separator
done < "$file"
You can redirect the output to a file, of course.
It's not really clear precisely what you want. You've stated that you want to match the patterns 'AAA' and 'BCA', and later expanded that to "patter shall be like: AAA, BCS, BABA, BCA". I don't know if that means you only want to match 'AAA', 'BCS', 'BABA', and 'BCA', or if you want to match 3- or 4-character strings containing only 'A', 'B', 'C', and 'S', but it sounds like you are just looking for:
awk '/[A-Z]{3,4}/{printf "\n"} { printf "%s ", $0} END {printf "\n"}' input-file
Change the pattern as needed when your requirements are made more precise.
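For instance, if only lines that start with those exact prefixes should trigger a split, the pattern could be anchored and spelled out explicitly (a sketch of one possible reading of the requirement, not a confirmed one):
awk '/^(AAA|BCS|BABA|BCA)/{printf "\n"} { printf "%s ", $0} END {printf "\n"}' input-file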
Based on the comment, it is trivial to convert any awk program to perl. Here is (basically) the output of a2p on the above awk script, with changes to reflect the stated pattern:
#!/usr/bin/env perl
while (<>) {
chomp;
if (/AAA|BCA|BCS|BABA/) {
printf "\n";
}
printf '%s ', $_;
}
printf "\n";
You can simplify that a bit:
perl -ne 'chomp; printf "\n" if /AAA|BCA|BCS|BABA/; printf "%s ", $_' input-file; echo

How can I split a text file at every n-th word?

I am trying to split a text file at every 1000th word.
awk -v RS='[[:space:]]+' 'END{print NR+0}' filename
With awk I can count the words in a file, but I don't know how to split it.
The final output should be filename(1).txt, filename(2).txt, and so on.
This totally sick solution should work for files of fewer than 10000 words:
. <(echo -e 'uno due tre\nquattro\ncinque sei sette otto\nnove dieci undici dodici tredici' | sed -zE '
s/^/\x0/
:a
y/012345678/123456789/
s/\x0(([^ \n]+[ \n]+){4})/cat > file0 <<EOF\n\1\nEOF\n\x0/
ta
s/\x0(.*)/cat > file0 <<EOF\n\1\nEOF\n\x0/
s/\n+/\n/g')
Essentially, it intersperses some code at the points where the splits have to occur, in such a way that the resulting text is a bash script: a sequence of cat commands that each read from a here-document and write to a file (a maximum of 10 files is allowed!). This script is then sourced (. file is just source file, only uglier). You can see the script by removing the leading . <( and the trailing ).
The nice thing is that it splits the big file in the middle of lines if necessary, without altering the lines where no split occurs.
The ugliest thing is that it numbers the files backward.
The limitation on the number of words is because I am implementing only a one-digit addition on the filenames; it can be removed by implementing an addition in a similar way as done here or here.
You can do it with awk without too much trouble. It helps keep the clutter down if you write a function to actually handle outputting the words from an array to your file. Keep a counter to number the output file names, e.g. wordsfile_1 (first 1000 words), wordsfile_2 (next 1000 words) and so on. Then it is just a matter of keeping track of how many words you add to your array and call your output function when you hit 1000 words. Then delete the array, to make it ready to hold the next 1000 words, reset your word counter and keep going.
For example you could do something like:
awk '
function writefile() {
fname="wordsfile_" ++c + 0
for (j=1; j<=n; j++)
print a[j] > fname
delete a
n = 0
}
{
for (i=1; i<=NF; i++) {
a[++n] = $i
if (n == 1000)
writefile()
}
}
END {
writefile()
}' input_file
The function writefile() handles writing the output to your 1000 word files and deleting the array and resetting the counter n. The END rule just calls the function once more to output any words collected since the last output.
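If you want to sanity-check the output, something like this should do (wordsfile_ is the prefix used above):
wc -w wordsfile_*          # each chunk except possibly the last should show 1000 words
cat wordsfile_* | wc -w    # the total should match the word count of input_file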
Let me know if you have further questions.
#!/bin/bash
for FILE in *.txt
do
#FILE="FILENAME.txt"
read -p "HOW MANY WORDS SHOULD BE IN YOUR FILES? (~ APPROXIMATE) " BUFFER
#BUFFER=1000 # APPROXIMATE NUMBER OF WORDS IN A FILE
NW=$(wc -w $FILE | awk '{print $1}') #NW=NUMBER OF WORDS IN YOUR FILE
if [[ $NW -gt $BUFFER ]]
then
LINENUMBER=$(wc -l $FILE | awk '{print $1}')
WCOUNT=0
FL=1 #FIRST LINE NUMBER OF EVERY NEW FILE
FN=1 #FILE NUMBER
for j in $(eval echo "{1..$LINENUMBER}")
do
INC=$(sed -n "${j}p" $FILE | wc -w)
WCOUNT=$(( WCOUNT + INC ))
if [[ $WCOUNT -gt $BUFFER ]];
then
sed -n "${FL},${j}p" $FILE > ${FILE%%.*}_${FN}.txt
FL=$(( j + 1))
(( FN++ ))
WCOUNT=0
fi
done
sed -n "${FL},\$p" $FILE > ${FILE%%.*}_${FN}.txt
fi
done
I found a different solution; the script above generates files that have roughly 1000 words each.

How do I replace empty strings in a tsv with a value?

I have a tsv, file1, that is structured as follows:
col1 col2 col3
1 4 3
22 0 8
3 5
so that the last line would look something like 3\t\t5, if it was printed out. I'd like to replace that empty string with 'NA', so that the line would then be 3\tNA\t5. What is the easiest way to go about this using the command line?
awk is designed for this scenario (among a million others ;-) )
awk -F"\t" -v OFS="\t" '{
for (i=1;i<=NF;i++) {
if ($i == "") $i="NA"
}
print $0
}' file > file.new && mv file.new file
-F"\t" indicates that the field separator (also known as FS internally to awk) is the tab character. We also set the output field separator (OFS) to "\t".
NF is the number of fields on a line of data. $i gets evaluated as $1, $2, $3, ... for each value between 1 and NF.
We test if the $i th element is empty with if ($i == "") and when it is, we change the $i th element to contain the string "NA".
For each line of input, we print the line's ($0) value.
Outside the awk script, we redirect the output to a temporary file (> file.new). The && tests that the awk script exited without errors and, if so, moves file.new over the original file. Depending on the safety and security requirements of your project, you may not want to "destroy" your original file.
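If you do want to keep the original untouched, the same awk can simply write to a separate output file (a sketch; file.with_na is just an example name):
awk -F"\t" -v OFS="\t" '{ for (i=1;i<=NF;i++) if ($i == "") $i="NA" } 1' file > file.with_na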
IHTH.
A straightforward approach is
sed -i 's/^\t/NA\t/;s/\t$/\tNA/;:0 s/\t\t/\tNA\t/;t0' file
sed -i edit file in place;
s/a/b/ replace a with b;
s/^\t/NA\t/ replace \t at the beginning of the line with NA\t
(the first column becomes NA);
s/\t$/\tNA/ the same for the last column;
s/\t\t/\tNA\t/ insert NA in between \t\t;
:0 s///; t0 repeat s/// if there was a replacement (in case there are other missing values in the line).
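Before running with -i, it is worth checking the result on stdout; with the data rows from the question (tabs spelled out via printf):
printf '1\t4\t3\n22\t0\t8\n3\t\t5\n' |
sed 's/^\t/NA\t/;s/\t$/\tNA/;:0 s/\t\t/\tNA\t/;t0'
# the last row comes out as 3<TAB>NA<TAB>5; the others are unchanged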

Replacing a String Pattern with another sequence in unix

I want to replace the string TaskID_1 with a sequence starting from 1001; TaskID_1 can appear on any number of lines in my input file.
Similarly, I need to replace all occurrences of TaskID_2 in my input file with the next sequence value, 1002.
Input file:
12345|45345|TaskID_1|dksj|kdjfdsjf|12
1245|425345|TaskID_1|dksj|kdjfdsjf|12
1234|25345|TaskID_2|dksj|kdjfdsjf|12
123425|65345|TaskID_2|dksj|kdjfdsjf|12
123425|15325|TaskID_1|dksj|kdjfdsjf|12
11345|55315|TaskID_2|dksj|kdjfdsjf|12
6345|15345|TaskID_3|dksj|kdjfdsjf|12
72345|25345|TaskID_4|dksj|kdjfdsjf|12
9345|411345|TaskID_3|dksj|kdjfdsjf|12
The output file should look like:
12345|45345|1001|dksj|kdjfdsjf|12
1245|425345|1001|dksj|kdjfdsjf|12
1234|25345|1002|dksj|kdjfdsjf|12
123425|65345|1002|dksj|kdjfdsjf|12
123425|15325|1001|dksj|kdjfdsjf|12
11345|55315|1002|dksj|kdjfdsjf|12
6345|15345|1003|dksj|kdjfdsjf|12
72345|25345|1004|dksj|kdjfdsjf|12
9345|411345|1003|dksj|kdjfdsjf|12
Here's one way using awk:
awk 'BEGIN { FS=OFS="|" } { $3=1000 + NR }1' file
Or less verbosely:
awk -F '|' '{ $3=1000 + NR }1' OFS='|' file
Results:
12345|45345|1001|dksj|kdjfdsjf|12
1245|425345|1002|dksj|kdjfdsjf|12
1234|25345|1003|dksj|kdjfdsjf|12
123425|65345|1004|dksj|kdjfdsjf|12
123425|15325|1005|dksj|kdjfdsjf|12
11345|55315|1006|dksj|kdjfdsjf|12
6345|15345|1007|dksj|kdjfdsjf|12
72345|25345|1008|dksj|kdjfdsjf|12
9345|411345|1009|dksj|kdjfdsjf|12
For the first example, the field separator and output field separator are set to a single pipe character. This is done in the BEGIN block, so that it is executed only once and not for every line of input. We then set the third column to 1000 plus an incrementing value. We could keep our own counter such as ++i, but NR (the current record/line number) already increments for us, so it avoids the need for an extra variable. The 1 on the end enables the default action of printing each line. A more verbose solution would look like:
awk 'BEGIN { FS=OFS="|" } { $3=1000 + NR; print }' file
EDIT:
Using the updated data file, try:
awk 'BEGIN { FS=OFS="|" } { sub(/.*_/,"",$3); $3+=1000 }1' file
Results:
12345|45345|1001|dksj|kdjfdsjf|12
1245|425345|1001|dksj|kdjfdsjf|12
1234|25345|1002|dksj|kdjfdsjf|12
123425|65345|1002|dksj|kdjfdsjf|12
123425|15325|1001|dksj|kdjfdsjf|12
11345|55315|1002|dksj|kdjfdsjf|12
6345|15345|1003|dksj|kdjfdsjf|12
72345|25345|1004|dksj|kdjfdsjf|12
9345|411345|1003|dksj|kdjfdsjf|12
A Perl solution using Steve's logic of adding 1000:
perl -pe 's/TaskID_(\d+)/$1+1000/e' file
This replaces TaskID_n with 1000+n; the e modifier causes the replacement to be evaluated as a Perl expression.
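To see the e-modifier evaluation in isolation, you can run it on a single sample line from the question:
echo '6345|15345|TaskID_3|dksj|kdjfdsjf|12' | perl -pe 's/TaskID_(\d+)/$1+1000/e'
# 6345|15345|1003|dksj|kdjfdsjf|12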
Replacing TaskID_ with 100 is super easy with sed for single-digit IDs:
$ sed 's/TaskID_/100/' file
12345|45345|1001|dksj|kdjfdsjf|12
1245|425345|1001|dksj|kdjfdsjf|12
1234|25345|1002|dksj|kdjfdsjf|12
123425|65345|1002|dksj|kdjfdsjf|12
123425|15325|1001|dksj|kdjfdsjf|12
11345|55315|1002|dksj|kdjfdsjf|12
6345|15345|1003|dksj|kdjfdsjf|12
72345|25345|1004|dksj|kdjfdsjf|12
9345|411345|1003|dksj|kdjfdsjf|12
To store this change back to the file use the -i option:
sed -i 's/TaskID_/100/' file
Note: this works for TaskID_[0-9]; if you want TaskID_23 mapped to 1023 then this won't work, since it would map TaskID_23 to 10023.
I can't come up with a better solution than the one steve suggested in awk.
So here's a worse solution, using only bash.
#!/bin/bash
IFS='|'
while read f1 f2 f3 f4 f5 f6; do
printf '%s|%s|%d|%s|%s|%s\n' "$f1" "$f2" "$((${f3#*_}+1000))" "$f4" "$f5" "$f6"
done < input
It's "worse" only because it'll be much slower than awk, which is fast and efficient with this sort of problem.
perl -F"\|" -lane '$F[2]=~s/.*_/100/g;print join("|",@F)' your_file
Tested Below:
> cat temp
12345|45345|TaskID_1|dksj|kdjfdsjf|12
1245|425345|TaskID_1|dksj|kdjfdsjf|12
1234|25345|TaskID_2|dksj|kdjfdsjf|12
123425|65345|TaskID_2|dksj|kdjfdsjf|12
123425|15325|TaskID_1|dksj|kdjfdsjf|12
11345|55315|TaskID_2|dksj|kdjfdsjf|12
6345|15345|TaskID_3|dksj|kdjfdsjf|12
72345|25345|TaskID_4|dksj|kdjfdsjf|12
9345|411345|TaskID_3|dksj|kdjfdsjf|12
> perl -F"\|" -lane '$F[2]=~s/.*_/100/g;print join("|",@F)' temp
12345|45345|1001|dksj|kdjfdsjf|12
1245|425345|1001|dksj|kdjfdsjf|12
1234|25345|1002|dksj|kdjfdsjf|12
123425|65345|1002|dksj|kdjfdsjf|12
123425|15325|1001|dksj|kdjfdsjf|12
11345|55315|1002|dksj|kdjfdsjf|12
6345|15345|1003|dksj|kdjfdsjf|12
72345|25345|1004|dksj|kdjfdsjf|12
9345|411345|1003|dksj|kdjfdsjf|12
>

Maximum number of characters in a field of a csv file using unix shell commands?

I have a csv file. In one of the fields, say the second field, I need to know maximum number of characters in that field. For example, given the file below:
adf,jlkjl,lkjlk
jf,j,lkjljk
jlkj,lkejflkj,adfafef,
jfje,jj,lkjlkj
jjee,eeee,ereq
the answer would be 8 because row 3 has 8 characters in the second field. I would like to integrate this into a bash script, so common unix command line programs are preferred. Imaginary bonus points for explaining what the command is doing.
EDIT: Here is what I have so far
cut --delimiter=, -f 2 test.csv | wc -m
This gives me the combined character count for the whole column, not the maximum for a single row, so I still have progress to make.
I would use awk for the task. It splits each line on commas and, for each line, checks whether the length of the second field is greater than the value already saved.
awk '
BEGIN {
FS = ","
}
{ c = length( $2 ) > c ? length( $2 ) : c }
END {
print c
}
' infile
Use it as a one-liner and assign the return value to a variable, like:
num=$(awk 'BEGIN { FS = "," } { c = length( $2 ) > c ? length( $2 ) : c } END { print c }' infile)
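Since the goal is to use this inside a bash script, the variable can then drive whatever logic you need, for example (the threshold of 10 is arbitrary):
if [ "$num" -gt 10 ]; then echo "longest second field is $num characters, which exceeds the 10-character limit"; fi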
Well @oob, you basically provided the answer with your last edit, and it's the simplest of all the answers given. However, I also like @Birei's answer just because I enjoy AWK. :-)
I too had to find the longest possible value for a given field inside a text file today. Tested with your sample and got the expected 8.
cut -d, -f2 test.csv | wc -L
As you can see, it's just a matter of using the correct option for wc (which I hope you have already figured out by now).
My solution is to loop over the lines. Then I exchange the commas for newlines so I can loop over the words, check which is the longest word, and save the data.
#!/bin/bash
lineno=1
matchline=0
matchlen=0
for line in $(cat input.txt); do
words=`echo $line | sed -e 's/,/\n/g'`
for word in $words; do
# echo "line: $lineno; length: ${#word}; input: $word"
if [ $matchlen -lt ${#word} ]; then
matchlen=${#word}
matchline=$lineno
fi
done;
lineno=$(($lineno + 1))
done;
echo max length is $matchlen in line $matchline
Bash and Coreutils Solution
There are a number of ways to solve this, but I vote for simplicity. Here's a solution that uses Bash parameter expansion and a few standard shell utilities to measure each line:
cut -d, -f2 /tmp/foo |
while read -r; do
echo ${#REPLY}
done | sort -n | tail -n1
The idea here is to split off the CSV column, and then use the parameter length expansion of the implicit REPLY variable to measure the characters on each line. When we sort the measurements numerically, the last line of the sorted output will hold the length of the longest value found.
cut out the desired column
print each line length
sort the line lengths
grab the max line length
cut -d, -f2 test.csv | awk '{print length($0);}' | sort -n | tail -n 1
