Extracting dates that satisfy multiple conditions using NCO/CDO or bash - netcdf

I have a netcdf file containing vorticity (per sec) and winds (m/s). I want to print the dates of gridpoints that satisfy the following conditions:
1) Vorticity > 1x10^-5 per sec and winds >= 5 m/s at a gridpoint.
2) The average of vorticity and winds at the four (North, West, East, South) gridpoints surrounding the gridpoint found in (1) should also be > 1x10^-5 per sec and >= 5 m/s, respectively.
I am able to filter the gridpoints that satisfy (1) using ncap2:
ncap2 -v -O -s 'where(vort > 1e-5 && winds >= 5) vort=vort; elsewhere vort=vort.get_miss();' input_test.nc output_test.nc
How do I get the dates? Also, how can I implement the second condition?
Here's the screenshot of the header of the netcdf file.
I'll appreciate any help on this.

This may be achieved by combining "cdo" and "nco".
The average value of the four surrounding grid cells needed for the second condition can be calculated by combining the shiftx/shifty and ensmean operators of "cdo".
cdo selname,vr,wspd input_test.nc vars.nc
cdo -expr,'vr_mean=vr; wspd_mean=wspd' \
-ensmean \
-shiftx,1 vars.nc \
-shiftx,-1 vars.nc \
-shifty,1 vars.nc \
-shifty,-1 vars.nc \
vars_mean.nc
You can then use the merge operator of "cdo" to combine the variables needed to check conditions 1) and 2) into a single NetCDF file, and use ncap2 to check the conditions, as you have tried.
In the example command below, an ncap2 "for" loop is used to scan over time. If at least one gridpoint satisfies both conditions 1) and 2) at a given time, that time will be printed.
cdo merge vars.nc vars_mean.nc vars_test.nc
ncap2 -s '*flag = (vr > 1e-5 && wspd >= 5) && (vr_mean > 1e-5 && wspd_mean >= 5); *nt=$time.size; for(*i=0;i<nt;i++) { if ( max(flag(i,:,:))==1 ) { print(time(i)); } }' vars_test.nc

Related

How to plot data from a file starting at the line containing some special string

I am trying to execute a command similar to
plot "data.asc" every ::Q::Q+1500 using 2 with lines
but I have a problem with that "Q" number. It's not a well-known value but the number of the line containing some specific string. Let's say I have a line with the string "SET_10:" and my data to plot comes after this specific line. Is there some way to identify the number of the line with the specific string?
An easy way is to pass the data through GNU sed to print just the wanted lines:
plot "< sed -n <data.asc '/^SET_10:/,+1500{/^SET_10:/d;p}'" using 1:2 with lines
The -n stops any output, the a,b address range (here /^SET_10:/,+1500) says between which lines to run the {...} commands, and those commands delete (d) the trigger line and print (p) the others.
To make sure you have a compatible GNU sed try the command on its own, for a short number of lines, eg 5:
sed -n <data.asc '/^SET_10:/,+5{/^SET_10:/d;p}'
If this does not output the first 5 lines of your data, an alternative is to use awk, as counting lines in sed is too difficult without this GNU-specific syntax. Test the (standard POSIX, not GNU-specific) awk equivalent:
awk <data.asc 'end!=0 && NR<=end{print} /^start/{end=NR+5}'
and if that is ok, use it in gnuplot as
plot "< awk <data.asc 'end!=0 && NR<=end{print} /^start/{end=NR+1500}'" using 1:2 with lines
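As a quick sanity check outside gnuplot, the awk filter can be run on a tiny hypothetical data.asc (the file contents here are made up for illustration, with a smaller window of 5 lines):

```shell
# Build a small hypothetical data.asc: a header, the trigger line, then data.
printf '%s\n' 'header' 'start' '1 1' '2 0' '3 1' '4 0' '5 1' '6 0' > data.asc

# The filter should print exactly the 5 lines following "start".
awk <data.asc 'end!=0 && NR<=end{print} /^start/{end=NR+5}'
# prints: 1 1 / 2 0 / 3 1 / 4 0 / 5 1
```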
Here's a version entirely within gnuplot, with no external commands needed. I tested this on gnuplot 5.0 patchlevel 3, using the following bash commands to create a simple dataset of 20 lines, of which only the 5 lines after the line with "start" in column 1 are to be plotted. You don't need to do this.
for i in $(seq 1 20)
do let j=i%2
echo "$i $j"
done >data.asc
sed -i data.asc -e '5a\
start'
The actual gnuplot script uses a variable endlno, initially set to NaN (not-a-number), and a function f taking 3 parameters: a boolean start saying whether column 1 has the matching string, the current line number lno, and the current column 1 value val. If the line number is less than or equal to the ending line number (and therefore that is no longer NaN), f returns val. Otherwise, if the start condition is true, the wanted ending line number is stored in endlno and NaN is returned. If we have not yet seen the start, NaN is returned.
gnuplot -persist <<\!
endlno=NaN
f(start,lno,val) = ((lno<=endlno)?val:(start? (endlno=lno+5,NaN) : NaN))
plot "data.asc" using (f(stringcolumn(1)eq "start", $0, $1)):2 with lines
!
Since gnuplot does not plot points with NaN values, we ignore lines upto the start, and again after the wanted number of lines.
In your case you need to change 5 to 1500 and "start" to "SET_10:".

How can I copy the values of a variable to another variable in a NetCDF but not the dimension?

I have 2 dimensions X1, X2
And 3 variables V1(X1), V2(X2), V3(X3)
I want to copy the values of V2 to V1, but keep the dimensions as they are.
If I do:
ncap2 -s "V1=V2*1" in.nc out.nc
the dimensions become V1(X2), V2(X2), V3(X3)
How can I retain the original dimension of V1?
That's an unusual request. One solution is to follow the step you already have with one more command to append the values you want back into the original variable. Here lon and ilev are both the same size, but with different underlying dimensions:
ncap2 -O -v -s 'lon=ilev' ~/in.nc ~/foo.nc # make lon a copy of ilev
ncks -A -C -v lon ~/foo.nc ~/in.nc # append lon back into itself

Unix random 16 character number

I need help in Unix creating a random 16-character-long number using the digits 0-9, with the first digit not being 0 (1-9).
tr -c -d 0-9 < /dev/urandom | fold -w16
I need something like this, but with the first digit not being 0.
First, generate a 1-digit random number using the digits 1-9.
Second, generate a 15-digit random number using the digits 0-9.
Then, combine these two numbers.
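A minimal sketch of those three steps, assuming standard tr and head and a readable /dev/urandom:

```shell
# 1) one random digit from 1-9 for the leading position
first=$(tr -cd '1-9' < /dev/urandom | head -c 1)
# 2) fifteen random digits from 0-9
rest=$(tr -cd '0-9' < /dev/urandom | head -c 15)
# 3) combine them into a 16-digit number with a non-zero first digit
echo "${first}${rest}"
```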
Since you are on UNIX, you can use jot with bounds from bash arithmetic expansion:
jot -r 1 $((10 ** 15)) $((2 * 10 ** 15))
Generates 16-digit numbers (all starting with 1, which satisfies the non-zero first digit requirement) like:
1595866171875968

Can't concatenate netCDF files with ncrcat

I am looping over a model that outputs daily netcdf files. I have a 7-year time series of daily files that, ideally, I would like to append into a single file at the end of each loop, but it seems that, using nco tools, the best way to merge the data into one file is to concatenate. Each daily file is called test.t.nc and is renamed to the date of the daily file, e.g. 20070102.nc, except the first one, which I create with
ncks -O --mk_rec_dmn time test.t.nc 2007-01-01.nc
to make time the record dimension for concatenation. If I try to concatenate the first two files such as
ncrcat -O -h 2007-01-01.nc 2007-01-02.nc out.nc
I get the error message
ncrcat: symbol lookup error: /usr/local/lib/libudunits2.so.0: undefined symbol: XML_ParserCreate
I don't understand what this means and, looking at all the help online, ncrcat should be a straightforward process. Does anyone understand what's happening?
Just in case this helps, the ncdump -h for 20070101.nc is
netcdf \20070101 {
dimensions:
    time = UNLIMITED ; // (8 currently)
    y = 1 ;
    x = 1 ;
    tile = 9 ;
    soil = 4 ;
    nt = 2 ;
and 20070102.nc
netcdf \20070102 {
dimensions:
    x = 1 ;
    y = 1 ;
    tile = 9 ;
    soil = 4 ;
    time = UNLIMITED ; // (8 currently)
    nt = 2 ;
This is part of a bigger shell script and I don't have much flexibility over the naming of files - just in case this matters!

Unix: find all lines having timestamps in both time series?

I have time-series data where I would like to find all lines whose timestamps match each other, even though the values can be different (match until the first tab)! You can see in the vimdiff below where I would like to get rid of days that occur in only one of the time series.
I am looking for the simplest unix tool to do this!
Time series here and here.
Simple example
Input
Left file                   Right file
------------------------    ------------------------
10-Apr-00 00:00 0       ||  10-Apr-00 00:00 7
20-Apr-00 00:00 7       ||  21-Apr-00 00:00 3

Output

Left file                   Right file
------------------------    ------------------------
10-Apr-00 00:00 0       ||  10-Apr-00 00:00 7
Let's consider these sample input files:
$ cat file1
10-Apr-00 00:00 0
20-Apr-00 00:00 7
$ cat file2
10-Apr-00 00:00 7
21-Apr-00 00:00 3
To merge together those lines with the same date:
$ awk 'NR==FNR{a[$1]=$0;next;} {if ($1 in a) print a[$1]"\t||\t"$0;}' file1 file2
10-Apr-00 00:00 0 || 10-Apr-00 00:00 7
Explanation
NR==FNR{a[$1]=$0;next;}
NR is the number of lines read so far and FNR is the number of lines read so far from the current file. So, when NR==FNR, we are still reading the first file. If so, save this whole line, $0, in array a under the key of the first field, $1, which is the date. Then, skip the rest of the commands and jump to the next line.
if ($1 in a) print a[$1]"\t||\t"$0
If we get here, then we are reading the second file, file2. If the first field on this line, $1, is a date that we already saw in file1 (in other words, if $1 in a), then print this line together with the corresponding line from file1. The two lines are separated by tab-||-tab.
Alternative Output
If you just want to select lines from file2 whose dates are also in file1, then the code can be simplified:
$ awk 'NR==FNR{a[$1]++;next;} {if ($1 in a) print;}' file1 file2
10-Apr-00 00:00 7
Or, still simpler:
$ awk 'NR==FNR{a[$1]++;next;} ($1 in a)' file1 file2
10-Apr-00 00:00 7
There is the relatively unknown unix command join. It can join sorted files on a key column.
To use it in your context, we follow this strategy (left.txt and right.txt are your files):
add line numbers (to put everything in the original sequence in the last step)
nl left.txt > left_with_lns.txt
nl right.txt > right_with_lns.txt
sort both files on the date column
sort left_with_lns.txt -k 2 > sl.txt
sort right_with_lns.txt -k 2 > sr.txt
join the files using the date column (all times are 0:00). This would merge all columns of both files with the corresponding key, but we provide an output template to write the columns from the first file to one place and the columns from the second file to another (only those lines with a matching key will end up in the results fl.txt and fr.txt):
join -j 2 -t $'\t' -o 1.1 1.2 1.3 1.4 sl.txt sr.txt > fl.txt
join -j 2 -t $'\t' -o 2.1 2.2 2.3 2.4 sl.txt sr.txt > fr.txt
sort both results on the line-number column and output the other columns
sort -n fl.txt | cut -f 2- > left_filtered.txt
sort -n fr.txt | cut -f 2- > right_filtered.txt
Tools used: cut, join, nl, sort.
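Put together on the question's sample data, the whole pipeline looks like this (a sketch assuming tab-separated input; the -o field list is written comma-separated here, which is the most portable form):

```shell
# Sample tab-separated inputs from the question
printf '10-Apr-00\t00:00\t0\n20-Apr-00\t00:00\t7\n' > left.txt
printf '10-Apr-00\t00:00\t7\n21-Apr-00\t00:00\t3\n' > right.txt

nl left.txt  > left_with_lns.txt             # add line numbers
nl right.txt > right_with_lns.txt
sort -k 2 left_with_lns.txt  > sl.txt        # sort on the date column
sort -k 2 right_with_lns.txt > sr.txt

TAB=$(printf '\t')
join -j 2 -t "$TAB" -o 1.1,1.2,1.3,1.4 sl.txt sr.txt > fl.txt
join -j 2 -t "$TAB" -o 2.1,2.2,2.3,2.4 sl.txt sr.txt > fr.txt

sort -n fl.txt | cut -f 2- > left_filtered.txt    # restore original order
sort -n fr.txt | cut -f 2- > right_filtered.txt

cat left_filtered.txt    # the matching day from left.txt:  10-Apr-00 00:00 0
cat right_filtered.txt   # the matching day from right.txt: 10-Apr-00 00:00 7
```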
As requested by @Masi, I tried to work out a solution using sed.
My first attempt uses two passes; the first transforms file1 into a sed script that is used in the second pass to filter file2.
sed 's/\([^ \t]*\).*/\/^\1\t\/p;t/' file1 > sed1
sed -nf sed1 file2 > out2
With big input files, this is s-l-o-w; for each line from file2, sed has to process as many patterns as there are lines in file1. I haven't done any profiling, but I wouldn't be surprised if the time complexity is quadratic.
My second attempt merges and sorts the two files, then scans through all lines in search of pairs. This runs in linear time and consequently is a lot faster. Please note that this solution will ruin the original order of the file; alphabetical sorting doesn't work too well with this date notation. Supplying files with a different date format (y-m-d) would be the easiest way to fix that.
sed 's/^[^ \t]\+/&#1/' file1 > marked1
sed 's/^[^ \t]\+/&#2/' file2 > marked2
sort marked1 marked2 > sorted
sed '$d;N;/^\([^ \t]\+\)#1.*\n\1#2/{s/\(.*\)\n\(.*\)/\2\n\1/;P};D' sorted > filtered
sed 's/^\([^ \t]\+\)#2/\1/' filtered > out2
Explanation:
In the first command, s/^[^ \t]\+/&#1/ appends #1 to every date. This makes it possible to merge the files, keep equal dates together when sorting, and still be able to tell lines from different files apart.
The second command does the same for file2; obviously with its own marker #2.
The sort command merges the two files, grouping equal dates together.
The third sed command returns all lines from file2 that have a date that also occurs in file1.
The fourth sed command removes the #2 marker from the output.
The third sed command in detail:
$d suppresses inappropriate printing of the last line
N reads and appends another line of input to the line already present in the pattern space
/^\([^ \t]\+\)#1.*\n\1#2/ matches two lines originating from different files but with the same date
{ starts a command group
s/\(.*\)\n\(.*\)/\2\n\1/ swaps the two lines in the pattern space
P prints the first line in the pattern space
} ends the command group
D deletes the first line from the pattern space
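Running the pipeline above end to end on the question's sample data (assuming GNU sed for the \+ and \t escapes) confirms the result:

```shell
# Sample inputs from the question
printf '10-Apr-00 00:00 0\n20-Apr-00 00:00 7\n' > file1
printf '10-Apr-00 00:00 7\n21-Apr-00 00:00 3\n' > file2

sed 's/^[^ \t]\+/&#1/' file1 > marked1     # tag file1 dates with #1
sed 's/^[^ \t]\+/&#2/' file2 > marked2     # tag file2 dates with #2
sort marked1 marked2 > sorted              # group equal dates together
sed '$d;N;/^\([^ \t]\+\)#1.*\n\1#2/{s/\(.*\)\n\(.*\)/\2\n\1/;P};D' sorted > filtered
sed 's/^\([^ \t]\+\)#2/\1/' filtered > out2

cat out2    # 10-Apr-00 00:00 7
```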
The bad news is that even the second approach is slower than the awk approach by @John1024. Sed was never designed to be a merge tool. Neither was awk, but awk has the advantage of being able to store an entire file in a dictionary, making @John1024's solution blazingly fast. The downside of a dictionary is memory consumption; on huge input files, my solution should have the advantage.
