Call routine in IDL programming language

I am new and learning IDL on a steep curve. I have two PROs; the first one follows:
PRO READ_Netcdf1, infile, temperature, time, print_prompts=print_prompts
  COMPILE_OPT IDL2
  infile = 'D:/Rwork/dataset/monthly_mean/version_2C/air.2m.mon.mean.nc'
  IF (N_Elements(infile) EQ 0) THEN STOP, 'You are being silly, you must specify infile on call'
  PRINT, infile
  iid = NCDF_OPEN(infile)
  NCDF_VARGET, iid, 'time', time          ; Read time
  NCDF_VARGET, iid, 'air', temperature    ; Read surface average temperature
  NCDF_VARGET, iid, 'lat', latitude       ; Read latitude
  NCDF_VARGET, iid, 'lon', longitude      ; Read longitude
  NCDF_CLOSE, iid                         ; Close input file
  Centigrade = temperature - 273.15
  PRINT, 'Time'
  PRINT, time[0:9]
  PRINT, 'Latitude'
  PRINT, latitude[0:9]
  PRINT, 'Longitude'
  PRINT, longitude[0:9]
  PRINT, 'Temperature'
  PRINT, temperature[0:9]
  PRINT, 'Centigrade'
  PRINT, Centigrade[0:9]
  RETURN
END
This works perfectly. My second PRO is as follows:
PRO Change_Kelvin_to_Cent, Temperature
  ;+
  ; This program takes the temperature from the NetCDF file and converts
  ; it to Centigrade.
  ; Output:
  ;   The monthly mean temperature in Centigrade
  ; READ_Netcdf1 in this directory must be run first.
  ;-
  COMPILE_OPT IDL2
  infile = 'D:/Rwork/dataset/monthly_mean/version_2c/air.2m.mon.mean.nc'
  read_netcdf1, infile, Temperature
  Centigrade = Temperature - 273.15
  PRINT, 'Centigrade'
  PRINT, Centigrade[0:9]
  RETURN
END
This also works.
I am being instructed to use the variable "Temperature" from the first PRO to calculate the temperature in the second PRO without the line
read_netcdf1, infile, Temperature
I cannot get this to work. Can anybody advise and help me out of this problem, please?

Update: I was misinformed; it cannot be done. You must keep the
read_netcdf1, infile, Temperature
line. Note that Temperature can be given any name, because IDL matches arguments by their position in the call, not by their wording. I hope this makes sense.
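For illustration, here is a minimal sketch (routine and variable names are hypothetical) of how IDL's positional parameters behave: assigning to a parameter inside a procedure hands the value back to whatever variable the caller supplied in that position.
PRO Get_Temperature, temp_out
  COMPILE_OPT IDL2
  temp_out = [273.15, 280.0, 290.5]   ; assigning to the parameter passes the value back to the caller
END

PRO Demo_Call
  COMPILE_OPT IDL2
  Get_Temperature, my_values          ; my_values receives temp_out's value by position, not by name
  PRINT, my_values - 273.15           ; convert Kelvin to Centigrade
END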

Related

How do you change the Cp/Ct array values within FLORIS?

I'd like to run a FLORIS simulation to calculate the wake for a specific turbine. Currently, the input file given in FLORIS is the "example_input.json" which defines the Cp and Ct values for the NREL 5MW at different wind speeds.
I want to run a simulation for a different turbine, and I have the array values for that turbine. I'm wondering if there is an easier way to redefine the Cp/Ct array values within FLORIS/Python rather than manually typing the array values in the .json input file.
You can do this using the change_turbine handle in the floris object. It is used as follows. Say you want to change the power and thrust tables for turbines 0, 1 and 2 in your floris object called fi:
fi.change_turbine(
    turb_num_array=[0, 1, 2],
    turbine_change_dict={
        "power_thrust_table": new_power_thrust_table,
    }
)
where new_power_thrust_table is a dict with three keys: power (the power coefficients), thrust (the thrust coefficients), and wind_speed (the wind speeds). Each key should map to an array or list of the new values; the wind_speed entries are the speeds to which the power and thrust coefficients belong.
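For reference, a minimal sketch of such a dict (the numbers below are placeholders, not data for any real turbine):
new_power_thrust_table = {
    "power": [0.0, 0.15, 0.30, 0.40, 0.30],     # power coefficients at each wind speed
    "thrust": [0.0, 0.90, 0.80, 0.70, 0.40],    # thrust coefficients at each wind speed
    "wind_speed": [3.0, 6.0, 9.0, 12.0, 15.0],  # wind speeds in m/s
}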
Also, you may want to change the turbine rotor diameter at the same time, for example to 140 m. You can do that with:
fi.change_turbine(
    turb_num_array=[0, 1, 2],
    turbine_change_dict={
        "power_thrust_table": new_power_thrust_table,
        "rotor_diameter": 140.
    }
)
Alternatively, if you want to copy over the turbine properties from the first turbine of a different floris object, fi_b, you could do something like:
fi.change_turbine(
    turb_num_array=[0, 1, 2],
    turbine_change_dict={
        "power_thrust_table": fi_b.floris.farm.turbines[0].power_thrust_table,
        "rotor_diameter": fi_b.floris.farm.turbines[0].rotor_diameter
    }
)
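After changing turbine properties, you would typically recompute the flow field before querying outputs. A minimal sketch, assuming the same floris v2 interface as above:
fi.calculate_wake()                      # rerun the wake model with the new turbine data
turbine_powers = fi.get_turbine_power()  # turbine powers now reflect the new tables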

Writing a chunk of MPI distributed data via hdf5 in fortran

I have a 3d array distributed into different MPI processes:
real :: DATA(i1:i2, j1:j2, k1:k2)
where i1, i2, ... differ for each MPI process, but the MPI grid is Cartesian.
For simplicity let's assume I have a 120 x 120 x 120 array, and 27 MPI processes distributed as 3 x 3 x 3 (so that each processor has an array of size 40 x 40 x 40).
Using hdf5 library I need to write only a slice of that data, say, a slice that goes through the middle perpendicular to the second axis. The resulting (global) array would be of size 120 x 1 x 120.
I'm a bit confused about how to properly use HDF5 here, and how to generalize from writing the full DATA (which I can do). The problem is that not every MPI process is going to write. For instance, in the case above, only 9 processes will have to write something; the others (those on the -y and +y sides of the cube) will not, since they don't contain any chunk of the slab I need.
I tried the chunking technique described here, but it looks like that's just for a single process.
Would be very grateful if the hdf5 community can help me with this :)
When writing an HDF5 dataset in parallel, all MPI processes must participate in the operation (even if a certain MPI process does not have values to write).
If you are not bound to a specific library, take a look at HDFql. Based on what I could understand from the use case you have posted, here is an example of how to write data in parallel in Fortran using HDFql.
PROGRAM Example

   ! use HDFql module (make sure it can be found by the Fortran compiler)
   USE HDFql

   ! declare variables
   REAL(KIND=8), DIMENSION(40, 40, 40) :: values
   CHARACTER(2) :: start
   INTEGER :: state
   INTEGER :: x
   INTEGER :: y
   INTEGER :: z

   ! create an HDF5 file named "example.h5" and use (i.e. open) it in parallel
   state = hdfql_execute("CREATE AND USE FILE example.h5 IN PARALLEL")

   ! create a dataset named "dset" of data type double with three dimensions (size 120x120x120)
   state = hdfql_execute("CREATE DATASET dset AS DOUBLE(120, 120, 120)")

   ! populate variable "values" with certain values
   DO x = 1, 40
      DO y = 1, 40
         DO z = 1, 40
            values(z, y, x) = hdfql_mpi_get_rank() * 100000 + (x * 1600 + y * 40 + z)
         END DO
      END DO
   END DO

   ! register variable "values" for subsequent use (by HDFql)
   state = hdfql_variable_register(values)

   IF (hdfql_mpi_get_rank() < 3) THEN
      ! insert (i.e. write) values from variable "values" into dataset "dset" using a hyperslab
      ! that depends on the MPI rank (each participating rank writes 40x40x40 values)
      WRITE(start, "(I0)") hdfql_mpi_get_rank() * 40
      state = hdfql_execute("INSERT INTO dset(" // TRIM(start) // ":1:1:40) IN PARALLEL VALUES FROM MEMORY 0")
   ELSE
      ! if the MPI rank is equal to or greater than 3, nothing is written
      state = hdfql_execute("INSERT INTO dset IN PARALLEL NO VALUES")
   END IF

END PROGRAM
Please check the HDFql reference manual for additional information on how to work with HDF5 files in parallel (i.e. with MPI) using this library.

PROC SQL with GROUP command extremely slow. Why? Workaround possible?

I have a MACRO which takes a data set D and essentially outputs k disjoint datasets, D_1,...,D_k. The value k is not fixed and depends on properties of the data that are not known in advance. We can assume that k is not larger than 10, though.
The dataset D contains the variables x and y, and I want to overlay the line/scatter plots of x and y for each of D_i over each other. In my particular case x is time, and I want to see the output y for each D_i and compare them to each other.
Hopefully that was clear.
How can I do this? I don't know k in advance, so I need some sort of %DO loop. But it doesn't seem that I can put a %DO loop inside PROC SGPLOT.
I might be able to make a macro that includes a very long series of commands, but I'm not sure.
How can I overlay these plots in SAS?
EDIT: I am including for reference why I am trying to avoid PROC SGPLOT with the GROUP clause. I tried the following code and it took over 30 minutes to compute (I canceled the calculation at that point, so I don't know how long it would actually take). PROC SQL runs quite quickly; the program is stuck on PROC SGPLOT.
proc sql;
    create table dataset as
    select date, product_code, sum(num_of_records) as total_rec
    from &filename
    group by product_code, date
    order by product_code, date
    ;
quit;
proc sgplot data=dataset;
    scatter x=date y=total_rec / group=product_code;
    title "Total records by product code";
run;
The number of observations in the file is 76,000,000.
What you should do is either change your macro to produce one dataset with a variable d_i (or whatever you can logically name it) that identifies which dataset each observation would have gone to, or combine the datasets after the macro runs.
Then you can use GROUP= to overlay your plots. For example:
data my_data;
    call streaminit(7);
    do d_i = 1 to 5;
        y = 10;
        x = 0;
        output;
        do x = 1 to 10;
            y + round(rand('Uniform')*3, .1) - 1.5;
            output;
        end;
    end;
run;

proc sgplot data=my_data;
    series x=x y=y / group=d_i;
run;
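If you instead take the post-macro route of combining the datasets, they can be stacked into one with a grouping variable. A minimal sketch, assuming the macro's outputs all share a common name prefix such as D_1, ..., D_k (hypothetical names):
data combined;
    /* the d_: name-prefix list picks up every dataset starting with D_, so k need not be known */
    set d_: indsname=_src;
    d_i = scan(_src, 2, '.');  /* e.g. "D_1"; use this as the GROUP= variable in PROC SGPLOT */
run;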

Can't concatenate netCDF files with ncrcat

I am looping over a model that outputs daily netCDF files. I have a 7-year time series of daily files that, ideally, I would like to append into a single file at the end of each loop, but it seems that, using NCO tools, the best way to merge the data into one file is to concatenate. Each daily file is called test.t.nc and is renamed to the date of the daily file, e.g. 20070102.nc, except the first one, which I create with
ncks -O --mk_rec_dmn time test.t.nc 2007-01-01.nc
to make time the record dimension for concatenation. If I try to concatenate the first two files such as
ncrcat -O -h 2007-01-01.nc 2007-01-02.nc out.nc
I get the error message
ncrcat: symbol lookup error: /usr/local/lib/libudunits2.so.0: undefined symbol: XML_ParserCreate
I don't understand what this means and, looking at all the help online, ncrcat should be a straightforward process. Does anyone understand what's happening?
Just in case this helps, the ncdump -h for 20070101.nc is
netcdf \20070101 {
dimensions:
        time = UNLIMITED ; // (8 currently)
        y = 1 ;
        x = 1 ;
        tile = 9 ;
        soil = 4 ;
        nt = 2 ;
and 20070102.nc
netcdf \20070102 {
dimensions:
        x = 1 ;
        y = 1 ;
        tile = 9 ;
        soil = 4 ;
        time = UNLIMITED ; // (8 currently)
        nt = 2 ;
This is part of a bigger shell script and I don't have much flexibility over the naming of files - just in case this matters!

Why do you divide the raw data by 16?

http://datasheets.maximintegrated.com/en/ds/DS18B20.pdf
Read page 3, Operation – Measuring Temperature. The following code works to get the temp. I understand all of it except why they divide the number by 16.
local raw = (data[1] << 8) | data[0];
local SignBit = raw & 0x8000;              // test most significant bit
if (SignBit) { raw = (raw ^ 0xffff) + 1; } // negative: undo two's complement
local celsius = raw / 16.0;
if (SignBit) { celsius *= -1; }
I've got another situation: http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Pressure/MPL3115A2.pdf Page 23, section 7.1.3, temperature data. It's only twelve bits, so the above code works for it too (just change the left shift to 4 instead of 8), but again the /16 is required for the final result. I don't get where that is coming from.
The raw temperature data is in units of sixteenths of a degree: the reading carries four fractional bits, so one LSB equals 2^-4 = 1/16 of a degree. Dividing the raw value by 16 therefore converts it to degrees.
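As a quick check, here is the same conversion as a minimal Python sketch; the sample bytes encode 0x0191, the DS18B20 datasheet's example reading for +25.0625 °C:
def ds18b20_celsius(lsb, msb):
    raw = (msb << 8) | lsb
    if raw & 0x8000:               # sign bit set: reading is negative
        raw = (raw ^ 0xFFFF) + 1   # undo the two's complement
        return -raw / 16.0
    return raw / 16.0              # one LSB = 1/16 degree C

print(ds18b20_celsius(0x91, 0x01))  # prints 25.0625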
