How to solve netCDF library error "Attempt to convert between text & numbers"

I would like to hear your suggestions for solving a netCDF library problem. When I use the merge program imerge to combine several small netCDF files, I get the following message:
The netCDF library has reported the following problem:
NetCDF: Attempt to convert between text & numbers
I am stuck. Please let me know if you have any suggestions. Thanks a lot.
The following is my log file:
imerge: Merging 61 npptot*.nc files, using 1 buffers
Number of dimensions: 4
Dimension id: 0 Name: longitude Length: 4
Dimension id: 1 Name: latitude Length: 1
Dimension id: 2 Name: time Length: 60
Dimension id: 3 Name: lengthd Length: 10
Number of variables: 6
Number of global attributes: 5
Global attribute: 0 Attribute: title Type: NC_CHAR, string Length: 25 Value: monthly total NPP carbon
Global attribute: 1 Attribute: source Type: NC_CHAR, string Length: 14 Value: ibis wmonthly
Global attribute: 2 Attribute: history Type: NC_CHAR, string Length: 12 Value: 14-May-2012
Global attribute: 3 Attribute: calendar Type: NC_CHAR, string Length: 10 Value: gregorian
Global attribute: 4 Attribute: conventions Type: NC_CHAR, string Length: 9 Value: NCAR-CSM
Variable: longitude Attribute: long_name Type: NC_CHAR, string Length: 10 Value: longitude
Variable: longitude Attribute: units Type: NC_CHAR, string Length: 13 Value: degrees_east
Attention: NetCDF attribute 'missing_value' does not exist for the selected variable.
Variable: latitude Attribute: long_name Type: NC_CHAR, string Length: 9 Value: latitude
Variable: latitude Attribute: units Type: NC_CHAR, string Length: 14 Value: degrees_north
Attention: NetCDF attribute 'missing_value' does not exist for the selected variable.
Variable: time Attribute: long_name Type: NC_CHAR, string Length: 5 Value: time
Variable: time Attribute: units Type: NC_CHAR, string Length: 22 Value: days since 1500-12-31
Attention: NetCDF attribute 'missing_value' does not exist for the selected variable.
Variable: time_weights Attribute: long_name Type: NC_CHAR, string Length: 29 Value: number of days per time step
Variable: time_weights Attribute: units Type: NC_CHAR, string Length: 5 Value: days
Attention: NetCDF attribute 'missing_value' does not exist for the selected variable.
Variable: date Attribute: long_name Type: NC_CHAR, string Length: 25 Value: label for each time step
Variable: date Attribute: units Type: NC_CHAR, string Length: 1 Value:
Attention: NetCDF attribute 'missing_value' does not exist for the selected variable.
Variable: npptot Attribute: long_name Type: NC_CHAR, string Length: 16 Value: total NPP carbon
Variable: npptot Attribute: units Type: NC_CHAR, string Length: 14 Value: kg-C/m^2/month
Variable: npptot Attribute: missing_value Type: NC_FLOAT, 4 bytes Length: 1 Value: 9e+20
Value of netCDF attribute 'missing_value' is: 9e+20
In file npptot0.nc there is 1 nlat line for a running total of 1 latitude lines.
:
:
:
variable 0: longitude, rank 1, size 4
variable 1: latitude, rank 1, size 61
variable 2: time, rank 1, size 60
variable 3: time_weights, rank 1, size 60
variable 4: date, rank 2, size 600
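The netCDF library raises this message (error NC_ECHAR) when data is converted between a text (NC_CHAR) variable and a numeric type. In the log, the variable listing stops at variable 4, date, which is described as a per-time-step label and is most likely the file's only NC_CHAR data variable, so the merge is probably reading or writing it through a numeric API. A minimal sketch to confirm the variable types, assuming the Python netCDF4 package is available and using an input file name from the log:

from netCDF4 import Dataset  # assumes the netCDF4 Python package is installed

# Open one of the input files named in the log.
with Dataset("npptot0.nc") as nc:
    for name, var in nc.variables.items():
        # NC_CHAR variables show up with dtype 'S1'; reading one through a
        # numeric API is what raises "Attempt to convert between text & numbers".
        print(name, var.dtype, var.dimensions)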

Related

Attaching a new volume each time a node group is scaled

Can anyone share a reference template that creates a new volume and attaches it to a new instance each time the deployment for that instance is scaled up?
My template looks like:
node_templates:
  key_pair:
    ...
  vol:
    ...
  node_host:
    ...
    relationships:
      ...
      - key_pair
      - vol
  node:
    ...
    relationships:
      - type: cloudify.relationships.contained_in
        target: node_host
    ...
groups:
  scale_up_group:
    members: [node, node_host, vol]
    policies:
      auto_scale_up:
        type: scale_policy_type
        properties:
          policy_operates_on_group: true
          scale_limit: 6
          scale_direction: '<'
          scale_threshold: 15
          service_selector: cpu.total.user
          cooldown_time: 60
        triggers:
          execute_scale_workflow:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: scale
              workflow_parameters:
                delta: 1
                scalable_entity_name: node
                scale_compute: true
Courtesy of Trammell from the cloudify-users group:
node_templates:
  key_pair:
    type: cloudify.openstack.nodes.KeyPair
    ...
  floating_ip:
    type: cloudify.openstack.nodes.FloatingIP
    ...
  vol:
    type: cloudify.openstack.nodes.Volume
    ...
  node_host:
    type: cloudify.openstack.nodes.Server
    ...
    relationships:
      - type: cloudify.openstack.server_connected_to_keypair
        target: key_pair
      - type: cloudify.openstack.server_connected_to_floating_ip
        target: floating_ip
      - type: cloudify.relationships.depends_on
        target: vol
  node:
    type: custom.node.type
    ...
    relationships:
      - type: cloudify.relationships.contained_in
        target: node_host
    ...
groups:
  scale_vm:
    members: [node, node_host, floating_ip, vol]
  scale_up_group:
    members: [node, node_host, floating_ip, vol]
    policies:
      auto_scale_up:
        type: scale_policy_type
        properties:
          policy_operates_on_group: true
          scale_limit: 6
          scale_direction: '<'
          scale_threshold: 15
          service_selector: cpu.total.user
          cooldown_time: 60
        triggers:
          execute_scale_workflow:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: scale
              workflow_parameters:
                delta: 1
                scalable_entity_name: scale_vm
                scale_compute: true
policies:
  scale_vm_policy:
    type: cloudify.policies.scaling
    properties:
      default_instances: 1
      targets: [scale_vm]
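The important additions are the scale_vm group, which bundles the server, volume, and floating IP into one scalable unit, and the top-level scale_vm_policy of type cloudify.policies.scaling that targets it; the scale workflow is then pointed at the group (scalable_entity_name: scale_vm) rather than at a single node, so every scale-up creates a fresh volume alongside the new server.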
Ref: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/cloudify-users/TPepGofpSBU

Converting EDI 837I to XML using BizTalk server

I tried with another file. While converting EDI 837I to XML, the errors below arose. Are these mandatory fields?
I manually passed values to ISA09, ISA10, and ISA13, and then I received:
Error encountered during parsing. The X12 interchange with id ' ', with sender id ' ', receiver id ' ' had the following errors:
Error: 4 (Field level error) SegmentID: ISA Position in TS: 1 Data
Element ID: ISA09 Position in Segment: 9 Data Value: 8:
Invalid Date
Error: 5 (Field level error) SegmentID: ISA Position in TS: 1 Data
Element ID: ISA10 Position in Segment: 10 Data Value: 9:
Invalid Time
Error: 6 (Field level error) SegmentID: ISA Position in TS: 1 Data
Element ID: ISA13 Position in Segment: 13 Data Value: 6:
Invalid character in data element
ISA09 is YYMMDD formatted.
ISA10 is HHMM formatted.
ISA13 is n9, meaning 9 digits.
ISA is fixed-length, so numbers must be zero-padded and text space-padded.
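For illustration only, a hypothetical ISA segment with those three elements correctly padded (placeholder sender/receiver IDs, version identifier illustrative): ISA09 = 120514, ISA10 = 0930, ISA13 = 000000001.
ISA*00*          *00*          *ZZ*SENDERID       *ZZ*RECEIVERID     *120514*0930*U*00401*000000001*0*P*:~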
Have you done the EDI Tutorials for BizTalk?

Get value by date

I have a data frame df:
PRICE
2004-03-19 36.250000
2004-03-20 36.237500
2004-03-21 36.225000
2004-03-22 36.212500
etc...
The index is of type:
DatetimeIndex(['2004-03-19', '2004-03-20', '2004-03-21', ...],
dtype='datetime64[ns]', length=1691, freq='D')
I want to retrieve the PRICE at a certain day using df[datetime.date(2004,3,19)], but this is what pandas does:
KeyError: datetime.date(2004, 3, 19)
The following works, but that can't be the way it is supposed to work:
df[df.index.isin(pd.DatetimeIndex([datetime.date(2004,3,19)]))].PRICE.values[0]
The problem here is that the lookup requires an exact match, and a plain datetime.date never compares equal to a Timestamp, so no matches occur.
You can use loc with DatetimeIndex:
print df.loc[pd.DatetimeIndex(['2004-3-19'])]
PRICE
2004-03-19 36.25
Or you can use loc, converting the string '2004-3-19' with to_datetime and taking its date():
print df.loc[pd.to_datetime('2004-3-19').date()]
PRICE 36.25
Name: 2004-03-19 00:00:00, dtype: float64
If you need the value of PRICE:
print df.loc[pd.DatetimeIndex(['2004-3-19']), 'PRICE']
2004-03-19 36.25
Name: PRICE, dtype: float64
print df.loc[pd.DatetimeIndex(['2004-3-19']), 'PRICE'].values[0]
36.25
print df.loc[pd.to_datetime('2004-3-19').date(), 'PRICE']
36.25
But if you add a time to the datetime, the DatetimeIndex matches:
print df.loc[pd.to_datetime('2004-3-19 00:00:00')]
PRICE 36.25
Name: 2004-03-19 00:00:00, dtype: float64
print df.loc[pd.to_datetime('2004-3-19 00:00:00'), 'PRICE']
36.25
Your index appears to be timestamps, whereas you are trying to equate them to datetime.date objects.
Rather than trying to retrieve the price via df[datetime.date(2004,3,19)], I would simply recommend df['2004-3-19'].
If you are intent on using datetime.date values, you should first convert the index.
df.index = [d.date() for d in df.index]
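For completeness, a minimal, self-contained sketch of the label-based lookup, built on a few synthetic rows shaped like the question's data (prices are illustrative):

import pandas as pd

# Synthetic daily price series shaped like the question's data frame.
idx = pd.date_range('2004-03-19', periods=4, freq='D')
df = pd.DataFrame({'PRICE': [36.25, 36.2375, 36.225, 36.2125]}, index=idx)

# String labels and Timestamps both match a DatetimeIndex exactly.
print(df.loc['2004-03-19', 'PRICE'])               # 36.25
print(df.loc[pd.Timestamp(2004, 3, 19), 'PRICE'])  # 36.25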

Removing duplicate records from .Xdf file

I would like to remove the duplicate records from my large .xdf file trans.xdf.
Here are the file details:
File name: /poc/revor/data/trans.xdf
Number of observations: 1000000000
Number of variables: 5
Number of blocks: 40
Compression type: zlib
Variable information:
Var 1: CARD_ID, Type: character
Var 2: SE_NO, Type: character
Var 3: r12m_cv, Type: numeric, Low/High: (-2348.7600, 40587.3900)
Var 4: r12m_roc, Type: numeric, Low/High: (0.0000, 231.0000)
Var 5: PROD_GRP_CD, Type: character
Below is sample data from the file:
CARD_ID SE_NO r12m_cv r12m_roc PROD_GRP_CD
900000999000000000 1045815024 110 1 1
900000999000000000 1052487253 247.52 2 1
900000999000000000 9999999999 38.72 1 1
900000999000000000 1090389768 1679.96 16 1
900000999000000000 1091226035 0 1 1
900000999000000000 1091241208 538.68 4 1
900000999000000000 9999999999 83 1 1
900000999000000000 1091468041 148.4 3 1
900000999000000000 1092640358 3.13 1 1
900000999000000000 1093468692 546.29 1 1
I have tried using the rxDataStep function with its transformFunc parameter to call the unique() function over the .xdf file. Below is the code:
uniq_dat <- function( dataList )
{
  datalist <- unique(datalist)
  return(datalist)
}
rxDataStepXdf(inFile = "/poc/revor/data/trans.xdf",outFile = "/poc/revor/data/trans.xdf",transformFunc = uniq_dat,overwrite = TRUE)
But was getting below error:
Error in unique(datalist) : object 'datalist' not found
Error in transformation function: Error in unique(datalist) : object 'datalist' not found
Error in rxCall("RxDataStep", params) :
Could anybody point out the mistake I am making here, or suggest a better way to remove the duplicate records from the .xdf file? I am avoiding loading the data into an in-memory data frame, as the data is pretty huge.
I am running the above code in the Revolution R Environment over HDFS.
If the same result can be obtained by another approach, an example would be appreciated.
Thanks for the help in advance :)
Cheers,
Amit
You can remove the duplicate values by passing the removeDupKeys=TRUE parameter to the rxSort() function. For example, in your case:
XdfFilePath <- file.path("<your file's fully qualified path>/trans.xdf")
# Writing to an outFile keeps the result on disk rather than returning an
# in-memory data frame; the output path here is illustrative.
rxSort(inData = XdfFilePath, outFile = "<output path>/trans_dedup.xdf",
       sortByVars = c("CARD_ID", "SE_NO", "r12m_cv", "r12m_roc", "PROD_GRP_CD"),
       removeDupKeys = TRUE)
If you want to remove duplicate records based on a specific key column, for example the SE_NO column, set sortByVars="SE_NO".
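To verify the result, running rxGetInfo() on the output .xdf reports the new number of observations, which should drop by the number of duplicate rows removed.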

Datetime x-axis in Highcharts showing extra labels

xAxis: {
  type: 'datetime',
  maxZoom: 14 * 24 * 3600000,
  dateTimeLabelFormats: {
    day: '%e-%b-%Y',
    week: '%e-%b-%Y',
    month: '%b \'%Y',
    year: '%Y'
  },
  title: {
    text: 'Days'
  },
  labels: {
    y: 40,
    rotation: 60
  },
  tickmarkPlacement: 'on',
  startOnTick: true,
  endOnTick: true
}
I have added a column chart using the Highcharts gallery, with the x-axis of datetime type. The chart shows a report for the previous 30 days. I give it a starting date, but when the chart is rendered it shows 1 or 2 extra dates at the start and the end. For example, if I give 1 May as the starting date, it should show 1 May to 30 May, but it shows 30 Apr to 1 June, or 30 Apr to 31 May.
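A likely cause, given this configuration: startOnTick and endOnTick force the axis to extend outward to the nearest whole tick, so the rendered range can grow a day or two beyond the supplied extremes. Setting both options to false, and pinning the axis with explicit min and max values for the 30-day window, should stop the extra dates from appearing.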