I calculated the potential temperature from a NetCDF file. I would like to change its standard_name and long_name attributes with NCO.
I have tried some commands without success, e.g.:
> ncatted -a name,Temperature,o,c,"Potential_Temperature" pt_19891020-19891022.nc
ncatted: ERROR File contains no variables or groups that match name Temperature so attribute name cannot be changed
> ncrename -a air_temperature,air_potential_temperature -a Temperature,Potential_Temperature pt_19891020-19891022.nc
ncrename: ERROR Required attribute 'air_temperature' not present in group '/'.
HINT: If attribute presence is intended to be optional, then prefix attribute name with the period character '.', e.g., .air_temperature. With this syntax ncrename would succeed even when no variables or groups contain the attribute. If the attribute is intended to be renamed only in a specific variable, then prepend the variable name plus an at-sign '#' to the attribute name, e.g., var_nm#att_nm. If attribute presence is required only for root group (i.e., a global attribute), then prefix attribute name with "global" and an at-sign, e.g., global#att_nm. If attribute presence is required for all groups, then prefix attribute name with "group" and an at-sign, e.g., group#att_nm.
Current file attributes:
$ cdo showattsvar pt_19891020-19891022.nc
pt:
standard_name = "air_temperature"
long_name = "Temperature"
units = "K"
missing_value = -32767
Desired attributes:
pt:
standard_name = "air_potential_temperature"
long_name = "Potential_Temperature"
units = "K"
missing_value = -32767
These are attributes, so ncatted is the correct tool; the NCO documentation has examples of the correct syntax:
ncatted -a standard_name,pt,o,c,air_potential_temperature -a long_name,pt,o,c,Potential_Temperature pt_19891020-19891022.nc
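If you prefer to script it, the same edit can also be made from Python with the netCDF4 package (a minimal sketch, assuming the variable is named pt as in the cdo listing above):

from netCDF4 import Dataset

# Open in append mode so the attribute changes are written back in place.
with Dataset("pt_19891020-19891022.nc", "a") as nc:
    pt = nc.variables["pt"]
    pt.standard_name = "air_potential_temperature"
    pt.long_name = "Potential_Temperature"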
Recent versions of airflow-providers-amazon have deprecated MySQLToS3Operator and introduced SqlToS3Operator, and the new operator adds an index column at the beginning of the CSV dump.
For example, if I run the following
sql_to_s3_task = SqlToS3Operator(
    task_id="sql_to_s3_task",
    sql_conn_id=conn_id_name,
    query="SELECT created_at, score FROM my_table",
    s3_bucket=bucket_name,
    s3_key=key,
    replace=True,
)
The S3 file has something like this:
,created_at,score
1,2023-01-01,5
2,2023-01-02,6
The output seems to be a direct dump from Pandas. How can I remove this unwanted preceding index column?
The operator uses a pandas DataFrame under the hood.
You should use pd_kwargs. It allows you to pass arguments on to the DataFrame's .to_parquet(), .to_json(), or .to_csv().
Since your output is CSV, the relevant pandas.DataFrame.to_csv parameters are:
header: bool or list of str, default True
Write out the column names. If a list of strings is given it is assumed to be aliases for the column names.
index: bool, default True
Write row names (index).
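Standalone, the default behavior and the fix look like this (a quick pandas illustration):

import pandas as pd

df = pd.DataFrame({"created_at": ["2023-01-01", "2023-01-02"], "score": [5, 6]})
print(df.to_csv())             # leading index column, as in the S3 dump above
print(df.to_csv(index=False))  # index column suppressed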
Thus you can do:
sql_to_s3_task = SqlToS3Operator(
    task_id="sql_to_s3_task",
    sql_conn_id=conn_id_name,
    query="SELECT created_at, score FROM my_table",
    s3_bucket=bucket_name,
    s3_key=key,
    replace=True,
    file_format="csv",
    pd_kwargs={"index": False, "header": False},
)
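Note that header=False also suppresses the created_at,score column-name row; if you only want to drop the index column and keep the header, pass pd_kwargs={"index": False} alone.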
I am passing a dictionary to a template.
dict road_len = {"US_NEWYORK_MAIN":24,
"US_BOSTON_WALL":18,
"FRANCE_PARIS_RUE":9,
"MEXICO_CABOS_STILL":8}
file_handle = output.txt
env.globals.update(country = "MEXICO")
env.globals.update(city = "CABOS")
env.globals.update(street = "STILL")
file_handle.write(env.get_template(template.txt).render(road_len=road_len)))
template.txt
This is a road length is: {{road_len["{{country}}_{{city}}_{{street}}"]}}
Expected output.txt
This is a road length is: 8
But this does not work, as nested variable substitution is not allowed.
You never nest Jinja {{..}} markers. You're already inside a template context, so if you want to use the value of a variable you just use the variable. It helps if you're familiar with Python, because you can use most of Python's string formatting constructs.
So you could write:
This is a road length is: {{road_len["%s_%s_%s" % (country, city, street)]}}
Or:
This is a road length is: {{road_len[country + "_" + city + "_" + street]}}
Or:
This is a road length is: {{road_len["{}_{}_{}".format(country, city, street)]}}
I am writing a Ruby file that is called from zsh and, among other things, I am trying to pass an array as an input variable, like this:
ruby cim_manager.rb test --target=WhiteLabel --devices=["iPhone 8", "iPhone 12 Pro"]
Inside my ruby file I have a function:
# Generates a hash value from an array of arguments
#
# #param [Array<String>] The array of values. Each value of the array needs to separate the key and the value with "=". All "--" substrings will be replaced for empty substrings
#
# #return [Hash]
#
def generate_hash_from_arguments(args)
  hash = {}
  args.each do |item|
    item = item.gsub("--", "")
    item = item.split("=")
    puts item.kind_of?(Array)
    hash[item[0].to_s] = item[1].to_s
  end
  hash
end
So I can have a value like:
{"target": "WhiteLabel", "devices": ["iPhone 8", "iPhone 12 Pro"]}
The error I am getting when executing my Ruby file is:
foo#Mac-mini fastlane % ruby cim_manager.rb test --target=WhiteLabel --devices=["iPhone 8", "iPhone 12 Pro"]
zsh: bad pattern: --devices=[iPhone 8,
Any ideas?
@ReimondHill: I don't see how the error could possibly be related to Ruby. You have a zsh line in which you have --devices=[.... You could get the same error when doing a
echo --devices=["iPhone 8", "iPhone 12 Pro"]
An open square bracket is a zsh wildcard construct; for instance, [aeiou] is a wildcard which tries to match a vowel in a file name. Hence, this parameter tries to match files starting with the name --devices= in your working directory, so you would expect an error message like no matches found: --devices=.... However, there is one gotcha: the list of characters between [ ... ] must not contain an (unescaped) space. Therefore, you don't see no matches found, but bad pattern.
After all, you don't want filename expansion to occur; you want to pass the parameter to your program. Therefore you need to quote it:
ruby .... '--devices=["iPhone 8", "iPhone 12 Pro"]'
Ronald
Following the answer from @user1934428, I am extending my Ruby file like this:
# Generates a hash value from an array of arguments
#
# #param [Array<String>] The array of values. Each value of the array needs to separate the key and the value with "=". All "--" substrings will be replaced for empty substrings
#
# #return [Hash]
#
def generate_hash_from_arguments(args)
  hash = {}
  args.each do |item|
    item = item.gsub("--", "")
    item = item.gsub("\"", "")
    item = item.split("=")
    key = item[0].to_s
    value = item[1]
    if value.include?("[") && value.include?("]") # Or any other pattern you decide
      value = value.gsub("[", "")
      value = value.gsub("]", "")
      value = value.split(",")
    end
    hash[key] = value
  end
  hash
end
And then my zsh-line:
ruby cim_manager.rb test --target=WhiteLabel --devices='[iPhone 8,iPhone 12 Pro]'
The return value from generate_hash_from_arguments prints:
{"target"=>"WhiteLabel", "devices"=>["iPhone 8", "iPhone 12 Pro"]}
I'm using HERE's Platform Data Extension to retrieve road names. However, I don't understand the strings that I'm getting. I suspect they're encoded somehow but I don't know how to decode them.
For example:
ENGBNFDR Dr NNASN"e|fe "de "e|rre "dri|ve "nol|te;NASY"e|fe "de "e|rre;<snip>
If I split them by a "record separator" character, e.g. link_names.split('\x1e') the values look slightly more intelligible, but only slightly. There are still bizarre abbreviations I don't understand, e.g. ENGBN.
The PDE Layers documents can be found here: http://pde.cit.api.here.com/1/doc/content.html?detail=1&app_id=xxx&app_code=yyy
Layers > ROAD_NAME_FC1 > NAMES.
List of all names for this object, in all languages, latin1/pinyin/phonetic transliterations.
For convenience, non-exonym base names are listed first.
Format:
NAMES = NAME1 \u001D NAME2 \u001D NAME3 ...
NAME = NAME_TEXT \u001E TRANSLIT1 ; TRANSLIT2 ; ... \u001E PHONEME1 ; PHONEME2 ; ...
NAME_TEXT = LANGUAGE_CODE NAME_TYPE IS_EXONYM text
TRANSLIT = LANGUAGE_CODE text
PHONEME = LANGUAGE_CODE IS_PREFERRED text
LANGUAGE_CODE is a 3 character string
NAME_TYPE is one letter (A = abbreviation, B = base name, E = exonym, K = shortened name, S = synonym)
IS_EXONYM = Y if the name is a translation into another language
IS_PREFERRED = Y if this is the preferred phoneme.
Please note, the delimiters are:
\u001D between languages (NAMES level)
\u001E between name text, transliterations, and phonemes
';' between different transliterations and phonemes of the same name
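So in your example, ENGBN decodes as ENG (language code) + B (base name) + N (not an exonym), followed by the name text FDR Dr. Here is a minimal Python sketch of splitting a NAMES value along these delimiters (the string below is hand-built for illustration, since the actual \u001D/\u001E bytes were lost when your sample was pasted):

names_raw = 'ENGBNFDR Dr\x1eSPAfdr driv\x1eNASN"e|fe "de "e|rre;NASY"e|fe "de "e|rre'

for name in names_raw.split('\x1d'):                  # \u001D between languages
    fields = name.split('\x1e')                       # \u001E between text, translits, phonemes
    name_text = fields[0]
    lang, name_type, is_exonym = name_text[:3], name_text[3], name_text[4]
    print(lang, name_type, is_exonym, name_text[5:])  # -> ENG B N FDR Dr
    if len(fields) > 1 and fields[1]:
        for translit in fields[1].split(';'):         # TRANSLIT = LANGUAGE_CODE text
            print('  translit:', translit[:3], translit[3:])
    if len(fields) > 2 and fields[2]:
        for phoneme in fields[2].split(';'):          # PHONEME = LANGUAGE_CODE IS_PREFERRED text
            print('  phoneme:', phoneme[:3], phoneme[3], phoneme[4:])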
I am trying to use Pyparsing to identify a keyword which does not begin with $. So for the following input:
$abc = 5 # is not a valid one
abc123 = 10 # is valid one
abc$ = 23 # is a valid one
I tried the following
from pyparsing import Word, printables

var = Word(printables, excludeChars='$')
var.parseString('$abc')
But this doesn't allow any $ in var. How can I specify all printable characters other than $ in the first character position? Any help will be appreciated.
Thanks
Abhijit
You can use the method I used to define "all characters except X" before I added the excludeChars parameter to the Word class:
NOT_DOLLAR_SIGN = ''.join(c for c in printables if c != '$')
keyword_not_starting_with_dollar = Word(NOT_DOLLAR_SIGN, printables)
This should be a bit more efficient than building it up with a Combine and a NotAny. But this will match almost anything (integers, words, valid identifiers, invalid identifiers), so I'm skeptical of the value of this kind of expression in your parser.
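For a quick check against the sample lines (a sketch; parseString matches from the start of the string, so the keyword is whatever comes back):

from pyparsing import ParseException, Word, printables

NOT_DOLLAR_SIGN = ''.join(c for c in printables if c != '$')
keyword = Word(NOT_DOLLAR_SIGN, printables)

for line in ['$abc = 5', 'abc123 = 10', 'abc$ = 23']:
    try:
        print(line, '->', keyword.parseString(line)[0])
    except ParseException:
        print(line, '-> no match')

# $abc = 5    -> no match
# abc123 = 10 -> abc123
# abc$ = 23   -> abc$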