I'm using HERE's Platform Data Extension to retrieve road names. However, I don't understand the strings that I'm getting. I suspect they're encoded somehow but I don't know how to decode them.
For example:
ENGBNFDR Dr NNASN"e|fe "de "e|rre "dri|ve "nol|te;NASY"e|fe "de "e|rre;<snip>
If I split them by a "record separator" character, e.g. link_names.split('\x1e'), the values look slightly more intelligible, but only slightly. There are still bizarre abbreviations I don't understand, e.g. ENGBN.
The PDE Layers documents can be found here: http://pde.cit.api.here.com/1/doc/content.html?detail=1&app_id=xxx&app_code=yyy
Layers > ROAD_NAME_FC1 > NAMES.
List of all names for this object, in all languages, latin1/pinyin/phonetic transliterations.
For convenience, non-exonym base names are listed first.
Format:
NAMES = NAME1 \u001D NAME2 \u001D NAME3 ...
NAME = NAME_TEXT \u001E TRANSLIT1 ; TRANSLIT2 ; ... \u001E PHONEME1 ; PHONEME2 ; ...
NAME_TEXT = LANGUAGE_CODE NAME_TYPE IS_EXONYM text
TRANSLIT = LANGUAGE_CODE text
PHONEME = LANGUAGE_CODE IS_PREFERRED text
LANGUAGE_CODE is a 3 character string
NAME_TYPE is one letter (A = abbreviation, B = base name, E = exonym, K = shortened name, S = synonym)
IS_EXONYM = Y if the name is a translation into another language
IS_PREFERRED = Y if this is the preferred phoneme.
Please note, the delimiters are:
\u001D between languages (NAMES level)
\u001E between name text, transliterations, and phonemes
';' between different transliterations and phonemes of the same name
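Putting the quoted spec together, here is a minimal Python sketch of a NAMES parser (the dict keys and the sample string are my own illustration, not HERE's):

def parse_names(names_field):
    """Parse a ROAD_NAME_FC1 NAMES value per the layer doc quoted above."""
    names = []
    for name in names_field.split('\x1d'):       # \u001D separates languages
        parts = name.split('\x1e')               # \u001E separates text/translits/phonemes
        text = parts[0]
        translits = parts[1].split(';') if len(parts) > 1 and parts[1] else []
        phonemes = parts[2].split(';') if len(parts) > 2 and parts[2] else []
        names.append({
            'language_code': text[:3],           # 3-character language code
            'name_type': text[3],                # A/B/E/K/S
            'is_exonym': text[4] == 'Y',
            'name': text[5:],
            'transliterations': [(t[:3], t[3:]) for t in translits],
            'phonemes': [(p[:3], p[3] == 'Y', p[4:]) for p in phonemes],
        })
    return names

# Made-up sample: a base name with one transliteration and one preferred phoneme.
sample = 'ENGBNMain St\x1eSPACalle Principal\x1eENGYm-ey-n s-t-r-ee-t'
print(parse_names(sample))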
I am passing a dictionary to a template.
road_len = {"US_NEWYORK_MAIN": 24,
            "US_BOSTON_WALL": 18,
            "FRANCE_PARIS_RUE": 9,
            "MEXICO_CABOS_STILL": 8}
file_handle = open("output.txt", "w")
env.globals.update(country = "MEXICO")
env.globals.update(city = "CABOS")
env.globals.update(street = "STILL")
file_handle.write(env.get_template("template.txt").render(road_len=road_len))
template.txt
This is a road length is: {{road_len["{{country}}_{{city}}_{{street}}"]}}
Expected output.txt
This is a road length is: 8
But this does not work, as nested variable substitution is not allowed.
You never nest Jinja {{..}} markers. You're already inside a template context, so if you want to use the value of a variable you just use the variable. It helps if you're familiar with Python, because you can use most of Python's string formatting constructs.
So you could write:
This is a road length is: {{road_len["%s_%s_%s" % (country, city, street)]}}
Or:
This is a road length is: {{road_len[country + "_" + city + "_" + street]}}
Or:
This is a road length is: {{road_len["{}_{}_{}".format(country, city, street)]}}
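For completeness, here is a self-contained sketch of the corrected flow; from_string stands in for get_template so the example needs no template file (assuming jinja2 is installed):

from jinja2 import Environment

env = Environment()
env.globals.update(country="MEXICO", city="CABOS", street="STILL")

road_len = {"US_NEWYORK_MAIN": 24,
            "US_BOSTON_WALL": 18,
            "FRANCE_PARIS_RUE": 9,
            "MEXICO_CABOS_STILL": 8}

template = env.from_string(
    'This is a road length is: {{road_len[country + "_" + city + "_" + street]}}')
print(template.render(road_len=road_len))
# -> This is a road length is: 8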
In this string, the character "=" differentiates attributes for a product, and commas distinguish variables within an attribute. However, we found that extra quotes are sometimes added when there are no variables to put together.
The complete string is:
Uso="Protector para patas de silla,mesas,escaleras,muebles","Topes,4-Tipo=Topes,regatones",2-Familia=Ferretería y Plomería,regatones,7-Contenido="12 unidades,4-Origen=China,4-Material=Goma,2-Modelo=Goma transparente,9-Incluye=12 unidades,3-Color=Transparente"
This is right:
Uso="Protector para patas de silla,mesas,escaleras,muebles"
This is wrong:
"Topes,4-Tipo=Topes,regatones",2-Familia=Ferretería y Plomería,regatones,7-Contenido="12 unidades,4-Origen=China,4-Material=Goma,2-Modelo=Goma transparente,9-Incluye=12 unidades,3-Color=Transparente"
Categoría="Topes,4-Tipo=Topes,regatones",2-Familia=Ferretería y Plomería,regatones,7-Contenido="12 unidades,4-Origen=China,4-Material=Goma,2-Modelo=Goma transparente,9-Incluye=12 unidades,3-Color=Transparente"
I've tried "|w+=" but it selects all quotes. I don't want to select the text between quotes; the goal is to select and remove these quotes.
We want to remove the quotes that contain an equals sign between them. The quotes that are OK and need to stay are those that group commas within a single value, keeping that value's variables separate from the rest of the string.
The regex needs to detect an = contained between an opening and a closing quote, allowing for text in between. Once this is detected, those quotes, which don't need to be there, should be removed.
Thanks!
I understand the quoted substring should be preceded by =. Then, you need
gsub('="([^"=]*=[^"]*)"', '=\\1', x)
See the R demo online:
x <- '10-Uso="Protector para patas de silla,mesas,escaleras,muebles",6-Características=Regaton interior 1 1/4 plástico blanco 4 unidades,1-Marca=Nagel,Tipo=Topes,5-Medidas=3 cm,3-Categoría=Topes y regatones,7-Contenido=4 unidades,4-Tipo=Regatones,2-Familia=Ferretería y Plomería,9-Incluye=4 regatones plásticos,regatones,4-Origen="Argentina,4-Material=Plástico,2-Modelo=Regatón interior 1 1/4,3-Color=Blanco"'
cat(gsub('="([^"=]*=[^"]*)"', '=\\1', x))
## => 10-Uso="Protector para patas de silla,mesas,escaleras,muebles",6-Características=Regaton interior 1 1/4 plástico blanco 4 unidades,1-Marca=Nagel,Tipo=Topes,5-Medidas=3 cm,3-Categoría=Topes y regatones,7-Contenido=4 unidades,4-Tipo=Regatones,2-Familia=Ferretería y Plomería,9-Incluye=4 regatones plásticos,regatones,4-Origen=Argentina,4-Material=Plástico,2-Modelo=Regatón interior 1 1/4,3-Color=Blanco
So, the quote after muebles is kept and quote after blanco is removed.
How does this work?
=" - matches =" substring
([^"=]*=[^"]*) - matches and captures into Group 1:
[^"=]* - zero or more chars other than " and =
= - a = sign
[^"]* - any 0+ chars other than "
" - matches ".
The replacement pattern is = followed by the value stored in the Group 1 memory buffer (\1, a replacement backreference).
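The same pattern ports directly to other regex flavors; for instance, a quick Python sketch (the sample string here is a shortened, made-up version of the one above):

import re

x = 'Uso="Protector para patas,muebles",Origen="Argentina,4-Material=Goma"'
print(re.sub(r'="([^"=]*=[^"]*)"', r'=\1', x))
# -> Uso="Protector para patas,muebles",Origen=Argentina,4-Material=Goma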
For a text field, I would like to expose those that contain invalid characters. The list of invalid characters is unknown; I only know the list of accepted ones.
For example, for the French language, the accepted list is
A-z, 1-9, [:punct:], space, àéèçè, hyphen, etc.
The list of invalid characters is unknown, yet I want anything unusual to surface. For example, I would want
'This is an 2-piece à-la-carte dessert' to pass while
'Ã this Øs an apple' pops up as an anomaly.
The 'not contain' notion in R does not behave as I would like. For example,
grep("[^(abc)]", c("abcdef", "defabc", "apple"))
(those that do not contain 'abc') matches all three, while
grep("(abc)", c("abcdef", "defabc", "apple"))
behaves correctly and matches only the first two. Am I missing something?
How can we do that in R? Also, how can we put the hyphen in the list of accepted characters?
[a-z1-9[:punct:] àâæçéèêëîïôœùûüÿ-]+
The above regex matches any of the following (one or more times). Note that the parameter ignore.case=T used in the code below allows the following to also match uppercase variants of the letters.
a-z Any lowercase ASCII letter
1-9 Any digit in the range from 1 to 9 (excludes 0)
[:punct:] Any punctuation character
The space character
àâæçéèêëîïôœùûüÿ Any valid French character with a diacritic mark
- The hyphen character
See the code in use below.
x <- c("This is an 2-piece à-la-carte dessert", "Ã this Øs an apple")
gsub("[a-z1-9[:punct:] àâæçéèêëîïôœùûüÿ-]+", "", x, ignore.case=T)
The code above replaces all valid characters with nothing. The result is all invalid characters that exist in the string. The following is the output:
[1] "" "ÃØ"
If by "expose the invalid characters" you mean delete the "accepted" ones, then a regex character class should be helpful. From the ?regex help page we can see that a hyphen is already part of the punctuation character vector;
[:punct:]
Punctuation characters:
! " # $ % & ' ( ) * + , - . / : ; < = > ? # [ \ ] ^ _ ` { | } ~
So the code could be:
x <- 'Ã this Øs an apple'
gsub("[A-z1-9[:punct:] àéèçè]+", "", x)
#[1] "ÃØ"
Note that regex has a predefined, locale-specific "[:alpha:]" named character class that would probably be both safer and more compact than the expression "[A-zàéèçè]", especially since the post from ctwheels suggests that you missed a few. The ?regex page indicates that "[0-9A-Za-z]" might be both locale- and encoding-specific.
If by "expose" you instead meant "identify the postion within the string" then you could use the negation operator "^" within the character class formalism and apply gregexpr:
gregexpr("[^A-z1-9[:punct:] àéèçè]+", x)
[[1]]
[1] 1 8
attr(,"match.length")
[1] 1 1
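The gregexpr call has a near-direct Python analogue with re.finditer, shown here as a sketch (A-Za-z replaces the questionable A-z range, and string.punctuation stands in for [:punct:]):

import re
import string

pattern = "[^A-Za-z1-9" + re.escape(string.punctuation) + " àéèçè]+"
x = 'Ã this Øs an apple'
print([(m.start() + 1, m.group()) for m in re.finditer(pattern, x)])
# -> [(1, 'Ã'), (8, 'Ø')]  (1-based positions, as gregexpr reports)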
I have the following strings:
F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_East_A.dat
F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_Froemke-Hoy.dat
and from each I want to extract three variables: 1. SWIR32, 2. the date, and 3. the text following the date. I want to automate this process for about 200 files, so individually selecting the locations won't exactly work for me.
so I want:
variable1=SWIR32
variable2=2005210
variable3=East_A
variable4=SWIR32
variable5=2005210
variable6=Froemke-Hoy
I am going to be using these to add titles to graphs later on, but since the position of the text in each string varies, I am unsure how to do this using STRMID.
I think you want to use a combination of STRPOS and STRSPLIT. Something like the following:
s = ['F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_East_A.dat', $
     'F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_Froemke-Hoy.dat']
name = STRARR(s.length)
date = name
txt = name
foreach sub, s, i do begin
  sub = STRMID(sub, 1 + STRPOS(sub, '\', /REVERSE_SEARCH))  ; keep only the file name
  sub = STRMID(sub, 0, STRPOS(sub, '.', /REVERSE_SEARCH))   ; drop the .dat extension
  parts = STRSPLIT(sub, '_', /EXTRACT)
  name[i] = parts[0]                                        ; e.g. SWIR32
  date[i] = parts[1]                                        ; e.g. 2005210
  txt[i] = STRJOIN(parts[2:*], '_')                         ; e.g. East_A
endforeach
You could also do this with a regular expression (using just STRSPLIT), but regular expressions tend to be complicated and error-prone.
Hope this helps!
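For anyone sanity-checking the logic outside IDL, here is the same extraction as a Python sketch:

import os

paths = [r'F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_East_A.dat',
         r'F:\Sheyenne\ROI\SWIR32_subset\SWIR32_2005210_Froemke-Hoy.dat']
for p in paths:
    base = os.path.splitext(p.split('\\')[-1])[0]  # e.g. SWIR32_2005210_East_A
    name, date, txt = base.split('_', 2)           # split at the first two underscores only
    print(name, date, txt)
# -> SWIR32 2005210 East_A
# -> SWIR32 2005210 Froemke-Hoy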
I am trying to use Pyparsing to identify a keyword which does not begin with $. So for the following input:
$abc = 5 # is not a valid one
abc123 = 10 # is valid one
abc$ = 23 # is a valid one
I tried the following
from pyparsing import Word, printables

var = Word(printables, excludeChars='$')
var.parseString('$abc')
But this doesn't allow any $ in var. How can I specify all printable characters other than $ in the first character position? Any help will be appreciated.
Thanks
Abhijit
You can use the method I used to define "all characters except X" before I added the excludeChars parameter to the Word class:
NOT_DOLLAR_SIGN = ''.join(c for c in printables if c != '$')
keyword_not_starting_with_dollar = Word(NOT_DOLLAR_SIGN, printables)
This should be a bit more efficient than building it up with a Combine and a NotAny. But this will match almost anything: integers, words, valid identifiers, invalid identifiers; so I'm skeptical of the value of this kind of expression in your parser.
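As a quick check, the expression accepts and rejects the question's inputs as intended (a runnable sketch):

from pyparsing import ParseException, Word, printables

NOT_DOLLAR_SIGN = ''.join(c for c in printables if c != '$')
keyword_not_starting_with_dollar = Word(NOT_DOLLAR_SIGN, printables)

for s in ['$abc', 'abc123', 'abc$']:
    try:
        print(s, '->', keyword_not_starting_with_dollar.parseString(s))
    except ParseException:
        print(s, '-> not valid')
# -> $abc -> not valid
# -> abc123 -> ['abc123']
# -> abc$ -> ['abc$']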