How to get the original word from trans() in Symfony 2

The user should enter their country name. The problem is that all country names are translated into different languages, and I must translate the name back to English to compare it with the name in my database.
I tried the following, but it doesn't work:
$translated_country = $this->get('translator')->trans($q_country, array(), null, 'en_US');
$countries = array("A", "B", "C");
if (in_array($translated_country, $countries))
{}
For example, I have messages.de.yml:
Germany: Deutschland
I want my code to get Germany when the user enters Deutschland.

You need to have a match in the EN locale for each country translated into the other languages you support.
# messages.en.yml
deutschland: germany
Германия: germany
russland: russia
Россия: russia
# messages.de.yml
germany: deutschland
russia: russland
# messages.ru.yml
russia: Россия
germany: Германия
$toTranslate = 'deutschland';
$translator = $this->get('translator');
$translation = $translator->trans($toTranslate, array(), null, 'en_US');
/** $translation should be 'germany' */
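If you support several locales, the reverse entries do not have to be maintained by hand. As a rough sketch (Python with PyYAML; the file paths and key casing are assumptions, not part of the original answer), each forward catalog can be inverted into the English one:

import yaml  # PyYAML

# Hypothetical paths: adjust to your app's translations directory.
with open("translations/messages.de.yml", encoding="utf-8") as f:
    forward = yaml.safe_load(f)  # e.g. {"Germany": "Deutschland"}

# Invert the mapping: the localized name becomes the key, the English name the value.
reverse = {localized: english for english, localized in forward.items()}

with open("translations/messages.en.yml", "w", encoding="utf-8") as f:
    yaml.safe_dump(reverse, f, allow_unicode=True)

Whether to normalise the keys (e.g. lower-casing them, as in the catalogs above) is a separate decision; trans() looks the message id up by exact key.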

Related

Kusto query, comparing array of CIDR ranges to an IP

I am trying to do something in Kusto similar to this post:
Filter IPs if they are in list of ranges
but using the IP ranges from a publicly available list to compare to some logs.
Here's what I have tried; I believe the issue relates to me not knowing how to reference the "network" property of the external data. I get a "Query could not be parsed" error.
let IP_Data = external_data(network:string,geoname_id:long,continent_code:string,continent_name:string ,country_iso_code:string,country_name:string,is_anonymous_proxy:bool,is_satellite_provider:bool)
['https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv'];
let testIP = datatable (myip: string) ['4.28.114.50','4.59.176.50']; //Random IPs in Canada
testIP
| mv-apply tmpIP = IP_Data.network to typeof(string) on (
    where ipv4_is_in_range(myip, tmpIP)
)
| project-away tmpIP
This answers the OP's question directly; however, there is a great solution for this scenario, based on the ipv4_lookup plugin.
See the new answer below.
For both options: since the CSV has a header row, I added with (ignoreFirstRecord = true) to the external_data call.
Option 1
testIP is defined as an array (not a single-column table).
The base table is IP_Data, but the mv-apply is done on the testIP array. This enables you to access values from both IP_Data and testIP.
let IP_Data = external_data(network:string,geoname_id:long,continent_code:string,continent_name:string ,country_iso_code:string,country_name:string,is_anonymous_proxy:bool,is_satellite_provider:bool)
['https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv'] with (ignoreFirstRecord = true);
let testIP = dynamic(['4.28.114.50','4.59.176.50']); //Random IPs in Canada
IP_Data
| mv-apply testIP = testIP to typeof(string) on (where ipv4_is_in_range(testIP, network))
network        geoname_id  continent_code  continent_name  country_iso_code  country_name  is_anonymous_proxy  is_satellite_provider  testIP
4.28.114.0/24  6251999     NA              North America   CA                Canada        false               false                  4.28.114.50
4.59.176.0/24  6251999     NA              North America   CA                Canada        false               false                  4.59.176.50
Fiddle
Option 2
Cross join both tables (using a dummy column) and then filter the results
let IP_Data = external_data(network:string,geoname_id:long,continent_code:string,continent_name:string ,country_iso_code:string,country_name:string,is_anonymous_proxy:bool,is_satellite_provider:bool)
['https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv'] with (ignoreFirstRecord = true);
let testIP = datatable (myip: string) ['4.28.114.50','4.59.176.50']; //Random IPs in Canada
testIP | extend dummy = 1
| join kind=inner (IP_Data | extend dummy = 1) on dummy
| where ipv4_is_in_range(myip, network)
| project-away dummy*
myip         network        geoname_id  continent_code  continent_name  country_iso_code  country_name  is_anonymous_proxy  is_satellite_provider
4.28.114.50  4.28.114.0/24  6251999     NA              North America   CA                Canada        false               false
4.59.176.50  4.59.176.0/24  6251999     NA              North America   CA                Canada        false               false
Fiddle
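The cross-join-and-filter pattern of Option 2 is not specific to Kusto. As a rough sketch (pandas, assuming the geoip2-ipv4 CSV has been downloaded locally; not part of the original answer):

import ipaddress
import pandas as pd

# Assumed local copy of the same CSV used in the external_data() call above.
ip_data = pd.read_csv("geoip2-ipv4.csv")                       # has a 'network' CIDR column
test_ip = pd.DataFrame({"myip": ["4.28.114.50", "4.59.176.50"]})

# Cross join via a dummy key, then keep rows where the IP falls inside the CIDR range.
joined = (test_ip.assign(dummy=1)
                 .merge(ip_data.assign(dummy=1), on="dummy")
                 .drop(columns="dummy"))
in_range = joined.apply(
    lambda r: ipaddress.ip_address(r["myip"]) in ipaddress.ip_network(r["network"], strict=False),
    axis=1,
)
print(joined[in_range])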
New answer
Demo for 1M IPs, based on the ipv4_lookup plugin
let geoip2_ipv4 = external_data(network:string,geoname_id:long,continent_code:string,continent_name:string ,country_iso_code:string,country_name:string,is_anonymous_proxy:bool,is_satellite_provider:bool)
['https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv'] with (ignoreFirstRecord = true)
| extend continent_name = coalesce(continent_name, '--- Missing ---');
let ips = materialize(range i from 1 to 1000000 step 1 | extend ip = format_ipv4(tolong(rand() * pow(2,32))));
ips
| evaluate ipv4_lookup(geoip2_ipv4, ip, network, return_unmatched = true)
| summarize count() by continent_name
continent_name   count_
North America    399059
Asia             201902
Europe           173566
South America    33795
Oceania          13384
Africa           17569
--- Missing ---  226
                 160499
Fiddle

Data scraping with a list in Excel

I have a list in Excel: one code in column A and another in column B.
There is a website in which I need to input both details into two different boxes, and it then takes me to another page.
That page contains certain details which I need to scrape into Excel.
Any help with this?
Ok. Give this a shot:
import pandas as pd
import requests

df = pd.read_excel('C:/test/data.xlsx')
url = 'http://rla.dgft.gov.in:8100/dgft/IecPrint'
results = pd.DataFrame()

for row in df.itertuples():
    # First sheet column: IEC code (zero-padded to 10 digits); second column: party name.
    payload = {
        'iec': '%010d' % row[1],
        'name': row[2]}
    response = requests.post(url, params=payload)
    print('IEC: %010d\tName: %s' % (row[1], row[2]))
    try:
        dfs = pd.read_html(response.text)
    except:
        print('The name given by you does not match the data OR you have entered fewer than three letters.')
        temp_df = pd.DataFrame([['%010d' % row[1], row[2], 'ERROR']],
                               columns=['IEC', 'Party Name and Address', 'ERROR'])
        results = results.append(temp_df, sort=False).reset_index(drop=True)
        continue

    # First table on the result page: general details, reshaped into a single row.
    generalData = dfs[0]
    generalData = generalData.iloc[:, [0, -1]].set_index(generalData.columns[0]).T.reset_index(drop=True)

    # Second table: directors, one column per director.
    directorData = dfs[1]
    directorData = directorData.iloc[:, [-1]].T.reset_index(drop=True)
    directorData.columns = ['director_%02d' % (each + 1) for each in directorData.columns]

    # Third table (not always present): branches, one column per branch.
    try:
        branchData = dfs[2]
        branchData = branchData.iloc[:, [-1]].T.reset_index(drop=True)
        branchData.columns = ['branch_%02d' % (each + 1) for each in branchData.columns]
    except:
        branchData = pd.DataFrame()
        print('No Branch Data.')

    temp_df = pd.concat([generalData, directorData, branchData], axis=1)
    results = results.append(temp_df, sort=False).reset_index(drop=True)

results.to_excel('path.new_file.xlsx', index=False)
Output:
print (results.to_string())
IEC IEC Allotment Date File Number File Date Party Name and Address Phone No e_mail Exporter Type IEC Status Date of Establishment BIN (PAN+Extension) PAN ISSUE DATE PAN ISSUED BY Nature Of Concern Banker Detail director_01 director_02 director_03 branch_01 branch_02 branch_03 branch_04 branch_05 branch_06 branch_07 branch_08 branch_09
0 0305008111 03.05.2005 04/04/131/51473/AM20/ 20.08.2019 NISSAN MOTOR INDIA PVT. LTD. PLOT-1A,SIPCOT IN... 918939917907 shailesh.kumar#rnaipl.com 5 Merchant/Manufacturer Valid IEC 2005-02-07 AACCN0695D FT001 NaN NaN 3 Private Limited STANDARD CHARTERED BANK A/C Type:1 CA A/C No :... HARDEEP SINGH BRAR GURMEL SINGH BRAR HOUSE NO ... JEROME YVES MARIE SAIGOT THIERRY SAIGOT A9/2, ... KOJI KAWAKITA KIHACHI KAWAKITA 3-21-3, NAGATAK... Branch Code:165TH FLOOR ORCHID BUSINESS PARK,S... Branch Code:14NRPDC , WAREHOUSE NO.B -2A,PATAU... Branch Code:12EQUINOX BUSINESS PARK TOWER 3 4T... Branch Code:8GRAND PALLADIUM,5TH FLR.,B WING,,... Branch Code:6TVS LOGISTICS SERVICES LTD.SING,C... Branch Code:2PLOT 1A SIPCOT INDUL PARK,ORAGADA... Branch Code:5BLDG.NO.3 PART,124A,VALLAM A,SRIP... Branch Code:15SURVEY NO. 678 679 680 681 682 6... Branch Code:10INDOSPACE SKCL INDL.PARK,BULD.NO...

How can I store question data for use in my app?

I need to store question data in a CMS like WordPress and use a JSON endpoint to pull it into my app. Something like this:
<question>
What event marked the start of World War II?
</question>
<correct_answer>
Invasion of Poland (1939)
</correct_answer>
<incorrect_answers>
["Invasion of Russia (1942)","Battle of Britain (1940)","Invasion of Normandy (1944)"]
</incorrect_answers>
The problem is that the resulting JSON looks like this; WordPress inserts line breaks and paragraphs:
<p><question><br \/>\nWhat event marked the start of World War II?<br \/>\n<\/question><br \/>\n<correct_answer><br \/>\nInvasion of Poland (1939)<br \/>\n<\/correct_answer><br \/>\n<incorrect_answers><br \/>\n[“Invasion of Russia (1942)”,”Battle of Britain (1940)”,”Invasion of Normandy (1944)”]<br \/>\n<\/incorrect_answers><br \/>\n<question><br \/>\ntesting<br \/>\n<\/question><br \/>\n<correct_answer><br \/>\nyseesf<br \/>\n<\/correct_answer><br \/>\n<incorrect_answers><br \/>\n[“gffdg”,”fdgfdg”,”dfgfdgfd”]<br \/>\n<\/incorrect_answers><\/p>\n","protected":false},"excerpt":{"rendered":"<p>What event marked the start of World War II? Invasion of Poland (1939) [“Invasion of Russia (1942)”,”Battle of Britain (1940)”,”Invasion of Normandy (1944)”] testing yseesf [“gffdg”,”fdgfdg”,”dfgfdgfd”]<\/p>\n
How can I store data like this in a more accessible way?
PHP part
$questions = array(
    array(
        'question' => 'What event marked the start of World War II?',
        'answers' => array('Invasion of Russia (1942)', 'Invasion of Poland (1939)', 'Battle of Britain (1940)', 'Invasion of Normandy (1944)'),
        'correct_answer' => 1
    ),
    array(
        'question' => 'Question 2',
        'answers' => array('A1', 'A2', 'A3', 'A4'),
        'correct_answer' => 0
    )
);
$json = json_encode($questions);
update_post_meta($post->ID, 'questions', $json);
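On the app side, the stored JSON only needs to be decoded once more after fetching the post. A minimal sketch in Python (the URL is hypothetical, and it assumes the questions meta field is exposed to the REST API, e.g. via register_post_meta() with show_in_rest):

import json
import requests

# Hypothetical endpoint: adjust to the post that carries the 'questions' meta field.
ENDPOINT = "https://example.com/wp-json/wp/v2/posts/123"

post = requests.get(ENDPOINT).json()

# The meta value was stored as a JSON string, so it needs a second decode.
questions = json.loads(post["meta"]["questions"])

for q in questions:
    print(q["question"])
    print("  correct:", q["answers"][q["correct_answer"]])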

Merge and update columns

I am trying to rebuild some MS Access update query logic with R's merge function, as the update query logic is missing a few arguments.
Table link Google drive
In my database "Invoice Account allocation", there are 2 tables:
Account_Mapping Table:
Origin country  Origin Postal code  Destination country  Destination postal code  Invoice Account
FRA             01                  GBR                   *                        ZR001
FRA             02                  BEL                   *                        ZR003
BEL             50                  ARG                   *                        ZR002
GER             01                  ITA                   *                        ZR002
POL             02                  ESP                   *                        ZR001
ESP             50                  NED                   *                        ZR003
*               95                  FRA                   38                       ZR001
BEL             *                   *                     *                        ZR002
*               *                   *                     *                        ZR003
FRA             *                   FRA                   25                       ZR004
Load_ID
ID             Origin country  Postal code  Destination  Destination postal code  Default Invoice Account
2019SN0201948  FRA             98           FRA          38                       XXAC001
2019SN0201958  POL             56           GBR          15                       XXAC001
2019SN0201974  BEL             50           ARG          27                       XXAC001
2019SN0201986  FRA             02           GER          01                       XXAC001
The default invoice account in the tables (Load_ID and Status_ID) is updated with the invoice account from the Account_Mapping table.
The tables Account_Mapping and Load_ID can be joined by:
Origin country & Origin country,
Origin Postal code & Postal code,
Destination country & Destination, and
Destination postal code & Destination postal code.
In the Account_Mapping table there are several "*" values; they mean the string can take any value. I am not able to build this logic with the merge function. Please help me with better logic.
New_Assigned_Account_Final <- merge(
  Load_ID, Account_Mapping,
  by.x = c("Origin country", "Postal code", "Destination", "Destination postal code"),
  by.y = c("Origin country", "Origin Postal code", "Destination country", "Destination postal code")
)
Desired result:
Updated Load_ID table as below.
Load_ID:
ID             Origin country  Postal code  Destination  Destination postal code  Default Invoice Account
2019SN0201948  FRA             98           FRA          38                       ZR003
2019SN0201958  POL             56           GBR          15                       ZR003
2019SN0201974  BEL             50           ARG          27                       ZR002
2019SN0201986  FRA             02           GER          01                       ZR003
For the first ID, the default account becomes "ZR003" because "FRA" as origin country does not have the postal code "98" in the mapping, so it falls into the all-"*" bucket and is allocated to ZR003.
For the third ID, the default account becomes "ZR002" because "BEL" as origin country does have the postal code "50" associated with it, and the destination postal code for "ARG" can be anything because of the "*" in the destination postal code column, so it is allocated to ZR002.
Thank you for your inputs.
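Although the question asks about R's merge, the wildcard rule itself is easiest to state as a row-by-row lookup: take the first mapping row whose four key columns either equal the load row's values or are "*". A rough sketch of that rule (Python/pandas rather than R, with the tables abbreviated; the top-to-bottom, first-match-wins order is an assumption):

import pandas as pd

# Abbreviated versions of the two tables above; "*" means "matches anything".
account_mapping = pd.DataFrame(
    [["FRA", "02", "BEL", "*", "ZR003"],
     ["BEL", "50", "ARG", "*", "ZR002"],
     ["BEL", "*",  "*",   "*", "ZR002"],
     ["*",   "*",  "*",   "*", "ZR003"]],
    columns=["Origin country", "Origin Postal code",
             "Destination country", "Destination postal code", "Invoice Account"])

load_id = pd.DataFrame(
    [["2019SN0201948", "FRA", "98", "FRA", "38", "XXAC001"],
     ["2019SN0201974", "BEL", "50", "ARG", "27", "XXAC001"]],
    columns=["ID", "Origin country", "Postal code",
             "Destination", "Destination postal code", "Default Invoice Account"])

# Pairs of (Load_ID column, Account_Mapping column) to compare.
keys = [("Origin country", "Origin country"),
        ("Postal code", "Origin Postal code"),
        ("Destination", "Destination country"),
        ("Destination postal code", "Destination postal code")]

def lookup_account(row):
    for _, m in account_mapping.iterrows():
        if all(m[mk] == "*" or row[lk] == m[mk] for lk, mk in keys):
            return m["Invoice Account"]
    return row["Default Invoice Account"]  # no mapping row matched

load_id["Default Invoice Account"] = load_id.apply(lookup_account, axis=1)
print(load_id)  # FRA/98 falls to the all-"*" row (ZR003); BEL/50/ARG matches ZR002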

geocode function removes special characters

Hi, I am using the geocode function to get lat and lng data for some cities, but for cities with special characters in their names, such as "Marcos Juárez Argentina" or "Perú Argentina", it generates a faulty request:
https://maps.googleapis.com/maps/api/geocode/json?address=Per%FA%20Argentina&key=[**my api key**]
Is there a way to fix that?
We can use the enc2utf8() function, which converts the string to UTF-8 and sets its declared encoding accordingly:
> geocode(enc2utf8("Marcos Juárez Argentina"), output = 'more')
Information from URL : http://maps.googleapis.com/maps/api/geocode/json?address=Marcos%20Ju%C3%A1rez%20Argentina&sensor=false
lon lat type loctype address north south east west locality
1 -62.1058 -32.69786 locality approximate marcos juárez, cordoba, argentina -32.67304 -32.71417 -62.07497 -62.1302 Marcos Juárez
administrative_area_level_2 administrative_area_level_1 country
1 Marcos Juárez Department Cordoba Argentina
Or you can use a geocoding service that does not transliterate inputs; for example, in JSON:
https://geocode.xyz/Marcos%20Ju%C3%A1rez%20Argentina?json=1
{ "standard" : { "addresst" : {}, "city" : "Marcos Juárez", "prov" : "AR", "countryname" : "Argentina", "postal" : {}, "confidence" : "0.9" }, "longt" : "-62.10158", "alt" : {}, "elevation" : {}, "latt" : "-32.69679"}
