Is there any API to get the geocode for an airport code?
For example, if I need to calculate the travel time from home (say it's Malibu) to LAX (Los Angeles Intl. Airport), ideally I would follow the steps below:
Get my home address geolocation (via geocoder)
Get the LAX geolocation (via geocoder)
Use the above as source and destination in "calculateroute".
However, when I use "LAX" in the geocoder, it gives some place in CHE (Switzerland).
If I append the country (USA), it lists some other place in Georgia.
https://geocoder.api.here.com/6.2/geocode.json?app_id=MY-APP-ID&app_code=MY-APP-CODE&gen=9&searchtext=LAX
https://geocoder.api.here.com/6.2/geocode.json?app_id=MY-APP-ID&app_code=MY-APP-CODE&gen=9&searchtext=LAX,USA
Is there an alternate way to do this, or is the only way for me to maintain a map of IATA airport codes with their geo coordinates and use those directly in calculateroute?
To get the geocode of an Airport:
Use Landmark geocoding: categoryids=4581
categoryids (xs:integer)
Limit landmark results to one or more categories. Examples:
Highway exits: 116
Airports: 4581
Tourist attractions: 7999
Example:
http://geocoder.api.here.com/6.2/search.json?categoryids=4581&gen=8&jsonattributes=1&language=en-US&maxresults=20&searchtext=LAX&app_id={YOUR_APP_ID}&app_code={YOUR_APP_CODE}
Read more at developer.here.com/documentation/geocoder/topics/resource-search.html
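If you are calling this from R, a minimal sketch with httr/jsonlite is below. The field path to the coordinates (Response → View → Result → Location → DisplayPosition) is based on the usual Geocoder 6.2 response layout and should be verified against your own response:
library(httr)
library(jsonlite)

# Landmark search: categoryids = 4581 restricts results to airports.
resp <- GET("https://geocoder.api.here.com/6.2/search.json",
            query = list(categoryids = 4581,
                         gen = 9,
                         maxresults = 1,
                         searchtext = "LAX",
                         app_id = "MY-APP-ID",     # your credentials
                         app_code = "MY-APP-CODE"))

parsed <- fromJSON(content(resp, as = "text", encoding = "UTF-8"),
                   simplifyVector = FALSE)

# Assumed response layout -- verify against your actual payload.
pos <- parsed$Response$View[[1]]$Result[[1]]$Location$DisplayPosition
pos$Latitude
pos$Longitude
The resulting latitude/longitude can then be passed straight to calculateroute as waypoint0/waypoint1.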
Related
I am doing some geocoding of street addresses (n=18,000) using the ggmap package in R and the Google Maps API, which I understand has a limit of 2,500 geocoding requests per day for addresses.
The geocoding script I'm using is very simple and works on the small test dfs I've tried (like the sample below), but I'm wondering about the simplest/most elegant way to stitch together the final geocoded df of all 18,000 locations, built from 2,500-row chunks over the next ~7 days.
I'd thought about just numbering them by day and then binding them all together at the end, using the following line of code each time on a df that looks like the sample below:
library(ggmap)
library(tidyverse)
register_google(key = "MY API KEY", write = TRUE)
pharmacies <- data.frame(pharm_id = c("00001", "00002", "00003"), address = c("250 S. Colonial Drive, Alabaster, AL 35007", "6181 U.S. Highway 431, Albertville, AL 35950", "113 Third Avenue S.E., Aliceville, AL 35442"))
pharmacies_geocoded_1 <- mutate_geocode(pharmacies, address, output = "latlon")
pharm_id   address
00001      250 S. Colonial Drive, Alabaster, AL 35007
00002      6181 U.S. Highway 431, Albertville, AL 35950
00003      113 Third Avenue S.E., Aliceville, AL 35442
But it seems like manually doing this day by day will get a bit messy (or that there may be some more elegant loop strategy that I can set up once and walk away from). Is there a better way?
EDIT
As @arachne591 says, there is also an R interface to cron via the cronR package. On Windows, taskscheduleR does the same job.
You can wrap your code in a script and run it daily with a cron job.
If you are on UNIX (Linux/Mac):
crontab -e
and then introduce a new line with:
0 0 * * 0-6 Rscript "/route/to/script.R"
This runs your script “At 00:00 on every day-of-week from Sunday through Saturday.”
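For the script itself, here is a minimal sketch of what /route/to/script.R could do each day; the file names, the 2,500-row chunk size and the bookkeeping via the output file are my own assumptions, not something from the original question:
# script.R -- geocode the next 2,500-row chunk and append it to the output file.
# File names and chunk size are illustrative; adjust them to your own setup.
library(ggmap)
library(readr)
register_google(key = "MY API KEY")

pharmacies <- read_csv("pharmacies.csv")   # the full list of 18,000 addresses
done_file  <- "pharmacies_geocoded.csv"    # grows by one chunk per day

n_done <- if (file.exists(done_file)) nrow(read_csv(done_file)) else 0
if (n_done < nrow(pharmacies)) {
  todo  <- pharmacies[(n_done + 1):min(n_done + 2500, nrow(pharmacies)), ]
  chunk <- mutate_geocode(todo, address, output = "latlon")
  # Append today's chunk; the header is written only on the first run.
  write_csv(chunk, done_file, append = file.exists(done_file))
}
Once all chunks have run, the output file already holds the full geocoded data set, so nothing needs to be bound together by hand.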
You can build your own schedule with crontab.guru.
Additional resources:
Schedule an Rscript crontab every minute
Running a cron job at 2:30 AM everyday
I have been using the Google Places Search API to look for permanently closed locations. As it turns out, Google filters them out of the query results.
For example: Cumberland River Hospital shows up as permanently closed on Google Maps:
https://www.google.com/search?rlz=1C1GCEV_enUS859US859&ei=JpzVXdXxDq2d_Qams6KwBg&q=Cumberland+River+Hospital&oq=Cumberland+River+Hospital&gs_l=psy-ab.3..0l6j0i22i30l4.3132.3757..4013...0.0..0.229.882.0j4j1......0....1..gws-wiz.3587Ce7Fh7c&ved=0ahUKEwjVnJrvyvnlAhWtTt8KHaaZCGYQ4dUDCAs&uact=5
Now, as a test, I try to search for all hospitals at the same coordinates using the googleway package as follows:
library('googleway')

hosps <- google_places(
  location = c(36.544690, -85.500820),
  keyword = 'hospital',
  key = api,
  rankby = 'distance',
  simplify = TRUE)
and it leaves "Cumberland River Hospital" out of the search results.
However, when I search specifically for "Cumberland River Hospital" at the same coordinates, the Google Places Search API returns information for that particular hospital, including a flag "permanently_closed" = TRUE indicating that this place is permanently closed.
library('googleway')

pc_hosp <- google_places(
  location = c(36.544690, -85.500820),
  keyword = "cumberland river hospital",
  key = api,
  rankby = 'distance',
  simplify = TRUE)
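As a side note, one way to inspect that flag on the targeted result is sketched below; the exact column layout of the simplified result is an assumption and may differ depending on what Google returns:
# With simplify = TRUE, google_places() returns a list whose $results element is a
# data frame of places. The permanently_closed column only shows up when Google
# actually returns the flag, so check for it before using it.
if ("permanently_closed" %in% names(pc_hosp$results)) {
  pc_hosp$results[, c("name", "permanently_closed")]
} else {
  pc_hosp$results$name   # flag not returned for these results
}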
Why does the Google Places API skip permanently closed locations in the regular search results? Is there a way to include permanently closed locations along with regular establishments using the Google Places Search API?
I am using the newsanchor package in R to try to extract entire article content via NewsAPI. So far I have done the following:
require(newsanchor)
results <- get_everything(query = "Trump +Trade", language = "en")
test <- results$results_df
This gives me a data frame with info on (at most) 100 articles. However, these do not contain the entire article text. Rather, they contain something like the following:
[1] "Tensions between China and the U.S. ratcheted up several notches over the weekend as Washington sent a warship into the disputed waters of the South China Sea. Meanwhile, Google dealt Huaweis smartphone business a crippling blow and an escalating trade war co… [+5173 chars]"
Is there a way to extract the remaining 5173 chars? I have tried to read the documentation but I am not really sure.
I don't think that is possible, at least with the free plan. If you go through the documentation at https://newsapi.org/docs/endpoints/everything, in the Response object section it says:
content - string
The unformatted content of the article, where available. This is truncated to 260 chars for Developer plan users.
So all the content is restricted to only 260 characters. However, test$url has the link to the source article, which you can use to scrape the entire content, but since the articles are aggregated from various sources I don't think there is one automated way to do this.
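If you do want to scrape the linked articles yourself, a rough sketch with rvest is below. It simply concatenates the <p> tags of each page, which is an assumption that works for some outlets and fails for others (paywalls, consent pages, non-standard markup):
library(rvest)
library(purrr)

# Best-effort scraper: fetch each URL and glue its paragraph text together.
scrape_text <- possibly(function(url) {
  read_html(url) %>%
    html_elements("p") %>%
    html_text2() %>%
    paste(collapse = "\n")
}, otherwise = NA_character_)

test$full_text <- map_chr(test$url, scrape_text)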
When using Sabre APIs, is there any reliable indicator available in a Sabre TravelItineraryReadRS (or GetReservation) or other API that indicates whether a flight is international or domestic?
I want to avoid adding complexity and having to maintain a separate list of airport codes and countries if possible, and instead just use an indicator from a response.
I've checked <FlightSegment> in <PTC_FareBreakdown> but nothing seems to indicate internationality:
<tir39:FlightSegment ConnectionInd="O" DepartureDateTime="02-24T13:00" FlightNumber="123" ResBookDesigCode="E" SegmentNumber="1" Status="SS">
  <tir39:BaggageAllowance Number="01P"/>
  <tir39:FareBasis Code="AFB112"/>
  <tir39:MarketingAirline Code="VA" FlightNumber="123"/>
  <tir39:OriginLocation LocationCode="BNE"/>
  <tir39:ValidityDates>
    <tir39:NotValidAfter>2019-02-24</tir39:NotValidAfter>
    <tir39:NotValidBefore>2019-02-24</tir39:NotValidBefore>
  </tir39:ValidityDates>
</tir39:FlightSegment>
and also checked in <ReservationItems><Item>, e.g.:
<tir39:Item RPH="1">
  <tir39:FlightSegment AirMilesFlown="0466" ArrivalDateTime="05-18T14:40" DayOfWeekInd="6" DepartureDateTime="2019-05-18T13:05" SegmentBookedDate="2018-12-21T11:20:00" ElapsedTime="01.35" eTicket="true" FlightNumber="0529" NumberInParty="01" ResBookDesigCode="E" SegmentNumber="0001" SmokingAllowed="false" SpecialMeal="false" Status="HK" StopQuantity="00" IsPast="false" CodeShare="false" Id="123">
    <tir39:DestinationLocation LocationCode="SYD" Terminal="TERMINAL 3 DOMESTIC" TerminalCode="3"/>
    <tir39:Equipment AirEquipType="21B"/>
    <tir39:MarketingAirline Code="QF" FlightNumber="0529">
      <tir39:Banner>MARKETED BY QANTAS AIRWAYS</tir39:Banner>
    </tir39:MarketingAirline>
    <tir39:Meal Code="L"/>
    <tir39:OperatingAirline Code="QF" FlightNumber="0529" ResBookDesigCode="E">
      <tir39:Banner>OPERATED BY QANTAS AIRWAYS</tir39:Banner>
    </tir39:OperatingAirline>
    <tir39:OperatingAirlinePricing Code="QF"/>
    <tir39:DisclosureCarrier Code="QF" DOT="false">
      <tir39:Banner>QANTAS AIRWAYS</tir39:Banner>
    </tir39:DisclosureCarrier>
    <tir39:OriginLocation LocationCode="BNE" Terminal="DOMESTIC" TerminalCode="D"/>
    <tir39:UpdatedArrivalTime>05-18T14:40</tir39:UpdatedArrivalTime>
    <tir39:UpdatedDepartureTime>05-18T13:05</tir39:UpdatedDepartureTime>
  </tir39:FlightSegment>
</tir39:Item>
and although these have origin/destination airports, neither indicates whether the flight is international, and the terminal name is not reliable as an indicator.
<PriceQuotePlus> has a DomesticIntlInd attribute that initially looked useful:
<tir39:PriceQuotePlus DomesticIntlInd="I" PricingStatus="S" VerifyFareCalc="false" ItineraryChanged="false" ...>
but PriceQuotePlus, and therefore DomesticIntlInd, does not seem to be present in all circumstances. For example, I have TravelItineraryReadRS responses with no PriceQuotePlus element that still contain ReservationItems/Item/FlightSegment elements I need to be able to identify as international or domestic.
Not only that, but I also have a reservation where "DomesticIntlInd" is set to "I" even though the reservation does not contain an international flight (it has only one flight, and that flight is domestic, BNE-SYD).
Any other thoughts on where I might find a reliable international flight indicator or is this functionality simply not available?
Sabre does expose a City Pairs API that includes country codes for each airport, which you could use to infer whether a flight started and ended in the same country.
They also expose this as a list that you could build into your own data table, but the API would probably be more futureproof.
The current file can be found here, but I don't know if that link will work forever.
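Once you have an airport-to-country lookup (from the City Pairs data or any other source), the check itself is simple. A rough R sketch with a made-up three-row lookup table, purely for illustration:
# Hypothetical lookup built from the City Pairs data: one row per airport code.
airport_countries <- data.frame(
  airport = c("BNE", "SYD", "LAX"),
  country = c("AU", "AU", "US"))

# A segment is international if origin and destination resolve to different countries.
is_international <- function(origin, destination, lookup = airport_countries) {
  lookup$country[match(origin, lookup$airport)] !=
    lookup$country[match(destination, lookup$airport)]
}

is_international("BNE", "SYD")   # FALSE -- domestic
is_international("BNE", "LAX")   # TRUE  -- international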
I am trying to find or build a web scraper that can go through and find every state/national park in the US along with their GPS coordinates and land area. I have looked into some frameworks like Scrapy, and I also see there are sites built specifically around Wikipedia data, such as http://wiki.dbpedia.org/About. Is there any specific advantage to either of these, or would one work better for loading the information into an online database?
Let's suppose you want to parse pages like this Wikipedia page. The following code should work.
using System;
using System.Linq;
using System.Web;          // HttpUtility.HtmlDecode
using HtmlAgilityPack;

var doc = new HtmlDocument();
doc = .. //Load the document here. See doc.Load(..), doc.LoadHtml(..), etc.

//We get all the rows from the table (except the header)
var rows = doc.DocumentNode.SelectNodes("//table[contains(@class, 'sortable')]//tr").Skip(1);
foreach (var row in rows) {
    var name = HttpUtility.HtmlDecode(row.SelectSingleNode("./*[1]/a[@href and @title]").InnerText);
    var loc = HttpUtility.HtmlDecode(row.SelectSingleNode(".//span[@class='geo-dec']").InnerText);
    var areaNodes = row.SelectSingleNode("./*[5]").ChildNodes.Skip(1);
    string area = "";
    foreach (var a in areaNodes) {
        area += HttpUtility.HtmlDecode(a.InnerText);
    }
    Console.WriteLine("{0,-30} {1,-20} {2,-10}", name, loc, area);
}
I tested it, and it produces the following output:
Acadia 44.35°N 68.21°W 47,389.67 acres (191.8 km2)
American Samoa 14.25°S 170.68°W 9,000.00 acres (36.4 km2)
Arches 38.68°N 109.57°W 76,518.98 acres (309.7 km2)
Badlands 43.75°N 102.50°W 242,755.94 acres (982.4 km2)
Big Bend 29.25°N 103.25°W 801,163.21 acres (3,242.2 km2)
Biscayne 25.65°N 80.08°W 172,924.07 acres (699.8 km2)
Black Canyon of the Gunnison 38.57°N 107.72°W 32,950.03 acres (133.3 km2)
Bryce Canyon 37.57°N 112.18°W 35,835.08 acres (145.0 km2)
Canyonlands 38.2°N 109.93°W 337,597.83 acres (1,366.2 km2)
Capitol Reef 38.20°N 111.17°W 241,904.26 acres (979.0 km2)
Carlsbad Caverns 32.17°N 104.44°W 46,766.45 acres (189.3 km2)
Channel Islands 34.01°N 119.42°W 249,561.00 acres (1,009.9 km2)
Congaree 33.78°N 80.78°W 26,545.86 acres (107.4 km2)
Crater Lake 42.94°N 122.1°W 183,224.05 acres (741.5 km2)
Cuyahoga Valley 41.24°N 81.55°W 32,860.73 acres (133.0 km2)
Death Valley 36.24°N 116.82°W 3,372,401.96 acres (13,647.6 km2)
Denali 63.33°N 150.50°W 4,740,911.72 acres (19,185.8 km2)
Dry Tortugas 24.63°N 82.87°W 64,701.22 acres (261.8 km2)
Everglades 25.32°N 80.93°W 1,508,537.90 acres (6,104.8 km2)
Gates of the Arctic 67.78°N 153.30°W 7,523,897.74 acres (30,448.1 km2)
Glacier 48.80°N 114.00°W 1,013,572.41 acres (4,101.8 km2)
(...)
I think that's a start. If some page fails, you have to see if the layout changes, etc.
Of course, you will also have to find a way of obtaining all the links you want to parse.
One important thing: do you know if it is permitted to scrape Wikipedia? I have no idea, but you should check before doing it... ;)
Though the question is a little old, another alternative available right now is to avoid any scraping and get the raw data directly from protectedplanet.net - it contains data from the World Database of Protected Areas and the UN's List of Protected Areas. (Disclosure: I worked for UNEP-WCMC, the organisation that produced and maintains the database and the website.)
It's free for non-commercial use, but you'll need to register to download. For example, this page lets you download 22,600 protected areas in the USA as KMZ, CSV and SHP (contains lat, lng, boundaries, IUCN category and a bunch of other metadata).
I would consider this not the best approach.
My idea would be to use the API from openstreetmap.org (or any other geo-based API that you can query) and ask it for the data you want. National parks are likely to be found pretty easily. You can get the names from a source like Wikipedia and then ask any of the geo APIs to give you the information you want.
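As a rough illustration of that idea, the sketch below queries the Overpass API (the public OpenStreetMap query service) for relations tagged boundary=national_park inside the US. The tag choice and the endpoint are assumptions on my part, and some parks are mapped with other tags, so treat it only as a starting point:
library(httr)
library(jsonlite)

# Overpass QL: all relations tagged boundary=national_park inside the US area.
query <- '
[out:json][timeout:120];
area["ISO3166-1"="US"][admin_level=2]->.us;
relation["boundary"="national_park"](area.us);
out center tags;
'

resp  <- POST("https://overpass-api.de/api/interpreter",
              body = list(data = query), encode = "form")
parks <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))$elements

# "out center" returns one representative point per relation; the name lives in the tags.
head(data.frame(name = parks$tags$name,
                lat  = parks$center$lat,
                lon  = parks$center$lon))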
BTW, what's wrong with Wikipedia's list of national parks?