Google Maps API In Apps Script Keeps Failing - google-maps-api-3

I'm using Google Apps Script to code a distance finder for Google Maps. I've found examples of this, but they keep failing, so I thought I'd code my own. Sadly, it fails with the same error:
TypeError: Cannot read property "legs" from undefined. (line 16).
It seems to work only intermittently. I have three places in my sheet calling the same function, and at times one or more of them will return a valid response.
I saw elsewhere that people suggested using an API key to make sure you get a good response, so that's what I've implemented below. (API keys redacted! Is there a good way to tell whether they've been recognised?)
Any ideas what might be going awry?!
Thanks in advance,
Mike
function mikeDistance(start, end) {
  start = "CV4 8DJ";
  end = "cv4 9FE";
  var maps = Maps;
  maps.setAuthentication("#####", "#####");
  var dirFind = maps.newDirectionFinder();
  dirFind.setOrigin(start);
  dirFind.setDestination(end);
  var directions = dirFind.getDirections();
  var rawDistance = directions["routes"][0]["legs"][0]["distance"]["value"];
  var distance = rawDistance / 1609.34;
  return distance;
}

Here's my short term solution while the issue is being fixed.
Not ideal, but at least reduces using your API limit as much as possible.
function getDistance(start, end) {
  return hackyHack(start, end, 0);
}

function hackyHack(start, end, level) {
  if (level > 5) {
    return "Error :(";
  }
  var directions = Maps.newDirectionFinder()
      .setOrigin(start)
      .setDestination(end)
      .setMode(Maps.DirectionFinder.Mode.DRIVING)
      .getDirections();
  var route = directions.routes[0];
  if (!route) return hackyHack(start, end, level + 1); // Hacky McHackHack
  var distance = route.legs[0].distance.text;
  // var time = route.legs[0].duration.text;
  return distance;
}
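If the empty responses are rate- or quota-related, pausing between retries may also help. Below is an untested sketch along the same lines as the workaround above; Utilities.sleep is the standard Apps Script pause, and the delay values are arbitrary assumptions:
function getDistanceWithBackoff(start, end) {
  // Retry up to 5 times, sleeping a little longer before each new attempt.
  for (var attempt = 0; attempt < 5; attempt++) {
    var directions = Maps.newDirectionFinder()
        .setOrigin(start)
        .setDestination(end)
        .setMode(Maps.DirectionFinder.Mode.DRIVING)
        .getDirections();
    var route = directions.routes[0];
    if (route) {
      return route.legs[0].distance.text;
    }
    Utilities.sleep(500 * (attempt + 1)); // back off: 0.5s, 1s, 1.5s, ...
  }
  return "Error :(";
}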

Related

How to make Google's SpeechClient.StreamingRecognize.WriteAsync faster?

Google's Cloud Speech-To-Text API is incredible, but for some reason it takes around 90 seconds to complete the WriteAsync() call before anything further happens.
I'm using the C# code, unaltered, right off of GitHub (https://github.com/GoogleCloudPlatform/dotnet-docs-samples/blob/master/speech/api/Recognize/Recognize.cs). When I invoke any of the methods (StreamingMicRecognizeAsync, SyncRecognizeWithCredentials, etc.) it works as expected except for this huge delay. I don't see anything wrong anywhere, no exceptions being thrown or anything, it just takes forever to get past this code:
var speech = SpeechClient.Create();
var streamingCall = speech.StreamingRecognize();
// Write the initial request with the config.
await streamingCall.WriteAsync(
    new StreamingRecognizeRequest()
    {
        StreamingConfig = new StreamingRecognitionConfig()
        {
            Config = new RecognitionConfig()
            {
                Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
                SampleRateHertz = 16000,
                LanguageCode = "en",
            },
            InterimResults = true,
        }
    });
This isn't that much of a problem with files, but waiting 90 seconds before being able to speak into a microphone is not a good scenario for our potential customers! :)
Can anyone tell me what I'm doing wrong?
thanks!

Is there a way of finding cities within a route with Google Maps API?

Is there a way of obtaining the cities a route traced by DirectionsService.route() goes through?
For example, in the route in https://goo.gl/maps/trHkPUNzuDFEjYT27 , roads belonging to the cities of Sao Paulo (starting point), Anhanguera, Cajamar, Jundiai, (others...) and Campinas (ending point).
If we input the starting and ending point in the DirectionsService.route() method, we obtain a list of legs, which includes the road, mileage, and time to travel, but not the cities they belong to.
Is there a way of obtaining this data without calling additional API's ? Cost is an important issue when considering Maps API.
EDIT: Clarified that solution should not involve additional calls. This solution is not much better than calling PlacesService for each leg of the route, since it merely boxes parts of the route, and calls them anyways.
My suggestion would be to abandon the approach of using the Google API for everything. It's undoubtedly the best navigation tool, and that is exactly why it's so expensive. So I'd suggest doing the geocoding through something other than Google, especially if you're only looking for big cities (as appears to be the case in your example).
There are some 'free' APIs out there (in truth, they're usually never really free) - I'd only suggest this route if you're serverless. In that case I'd go with Nominatim: it has no hard limit caps (kind of, see operations.osmfoundation.org/policies/nominatim - you can spam it, but it's discouraged), no API keys, and it's completely free. The one issue, as you mentioned, is that you'd have to go through each point and make a request to an API, which takes time. Here's how I'd do it:
let zoom = 12; // Adjust to your needs: see nominatim.org/release-docs/develop/api/Reverse. Higher numbers match more places; lower numbers run faster.
let coords = [];
const stepList = [];

Array.from(googleResponse.routes[0].legs).forEach(leg => {
  stepList.push(...leg.steps);
});

stepList.forEach(e => {
  coords.push([e.endLocation.lat, e.endLocation.lng]);
});
coords.push([stepList[0].startLocation.lat, stepList[0].startLocation.lng]); // also include the route's starting point

let arr = [];
let promises = [];
let bboxes = [];

const loopOn = (i, cb) => {
  const coord = coords[i];
  const nextLoop = () => {
    i + 1 < coords.length ? loopOn(i + 1, cb) : cb();
  };

  let makeRequest = true;
  for (let bbox of bboxes) {
    if (coord[0] >= bbox[0]
        && coord[0] <= bbox[1]
        && coord[1] >= bbox[2]
        && coord[1] <= bbox[3]) { // If it is within a bounding box we've already seen,
      makeRequest = false;        // there's no need to geocode it, since we already roughly
      break;                      // know it's in an area we have already saved.
    }
  }

  if (makeRequest) {
    $.ajax({
      url: `https://nominatim.openstreetmap.org/reverse?format=jsonv2&lat=${coord[0]}&lon=${coord[1]}&zoom=${zoom}`,
      type: 'GET',
      dataType: 'jsonp',
      success: resp => {
        const thisPlace = resp.address.city || resp.address.town || resp.address.village;
        thisPlace && arr.indexOf(thisPlace) === -1 && arr.push(thisPlace);
        bboxes.push(resp.boundingbox);
        nextLoop();
      }
    });
  } else {
    nextLoop();
  }
};

loopOn(0, () => {
  /* The rest of your code */
});
This code goes through each step of each leg (where I'm assuming googleResponse is the unfiltered but JSONified response from the Directions API) and reverse-geocodes its end point with Nominatim. I've made it a bit more efficient using Nominatim's bounding boxes, which describe the rectangle around each returned city/village/area: if a step merely turns a corner inside a square/suburb/city district/city we've already seen (how coarse that match is can be tuned with the zoom variable), we skip the request entirely.
The problem is that Nominatim, being free and not particularly optimised, is not the fastest API out there. Even if Google's servers ran on a slow connection they'd still be faster, simply because Google has optimised its product with low-level code, whereas Nominatim essentially does a lookup over a large file and has to narrow down the area the slow way.
The solution would be to use a custom dataset. Obviously this requires a backend to store it on, since downloading an entire CSV to the frontend on every page load would take far too long. All you'd really need to do is replace the AJAX request with a call to the csv-parser module (or any other parsing function), which works in much the same way regarding promises/async, so you could more or less drop in the example from their website:
const fs = require('fs');
const csv = require('csv-parser');

let resp = [];
fs.createReadStream(<your file here.csv>)
  .pipe(csv())
  .on('data', (data) => resp.push(data))
  .on('end', () => {
    const results = search(resp, coords);
    const thisPlace = results.address.city || results.address.town || results.address.village;
    thisPlace && arr.indexOf(thisPlace) === -1 && arr.push(thisPlace);
    nextLoop();
  });
Also, you could remove the bounding-box code, since you don't need to save request time anymore.
However, rearranging it like so would be faster:
let resp = [];
fs.createReadStream(<your file here.csv>)
  .pipe(csv())
  .on('data', (data) => resp.push(data))
  .on('end', () => {
    let coords = [];
    const stepList = [];
    Array.from(googleResponse.routes[0].legs).forEach(leg => {
      stepList.push(...leg.steps);
    });
    stepList.forEach(e => {
      coords.push([e.endLocation.lat, e.endLocation.lng]);
    });
    coords.push([stepList[0].startLocation.lat, stepList[0].startLocation.lng]);

    let arr = [];
    coords.forEach(coord => {
      const results = search(resp, coord);
      const thisPlace = results.address.city || results.address.town || results.address.village;
      thisPlace && arr.indexOf(thisPlace) === -1 && arr.push(thisPlace);
    });
    /* The rest of your code */
  });
The next thing we need is the actual search function, which is the complicated bit. We need something that's quick but also mostly correct. The exact implementation depends on the format of your file, but here's a quick rundown of what I'd do (a rough sketch follows the list of CSV columns below):
Create two duplicates of resp, and sort one (we'll call this array a_o) by longitude and the other (a_a) by latitude. Don't use var, let or const when defining these arrays; just define them as implicit globals, so that delete works on them later.
For each array, remove anything not within a 25 km radius of the point along that array's axis: longitude for a_o, latitude for a_a.
Find every item that appears in both arrays and put these in an array called a_c, then delete both arrays to free the memory they were taking up.
Filter out items that are within 3-4 km of each other (make sure to keep one point from each such pair, though, not delete both!).
Go through each remaining item and work out its true distance to the point, using a spherical distance formula such as haversine (remember, the earth is a sphere, so the basic Pythagorean theorem will not work).
If you find an item with a distance of less than 20 km that has a city, town or village attached, break and return the name.
If you finish, i.e. never break, return undefined.
Other than that, you can go with mostly any CSV which contains:
The city's name
The central latitude
The central longitude
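For what it's worth, here is a much-simplified sketch of such a search function. It skips the sorted-array pruning described above and just scans every row with a haversine distance check. The column names name, lat and lon are assumptions about your CSV, the 20 km radius comes from the steps above, and the result is shaped like Nominatim's response so it drops into the earlier code (callers should still handle an undefined result):
// Rough sketch: nearest city within 20 km of coord = [lat, lon], or undefined.
// Assumes each row of `rows` looks like { name: 'Campinas', lat: '-22.9', lon: '-47.06' }.
function search(rows, coord) {
  const toRad = deg => deg * Math.PI / 180;
  const haversineKm = (lat1, lon1, lat2, lon2) => {
    const R = 6371; // mean Earth radius in km
    const dLat = toRad(lat2 - lat1);
    const dLon = toRad(lon2 - lon1);
    const a = Math.sin(dLat / 2) ** 2 +
              Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
    return 2 * R * Math.asin(Math.sqrt(a));
  };

  let best;
  let bestDist = Infinity;
  for (const row of rows) {
    const d = haversineKm(coord[0], coord[1], Number(row.lat), Number(row.lon));
    if (d < bestDist) {
      bestDist = d;
      best = row;
    }
  }
  // Mimic Nominatim's { address: { city: ... } } shape so the calling code above still works.
  return bestDist <= 20 ? { address: { city: best.name } } : undefined;
}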
I hope this helps. In conclusion, go with the first, tried-and-tested, premade method if you're only doing 5-6 simple routes an hour. If you've got gajillions of waypoints, download the data and spend half an hour or so making what is essentially your own geocoding system. It'll be well worth the performance bump.

Asynchronous with indexeddb problems

I am having a problem with a function in IndexedDB where I need to change the status of some meetings. The search feature collects the checked meetings by grabbing the ID of each one; then, in a for() loop, I iterate over the array of IDs and, on each pass, access the database with the ID for that iteration. Example code follows:
var val = [];
var checkbox = $('input:checkbox[class^=checkReunioes]:checked');
if (checkbox.length > 0) {
  checkbox.each(function () {
    val.push($(this).val());
  });
}

for (var i = 0; i < val.length; i++) {
  var transaction = db.transaction(["tbl_REUNIOES"], "readwrite").objectStore("tbl_REUNIOES");
  var request = transaction.get(val[i]);

  request.onerror = function (event) {
    alert("BAD");
  };

  request.onsuccess = function (event) {
    var data = request.result;
    data.FLG_STATU_REUNI = 'I';
    var codigo_igreja = localStorage.getItem("igreja");
    var dataJSON = JSON.stringify(data);
    enviarFilaSincronismo("tbl_REUNIOES", "U", dataJSON, " WHERE COD_IDENT_REUNI = '" + val[i] + "' and COD_IDENT_IGREJ = '" + codigo_igreja + "'");

    var requestUpdate = transaction.put(data);
    requestUpdate.onerror = function (event) {
      alert("OK");
    };
    requestUpdate.onsuccess = function (event) {
      $("#listReunioes").html("");
      serchAll(w_key_celula);
    };
  };
}
In my view the problem occurs because IndexedDB is asynchronous: the loop moves on to the next lookup even before the first one has finished.
But how can I deal with this?
What is good practice for a case like this?
If you are inexperienced with writing asynchronous code, a good general rule to consider is to never define functions inside loops. Do not set request.onsuccess to a function from within the for loop.
You can perform multiple get and put requests on the same transaction when you do not expect the individual requests to fail for data-related reasons, such as the violation of a uniqueness constraint of an index, or because you are performing many thousands of requests on the same transaction and reaching processing limits.
You might find that using IDBObjectStore.prototype.openCursor together with IDBCursor.prototype.update is more convenient than using IDBObjectStore.prototype.get and IDBObjectStore.prototype.put.
Your example code indicates that a successful get request means that data was retrieved, when in fact, this is not what actually happens. A successful get request just means that a request occurred without errors (e.g. against an object store that exists, against a database that is not blocked by other requests, against a database connection that is still valid). It does not mean that an object matched your get request query. You should be checking for whether the request's result object is defined, and use that check as a determination of whether an object matched your get query, and not simply that a successful request occurred.
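For example (a minimal sketch, not taken from the original code), the success handler could check the result explicitly before touching it:
request.onsuccess = function (event) {
  var data = event.target.result;
  if (data === undefined) {
    // The request succeeded but no object matched the key.
    return;
  }
  // ... only now is it safe to modify the object and put() it back ...
};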
You might want to spend more time organizing your code into smaller functions that use clearer names. Your example code is difficult to read.
It looks like you are using some type of global db variable. If you are not well experienced with writing asynchronous code, avoid using a global db variable. There is no guarantee the db variable will be defined and open when you decide to access it, which could lead to an unexpected error.
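Putting those suggestions together, here is a rough sketch of the cursor-based approach (untested; it assumes the same db handle, store and field names as the question):
function deactivateMeetings(db, ids, onDone) {
  // One transaction for all updates; a cursor visits each object in the store.
  var store = db.transaction(["tbl_REUNIOES"], "readwrite").objectStore("tbl_REUNIOES");
  var request = store.openCursor();

  request.onsuccess = function (event) {
    var cursor = event.target.result;
    if (!cursor) {
      // No more objects: every matching meeting has been updated.
      if (onDone) onDone();
      return;
    }
    if (ids.indexOf(cursor.key) !== -1) {
      var data = cursor.value;
      data.FLG_STATU_REUNI = 'I';
      cursor.update(data); // writes the modified object back in place
    }
    cursor.continue();
  };

  request.onerror = function (event) {
    console.error('Cursor request failed', event.target.error);
  };
}
Called once with the array of checked IDs, this avoids creating a new transaction and a new closure per ID inside the loop.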

WebRTC Peerconnection: Which IP flow of candidates set is used?

I am currently working on a monitoring tool for WebRTC sessions, looking into the SDP transferred from caller to callee and vice versa. Unfortunately I cannot figure out which IP flow is actually used, since there are more than 10 candidate lines per session establishment and the session is established only after several candidates have been pushed into the PeerConnection.
Is there any way to figure out which of the candidate flows is being used?
I solved the issue by myself! :)
There is a function called peerConnection.getStats(callback);
This will give a lot of information about the ongoing PeerConnection.
Example: http://webrtc.googlecode.com/svn/trunk/samples/js/demos/html/constraints-and-stats.html
W3C Standard Description: http://dev.w3.org/2011/webrtc/editor/webrtc.html#statistics-model
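A quick sketch of what that callback form looks like with Chrome's legacy getStats API (it just dumps every reported stat to the console):
peerConnection.getStats(function (report) {
  report.result().forEach(function (res) {
    res.names().forEach(function (name) {
      console.log(res.id, name, res.stat(name));
    });
  });
});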
Bye
I wanted to find out the same thing, so I wrote a small function that returns a promise which resolves to the candidate details:
function getConnectionDetails(peerConnection) {
  var connectionDetails = {}; // the final result object

  if (window.chrome) { // checking if Chrome
    var reqFields = [
      'googLocalAddress',
      'googLocalCandidateType',
      'googRemoteAddress',
      'googRemoteCandidateType'
    ];
    return new Promise(function (resolve, reject) {
      peerConnection.getStats(function (stats) {
        var filtered = stats.result().filter(function (e) {
          return e.id.indexOf('Conn-audio') == 0 && e.stat('googActiveConnection') == 'true';
        })[0];
        if (!filtered) return reject('Something is wrong...');
        reqFields.forEach(function (e) {
          connectionDetails[e.replace('goog', '')] = filtered.stat(e);
        });
        resolve(connectionDetails);
      });
    });
  } else { // assuming it is Firefox
    var stream = peerConnection.getLocalStreams()[0];
    if (!stream || !stream.getTracks()[0]) stream = peerConnection.getRemoteStreams()[0];
    if (!stream) return Promise.reject('no stream found');
    var track = stream.getTracks()[0];
    if (!track) return Promise.reject('No Media Tracks Found');
    return peerConnection.getStats(track).then(function (stats) {
      var selectedCandidatePair = stats[Object.keys(stats).filter(function (key) { return stats[key].selected; })[0]],
          localICE = stats[selectedCandidatePair.localCandidateId],
          remoteICE = stats[selectedCandidatePair.remoteCandidateId];
      connectionDetails.LocalAddress = [localICE.ipAddress, localICE.portNumber].join(':');
      connectionDetails.RemoteAddress = [remoteICE.ipAddress, remoteICE.portNumber].join(':');
      connectionDetails.LocalCandidateType = localICE.candidateType;
      connectionDetails.RemoteCandidateType = remoteICE.candidateType;
      return connectionDetails;
    });
  }
}
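Usage is just a promise chain; for example (a sketch, where pc is a hypothetical variable holding your RTCPeerConnection):
getConnectionDetails(pc).then(function (details) {
  // e.g. { LocalAddress: "10.0.0.2:54321", RemoteAddress: "...", LocalCandidateType: "local", RemoteCandidateType: "srflx" }
  console.log('Active candidate pair:', details);
}).catch(function (err) {
  console.error('Could not determine the active candidate pair:', err);
});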

Meteor multiplayer game clients get out of sync - how to debug?

I've built a simple real-time multiplayer math game in Meteor that you can try out here: http://mathplay.meteor.com
When playing locally (using different browsers), everything works fine. But when I play over the Internet with friends, the clients often get out of sync: a question listed as active for one player is actually already solved by another player.
My guess is that some code that should be server-only gets executed on one of the clients instead. Any suggestions on how to debug this behavior?
Here is what happens on the client when user submits an answer:
Template.number_input.events[okcancel_events('#answertextbox')] = make_okcancel_handler({
  ok: function (text, event) {
    question = Questions.findOne({ order_number: Session.get("current_question_order_number") });
    if (question.answer == document.getElementById('answertextbox').value) {
      console.log('True');
      Questions.update(question._id, {
        $set: {
          text: question.text.substr(0, question.text.length - 1) + question.answer,
          player: Session.get("player_name")
        }
      });
      callGetNewQuestion();
    } else {
      console.log('False');
    }
    document.getElementById('answertextbox').value = "";
    document.getElementById('answertextbox').focus();
  }
});
callGetNewQuestion() triggers this on both client and server:
getNewQuestion: function () {
  var nr1 = Math.round(Math.random() * 100);
  var nr2 = Math.round(Math.random() * 100);
  question_string = nr1 + " + " + nr2 + " = ?";
  question_answer = (nr1 + nr2);
  current_order_number = Questions.find({}).count() + 1;
  current_question_id = Questions.insert({ order_number: current_order_number, text: question_string, answer: question_answer });
  return Questions.findOne({ _id: current_question_id }); // current_question_id
},
Full source code is here for reference: https://github.com/tomsoderlund/MathPlay
Your problem lies with this:
callGetNewQuestion() triggers this on both client and server
Because the method runs in both places, the client's simulation will generate a different _id (and a different question) than the server, and the client's version then gets replaced by whatever the server generated. This won't happen every time, which is exactly why things drift out of sync so easily: your client is generating its own data.
You'll need a way to make sure the client generates the same data as the server. One way is to seed the random number generator the same way on both sides, so it produces the same numbers every time; that also removes the flicker caused by differing values.
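As an illustration only (not from the original post), a tiny deterministic generator such as mulberry32 could be seeded with something both sides already agree on, for example the question's order number:
// mulberry32: a small deterministic PRNG; the same seed yields the same sequence
// on the client's simulation and on the server.
function mulberry32(a) {
  return function () {
    var t = (a += 0x6D2B79F5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Inside getNewQuestion, seed with the question's order number so both sides
// produce identical nr1/nr2 (assuming they agree on current_order_number):
var rand = mulberry32(current_order_number);
var nr1 = Math.round(rand() * 100);
var nr2 = Math.round(rand() * 100);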
Then, for the actual bug you might not want to do this:
return Questions.findOne({_id: current_question_id});
But do this instead (only on the client, do nothing on the server):
Session.set('current_order', current_order_number); // ORDER! Not the _id / question_id.
That way, you can put the following in a template helper:
return Questions.findOne({ order_number: Session.get('current_order') });
In essence, this works reactively on the Collection rather than depending on the method's return value.
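For instance (a sketch; the template name current_question is made up), the helper could look like this:
// In the method (client side only): remember which question we are on.
Session.set('current_order', current_order_number);

// In a helper for a hypothetical `current_question` template:
Template.current_question.helpers({
  question: function () {
    // Reactive: re-runs whenever the Session value or the matching document changes.
    return Questions.findOne({ order_number: Session.get('current_order') });
  }
});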
