Do we really need .map when calling an API over HTTP in Angular 2? Please check my code below. It works fine with .map and even without it. If the API returns data it comes back as success, otherwise it comes back as error. I will also return some model data from here after performing an action on it. So, do I need an Observable? Is there any benefit to using it? I am using .subscribe on the component side to receive the data. Is this fine, or does it need improvement?
returnData: ReturnData[] = [];

callyAPI(body: modelData) {
  return this.http.post(URL, body)
    .do(
      data => {
        for (let i = 0; i < data.length; ++i) {
          this.returnData.push(data[i]);
        }
        return this.returnData;
      },
      error => { }
    );
}
You don't need to use map, but do is definitely the wrong operator here.
do is supposed to execute some side-effect code for every event, not to modify the event's value, while map can transform or replace the event with a different value, which is what you're doing in your example.
https://github.com/ReactiveX/rxjs/blob/master/src/operator/do.ts#L13-L14
Perform a side effect for every emission on the source Observable, but return
an Observable that is identical to the source.
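As a rough illustration, here is a minimal sketch of the same call rewritten with map (assuming the pre-HttpClient Http service, where the response body has to be extracted with res.json(); the service and component names below are placeholders, not from the question):
// requires: import 'rxjs/add/operator/map';
callAPI(body: modelData) {
  // map transforms each emission; here it turns the raw Response into its JSON body
  return this.http.post(URL, body)
    .map(res => res.json());
}

// In the component, subscribe to receive the transformed data or the error
this.myService.callAPI(model).subscribe(
  data => this.items = data,
  error => console.error(error)
);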
I've been digging around and am not able to find references or documentation on how to use asynchronous functions in Google Apps Script. I found people mentioning that it's possible, but not how...
Could someone point me in the right direction or provide me with an example?
Promises, callbacks, or anything that can help me with this.
I have this function, let's call it foo, that takes a while to execute (long enough that it could time out an HTTP call).
What I'm trying to do is refactor it so that it works like this:
function doPost(e) {
  // parsing and getting values from e
  var returnable = foo(par1, par2, par3);
  return ContentService
    .createTextOutput(JSON.stringify(returnable))
    .setMimeType(ContentService.MimeType.JSON);
}

function foo(par1, par2, par3) {
  var returnable = something(par1, par2, par3); // get the value I need to return
  // continue in an async way, or schedule execution for something else,
  // and allow the function to continue its flow
  /* async bar(); */
  return returnable;
}
Now I want to achieve that in foo, because it takes too long and I don't want to risk a timeout; also, the logic that happens there is totally client-independent, so it doesn't matter when it runs. I just need the return value, which I'll already have before.
Also, I think it's worth mentioning that this is deployed as a web app on Google Drive.
It's been a while since I asked this, but to add some context: at the time I wanted to schedule several things to happen on Google Drive, and the execution was timing out, so I was looking for a way to safely schedule a job.
You want to execute functions asynchronously using Google Apps Script.
You want to run the functions asynchronously from a time-driven trigger.
If my understanding is correct, unfortunately there are no built-in methods or official documentation for achieving this directly. But as a workaround, it can be achieved by combining the Google Apps Script API with the fetchAll method, which sends its requests asynchronously.
The flow of this workaround is as follows.
Deploy the script as an API executable and enable the Google Apps Script API.
Using fetchAll, send requests to the Google Apps Script API endpoint that runs a function.
When several functions are requested at once, they run asynchronously because fetchAll sends the requests concurrently.
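As a rough sketch of steps 2 and 3 (the script ID, the function names, and the required OAuth scopes are assumptions; the script must already be deployed as an API executable with the Apps Script API enabled):
var SCRIPT_ID = '...your script ID...'; // assumption: fill in with your project's script ID

function runFunctionsAsync() {
  var url = 'https://script.googleapis.com/v1/scripts/' + SCRIPT_ID + ':run';
  var token = ScriptApp.getOAuthToken();
  // Build one request per function to run; fetchAll sends them concurrently.
  var requests = ['myFunction1', 'myFunction2'].map(function(name) {
    return {
      url: url,
      method: 'post',
      contentType: 'application/json',
      headers: { Authorization: 'Bearer ' + token },
      payload: JSON.stringify({ function: name, devMode: true }),
      muteHttpExceptions: true
    };
  });
  var responses = UrlFetchApp.fetchAll(requests);
  responses.forEach(function(r) { Logger.log(r.getContentText()); });
}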
Note:
I think that Web Apps can also be used instead of the Google Apps Script API.
In order to simplify the use of this workaround, I have created a GAS library. I think that you can also use it.
With this workaround, you can also run the functions asynchronously from a time-driven trigger.
References:
fetchAll
Deploy the script as an API executable
scripts.run of Google Apps Script API
Benchmark: fetchAll method in UrlFetch service for Google Apps Script
GAS library for running the asynchronous processing
If I misunderstand your question, I'm sorry.
There is another way to accomplish this.
You can use time-based one-off triggers to run functions asynchronously, they take a bit of time to queue up (30-60 seconds) but it is ideal for slow-running tasks that you want to remove from the main execution of your script.
// Creates a one-off trigger that runs myFunction as soon as possible
// (after() takes milliseconds; in practice the trigger takes a little while to queue up)
ScriptApp.newTrigger("myFunction")
  .timeBased()
  .after(1)
  .create();
There is a handy script that I put together called Async.gs that helps remove the boilerplate from this technique. You can even use it to pass arguments via the CacheService.
Here is the link:
https://gist.github.com/sdesalas/2972f8647897d5481fd8e01f03122805
// Define async function
function runSlowTask(user_id, is_active) {
  console.log('runSlowTask()', { user_id: user_id, is_active: is_active });
  Utilities.sleep(5000);
  console.log('runSlowTask() - FINISHED!');
}
// Run function asynchronously
Async.call('runSlowTask');
// Run function asynchronously with one argument
Async.call('runSlowTask', 51291);
// Run function asynchronously with multiple arguments
Async.call('runSlowTask', 51291, true);
// Run function asynchronously with an array of arguments
Async.apply('runSlowTask', [51291, true]);
// Run function in library asynchronously with one argument
Async.call('MyLibrary.runSlowTask', 51291);
// Run function in library asynchronously with an array of arguments
Async.apply('MyLibrary.runSlowTask', [51291, true]);
With the new V8 runtime, it is now possible to write async functions and use promises in your Apps Script project.
Even triggers can be declared async! For example (TypeScript):
async function onOpen(e: GoogleAppsScript.Events.SheetsOnOpen) {
  console.log("I am inside a promise");
  // do your await stuff here or make more async calls
}
To start using the new runtime, just follow this guide. In short, it all boils down to adding the following line to your appsscript.json file:
{
...
"runtimeVersion": "V8"
}
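As a minimal illustrative sketch under the V8 runtime (the function names are assumptions; note that the built-in Apps Script services themselves still execute synchronously, so this mainly helps structure code around Promises):
async function computeAsync(x) {
  // Resolves immediately, but callers can await it like any Promise
  return x * 2;
}

async function main() {
  const [a, b] = await Promise.all([computeAsync(1), computeAsync(2)]);
  console.log(a + b); // 6
}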
Based on Tanaike's answer, I created another version of it. My goals were:
Easy to maintain
Easy to call (simple call convention)
tasks.gs
class TasksNamespace {
  constructor() {
    this.webAppDevUrl = 'https://script.google.com/macros/s/<your web app's dev id>/dev';
    this.accessToken = ScriptApp.getOAuthToken();
  }

  // send all requests
  all(requests) {
    return requests
      .map(r => ({
        muteHttpExceptions: true,
        url: this.webAppDevUrl,
        method: 'POST',
        contentType: 'application/json',
        payload: {
          functionName: r.first(),
          arguments: r.removeFirst()
        }.toJson(),
        headers: {
          Authorization: 'Bearer ' + this.accessToken
        }
      }), this)
      .fetchAll()
      .map(r => r.getContentText().toObject())
  }

  // send all responses
  process(request) {
    return ContentService
      .createTextOutput(
        request
          .postData
          .contents
          .toObject()
          .using(This => ({
            ...This,
            result: (() => {
              try {
                return eval(This.functionName).apply(eval(This.functionName.splitOffLast()), This.arguments) // this could cause an error
              }
              catch(error) {
                return error;
              }
            })()
          }))
          .toJson()
      )
      .setMimeType(ContentService.MimeType.JSON)
  }
}
helpers.gs
// array prototype
Array.prototype.fetchAll = function() {
return UrlFetchApp.fetchAll(this);
}
Array.prototype.first = function() {
return this[0];
}
Array.prototype.removeFirst = function() {
this.shift();
return this;
}
Array.prototype.removeLast = function() {
this.pop();
return this;
}
// string prototype
String.prototype.blankToUndefined = function(search) {
return this.isBlank() ? undefined : this;
};
String.prototype.isBlank = function() {
return this.trim().length == 0;
}
String.prototype.splitOffLast = function(delimiter = '.') {
return this.split(delimiter).removeLast().join(delimiter).blankToUndefined();
}
// To Object - if string is Json
String.prototype.toObject = function() {
if(this.isBlank())
return {};
return JSON.parse(this, App.Strings.parseDate);
}
// object prototype
Object.prototype.toJson = function() {
return JSON.stringify(this);
}
Object.prototype.using = function(func) {
return func.call(this, this);
}
http.handler.gs
function doPost(request) {
  return new TasksNamespace().process(request);
}
calling convention
Just pass arrays whose first element is the full function name and whose remaining elements are that function's arguments. It returns when everything is done, so it's like Promise.all().
var a = new TasksNamespace().all([
  ["App.Data.Firebase.Properties.getById", 'T006DB4'],
  ["App.Data.External.CISC.Properties.getById", 'T00A21F', true, 12],
  ["App.Maps.geoCode", 'T022D62', false]
])
return preview
[ { functionName: 'App.Data.Firebase.Properties.getById',
arguments: [ 'T006DB4' ],
result:
{ Id: '',
Listings: [Object],
Pages: [Object],
TempId: 'T006DB4',
Workflow: [Object] } },
...
]
Notes
It can handle any static method, any method on a root object's tree, or any root (global) function.
It can handle any number of arguments (zero or more) of any kind.
It handles errors by returning the error from any post.
// First create a trigger which will run after some time
ScriptApp.newTrigger("createAsyncJob").timeBased().after(6000).create();
/* The trigger will execute and first delete itself using the deleteTrigger method and the trigger's unique id.
   (Reason: there are limits on how many triggers you can create, so it's a safe bet to delete it.)
   Then it calls the function you actually want to execute.
*/
function createAsyncJob(e) {
  deleteTrigger(e.triggerUid);
  createJobsTrigger();
}
/* This function gets all triggers from the project, searches for the given trigger UID, and deletes that trigger.
*/
function deleteTrigger(triggerUid) {
  let triggers = ScriptApp.getProjectTriggers();
  triggers.forEach(trigger => {
    if (trigger.getUniqueId() == triggerUid) {
      ScriptApp.deleteTrigger(trigger);
    }
  });
}
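The snippet above calls createJobsTrigger() without showing it; as a hypothetical stand-in, it represents the long-running work the trigger hands off to:
// Hypothetical stand-in for the slow work that runs after the trigger fires
function createJobsTrigger() {
  // ...do the work that would otherwise risk timing out the original request...
  Utilities.sleep(5000); // placeholder for real work
  Logger.log('Async job finished');
}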
While this isn't quite an answer to your question, this could lead to an answer if implemented.
I have submitted a feature request to Google to modify the implementation of doGet() and doPost() to instead accept a completion block in the functions' parameters that we would call with our response object, allowing additional slow-running logic to be executed after the response has been "returned".
If you'd like this functionality, please star the issue here: https://issuetracker.google.com/issues/231411987?pli=1
I'm using Saga's takeLatest to abort all requests except the latest. This works fine, but now I want to only abort requests which don't have identical url, params, and method.
I know Saga uses the type attribute to compare actions (just like vanilla Redux does), but I've also added url, params, and method to my actions because I was hoping there was some way to do something like
yield takeLatestIf((action, latestAction) => {
  const sameType = action.type === latestAction.type;
  const sameUrl = action.url === latestAction.url;
  const sameParams = areEqual(action.params, latestAction.params);
  const sameMethod = action.method === latestAction.method;
  return sameType && sameUrl && sameParams && sameMethod;
});
which should only abort the previous request when those comparisons show the new action is not an exact duplicate.
How can I accomplish this?
If I get it right from your question, you want this:
Like standard takeLatest().
But when a duplicate request is made, ignore it and wait for the one already executing (a reasonable use case).
So I took takeLatest() implementation provided in the docs and adapted it to your scenario:
const takeLatestDeduped = (patternOrChannel, compareActions, saga, ...args) => fork(function*() {
  let lastTask
  let lastAction
  while (true) {
    const action = yield take(patternOrChannel)
    // New logic: ignore a duplicate request while the previous task is still running
    // (compareActions(lastAction, action) should return true when both actions describe the same request)
    if (lastTask && lastTask.isRunning() && compareActions(lastAction, action)) {
      continue
    }
    if (lastTask) {
      yield cancel(lastTask)
    }
    lastTask = yield fork(saga, ...args.concat(action))
    // New logic: save last action
    lastAction = action
  }
})
We have three cases:
No running task: start the new one - standard behavior
Running task, got non-duplicate: cancel old one, start new one - standard behavior
Running task, got duplicate: ignore - new custom behavior
So I added case #3 logic:
Ignoring the duplicate request (nothing else needs to be done in this case, so I just continue to the next action).
Saving last action for future duplicate check.
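For completeness, a hypothetical usage sketch (assuming takeLatestDeduped from above is in scope; the FETCH_DATA action type, the fetchData saga, and the request helper are assumptions, not part of the question):
import { call } from 'redux-saga/effects'
import isEqual from 'lodash/isEqual'

// Returns true when two actions describe the same request
const isDuplicate = (lastAction, action) =>
  lastAction.type === action.type &&
  lastAction.url === action.url &&
  isEqual(lastAction.params, action.params) &&
  lastAction.method === action.method

// Placeholder for whatever actually performs the HTTP call
const request = (url, method, params) =>
  fetch(url, { method, body: method === 'GET' ? undefined : JSON.stringify(params) })

function* fetchData(action) {
  yield call(request, action.url, action.method, action.params)
}

export function* watchFetchData() {
  yield takeLatestDeduped('FETCH_DATA', isDuplicate, fetchData)
}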
I have a Meteor helper that does a GET request. I'm supposed to get the response back and pass it to the template, but it's not showing up on the front end. When I log it to the console it shows the value correctly, but for the life of me I can't get it to output to the actual template.
Here is my helper:
UI.registerHelper('getDistance', function(formatted_address) {
  HTTP.call('GET', 'https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=Washington,DC&destinations=' + formatted_address + '&key=MYKEY', {}, function(error, response) {
    if ( error ) {
      console.log( error );
    } else {
      var distanceMiles = response.data.rows[0].elements[0].distance.text;
      console.log(response.data.rows[0].elements[0].distance.text);
      return distanceMiles;
    }
  });
});
In my template I have the following:
{{getDistance formatted_address}}
Again, this works fine and shows exactly what I need in the console, but not in the template.
Any ideas what I'm doing wrong?
I posted an article on TMC recently that you may find useful for such a pattern. In that article the problem involves executing an expensive function for each item in a list. As others have pointed out, doing asynchronous calls in a helper is not good practice.
In your case, make a local collection called Distances. If you wish, you can use your document _id to align it with your collection.
const Distances = new Mongo.Collection(null); // only declare this on the client; null makes it a local, unnamed collection
Then setup a function that either lazily computes the distance or returns it immediately if it's already been computed:
function lazyDistance(formatted_address) {
  let doc = Distances.findOne({ formatted_address: formatted_address });
  if ( doc ) {
    return doc.distanceMiles;
  } else {
    let url = 'https://maps.googleapis.com/maps/api/distancematrix/json';
    url += '?units=imperial&origins=Washington,DC&key=MYKEY&destinations=';
    url += formatted_address;
    HTTP.call('GET', url, {}, (error, response) => {
      if ( error ) {
        console.log( error );
      } else {
        Distances.insert({
          formatted_address: formatted_address,
          distanceMiles: response.data.rows[0].elements[0].distance.text
        });
      }
    });
  }
}
Now you can have a helper that just returns a cached value from that local collection:
UI.registerHelper('getDistance',formatted_address=>{
return lazyDistance(formatted_address);
});
You could also do this based on an _id instead of an address string of course. There's a tacit assumption above that formatted_address is unique.
It's Meteor's reactivity that really makes this work. The first time the helper is called the distance will be null but as it gets computed asynchronously the helper will automagically update the value.
Best practice is not to do an async call in a helper. Think of the #each and the helper as a way for the view to simply show the results of a prior calculation, not as the place to start doing the calculation. Remember that a helper might be called multiple times for a single item.
Instead, in the onCreated() of your template, start the work of getting the data you need and doing your calculations. Store those results in a reactive var or reactive array. Then your helper should do nothing more than look up the previously calculated results. Further, should that helper be called more times than you expect, you don't have to worry about all those additional async calls being made.
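A hypothetical sketch of that pattern (the template name propertyList and the response shape are assumptions carried over from the question):
Template.propertyList.onCreated(function () {
  this.distance = new ReactiveVar(null);

  const formatted_address = Template.currentData().formatted_address;
  const url = 'https://maps.googleapis.com/maps/api/distancematrix/json' +
    '?units=imperial&origins=Washington,DC&key=MYKEY&destinations=' + formatted_address;

  // Kick off the async work once, when the template is created
  HTTP.call('GET', url, {}, (error, response) => {
    if (error) {
      console.log(error);
    } else {
      this.distance.set(response.data.rows[0].elements[0].distance.text);
    }
  });
});

Template.propertyList.helpers({
  getDistance() {
    // The helper only reads the previously computed, reactive value
    return Template.instance().distance.get();
  }
});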
The result does not show up because HTTP.call is an async function.
Use a ReactiveVar in your case.
Depending on how the formatted_address param is updated, you can trigger getDistance with a Tracker autorun.
Regards,
Yann
I have a JSON object similar to this in the redux store of my application:
tables: [
  {
    "id": "TableGroup1",
    "objs": [
      {"tableName": "Table1", "fetchURL": "www.mybackend.[....]/get/table1"},
      {"tableName": "Table2", "fetchURL": "www.mybackend.[....]/get/table2"},
      {"tableName": "Table3", "fetchURL": "www.mybackend.[....]/get/table3"}
    ]
  },
  {
    "id": "TableGroup2",
    "objs": [
      {"tableName": "Table4", "fetchURL": "www.mybackend.[....]/get/table4"},
      {"tableName": "Table5", "fetchURL": "www.mybackend.[....]/get/table5"},
      {"tableName": "Table6", "fetchURL": "www.mybackend.[....]/get/table6"}
    ]
  }
];
To load it, I use the following call (TableApi is a mock API loaded locally; beginAjaxCall keeps track of how many Ajax calls are currently active):
export function loadTables() {
  return function(dispatch, getState) {
    dispatch(beginAjaxCall());
    return TableApi.getAllTables().then(tables => {
      dispatch(loadTablesSuccess(tables));
    }).then(() => {
      // Looping through the store to execute sub requests
    }).catch(error => {
      throw(error);
    });
  };
}
I then want to loop through my tables, call the different URLs, and populate a new field called data, so that an object after a call looks like this:
{"tableName":"Table1","fetchURL":"www.mybackend.[....]/get/table1","data":[{key:"...",value:"..."},{key:"...",value:"..."},{key:"...",value:"..."},.....]}
The data will be frequently updated by recalling the fetch url, and the table should then re-render in the view.
Which leads me to my questions:
- Is this architecturally sound?
- How would Redux handle frequent changes? (Because of immutability, will I get performance issues from frequently deep copying a table instance with 10,000+ data entries?)
And more importantly, what code could I put in place of the comment so that it serves its intended purpose? I've tried:
let i;
for (i in getState().tables) {
  let d;
  for (d in getState().tables[i].objs) {
    dispatch(loadDataForTable(d, i));
  }
}
This code, however, doesn't seem like the best implementation, and I get errors.
Any suggestions are welcome, thanks!
First of all, you don't need to make a deep copy of all tables.
For the sake of immutability, you only need to copy the changed items.
For your data structure it would look like this:
function updateTables(tables, table) {
  return tables.map(tableGroup => {
    if (tableGroup.objs.find(obj => table.tableName === obj.tableName)) {
      // if the table is in this group, copy the group
      return updateTableGroup(tableGroup, table);
    } else {
      // otherwise leave it unchanged
      return tableGroup;
    }
  })
}

function updateTableGroup(tableGroup, table) {
  return {
    ...tableGroup,
    objs: tableGroup.objs.map(obj => {
      return table.tableName === obj.tableName ? table : obj;
    })
  };
}
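For context, here is a hypothetical reducer that applies these helpers when a single table's data comes back; the action types and the shape of action.table are assumptions, not part of the question:
function tablesReducer(state = [], action) {
  switch (action.type) {
    case 'LOAD_TABLES_SUCCESS':
      return action.tables;
    case 'LOAD_TABLE_DATA_SUCCESS':
      // action.table is the updated table object, e.g. { tableName, fetchURL, data }
      return updateTables(state, action.table);
    default:
      return state;
  }
}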
I need to make two $http.get calls, and I need to send the returned response data to my service for further calculation.
I want to do something like below:
function productCalculationCtrl($scope, $http, MyService) {
  $scope.calculate = function(query) {
    $http.get('FIRSTRESTURL', {cache: false}).success(function(data) {
      $scope.product_list_1 = data;
    });
    $http.get('SECONDRESTURL', {'cache': false}).success(function(data) {
      $scope.product_list_2 = data;
    });
    $scope.results = MyService.doCalculation($scope.product_list_1, $scope.product_list_2);
  }
}
In my markup I am calling it like this:
<button class="btn" ng-click="calculate(query)">Calculate</button>
As $http.get is asynchronous, I am not getting the data when passing it into the doCalculation method.
Any idea how I can implement multiple $http.get requests like the above and pass both responses' data into the service?
What you need is $q.all.
Add $q to controller's dependencies, then try:
$scope.product_list_1 = $http.get('FIRSTRESTURL', {cache: false});
$scope.product_list_2 = $http.get('SECONDRESTURL', {'cache': false});

$q.all([$scope.product_list_1, $scope.product_list_2]).then(function(values) {
  // $q.all resolves with the full response objects, so pass each response's data
  $scope.results = MyService.doCalculation(values[0].data, values[1].data);
});
There's a simple and hacky way: Call the calculation in both callbacks. The first invocation (whichever comes first) sees incomplete data. It should do nothing but quickly exit. The second invocation sees both product lists and does the job.
I had a similar problem recently, so I'm going to post my answer as well:
In your case you only have two calls, and it seems that number won't change.
But this could just as well be a case with two or more requests being triggered at once.
So, considering two or more requests, this is how I would implement it:
var requests = [];
requests.push($http.get('FIRSTRESTURL', {'cache': false}));
requests.push($http.get('SECONDRESTURL', {'cache': false}));

$q.all(requests).then(function (responses) {
  // $q.all resolves with an array of response objects; extract each body
  var values = responses.map(function (response) {
    return response.data;
  });
  $scope.results = MyService.doCalculation(values);
});
Which, in this case, would force doCalculation to accept an array instead.