I'm using Firebase and would like to update a value through an HTTP-request Cloud Function no matter what happens on the client (connection error, tab or browser closed).
I thought of a setTimeout, but that doesn't solve the issue: it runs on the client side, so the request fails if anything happens to the page.
My idea was instead to send the request to a remote server (a bit like a cron job) that would execute it after a certain delay (say 30 minutes), cancelling any previous request still in the queue for the same userID and request path.
How could I achieve that, and is it possible?
Example:
request number 1 -> www.myrequest with { userUID: 1, data: { size: 1 } }
request time: 12:01
request number 2 -> www.myrequest with { userUID: 1, data: { size: 30 } }
request time: 12:02
Request 2 cancels request 1 and gets executed 30 minutes later.
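One way this could be done is with Cloud Tasks: enqueue an HTTP task scheduled 30 minutes out, remember the task name per userUID + request path, and delete the previously stored task before enqueuing the new one. Here is a minimal sketch in Node, assuming a Cloud Tasks queue and a Firestore collection for bookkeeping (the project, region, queue, collection, and function names are placeholders, not part of the question):

const {CloudTasksClient} = require('@google-cloud/tasks');
const admin = require('firebase-admin');

admin.initializeApp();
const tasksClient = new CloudTasksClient();

async function scheduleDelayed(userUID, path, payload) {
  const queuePath = tasksClient.queuePath('my-project', 'us-central1', 'my-queue');
  const docRef = admin.firestore().collection('pendingTasks').doc(`${userUID}:${path}`);

  // Cancel the previously queued task for this user + path, if any.
  const prev = await docRef.get();
  if (prev.exists) {
    await tasksClient.deleteTask({name: prev.data().taskName}).catch(() => {});
  }

  // Enqueue a new HTTP task that fires in 30 minutes.
  const [task] = await tasksClient.createTask({
    parent: queuePath,
    task: {
      httpRequest: {
        httpMethod: 'POST',
        url: `https://us-central1-my-project.cloudfunctions.net/${path}`,
        headers: {'Content-Type': 'application/json'},
        body: Buffer.from(JSON.stringify({userUID, ...payload})).toString('base64'),
      },
      scheduleTime: {seconds: Math.floor(Date.now() / 1000) + 30 * 60},
    },
  });

  // Remember the task name so the next call can cancel this one.
  await docRef.set({taskName: task.name});
}

Storing the task name externally (rather than using a deterministic task name) avoids the Cloud Tasks restriction that a task name cannot be reused for a while after the task is deleted or executed.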
I'm trying to fetch Remote Config values on first login (and not only after minimumFetchIntervalMillis expires). This is what I do (roughly):

firebaseConfig.settings.minimumFetchIntervalMillis = 15 * 60 * 1000; // 15 minutes
firebaseConfig.defaultConfig = { /* in-app defaults */ };
await firebaseConfig.fetchAndActivate();
console.log(firebaseConfig.getAll());

But I do not get the correct config for the user, just the server default.
Is there a valid way to do that?
I was thinking about setting minimumFetchIntervalMillis to zero on first login and, after the config was fetched, setting it back to 15 minutes, but I'm not sure that's the best approach.
Any ideas?
Set minimumFetchIntervalMillis to 0 for the initial fetch if you wish to retrieve new configuration data immediately after login rather than waiting for the minimumFetchIntervalMillis duration to elapse. Keep in mind that this sends requests to the server more often, which can hurt your app's performance and raise server costs.
// Fetch immediately on first login...
firebaseConfig.settings.minimumFetchIntervalMillis = 0;
firebaseConfig.fetchAndActivate().then(() => {
  console.log(firebaseConfig.getAll());
  // ...then restore the regular 15-minute interval.
  firebaseConfig.settings.minimumFetchIntervalMillis = 15 * 60 * 1000;
});
The above sample code retrieves the remote configuration with a minimumFetchIntervalMillis of 0, activates it, and prints the fetched values. After the initial fetch, minimumFetchIntervalMillis is set back to 15 minutes.
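For example, a single parameter could be read after activation like this (the key "welcome_message" is hypothetical):

firebaseConfig.fetchAndActivate().then(() => {
  // "welcome_message" is a hypothetical parameter key.
  const welcome = firebaseConfig.getValue('welcome_message').asString();
  console.log(welcome);
});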
When I change the typeahead's value to one that was already fetched asynchronously, no new request is sent; the suggestions load from the old response.
$('#MyTypeahead').typeahead('val', 'something');      // sends an async request
$('#MyTypeahead').typeahead('val', 'something else'); // sends an async request
$('#MyTypeahead').typeahead('val', 'something');      // loads from the first response
For some reason I need a fresh async request every time the typeahead's value changes.
Undocumented: I had to clear the remote cache on every single search.
The solution is simply disabling the cache:
remote: { cache: false, ... }
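For context, a minimal Bloodhound setup with the remote cache disabled might look like this (the URL and wildcard are placeholders, not from the question):

var suggestions = new Bloodhound({
  datumTokenizer: Bloodhound.tokenizers.whitespace,
  queryTokenizer: Bloodhound.tokenizers.whitespace,
  remote: {
    url: '/search?q=%QUERY',
    wildcard: '%QUERY',
    cache: false  // fetch fresh suggestions on every query instead of reusing old responses
  }
});

$('#MyTypeahead').typeahead(null, {
  name: 'suggestions',
  source: suggestions
});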
I am trying to use the Marketo activities.json API endpoint, and I get a timeout every time I try. I have set the cURL timeout to 25 seconds and I am using a valid nextPageToken parameter to filter the results. The timeframe is yesterday and today.
When I try other endpoints (lists.json, activities/pagingtoken.json, leads.json, and stats/usage/last7days.json) I get a response and my request does not time out.
Here is the request I am making to activities.json:
method: "GET"
url: "https://[marketo-id].mktorest.com/rest/v1/activities.json"
parameters: Array
(
[nextPageToken] => [paging-token]
[listId] => [list-id]
[activityTypeIds] => 24
[access_token] => [access-token]
)
Why am I getting a timeout just for the activities.json endpoint? Is this API endpoint broken or down?
The global timeout for Marketo's REST API is 30 seconds; can you first try adjusting your local timeout to match this? And what happens if you remove the list ID from the call?
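As a sketch, the same request with a 30-second client timeout and the listId dropped might look like this in Node 18+ (host, tokens, and IDs are placeholders):

const marketoId = 'your-marketo-id';      // placeholder
const accessToken = 'your-access-token';  // placeholder
const pagingToken = 'your-paging-token';  // placeholder

const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 30000); // 30 s, matching Marketo's global timeout

const params = new URLSearchParams({
  nextPageToken: pagingToken,
  activityTypeIds: '24',
  access_token: accessToken,
  // listId intentionally omitted to test whether the list filter is what times out
});

fetch(`https://${marketoId}.mktorest.com/rest/v1/activities.json?${params}`, {signal: controller.signal})
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error('Request failed or timed out:', err))
  .finally(() => clearTimeout(timer));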
So I completely understand how to use getIceServers via your demo, but what's the best practice for implementing on the server side / compiled client-side?
"This token should only be implemented in a secure environment, such as a server-side application or a compiled client-side application."
Do the list of IceServers expire at some point? Should I request new IceServers on each page request or do I cache the list for X amount of time?
The Ice Server credentials expire after about 10 seconds. Because you want to keep your XirSys secret token secure (so no one can hack your account's connection allotment), you'll want to make a backend/server side curl request for the ice servers. It's assumed that your app uses its own authentication. I.e., it'll reject any non-authenticated requests to https://yourdomain.com/ajax/get-ice-servers.
So ... whenever you need to create a PeerConnection object, get a list of Ice servers through an Ajax call ...
var pc = new RTCPeerConnection(
  getIceServers(), // assumed to return a configuration like { iceServers: [...] }
  { optional: [] }
);
where ...
function getIceServers() {
  // Synchronous on purpose: the result must be available before
  // RTCPeerConnection is instantiated (see the note below).
  var result = jQuery.ajax({
    async: false,
    url: "https://" + yourDomain + ".com/ajax/get-ice-servers"
  }).responseText;
  return JSON.parse(result);
}
Note that you'll want a synchronous Ajax request so the getIceServers() function returns the result before RTCPeerConnection is instantiated.
Also note that if you start a WebRTC connection automatically on page load, you could probably just use the iceServers result from the server-side curl request.
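If you'd rather avoid the synchronous request, the same flow can be sketched asynchronously by fetching the ICE servers before the connection is created (same assumed endpoint as above):

async function createPeerConnection() {
  const response = await fetch('https://' + yourDomain + '.com/ajax/get-ice-servers');
  const iceServers = await response.json(); // assumed to be an array of ICE server entries
  return new RTCPeerConnection({ iceServers: iceServers });
}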
Can someone help me with this problem? It occurs whenever the code runs from a TRIGGER, but it works in a normal PROCEDURE.
TRIGGER:
create or replace procedure testeHTTP(search varchar2)
is
  req  sys.utl_http.req;
  resp sys.utl_http.resp;
  url  varchar2(500);
begin
  url := 'http://www.google.com.br';
  dbms_output.put_line('opening');
  -- Open the connection and start the request
  req := sys.utl_http.begin_request(search);
  dbms_output.put_line('preparing');
  -- Get the response
  resp := sys.utl_http.get_response(req);
  dbms_output.put_line('ending response');
  -- End the request/response communication
  sys.utl_http.end_response(resp);
exception
  when others then
    dbms_output.put_line('exception');
    dbms_output.put_line(sys.utl_http.get_detailed_sqlerrm());
end;
Close your user session and the problem is fixed.
Internally there is a limit of 5 open HTTP requests.
The problem might be a missing utl_http.end_response, or an exception in the app that prevents the resp object from being closed.
Modify the code like this:

EXCEPTION
  WHEN UTL_HTTP.TOO_MANY_REQUESTS THEN
    UTL_HTTP.END_RESPONSE(resp);
You need to close your requests once you are done with them; it does not happen automatically (unless you disconnect from the DB entirely).
It used to be utl_http.end_response, but I am not sure if it is still the same API.
Usually we need UTL_HTTP.END_RESPONSE(resp); to avoid ORA-29270: too many open HTTP requests, but I think I reproduced the problem of @Clóvis Santos in Oracle 19c.
If the web service always returns status 200 (success), too many open HTTP requests never happen. But if persistent connections are enabled and the web service returns status 404, the behavior becomes different.
Let's call something that always returns 404.
The first call of utl_http.begin_request returns normally and opens a new persistent connection. We can check it with select utl_http.get_persistent_conn_count() from dual;. The second call causes an exception inside utl_http.begin_request and the persistent connection becomes closed. (The exception is correctly handled with end_response/end_request.)
If I continue, each odd execution returns 404 normally and each even execution gives an exception (handled correctly, of course).
After some iterations I get ORA-29270: too many open HTTP requests. If the web service returns status 200, everything goes normally.
I guess it happens because of the specific web service. Probably it drops the persistent connection after a 404 and doesn't after a 200. The second call then tries to reuse a request on a persistent connection that no longer exists, which causes a request leak.
If I use utl_http.set_persistent_conn_support (false, 0); once in my session, the problem disappears. I can call the web service as many times as I need.
Resolution:
Try switching off persistent connection support. Probably persistent connections on the HTTP server work differently for different requests. Looks like a bug.