I'm developing a web application in which a constant stream of data is received every 5 seconds or so in a Java servlet (read from a file written by another application). I want to push this data onto an HTML page and read it in JavaScript so I can graph it with the d3 library.
At the moment I'm using a JavaScript function that calls the servlet's doGet every 5 seconds. I'm worried this creates a lot of overhead, or that it could be done more efficiently.
I know it's also possible to run "response.setIntHeader("Refresh", 5);" from the servlet.
Are there any other better ways?
Thanks in advance for the help!
Short polling is currently probably the most common approach to the problem you describe. If you can cope with a few seconds' lag in notification, short polling is really simple. Here is a basic example:
On page load, call this in your JS:
setInterval(checkFor, 30000);
The above will call a function checkFor() every 30 seconds (obviously, you can change the 30 seconds to any length of time - just adjust the 30000, which is in milliseconds, according to how regularly you want users to be updated).
Then, in your checkFor function, just make an AJAX call to your server asking whether there are any updates. If the server says yes, display the alert using JS; if not (which will most likely be the case most of the time), do nothing:
function checkFor(){
    $.ajax({
        url: "your/server/url",
        type: "POST",
        success: function( notification ) {
            // check whether any notifications were returned - if so, display the alert
            if (notification) {
                alert(notification);
            }
        },
        error: function(data){
            // handle any error
        }
    });
}
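To tie this back to the original d3 use case: the success handler is where you would hand the fresh data to your chart. A minimal sketch, assuming the servlet's doGet serves the latest readings as JSON at a hypothetical /yourapp/data URL, and that updateChart() is your own d3 redraw function (both names are placeholders):
function pollForData() {
    $.getJSON("/yourapp/data", function(data) {
        updateChart(data); // your d3 update/redraw logic
    });
}

// match the feed's 5-second cadence
setInterval(pollForData, 5000);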
On two occasions in the past month we have hit our daily limit on asynchronous Apex executions. Salesforce temporarily increased our limit to 425,000, but it will be scaled back down to 250,000 in a week's time. Once we reach the limit, many SF functions fail, and this has tremendously impacted both internal staff and external customers.
So to prevent this from happening in the future, we need to create some kind of alert in Salesforce to monitor our daily asynchronous Apex method executions. Our maximum daily limit is 250,000. The alert will need to create a P3 helpdesk ticket and notify a couple of users, say USER A and USER B, once usage reaches a 70% threshold.
Kindly advise what is possible to achieve this.
Thanks & Regards,
Harjeet
There's a promising Limits method but it doesn't seem to work currently ("reserved for future use"): System.debug(Limits.getAsyncCalls() + ' / ' + Limits.getLimitAsyncCalls());
There's an idea you can upvote: https://success.salesforce.com/ideaView?id=0873A0000003VIFQA2 ;)
You could query SELECT COUNT() FROM AsyncApexJob WHERE ... but that sounds like a bad idea ;)
I think your best course of action is to use the SF REST API. There's a "limits" resource you can fetch. You could do it from SF itself (a bad idea, because if you schedule it to run every hour then of course it will contribute to the limit consumption too ;)) or from some external app that'd connect to your SF...
https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_limits.htm
You can quickly try it out in workbench.developerforce.com, for example, before you decide you want to deep-dive into coding it.
Of course, if you have control over your batch jobs, queueable, schedulable & @future calls, you could implement some rough counter of executions in a helper object, for example... It won't help you much if most of the jobs are coming from managed packages though...
Got one more idea, but it's pretty hardcore - you should be able to make a REST API call from JavaScript. So you could create a simple VF page (even without any Apex controller), put a JS callout on it, and have it check every 5 mins and do something if the threshold is hit... But that means an IT person would have to have this page open all the time (perhaps as a home page component)... Messy :)
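For what that hardcore option could look like, here is a minimal sketch (untested, and everything in it is an assumption: that the standard {!$Api.Session_ID} merge field is available for auth, that the page's host can reach the REST endpoint directly, and the v51.0 version number):
// Runs inside a Visualforce page; {!$Api.Session_ID} is merged in by Salesforce.
function checkLimits() {
    fetch('/services/data/v51.0/limits', {
        headers: { 'Authorization': 'Bearer {!$Api.Session_ID}' }
    })
    .then(function(res) { return res.json(); })
    .then(function(limits) {
        var a = limits.DailyAsyncApexExecutions;
        var used = a.Max - a.Remaining;
        if (used / a.Max >= 0.7) {
            // "do something" - here just an alert for the person watching the page
            alert('Async Apex executions at ' + Math.round(100 * used / a.Max) + '%');
        }
    });
}

setInterval(checkLimits, 5 * 60 * 1000); // check every 5 mins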
I was having the exact same issue, so I created a simple JsForce script in NodeJS that calls the /limits endpoint.
You can connect a free monitoring service like UpTimerobot.com or PingDom.com and get an email when the output contains the word "Warning" (usage > 50%) or "Error" (usage > 80%).
const jsforce = require('jsforce');

// credentials come from the environment
const { SF_USERNAME, SF_PASSWORD, SF_SECURITY_TOKEN } = process.env;
const conn = new jsforce.Connection();

async function getSfLimits() {
    try {
        // Let's log in to Salesforce
        await conn.login(SF_USERNAME, SF_PASSWORD + SF_SECURITY_TOKEN);
        // Call the API
        const sfLimits = await conn.requestGet('/services/data/v51.0/limits');
        return sfLimits;
    } catch(err) {
        console.log(err);
    }
}
https://github.com/carlosdevia/salesforcelimits
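To produce the "Warning"/"Error" keywords the monitor watches for, you can compare Max against Remaining in the response; a rough sketch, assuming the standard { Max, Remaining } shape of the DailyAsyncApexExecutions entry in the limits payload:
getSfLimits().then(function(limits) {
    const a = limits.DailyAsyncApexExecutions;
    const usedPct = 100 * (a.Max - a.Remaining) / a.Max;
    // emit the keyword the uptime monitor is configured to match
    if (usedPct > 80) {
        console.log('Error: async Apex usage at ' + usedPct.toFixed(1) + '%');
    } else if (usedPct > 50) {
        console.log('Warning: async Apex usage at ' + usedPct.toFixed(1) + '%');
    } else {
        console.log('OK: ' + usedPct.toFixed(1) + '%');
    }
});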
Could you please give me some hints, websites, books or research papers that would explain how to calculate URL dwell time?
In case you don't know what dwell time is: dwell time denotes the time a user spends viewing a document after clicking a link on a search engine results page.
Thanks in advance
One crude way to do this on a page would be to use a small GET request on a timer, going to a server - an "I'm still here" ping. The frequency of this would be a trade-off. This would be relatively easy to do with jQuery or a similar framework.
You would not know whether the page is sitting in an abandoned tab, or is open but not actually being looked at.
A sample for the client end (using jquery):
// identify this page view; set once when the page loads
var $session = Math.floor((1 + Math.random()) * 0x10000);
var $server_url = "https://your.server"; // wherever the pings get registered

function still_alive() {
    var $url = $server_url + "/still_alive";
    $.get($url, {location: location.href, session: $session});
}

// call it once to prime it
still_alive();

// set it up on a timer (setInterval so it repeats; setTimeout would fire only once)
window.setInterval(function() {
    still_alive();
}, 1000);
1000 is the interval in milliseconds, so this pings on a one-second interval. $server_url is the server to register the pings at; I am adding "/still_alive" as the endpoint. $session can be any way of identifying the current session - set it once when the page loads; it could be the result of a UUID function.
The $.get line is a jQuery GET request to that whole URL. It is passed a plain object, with the key location holding the URL of the current page. It may be more appropriate to use a POST instead of a GET, but the principle is the same.
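On the server end, dwell time then falls out as last ping minus first ping per session. A minimal sketch of the /still_alive endpoint, assuming Node with Express (the in-memory map is for illustration only; you would persist this somewhere real):
const express = require('express');
const app = express();

// session id -> { url, first, last } - illustration only, not persistent
const visits = {};

app.get('/still_alive', function(req, res) {
    const { session, location } = req.query;
    const now = Date.now();
    if (!visits[session]) {
        visits[session] = { url: location, first: now, last: now };
    } else {
        visits[session].last = now;
    }
    // dwell time so far = last ping - first ping
    const dwellSecs = (visits[session].last - visits[session].first) / 1000;
    console.log(location + ' session ' + session + ': ~' + dwellSecs + 's');
    res.sendStatus(204);
});

app.listen(3000);
The resolution of the measurement is the ping interval, so a one-second timer gives you dwell time to within about a second.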
I have a web page that loads a URL and does the processing using
... Server.CreateObject("Microsoft.XMLDOM")
every 5 minutes. It processes the data and stores it in a database.
Basically, it needs to run in a loop every 5 minutes and parse the XML.
Is there a better way to do this?
I'm open to suggestions.
TIA
Steve42
There are two ways to do this. One would be to make a call from the client to the server using JavaScript code; the other would be to separate that call out into a web service and run it from a Windows Service or something similar.
The first option could be something like:
$(document).ready(function() {
    setInterval(functionToCall, (1000 * 60 * 5)); // make a call every five minutes (setInterval repeats; setTimeout would fire only once)
});

function functionToCall() {
    $.ajax({
        url: 'call to your page',
        type: 'GET'
    }).done(function() {
        alert("Things processed");
    }).fail(function() {
        alert("Error occurred");
    });
}
This is something off the top of my head to give you an idea. It might not be the ideal solution, but it should give you a guideline.
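If you go the second route, the same loop can live outside the browser entirely. A minimal Node.js sketch, assuming the xml2js npm package for parsing, Node 18+ for the built-in fetch, and a placeholder feed URL (the database write is left as a stub):
const { parseStringPromise } = require('xml2js');

async function processFeed() {
    try {
        const res = await fetch('https://example.com/feed.xml'); // placeholder URL
        const xml = await res.text();
        const doc = await parseStringPromise(xml);
        // ...walk `doc` and store what you need in the database here...
        console.log('Processed at', new Date().toISOString());
    } catch (err) {
        console.error(err);
    }
}

processFeed();                            // run once at startup
setInterval(processFeed, 5 * 60 * 1000);  // then every five minutes
You could then run this under a Windows service wrapper or the Task Scheduler instead of keeping a browser open.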
I have a number of pages in a WebMatrix Razor ASP.Net site where I have added one line of code:
Response.OutputCache(600);
From reading about it I had assumed this meant that IIS would create a cache of the HTML produced by the page, serve that HTML for the next 10 minutes, and after 10 minutes, when the next request came in, run the code again.
Now the page is being fetched as part of a timed jQuery call. The timer code in the client runs every minute. The code there is very simple:
function wknTimer4() {
    $.get('PerfPanel', function(data) {
        $('#perfPanel').html(data);
    });
}
It occasionally appears to cache, but when I look at the number of database queries done during the 10-minute period, I might have well over 100 database queries. I know the caching isn't working the way I expect. Does the cache only work for a single session? Is there some other limitation?
Update: it really shouldn't matter what the client does, whether it fetches the page through a jQuery call or as straight HTML. If the server is caching, it doesn't matter what the client does.
Update 2: complete code dumped here. Boring stuff:
@{
    var db = Database.Open("LOS");
    var selectQueryString = "SELECT * FROM LXD_funding ORDER BY LXDOrder";
    // cache the results of this page for 600 seconds
    Response.OutputCache(600);
}
@foreach (var row in db.Query(selectQueryString)) {
    <h1>
        @row.quotes Loans @row.NALStatus, oldest @(NALWorkTime.WorkDays(row.StatusChange, DateTime.Now)) days
    </h1>
}
Your assumptions about how OutputCache works are correct. Can you check Firebug or the Chrome dev tools to look at the outgoing requests hitting your page? If you're using jQuery, sometimes people set the cache property on $.get or $.ajax to false, which causes the request to the page to have a funky trailing querystring. I've made the mistake of setting this up globally to fix some issues with jQuery and IE:
http://api.jquery.com/jQuery.ajaxSetup/
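To make the failure mode concrete, here is a sketch of what that global setting does to your request (the exact timestamp is illustrative; the point is that the URL changes on every call, so the cached copy may never be matched):
// If something like this was set up globally (e.g. to work around IE caching)...
$.ajaxSetup({ cache: false });

// ...then this call actually requests 'PerfPanel?_=1712345678901' -
// a different URL every time, which can defeat the output cache.
$.get('PerfPanel', function(data) {
    $('#perfPanel').html(data);
});

// Opting this call back into caching keeps the URL stable:
$.ajax({ url: 'PerfPanel', cache: true }).done(function(data) {
    $('#perfPanel').html(data);
});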
The other thing to look at here is the grouping of DB calls. Are you making a lot of calls within one request? Are you executing a db command in a loop, within another reader? Code in this case would be helpful.
Good luck, I hope this helps!
In a JavaScript function I am submitting a form:
form.submit();
with a huge amount of data.
Is it possible to cancel this operation besides clicking the browser's stop button?
UPDATE:
Maybe I didn't make myself clear: I am submitting large files.
A form with an upload control that runs in an iframe.
You could do something like what Gmail does: Undo Sending Mails (lab feature).
It just delays sending the mail by 5 seconds. If the server receives an undo AJAX call in that time, the mail is not sent.
This could potentially increase your server-side complexity, but not that much.
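A client-side sketch of that delayed-submit idea (the names are illustrative): the undo button simply clears the pending timeout, so nothing ever leaves the browser.
var pendingSubmit = null;

function submitWithUndo(form) {
    // delay the real submit by 5 seconds to leave an undo window
    pendingSubmit = setTimeout(function() {
        pendingSubmit = null;
        form.submit();
    }, 5000);
}

function undoSubmit() {
    // if the submit hasn't fired yet, cancel it - nothing was sent
    if (pendingSubmit !== null) {
        clearTimeout(pendingSubmit);
        pendingSubmit = null;
    }
}
Once the timeout has fired the request is on the wire, and you are back to needing the server-side cancel described below.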
In the case that the information is not sent you could use this:
//IE
document.execCommand('Stop');
//Firefox
window.stop();
So the whole thing would look something like this (pseudocode, with jQuery standing in for the AJAX helper):
function stop_submit(current_id){
    try{
        document.execCommand('Stop'); // IE
    }catch(e){}
    try{
        window.stop(); // Firefox and others
    }catch(e){}
    // tell the server to discard whatever it already received
    $.post("/myfolder/cancel_submit/", {id: current_id});
}
This is a classic problem with the client-server architecture. Once the HTTP request has been made, there is no stopping it. You will need a way to restore state after all requests to guarantee that everything is back to the way it was.
Now, you could do this with AJAX and use the abort function. Or, you could try window.stop() or document.execCommand("Stop"), depending on your browser. But those do not solve the problem of the request having already been sent.
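For the AJAX route, a minimal sketch: $.ajax returns a jqXHR whose abort() method drops the in-flight request, which is about as close to a programmatic stop button as you get. This assumes FormData for the file upload (i.e. a newer browser than the iframe technique in the question targets):
var uploadXhr = null;

function startUpload(form) {
    uploadXhr = $.ajax({
        url: form.action,
        type: 'POST',
        data: new FormData(form), // send the files without an iframe
        processData: false,       // hand the FormData to XHR untouched
        contentType: false        // let the browser set the multipart boundary
    });
}

function cancelUpload() {
    if (uploadXhr) {
        uploadXhr.abort(); // stops the in-flight request client-side
        uploadXhr = null;
    }
}
As noted above, aborting client-side does not undo whatever the server has already received, so the restore-state caveat still applies.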