About 2 weeks ago I wrote a program implementing the Add Site Account flow chart.
I ran it a number of times over a span of several days, and it worked fine each time.
Then I worked on other things, and about a week later tried running the above program again. Now it did not work, but got into a seemingly infinite loop on the call to getSiteRefreshInfo (the loop in the lower left corner of the flow chart). I tried running this a number of times over the next 3-4 days, and got into a loop each time. It looped for 20-30 minutes before I killed the program. This is exactly the same source code that worked correctly a week earlier, using exactly the same credentials.
Each time "code" was 0, and "siteRefreshStatus" was "LOGIN_SUCCESS", which according to the flowchart meant I should wait 2-4 seconds and then repeat the call:
{
    "siteRefreshStatus": {
        "siteRefreshStatusId": 2,
        "siteRefreshStatus": "LOGIN_SUCCESS"
    },
    "siteRefreshMode": {
        "refreshModeId": 2,
        "refreshMode": "NORMAL"
    },
    "updateInitTime": 1418945894,
    "nextUpdate": 1418946794,
    "code": 0,
    "itemRefreshInfo": [
        {
            "memItemId": 10070147,
            "itemSuggestedFlow": {
                "suggestedFlowId": 2,
                "suggestedFlow": "REFRESH"
            },
            "errorCode": 405,
            "retryCount": 0
        }
    ],
    "noOfRetry": 0
}
The account I'm passing to addSiteAccount1 is an American Express credit card account, and this is the only account I've added for this user (in other words, this is the only account that needs to be refreshed). Once while the program was in a loop, I manually logged onto the American Express website using these same credentials, and I could view the account, get the list of recent transactions, etc. I realize that Yodlee probably uses a different interface than the browser does, but this did show me that the Amex website was up and functional.
I tried letting the loop run for more than 30 minutes to see what happened. After an hour and 55 minutes I got this exception:
{
    "errorOccurred": "true",
    "exceptionType": "Unknown Exception Occurred",
    "referenceCode": "_022c5fa3-3933-4491-b390-1150d8b28ab3",
    "detailedMessage": "Technical Difficulty Processing Request"
}
I tried running the program various times over the next 3-4 days and it got into a loop each time. Then it abruptly started running correctly again, and at the moment is still running correctly. Note that the exact same source code and credentials ran correctly for a while, then got into a loop for 3-4 days, and is now running correctly again.
I have two questions about this:
1) How should I exit from such a loop? The way I interpret the API Flowchart is that I should loop until I get a value for "code" or "siteRefreshStatus" that tells me to exit the loop. I could easily implement my own timer, but I don't know what time value would be appropriate in all cases. Yodlee is in a better position to know if the loop has gone on for too long, so I would expect a return like REFRESH_TIMED_OUT in this case.
2) If we had been running this code in production, we would not have been able to refresh this customer's information for at least 3 days, which for our application would be an extremely long time. Is there anything else we can try in this case?
Thanks!!!
I think you're suffering from a different issue: your itemRefreshInfo has error code 405, which this page says is: "Update Request Canceled(405):Your account was not updated because you canceled the request."
I believe you're cancelling your request somehow.
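That said, it is worth guarding the loop on your side as well. Here is a minimal sketch of a bounded polling loop; getSiteRefreshInfo stands in for however you already call the Yodlee endpoint, and the 15-minute cap and the list of in-progress statuses are assumptions you should adjust to your copy of the flow chart:
// getSiteRefreshInfo(...) stands in for your existing call to the Yodlee API
async function waitForSiteRefresh(cobSessionToken, userSessionToken, memSiteAccId) {
    const deadline = Date.now() + 15 * 60 * 1000; // hypothetical 15-minute client-side cap
    const inProgress = ["REFRESH_TRIGGERED", "LOGIN_SUCCESS"]; // adjust to the statuses your flow chart loops on

    while (Date.now() < deadline) {
        const info = await getSiteRefreshInfo(cobSessionToken, userSessionToken, memSiteAccId);

        // A non-zero item errorCode (such as the 405 "request canceled" above) means the refresh will not complete
        const failedItem = (info.itemRefreshInfo || []).find(i => i.errorCode && i.errorCode !== 0);
        if (failedItem) {
            throw new Error("Item refresh failed with errorCode " + failedItem.errorCode);
        }

        // Stop looping once code is non-zero or the status is no longer an in-progress one
        if (info.code !== 0 || !inProgress.includes(info.siteRefreshStatus.siteRefreshStatus)) {
            return info;
        }
        await new Promise(r => setTimeout(r, 3000)); // the 2-4 second wait from the flow chart
    }
    throw new Error("Gave up waiting for the site refresh (client-side timeout)");
}
Checking the per-item errorCode on every pass would also have caught the 405 above instead of waiting on siteRefreshStatus alone.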
I have this Handlebars statement in SendGrid. The taskCount value only comes through in the greaterThan condition itself; the other two times it is used it appears to be null.
Here is the json data
{
    "Username": "ChampCbg",
    "JoinedAt": "12/1/2020",
    "DaysSinceJoined": "20",
    "taskCount": 5
}
Here is the statement with Handlebars:
{{#greaterThan taskCount 0}}
Congrats on starting {{insert taskCount "default=1"}} task{{#greaterThan taskCount 1}} (s){{/greaterThan}} and taking the first small step.
{{else}}
You have not started a task yet. What are you waiting for? It has been {{DaysSinceJoined}} days since you joined on {{JoinedAt}}.
You have missed {{DaysSinceJoined}} days where you could have been making Small Steps towards the dreams of your better tomorrow.
{{/greaterThan}}
Here is the end result
Hello ChampCbg!
Congrats on starting 1 task and taking the first small step.
Don't let another day pass you by. Start designing you vision board
today.
As the result shows, the greaterThan block selects the correct branch, but the next times taskCount is used, the default value and a null value are rendered instead.
What is the cause of this?
Twilio SendGrid developer evangelist here.
Honestly, I thought your template would work as you wrote it. But I tried it out and it did not (not that I didn't believe you, I just had to do that to work out what to do!).
So, the way to deal with this is to refer to the variables in your data through the root context, i.e. @root, within the greaterThan conditional (or other conditionals).
Try this as your template:
{{#greaterThan taskCount 0}}
Congrats on starting {{insert @root.taskCount "default=1"}} task{{#greaterThan @root.taskCount 1}} (s){{/greaterThan}} and taking the first small step.
{{else}}
You have not started a task yet. What are you waiting for? It has been {{@root.DaysSinceJoined}} days since you joined on {{@root.JoinedAt}}.
You have missed {{@root.DaysSinceJoined}} days where you could have been making Small Steps towards the dreams of your better tomorrow.
{{/greaterThan}}
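For anyone reproducing this, the test data above gets attached roughly like this when sending through the @sendgrid/mail Node library (the addresses and template ID are placeholders):
const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env.SENDGRID_API_KEY);

sgMail.send({
    to: 'champ@example.com',          // placeholder recipient
    from: 'hello@example.com',        // placeholder verified sender
    templateId: 'd-xxxxxxxxxxxxxxxx', // placeholder dynamic template ID
    // This object becomes the Handlebars data, so taskCount etc. sit at the root of the template context
    dynamicTemplateData: {
        Username: 'ChampCbg',
        JoinedAt: '12/1/2020',
        DaysSinceJoined: '20',
        taskCount: 5
    }
});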
For the past few hours, every Form Recognizer analyze request has been stuck. Fetching the analyze result from
https://my_ResourceName.cognitiveservices.azure.com/formrecognizer/v2.0-preview/custom/models/my_modelId/analyzeresults/my_referenceId
just returns:
{
    "status": "notStarted",
    "createdDateTime": "2021-06-14T21:00:38Z",
    "lastUpdatedDateTime": "2021-06-14T21:00:39Z"
}
Has anyone had a similar experience? I know it usually takes some time to process a form, but now every attempt to process a form fails with no error of any sort. I am using Postman to check both the POST to analyze and the GET of the analyze results. This worked for months without issues until today!
Form Recognizer has a new version!
https://my_ResourceName.cognitiveservices.azure.com/formrecognizer/v2.1-preview.3/custom/models/my_modelId/analyzeresults/my_referenceId
It would be nice if the old web service returned a more descriptive message than "notStarted" (since it will never start again) and pointed users to the new URL.
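If anyone else hits this, the submit-then-poll sequence against the newer endpoint looks roughly like this (a sketch in Node 18+ using the built-in fetch; the resource name, model ID, document URL and key variable are placeholders, and field names may differ slightly between preview versions):
const endpoint = "https://my_ResourceName.cognitiveservices.azure.com";
const modelId = "my_modelId";
const apiKey = process.env.FORM_RECOGNIZER_KEY; // placeholder for your subscription key

async function analyzeDocument(documentUrl) {
    // Submit the document for analysis against the v2.1-preview.3 API
    const submit = await fetch(
        `${endpoint}/formrecognizer/v2.1-preview.3/custom/models/${modelId}/analyze`,
        {
            method: "POST",
            headers: {
                "Ocp-Apim-Subscription-Key": apiKey,
                "Content-Type": "application/json"
            },
            body: JSON.stringify({ source: documentUrl })
        }
    );
    // The URL to poll comes back in the Operation-Location response header
    const operationUrl = submit.headers.get("operation-location");

    // Poll until the operation leaves the notStarted/running states
    while (true) {
        const res = await fetch(operationUrl, {
            headers: { "Ocp-Apim-Subscription-Key": apiKey }
        });
        const result = await res.json();
        if (result.status !== "notStarted" && result.status !== "running") {
            return result; // "succeeded" or "failed"
        }
        await new Promise(r => setTimeout(r, 3000)); // wait a few seconds between checks
    }
}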
On 2 occasions in the past month, we have managed to hit our daily limit on asynchronous apex executions. Salesforce temporarily increased our limit to 425000 but it will be scaled down to 250000 in a week's time. Once we reach the limit, a lot of the SF functions will fail and this has tremendously impacted both internal staff and external customers.
So to prevent this from happening in the future, we need to create some kind of alert in Salesforce to monitor our daily asynchronous Apex method executions. Our maximum daily limit is 250000. The alert will need to create a P3 helpdesk ticket and notify a couple of users, say USER A and USER B, once usage reaches a 70% threshold.
Kindly advise what is possible to achieve this.
Thanks & Regards,
Harjeet
There's a promising Limits method but it doesn't seem to work currently ("reserved for future use"): System.debug(Limits.getAsyncCalls() + ' / ' + Limits.getLimitAsyncCalls());
There's an idea you can upvote: https://success.salesforce.com/ideaView?id=0873A0000003VIFQA2 ;)
You could query SELECT COUNT() FROM AsyncApexJob WHERE ... but that sounds like a bad idea ;)
I think your best course of action is to use SF REST API. There's a "limits" resource you can fetch. You could do it from SF itself (bad idea because if you'd schedule it to run every hour then well, of course it will contribute to the limit consumption too ;)) or from some external app that'd connect to your SF...
https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_limits.htm
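Among many other counters, the limits response includes a DailyAsyncApexExecutions entry you can compare to your cap; it looks roughly like this (the numbers here are illustrative):
{
    "DailyAsyncApexExecutions": {
        "Max": 250000,
        "Remaining": 187500
    }
}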
You can quickly try it out for example in workbench.developerforce.com before you decide you do want to deep dive into coding it.
Of course if you have control over your batch jobs, queueable, schedulable & @future calls, you could implement some rough counter of executions in a helper object, for example... It won't help you much if most of the jobs are coming from managed packages though...
Got 1 more idea but it's pretty hardcore - you should be able to make a REST API call from JavaScript. So you could create a simple VF page (even without any Apex controller), put a JS callout on it, and have it check every 5 minutes and do something if the threshold is hit... But that means an IT person would have to have this page open all the time (perhaps as a home page component)... Messy :)
I was having the exact same issue so I created a simple JsForce script in NodeJS to monitor the call to the /limits endpoint.
You can connect a free monitoring service like UptimeRobot.com or Pingdom.com and get an email when the output contains the word "Warning" (above 50%) or "Error" (above 80%).
const jsforce = require('jsforce');

// Credentials are read from environment variables
const { SF_USERNAME, SF_PASSWORD, SF_SECURITY_TOKEN } = process.env;
const conn = new jsforce.Connection({ loginUrl: 'https://login.salesforce.com' });

async function getSfLimits() {
    try {
        // Log in to Salesforce
        await conn.login(SF_USERNAME, SF_PASSWORD + SF_SECURITY_TOKEN);
        // Call the REST API limits resource
        const sfLimits = await conn.requestGet('/services/data/v51.0/limits');
        return sfLimits;
    } catch (err) {
        console.log(err);
    }
}
https://github.com/carlosdevia/salesforcelimits
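As a follow-up, here is a rough sketch of how the output could drive the keyword monitoring mentioned above (the 50%/80% thresholds and the checkAsyncApexUsage name are only illustrative; DailyAsyncApexExecutions is the relevant key in the limits response):
async function checkAsyncApexUsage() {
    const limits = await getSfLimits();
    const { Max, Remaining } = limits.DailyAsyncApexExecutions;
    const usedPct = ((Max - Remaining) / Max) * 100;
    // UptimeRobot/Pingdom keyword monitoring can alert when these words appear in the output
    if (usedPct > 80) {
        console.log('Error: async Apex executions at ' + usedPct.toFixed(1) + '%');
    } else if (usedPct > 50) {
        console.log('Warning: async Apex executions at ' + usedPct.toFixed(1) + '%');
    } else {
        console.log('OK: async Apex executions at ' + usedPct.toFixed(1) + '%');
    }
}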
I have an ASP.Net application that accesses user data from a SQL database.
Visual Studio Version 2012
Windows Server 2012 Standard 6.2
Sql Server 2012
Program in Service since 11/2007 (with problem having never happened previously)
Problem:
First reported by 2 of my customers, but I was not experiencing the problem until after a recent MS update.
I am unsure of the particulars of those updates or whether it was only a coincidence.
I log into the application and go around to a few pages, and all seems OK. Then I select a new Active Company (list screens are auto-filtered by an Active Company ID held in a session variable; changing the active company changes the ID stored in that variable). Everything works fine for a while (1-4 minutes) as I switch between screens and even different active companies. Then at some point I go to a page I've visited several times (and that worked fine), and it shows everything from the last time I accessed it (literally the identical page from a few minutes ago). I change to another page and it appears to be updated; I go back to the screen that did not update and, no matter what, it will not update again. I query the database and it indicates the correct active company ID, and I query the session variable and that too is correct.
** The strange thing is that I can wait 4-5 minutes (I just stop doing anything), then try to access the page again, and now it updates.
I have been beating at this now for almost 2 weeks and have not been able to determine the source of the problem.
I have literally tried every setting for session caching I could read up on, with no (or minimal) effect.
Since our software uses session variables to hold the user settings that control their environment (like the active company selection), I even went as far as replacing the session variables with profile variables (requiring SQL session management), with minimal effect.
It seems to work fine for a few minutes (or page accesses), then once it stops updating the page, it will no longer update under any circumstance.
It will occur on pretty much any combination of page changes (after changing the active company, since that actually changes the data displayed).
This design has been out in the field for over 8 years now (and is routinely brought up to date with the latest .NET compiler, .NET Framework, and IronSpeed Designer engine updates). This error has never occurred before now, and no update to the development tools took place prior to the appearance of this issue.
I tried various tests.
Test 1:
I added JavaScript code to reload each page.
<script type="text/javascript">
function RefreshPage()
{
window.location.reload()
}
</script>
Result: No change
Test 2:
I stopped as soon as the page did not refresh and started timing how long it took for the page to update (1-2 minutes, or going back and forth between the change-active-company screen and the reports screen several times).
Result:
After 60-90 seconds, the current page seemed to do an update (the activity icon would appear then go away), so I would then check the page that was not refreshing, and it was now correct.
Since I was using the report page for my tests, I would run a report when the screen update failed, to see what active company it thought it was on. Since the report was also reliant on the session variable, it brought up the correct report data, even though the page was not indicating the correct active company. (Note: every one of our screens indicates the current user and active company name at the top, so it is easy to see when it is not updating.)
Any direction as to where to look from here would be greatly appreciated; I'm at a loss as to what to check now.
P.S. I installed MS Message Analyzer and had it monitor up to the point where I get a failure. I have never used MS MA before, so I don't have much of an idea of what to look for, other than that the operation status was indicating Found (302) for the GET and POST, and OK (200) for the page I received the problem on.
Thanks in advance!
John R
I propose checking your caching options; I mean caching of the page, controls, JavaScript, and the browser. As a workaround, I propose adding a throwaway parameter to your page and AJAX calls. For example, instead of opening "default.aspx", open "default.aspx?id=someNewGuid". Also consider adding some random parameters to your AJAX calls.
Try the following code for the refresh:
<script type="text/javascript">
    // Build one random four-hex-digit chunk
    function S4() {
        return (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1);
    }
    // Assemble a GUID-like string to use as a cache-busting value
    function guid() {
        return (S4() + S4() + "-" + S4() + "-4" + S4().substr(0, 3) + "-" + S4() + "-" + S4() + S4() + S4()).toLowerCase();
    }
    function RefreshPage() {
        var url = window.location.href; // use the string href, not the location object
        if (url.indexOf("?") > -1) {
            url = url.substr(0, url.indexOf("?")); // cut off any existing parameters
        }
        // Navigating to the new URL forces a fresh request, so no separate reload() is needed
        window.location.href = url + "?id=" + guid();
    }
</script>
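For the AJAX calls, jQuery can do the same cache-busting for you, if your pages use it (a sketch; "GetActiveCompany.aspx" is just a placeholder for whatever endpoint your AJAX calls hit):
// Per request: cache:false makes jQuery append a "_=<timestamp>" parameter so a cached response cannot be reused
$.ajax({
    url: "GetActiveCompany.aspx", // placeholder endpoint
    cache: false,
    success: function (data) {
        // update the page with the fresh data here
    }
});

// Or disable caching globally for every $.ajax / $.get call on the page
$.ajaxSetup({ cache: false });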
Hi guys!
I'm developing an online auction with a time limit.
There is only one open auction at a time, and the end time applies to that auction.
After logging into the site I show the time left for the open auction. The time is calculated in this way:
EndDateTime = Date and Time of end of auction;
DateTime.Now() = current Date and Time
timeLeft= (EndDateTime - DateTime.Now()).Seconds().
In javascript, I update the time left by:
timeLeft=timeLeft-1
The problem is that when I log in from different browsers at the same time, the browsers show a different countdown.
Help me, please!
I guess there will always be differences of a few seconds because of the server processing time and the time needed to download the page.
The best way would be to actually send the end time to the browser and calculate the time remaining in javascript. That way the times should be the same (on the same machine of course).
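A minimal sketch of that approach (it assumes the server renders the auction's end time into the page as a Unix timestamp in milliseconds, and that there is an element with id "timeLeft" to update):
// endTime is written into the page by the server when it renders (example value shown)
var endTime = 1285933128000;

function updateCountdown() {
    // Recompute from the end time on every tick instead of decrementing a counter,
    // so a slow page load or timer drift cannot skew the displayed value
    var secondsLeft = Math.max(0, Math.round((endTime - Date.now()) / 1000));
    document.getElementById("timeLeft").innerHTML = secondsLeft + " seconds left";
    if (secondsLeft === 0) {
        clearInterval(timerId);
    }
}

var timerId = setInterval(updateCountdown, 1000);
updateCountdown();
This still relies on the client's clock; if that worries you, the server can also send its own current time so the script can compute and apply the offset.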
Roman,
I had a little look at eBay (they know a thing or two about this stuff :)) and noticed that once the item is inside the last 90 seconds, a GET request gets fired every 2 seconds to update the variables in the JavaScript via a JSON response. You can look at this inside Firebug/Fiddler to see what it does.
Here is an example of the JSON it pulls down:
{
    "ViewItemLiteResponse": {
        "Item": [
            {
                "IsRefreshPage": false,
                "ViewerItemRelation": "NONE",
                "EndDate": {
                    "Time": "12:38:48 BST",
                    "Date": "01 Oct, 2010"
                },
                "LastModifiedDate": 1285932821000,
                "CurrentPrice": {
                    "CleanAmount": "23.00",
                    "Amount": 23,
                    "MoneyStandard": "£23.00",
                    "CurrencyCode": "GBP"
                },
                "IsEnded": false,
                "AccessedDate": 1285933031000,
                "BidCount": 4,
                "MinimumToBid": {
                    "CleanAmount": "24.00",
                    "Amount": 24,
                    "MoneyStandard": "£24.00",
                    "CurrencyCode": "GBP"
                },
                "TimeLeft": {
                    "SecondsLeft": 37,
                    "MinutesLeft": 1,
                    "HoursLeft": 0,
                    "DaysLeft": 0
                },
                "Id": 160485015499,
                "IsFinalized": false,
                "ViewerItemRelationId": 0,
                "IsAutoRefreshEnabled": true
            }
        ]
    }
}
You could do something similar inside your code.
[edit] - On looking further at the eBay code, although it only runs the intensive GET requests in the last 90 seconds, the same JSON as above is included when the page is initially loaded as well. Then, at 3 minutes or so to go, the GET request is run every 10 seconds. Therefore I assume the same JavaScript is run against that structure whether it is inside 90 seconds or not.
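A rough sketch of that kind of polling, assuming you expose a small endpoint of your own that returns a TimeLeft structure like eBay's (the /auction/123/timeleft URL and the "timeLeft" element id are placeholders):
function pollTimeLeft() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/auction/123/timeleft", true); // placeholder endpoint on your own server
    xhr.onload = function () {
        var t = JSON.parse(xhr.responseText).TimeLeft; // e.g. { DaysLeft: 0, HoursLeft: 0, MinutesLeft: 1, SecondsLeft: 37 }
        var totalSeconds = ((t.DaysLeft * 24 + t.HoursLeft) * 60 + t.MinutesLeft) * 60 + t.SecondsLeft;
        document.getElementById("timeLeft").innerHTML = totalSeconds + " seconds left";
        if (totalSeconds > 0) {
            // Poll every 10 seconds normally, every 2 seconds inside the last 90 seconds, like eBay does
            setTimeout(pollTimeLeft, totalSeconds <= 90 ? 2000 : 10000);
        }
    };
    xhr.send();
}
pollTimeLeft();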
This may be a problem with the JavaScript loading at different speeds,
or with setInterval triggering at slightly different times depending on the loop.
I would look into those two.