ASP.Net page not updating - Literally the same screen from earlier (as if back was selected)

I have an ASP.Net application that accesses user data from a SQL database.
Visual Studio 2012
Windows Server 2012 Standard 6.2
SQL Server 2012
Program in service since 11/2007 (the problem has never happened previously)
Problem:
First reported by two of my customers, but I did not experience the problem myself until after a recent MS update.
I am unsure of the particulars of those updates or whether it was only a coincidence.
I log into the application and go around to a few pages, and all seems OK. Then I select a new Active Company (list screens auto-filter by the Active Company ID held in a session variable, and changing the active company changes the ID stored in that variable). Everything works fine for a while (1-4 minutes), switching between screens and even different active companies. Then, at some point, I go to a page that I've been to several times (and that worked fine) and it shows everything from the last time I accessed it (literally the identical page from a few minutes ago). I change to another page and it appears to be updated; I go back to the screen that did not update and, no matter what, it will not update again. I query the database and it indicates the correct active company ID, and I query the session variable and that too is correct.
** The strange thing is that I can wait 4-5 minutes (I just stop doing anything), then try to access the page again, and now it updates.
I have been beating at this for almost two weeks now and have not been able to determine the source of the problem.
I have literally tried every setting for session caching I could read up on, with no (or minimal) effect.
Since our software uses session variables to hold per-user values that control their environment (like the active company selection), I even went as far as removing the session variables and replacing them with profile variables (which requires SQL session management), with minimal effect.
It seems to work fine for a few minutes (or page accesses); then, once it stops updating the page, it will no longer update under any circumstance.
It occurs with pretty much any combination of page changes (after changing the active company, since that actually changes the data displayed).
This design has been out in the field for over 8 years now (and is routinely brought up to date with the latest .NET compiler, .NET Framework and Iron Speed Designer engine updates). This error has never occurred before now, and no update to the development tools took place prior to the appearance of this issue.
I tried various tests.
Test 1:
I added JavaScript code to reload each page.
<script type="text/javascript">
    function RefreshPage() {
        window.location.reload();
    }
</script>
Result: No change
Test 2:
I stopped as soon as the page did not refresh and started timing how long it took for the page to update (1-2 minutes, or after going back and forth between the change-active-company screen and the reports screen several times).
Result:
After 60-90 seconds, the current page seemed to do an update (the activity icon would appear then go away), so I would then check the page that was not refreshing, and it was now correct.
Since I was using the report page for my tests, I would run a report when the screen update failed to see which active company it thought it was on. Since the report was also reliant on the session variable, it brought up the correct report data, even though the page was not indicating the correct active company. Note: every one of our screens indicates the current user and active company name at the top, so it is easy to see when it is not updating.
Any direction as to where to look from here would be greatly appreciated; I'm at a loss as to what to check now.
P.S. I installed MS Message Analyzer and had it monitor up to the point where I got a failure. I have never used MS Message Analyzer before, so I don't have much of an idea of what to look for, other than that the operation status indicated Found (302) for the GET and POST and OK (200) for the page on which I saw the problem.
Thanks in advance!
John R

I propose checking the caching options. I mean caching of the page, controls, JavaScript, and the browser. As a workaround, I propose adding an empty parameter to your page and AJAX calls. For example, instead of opening "default.aspx", open "default.aspx?id=someNewGuid". Also consider adding random parameters to your AJAX calls.
Try the following code for the refresh:
<script type="text/javascript">
    function S4() {
        return (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1);
    }
    function guid() {
        return (S4() + S4() + "-" + S4() + "-4" + S4().substr(0, 3) + "-" + S4() + "-" + S4() + S4() + S4()).toLowerCase();
    }
    function RefreshPage() {
        var url = window.location.href;
        if (url.indexOf("?") > -1) {
            url = url.substr(0, url.indexOf("?")); // cut off any existing parameters
        }
        // Navigating to the new URL (with a fresh guid) forces a non-cached request.
        window.location = url + "?id=" + guid();
    }
</script>
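If the stale pages turn out to be coming from output caching somewhere between the browser and the worker process (rather than from session state), the server-side counterpart of the querystring trick above is to mark the affected pages as non-cacheable. Here is a minimal sketch using standard ASP.NET APIs; the NoCachePage base class is hypothetical, and your IronSpeed-generated pages would either inherit from it or run the same three lines in Page_Load:

using System;
using System.Web;
using System.Web.UI;

// Hypothetical shared base page: every page that inherits from it tells the
// browser, proxies and the IIS output cache not to reuse the response.
public class NoCachePage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        Response.Cache.SetCacheability(HttpCacheability.NoCache);  // no browser/proxy caching
        Response.Cache.SetNoStore();                                // don't keep a copy at all
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));  // treat the response as already expired
    }
}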

Related

Google reCAPTCHA response success: false, no error codes

UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get that many validation requests that are either timeouts or duplicates. Determining this with certainty is likely to be impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross posted this to Google Groups, so apologies for spamming the ether for the ones of you who frequent both sites.
I am currently working on a page, as part of an ASP.NET MVC application, with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server-side validation** of a reCAPTCHA response, I have for a while now been seeing cases of the reCAPTCHA response having its success property set to false, but with an accompanying empty error code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
The validation has timed out and is no longer valid.
The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have either involved a rather old validation (ranging from 5 minutes to 10 days(!)), a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp gets issued, and then after 10 days validation is attempted, server side.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];
    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };
        var content = new FormUrlEncodedContent(parameters);
        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;
        GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);
        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            // Actual code omitted
            // Try to determine the cause of failure:
            // - Look at the googleCaptchaResponse.ErrorCodes array (this has been empty in all 28 cases of "success: false")
            // - Measure the time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
            // - Check the reCAPTCHA response against a local database of previously used responses to detect double submissions
            return false;
        }
    }
}
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate problem if your captcha is validated twice.
Save the logs to a file in append mode and check whether you are validating a captcha twice.
Here is an example
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret=' . $secret . '&response=' . $_POST['g-recaptcha-response']);
file_put_contents("logfile", $verifyResponse, FILE_APPEND);
Now read the content of the logfile created above and check whether the captcha is being verified twice.
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
1. Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new reCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal as it increases requests to your server and will blow out any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
2. Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove. If the user's not moving the mouse, it's very likely they aren't on the page.
3. Handle it server-side by catching this scenario. I actually prefer this method the most as it's the most fluid and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit.
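To illustrate option 3, here is a rough sketch of how the controller action could react to a failed validation, reusing the IsVerifiedUser method from the question. The action name, model type and _captchaService field are invented for the example; Request.Form["g-recaptcha-response"], Request.UserHostAddress and the ModelState calls are standard ASP.NET MVC:

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Submit(ContactFormModel model) // ContactFormModel is hypothetical
{
    string captchaResponse = Request.Form["g-recaptcha-response"];
    if (!_captchaService.IsVerifiedUser(captchaResponse, Request.UserHostAddress))
    {
        // Covers both explicit error codes and the empty-error-code case:
        // treat it like any other validation error and let the user verify again.
        ModelState.AddModelError(string.Empty,
            "Your CAPTCHA verification expired or was already used. Please verify again and resubmit.");
        return View("Index", model);
    }

    // ... process the submission ...
    return RedirectToAction("ThankYou");
}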
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons, and as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.
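A minimal sketch of that last suggestion, assuming a plain HTML form; the form and button ids are made up, so adapt them to your markup:

<script type="text/javascript">
    // Disable the submit button on the first submit so a second click can't fire.
    document.getElementById("contactForm").addEventListener("submit", function () {
        var button = document.getElementById("submitButton");
        button.disabled = true;
        button.value = "Submitting...";
    });
</script>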

Paypal Processing - Need to grab TransactionId, CorrelationId and TimeStamp

Current Project:
ASP.NET 4.5.2
MVC 5
PayPal API
I am using this example to build myself a PayPal transaction (and yes, my code is virtually identical), as I do not know of any other method that will return the three values in the title.
My main problem is that the example I am utilizing is much more concise and compact than the one I used for a much older Web Forms application, and as such, I am unsure as to where or even how to grab the three values I need.
My initial thought was to do so right after the ACK, and indeed I was able to obtain the CorrelationId as well as the TimeStamp, but because this was prior to the user being carted off to PayPal’s site (sandbox in this case -- see the return new PayPalRedirect contained within the if), the TransactionId was blank. And in this example, PayPal explicitly redirects the user to a Success page without returning to the Action that sent the user to PayPal in the first place, and I am not seeing any GET values in the URL at all aside from the Token and the PayerId, much less ones that could provide me with the TransactionId.
Suggestions?
I have also looked at the following examples:
For ASP.NET Core: I was unsure how to adapt it to my current project, particularly due to appsettings.json, but it looked quite well done. I really liked how the values were rolled up in lists.
For MVC 4: I couldn't find where ACK was being used to determine success or successwithwarning, so I couldn't hook into that.
I have also found the PayPal content to be like trying to drink from a fire hose at full blast -- not only was the content hopelessly outdated (Web Forms code, FTW!), but there were also so many different examples that it would have taken me days to determine which one was most appropriate to use.
Any assistance would be greatly appreciated.
Edit: my initial attempt at modifying the linked code has this portion:
values = Submit(values);
var ack = values["ACK"].ToLower();
if (ack == "success" || ack == "successwithwarning")
{
    using (_db = new ApplicationDbContext())
    {
        var updateOrder = await _db.Orders.FirstOrDefaultAsync(x => x.OrderId == order.OrderId);
        if (updateOrder != null)
        {
            updateOrder.OrderProcessed = false;
            updateOrder.PayPalCorrelationId = values["CORRELATIONID"];
            updateOrder.PayPalTransactionId = values["TRANSACTIONID"];
            updateOrder.PayPalTimeStamp = values["TIMESTAMP"];
            updateOrder.IPAddress = HttpContext.Current.Request.UserHostAddress;
            _db.Entry(updateOrder).State = EntityState.Modified;
            await _db.SaveChangesAsync();
        }
    }
    return new PayPalRedirect
    {
        Token = values["TOKEN"],
        Url = $"https://{PayPalSettings.CgiDomain}/cgi-bin/webscr?cmd=_express-checkout&token={values["TOKEN"]}"
    };
}
Everything within and including the using() is my added content. As I mentioned, the CorrelationId and the TimeStamp come through just fine, but I have yet to successfully obtain the TransactionId.
Edit 2:
More problems -- the transactions that are “successful” through the sandbox site (the ReturnUrl is getting called) aren’t reflecting properly on my Facilitator and Buyer accounts, even when I do payments straight from the buyer’s PayPal account (not using the Credit Card). I know I am supposed to see transactions in the Buyer’s account, either through the overall Dev account (Accounts -> Profile -> balance or Accounts -> Notifications) or through the Buyer’s account in the sandbox front end. And yet -- multiple transactions returning me to the ReturnUrl path, and yet no transactions in either.
Edit 3:
Okay, this is really, really weird. I have gone over all settings with a fine-toothed comb, and intentionally introduced errors to see where things should crap out. It turns out that the entire process goes swimmingly - except nothing shows up in my notifications and no amounts get moved between my different accounts (Facilitator and Buyer). It’s like all my transactions are going into /dev/null, yet the process is successful.
Edit 4: A hint!
In the sandbox, where the Buyer accepts the transaction, there is a small note, "You will be able to review the transaction before completing it" or something like that -- suggesting that an additional page is not coming up and that the user is being unceremoniously dumped back to the success page. Why the success page? No clue. But it's happening.
It sounds like you are only doing the first part of the process.
Express Checkout consists of 3 API calls:
SetExpressCheckout
GetExpressCheckoutDetails
DoExpressCheckoutPayment
SEC generates a token, and then you redirect to PayPal where the user signs in and reviews the transactions before agreeing to pay.
They are then sent to the ReturnURL included in your SEC request, and this is where you'll call GECD in order to obtain all the buyer details that are now available since they signed in.
Using that data you can complete the final DECP request, which is what finalizes the procedure. No money is actually processed until this final call is completed successfully.
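As a rough sketch of those two follow-up calls on the ReturnUrl action, assuming the same dictionary-based Submit(values) NVP helper from the question (its exact signature is a guess) and the standard Classic NVP key names; token, payerId and orderTotal are placeholders:

// PayPal sends the user back to your ReturnUrl with ?token=...&PayerID=... appended.
var details = Submit(new Dictionary<string, string>
{
    { "METHOD", "GetExpressCheckoutDetails" },
    { "TOKEN", token }
});
var payerId = details["PAYERID"];

var payment = Submit(new Dictionary<string, string>
{
    { "METHOD", "DoExpressCheckoutPayment" },
    { "TOKEN", token },
    { "PAYERID", payerId },
    { "PAYMENTREQUEST_0_PAYMENTACTION", "Sale" },
    { "PAYMENTREQUEST_0_AMT", orderTotal },        // must match the amount from SetExpressCheckout
    { "PAYMENTREQUEST_0_CURRENCYCODE", "USD" }
});

// Only after DoExpressCheckoutPayment succeeds does a transaction id exist.
var transactionId = payment["PAYMENTINFO_0_TRANSACTIONID"];
var correlationId = payment["CORRELATIONID"];
var timeStamp     = payment["TIMESTAMP"];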

What keeps caching from working in WebMatrix?

I have a number of pages in a WebMatrix Razor ASP.Net site where I have added one line of code:
Response.OutputCache(600);
From reading about it, I had assumed this meant that IIS would create a cache of the HTML produced by the page, serve that HTML for the next 10 minutes, and after 10 minutes, when the next request came in, run the code again.
Now the page is being fetched as part of a timed jQuery call. The timer code in the client runs every minute. The code there is very simple:
function wknTimer4() {
    $.get('PerfPanel', function(data) {
        $('#perfPanel').html(data);
    });
}
It occasionally appears to cache, but when I look at the number of database queries done during the 10-minute period, I might have well over 100 database queries. I know the caching isn't working the way I expect. Does the cache only work for a single session? Is there some other limitation?
Update: it really shouldn't matter what the client does, whether it fetches the page through a jQuery call or as straight HTML. If the server is caching, it doesn't matter what the client does.
Update 2: complete code dumped here. Boring stuff:
@{
    var db = Database.Open("LOS");
    var selectQueryString = "SELECT * FROM LXD_funding ORDER BY LXDOrder";
    // cache the results of this page for 600 seconds
    Response.OutputCache(600);
}
@foreach (var row in db.Query(selectQueryString)) {
    <h1>
        @row.quotes Loans @row.NALStatus, oldest @(NALWorkTime.WorkDays(row.StatusChange, DateTime.Now)) days
    </h1>
}
Your assumptions about how OutputCache works are correct. Can you check firebug or chrome tools to look at the outgoing requests hitting your page? If you're using jQuery, sometimes people set the cache property on the $.get or $.ajax to false, which causes the request to the page to have a funky trailing querystring. I've made the mistake of setting this up globally to fix some issues with jQuery and IE:
http://api.jquery.com/jQuery.ajaxSetup/
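For reference, this is the kind of global setting to look for; it is only a plausible culprit, not necessarily present in your site. With it in place, jQuery appends a cache-busting timestamp to every GET, so each poll looks like a different URL:

// Global jQuery setting sometimes added to work around IE caching:
$.ajaxSetup({ cache: false });

// With that in place, $.get('PerfPanel') actually requests something like
//   PerfPanel?_=1430000000000
// and the ever-changing querystring can defeat URL-keyed caching.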
The other thing to look at here is the grouping of DB calls. Are you just making a lot of calls within one request? Are you executing a DB command in a loop, within another reader? Code in this case would be helpful.
Good luck, I hope this helps!

ASP.NET (MVC3) - HttpRuntime.Cache - key intermittently present

I have a really strange problem and I'm completely puzzled.
I have a piece of code that parses some data and stores the result in our webserver's HttpRuntime.Cache using the Insert method. This is stored for 10 seconds. There seem to be some problems so I created a test page that retrieves a simple object from the cache and displays if it is null or not. To add the object to the cache, I use:
HttpRuntime.Cache.Insert(CHECK_KEY, new object(), null, DateTime.Now.AddSeconds(10), System.Web.Caching.Cache.NoSlidingExpiration);
In a test page, I try to retrieve the object:
var isInCache = this.cacheService.Get<object>(CHECK_KEY) != null;
and the method of the cacheService is:
public T Get<T>(string key)
{
return (T)HttpRuntime.Cache.Get(key);
}
Now the strange part. If I call a URL that calls the 'Insert' method and then go to my test page to retrieve it, the value of isInCache is false about 99% of the time. Sometimes it works correctly for the whole 10 seconds (e.g. I refresh my test page every second and I get true 10 times) but again, most of the time it just returns false.
Now, when I keep F5 pressed, I sometimes see true in my output, in the blink of an eye, which means that the key CAN be found! This is not some browser cache, because it will only flash true intermittently for the 10 seconds of cache duration, after which it will only display false (which is logical, since the key has expired). So my question is:
WHY will retrieving a simple object from the cache fail most of the time?
There are other items in the cache (on the same test page) that do get retrieved, just not that object.
To make things worse, this (of course!) works flawlessly on my local machine and on the test machine, just not in production. I'm pretty clueless. Please help :-)
EDIT:
OK, so I'm now testing in two different browsers, IE9 and Chrome... and IE9 is correctly showing the items in the HttpRuntime.Cache but Chrome is NOT. It always shows false and no other cached data, except when keeping F5 pressed, when it will occasionally show it. Since when is HttpRuntime.Cache browser-dependent???
Extra edit: IE9 shows no more cached data. So while it can differ across browsers, it's not that IE will always work and Chrome won't... it differs.
EDIT2:
So I'm passing the variables to my view using ViewData:
ViewData["machineName"] = machineName;
ViewData["isInCache"] = isInCache;
ViewData["A"] = A;
ViewData["B"] = B;
MachineName comes from Server.MachineName, isInCache is the object, variable A does not come from the HttpRuntime.Cache, and variable B does (and is also intermittently not present).
After much debugging and thought, it appeared that the hosting provider had set 'Maximum Worker Processes' in the IIS 7 application pool settings to a value larger than 1. The HttpRuntime.Cache is not shared across worker processes (a web garden), so it could well be that I hit the 'wrong' process, which did not have the object cached. Continuously pressing F5 would have me occasionally hit the process that did have the value cached.
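A quick way to confirm this kind of web garden effect is to surface the worker process id next to the cache check. This is just a diagnostic sketch using the standard System.Diagnostics API and the cacheService/CHECK_KEY names from the question; the ViewData key is made up:

using System.Diagnostics;

// In the test action, alongside the existing cache lookup:
var isInCache = this.cacheService.Get<object>(CHECK_KEY) != null;
ViewData["isInCache"] = isInCache;
ViewData["workerPid"] = Process.GetCurrentProcess().Id;

// If "workerPid" changes between refreshes while "isInCache" flips between
// true and false, each pid value is a separate worker process with its own
// private HttpRuntime.Cache.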

WordPress Write Cache Issue with Multiple Sessions

I'm working on a content dripper custom plugin in WordPress that my client asked me to build. He says he wants it to catch a page view event, and if it's the right time of day (24 hours since last post), to pull from a resource file and output another post. He needed it to also raise a flag and prevent other sessions from firing that same snippet of code. So, raise some kind of flag saying, "I'm posting that post, go away other process," and then it makes that post and releases the flag again.
However, the strangest thing occurs when the site is placed under load with multiple sessions hitting it with page views. Instead of firing one post, it randomly creates 1, 2, or 3 extra posts, with each one thinking that it was the right time to post because it was 24 hours past the time of the last post. Because it's somewhat random, I'm guessing that the problem is some kind of write caching where the other sessions don't see the raised flag until a couple of microseconds later.
The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal.
But what it's doing is letting those other sessions in.
To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that function though is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even still, even if I find an event to hook that only runs once on a page view, I still have multiple sessions to contend with (different users who may view the page at the same time) and I want only one given session to trigger the content post when the post is due on the schedule.
I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and to figure out another way to raise a flag that all sessions would see. I mean, I could use the shared memory API in PHP, but many hosting plans have that disabled. Can't use a cookie or session var because that's only one single session. About the only thing that might work across hosting plans would be to drop a file as a flag, instead. If the file is present, then one session has the flag. If the file is not present, then other sessions can attempt to get the flag. Sure, I could use the file route, but it's kind of immature in my opinion and I was wondering if there's something in WordPress I could do.
The key may be to create a semaphore record in the database for the "drip" event.
Warning - consider the following pseudocode - I'm not looking up the functions.
When the post is queried, use a SQL statement like:
$ts = get_time_now(); // or whatever the function is
$sid = session_id();
INSERT INTO table (postcategory, timestamp, sessionid)
SELECT "$category", $ts, "$sid"
WHERE NOT EXISTS (SELECT 1 FROM table
                  WHERE postcategory = "$category"
                  AND timestamp > $ts - 24 hours)
Database integrity will make this atomic, so only one record can be inserted, and the insertion will only take place if the timespan has been exceeded.
Then immediately check to see if the current session_id() and timestamp are yours. If they are, drip.
SELECT sessionid FROM table
WHERE postcategory = "$postcategory"
AND timestamp = $ts
AND sessionid = "$sid"
The problem occurs with page requests even from the same session (same visitor), but it can also occur with page requests from separate visitors. It works like this:
If you are doing content dripping, then a page request is probably what you intercept with add_action('wp','myPageRequest'). From there, if a scheduled post is due, then you create the new post.
The post takes a little bit of time to write to the database. In that time, a query on get_posts() may not see that new record yet. It may actually trigger your piece of code to create a new post when one has already been placed.
The fix, to force WordPress to flush the write cache, appears to be this:
try {
    $asPosts = array();
    $asPosts = @wp_get_recent_posts(1);
    foreach ($asPosts as $asPost) { break; }
    @delete_post_meta($asPost['ID'], '_thwart');
    @add_post_meta($asPost['ID'], '_thwart', '' . date('Y-m-d H:i:s'));
} catch (Exception $e) {}

$asPosts = array();
$asPosts = @wp_get_recent_posts(1);
foreach ($asPosts as $asPost) { break; }
$sLastPostDate = '';
@$sLastPostDate = $asPost['post_date'];
$sLastPostDate = substr($sLastPostDate, 0, strpos($sLastPostDate, ' '));
$sNow = date('Y-m-d H:i:s');
$sNow = substr($sNow, 0, strpos($sNow, ' '));
if ($sLastPostDate != $sNow) {
    // No post today, so go ahead and post your new blog post.
    // Place that code here.
}
The first thing we do is get the most recent post, although we don't really care whether it's actually the most recent one or not. All we're getting it for is a single post ID, and then we add a hidden custom field (hence the underscore it begins with) called
_thwart
...as in, thwart the write cache by posting some data to the database that's not too CPU heavy.
Once that is in place, we then also use wp_get_recent_posts(1) yet again so that we can see if the most recent post is not today's date. If not, then we are clear to drip some content in. (Or, if you want to only drip in like every 72 hours, etc., you can change this a little here.)

Resources