What keeps caching from working in WebMatrix?

I have a number of pages in a WebMatrix Razor ASP.Net site where I have added one line of code:
Response.OutputCache(600);
From reading about it, I had assumed that this meant that IIS would create a cache of the HTML produced by the page, serve that HTML for the next 10 minutes, and after 10 minutes, when the next request came in, run the code again.
Now the page is being fetched as part of a timed jQuery call. The timer code in the client runs every minute. The code there is very simple:
function wknTimer4() {
    $.get('PerfPanel', function(data) {
        $('#perfPanel').html(data);
    });
}
It occasionally appears to cache, but when I look at the number of database queries done during the 10-minute period, I might have well over 100 database queries. I know the caching isn't working the way I expect. Does the cache only work for a single session? Is there some other limitation?
Update: it really shouldn't matter what the client does, whether it fetches the page through a jQuery call or as straight HTML. If the server is caching, it doesn't matter what the client does.
Update 2: complete code dumped here. Boring stuff:
@{
    var db = Database.Open("LOS");
    var selectQueryString = "SELECT * FROM LXD_funding ORDER BY LXDOrder";
    // cache the results of this page for 600 seconds
    Response.OutputCache(600);
}
@foreach (var row in db.Query(selectQueryString)) {
    <h1>
        @row.quotes Loans @row.NALStatus, oldest @(NALWorkTime.WorkDays(row.StatusChange, DateTime.Now)) days
    </h1>
}

Your assumptions about how OutputCache works are correct. Can you check Firebug or the Chrome dev tools to look at the outgoing requests hitting your page? If you're using jQuery, people sometimes set the cache property on $.get or $.ajax to false, which causes each request to the page to carry a funky trailing querystring. I've made the mistake of setting this up globally to fix some issues with jQuery and IE:
http://api.jquery.com/jQuery.ajaxSetup/
The other thing to look at here is the grouping of DB calls. Are you making a lot of calls within one request? Are you executing a db command in a loop, within another reader? Code in this case would be helpful.
Good luck, I hope this helps!
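If the requests do turn out to be hitting the server every time, one fallback is to cache the query results themselves rather than the rendered output, using WebMatrix's WebCache helper. A sketch based on the code in the question (the cache key and the 10-minute window are illustrative):
@{
    // Reuse cached rows if present; otherwise query and cache for 10 minutes.
    var rows = (IEnumerable<dynamic>)WebCache.Get("LXD_funding_rows");
    if (rows == null)
    {
        var db = Database.Open("LOS");
        rows = db.Query("SELECT * FROM LXD_funding ORDER BY LXDOrder").ToList();
        WebCache.Set("LXD_funding_rows", rows, minutesToCache: 10);
    }
}
@foreach (var row in rows) {
    <h1>
        @row.quotes Loans @row.NALStatus, oldest @(NALWorkTime.WorkDays(row.StatusChange, DateTime.Now)) days
    </h1>
}
This sidesteps the question of why the output cache is being missed: even if every request runs the page, only one query per 10-minute window hits the database.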

Related

Google reCAPTCHA response success: false, no error codes

UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get that many validation requests that are either timeouts or duplicates. Determining this with certainty is likely to be impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross-posted this to Google Groups, so apologies for spamming the ether for those of you who frequent both sites.
I am currently working on a page, as part of an ASP.NET MVC application, with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server-side validation** of reCAPTCHA responses, I have for a while now seen cases of the reCAPTCHA response having its success property set to false, but with an accompanying empty error-code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
1. The validation has timed out and is no longer valid.
2. The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have involved either a rather old validation (ranging from 5 minutes to 10 days(!)) or a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp gets issued, and then after 10 days validation is attempted, server side.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];
    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };
        var content = new FormUrlEncodedContent(parameters);
        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;
        GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);
        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            // Actual code omitted
            // Try to determine the cause of failure:
            // - Look at the googleCaptchaResponse.ErrorCodes array (this has been empty in all of the 28 cases of "success: false")
            // - Measure time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
            // - Check reCAPTCHAresponse against local database of previously used reCAPTCHAresponses to detect cases of double submission
            return false;
        }
    }
}
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate problem if your captcha is validated twice.
Save logs to a file in append mode and check whether you are validating a captcha twice.
Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret='.$secret.'&response='.$_POST['g-recaptcha-response']);
file_put_contents("logfile", $verifyResponse, FILE_APPEND);
Now read the content of the logfile created above and check whether the captcha is being verified twice.
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
1. Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new reCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal, as it increases requests to your server and will blow out any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
2. Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove. If the user's not moving the mouse, it's very likely they aren't on the page.
3. Handle it server-side, by catching this scenario. I actually prefer this method the most, as it's the most fluid and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit (see the sketch below).
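To illustrate option 3: the failure branch of the IsVerifiedUser method above could classify an empty error-code array (or the newer "timeout-or-duplicate" code) as an expired or reused token and route the user back to the form. A minimal sketch, assuming the question's GoogleCaptchaResponse type exposes ErrorCodes as a string array; the enum and class names here are hypothetical:
using System;

public enum CaptchaFailure { None, ExpiredOrDuplicate, Other }

public static class CaptchaFailureClassifier
{
    public static CaptchaFailure Classify(GoogleCaptchaResponse response)
    {
        if (response.Success)
            return CaptchaFailure.None;

        bool noErrorCodes = response.ErrorCodes == null || response.ErrorCodes.Length == 0;
        bool timeoutOrDuplicate = !noErrorCodes &&
            Array.IndexOf(response.ErrorCodes, "timeout-or-duplicate") >= 0;

        // Empty error codes (old API behavior) and "timeout-or-duplicate"
        // (newer API behavior) both indicate an expired or already-used
        // token: send the user back to the form to verify again.
        return (noErrorCodes || timeoutOrDuplicate)
            ? CaptchaFailure.ExpiredOrDuplicate
            : CaptchaFailure.Other;
    }
}
The caller would then map ExpiredOrDuplicate to a "your CAPTCHA expired, please verify and resubmit" validation message rather than treating it as a hard failure.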
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons, and as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.

ASP.Net page not updating - Literally the same screen from earlier (as if back was selected)

I have an ASP.Net application that accesses user data from a SQL database.
Visual Studio Version 2012
Windows Server 2012 Standard 6.2
Sql Server 2012
Program in Service since 11/2007 (with problem having never happened previously)
Problem:
First reported by 2 of my customers but I was not experiencing the problem until after a recent MS update.
Unsure of the particulars of those updates or whether it was only a coincidence.
Log into the application and go around to a few pages, and all seems OK. Then I select a new Active Company (the app auto-filters list screens by an Active Company ID kept in a session variable; changing the active company changes the ID stored in the session variable). Everything works fine for a while (1-4 mins), switching between screens and even different active companies. Then at one point I go to a page that I've been to several times (and that worked fine), and it shows everything from the last time I accessed it (literally the identical page from a few minutes ago). I change to another page and it appears to be updated; I go back to the screen that did not update, and no matter what, it will not update again. I query the database and it is indicating the correct active company ID, and I query the session variable and that too is correct.
** The strange thing is that I can wait 4-5 mins (I just stop doing anything) and then try to access the page again, and now it updates.
I have been beating at this now for almost 2 weeks and have not been able to determine the source of the problem.
I have literally tried every setting for session caching I could read up on, with no (or minimal) effect.
Since our software uses session variables to hold user state (like the active company selection), I even went as far as replacing the session variables with profile variables (requiring SQL session management), with minimal effect.
It seems to work fine for a few minutes (or page accesses); then once it stops updating the page, it will no longer update under any circumstance.
It will occur on pretty much any combination of page changes (after changing the active company, since that actually changes the data displayed).
This design has been out in the field for over 8 years now (and is routinely brought up to date with the latest .NET compiler, .NET Framework, and IronSpeed Designer engine updates). This error has never occurred before now. No update to the development tools took place prior to the appearance of this issue.
I tried various tests.
Test 1:
I added JavaScript code to reset each page.
<script type="text/javascript">
    function RefreshPage()
    {
        window.location.reload();
    }
</script>
Result: No change
Test 2:
I stopped as soon as the page did not refresh and started timing when the page would update (1-2 mins, or going back and forth between the change-active-company screen and the reports screen several times).
Result:
After 60-90 secs, the current page seemed to do an update (the activity icon would appear then go away), so I would then check the page that was not refreshing, and it was now correct.
Since I was using the report page for my tests, I would run a report when the screen update failed, to see what active company it thought it was on (since it was also reliant on the session variable, it was bringing up the correct report data, even though the page was not indicating the correct active company). Note: Every one of our screens indicates the current user and active company name at the top, so it is easy to see when it is not updating.
Any direction as to where to look from here would be greatly appreciated; I'm at a loss as to what to check now.
P.S. I installed MS Message Analyzer and had it monitor up to the point where I get a failure. I have never used MS MA before, so I don't have much of an idea as to what to look for, other than that the operation status was indicating Found (302) for the GET and POST, and OK (200) for the page I received the problem on.
Thanks in advance!
John R
I propose checking your caching options: caching of the page, controls, JavaScript, and the browser. As a workaround, I propose adding an empty parameter to your page and ajax calls. For example, instead of opening "default.aspx", open "default.aspx?id=someNewGuid". Also consider adding some random parameters to your ajax calls.
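Besides cache-busting the URLs, it may be worth explicitly disabling response caching on the server; a hedged sketch for a WebForms page (the page class name is hypothetical, the HttpCachePolicy calls are standard ASP.NET):
using System;
using System.Web;
using System.Web.UI;

public partial class ReportsPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Tell the browser and any intermediate proxies not to cache this page.
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Cache.SetNoStore();
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
    }
}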
Try the following code for the refresh:
<script type="text/javascript">
    function S4() {
        return (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1);
    }
    function guid()
    {
        // not a true GUID, but random enough to bust the cache
        var guid = (S4() + S4() + "-" + S4() + "-4" + S4().substr(0, 3) + "-" + S4() + "-" + S4() + S4() + S4()).toLowerCase();
        return guid;
    }
    function RefreshPage()
    {
        var url = window.location.href;
        if (url.indexOf("?") > -1) {
            url = url.substr(0, url.indexOf("?"));
        } // this part will cut off any existing query string parameters
        // assigning the new URL triggers a fresh request, so no reload() is needed
        window.location = url + "?id=" + guid();
    }
</script>

How to retrieve only an ASP.NET MVC3 Collection's Count without element properties

I am using ASP.NET MVC 3 to track hits on a set of stored links. I run into problems when displaying the hit counts. I think this is because, since I am using lazy loading, whenever I call
link.Hits.Count
it loads all of each hit's data, including such things as agent and referrer information. (Hits is a Collection.) This is an issue when a link has over 9000 hits. Is there a way of just getting the Hits' Count without it pulling in the Hits' data?
If Hits has been written like this, the Count will work:
Hits = repository.GetAll(....).Where(....);
But if it was written like this, the Count will not work, because ToList() has already loaded all the data:
Hits = repository.GetAll(....).Where(....).ToList();
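If Hits is instead a lazy-loaded Entity Framework navigation property, the same principle applies: issue a COUNT query rather than materializing the collection. A sketch, assuming EF 4.1+ (DbContext); MyDbContext, Link, and Hit are hypothetical stand-ins for the poster's types:
using System.Linq;

public static class HitCounting
{
    // Counts a link's hits without loading them into memory.
    public static int CountHits(MyDbContext db, Link link)
    {
        // Query() returns an IQueryable against the store, so Count()
        // translates to SELECT COUNT(*) instead of pulling 9000+ rows.
        return db.Entry(link)
                 .Collection(l => l.Hits)
                 .Query()
                 .Count();
    }
}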

WordPress Write Cache Issue with Multiple Sessions

I'm working on a content-dripper custom plugin in WordPress that my client asked me to build. He wants it to catch a page-view event and, if it's the right time of day (24 hours since the last post), to pull from a resource file and output another post. He needed it to also raise a flag and prevent other sessions from firing that same snippet of code. So, raise some kind of flag saying, "I'm posting that post, go away other process," and then it makes that post and releases the flag again.
However, the strangest thing occurs when it's placed under load with multiple sessions hitting the site with page views. Instead of firing one post, it randomly does 1, 2, or 3 extra posts, with each one thinking that it was the right time to post because it was 24 hours past the time of the last post. Because it's somewhat random, I'm guessing that the problem is some kind of write caching where the other sessions don't see the raised flag until a few microseconds have passed.
The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal.
But what it's doing is letting those other sessions in.
To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that function though is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even still, even if I find an event to hook that only runs once on a page view, I still have multiple sessions to contend with (different users who may view the page at the same time) and I want only one given session to trigger the content post when the post is due on the schedule.
I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and to figure out another way to raise a flag that all sessions would see. I mean, I could use the shared memory API in PHP, but many hosting plans have that disabled. Can't use a cookie or session var because that's only one single session. About the only thing that might work across hosting plans would be to drop a file as a flag, instead. If the file is present, then one session has the flag. If the file is not present, then other sessions can attempt to get the flag. Sure, I could use the file route, but it's kind of immature in my opinion and I was wondering if there's something in WordPress I could do.
The key may be to create a semaphore record in the database for the "drip" event.
Warning - consider the following pseudocode - I'm not looking up the functions.
When the post is queried, use a SQL statement like:
$ts = get_time_now(); // or whatever the function is
$sid = session_id();
INSERT INTO table (postcategory, timestamp, sessionid)
SELECT "$category", $ts, "$sid"
WHERE NOT EXISTS (SELECT 1 FROM table WHERE postcategory = "$category"
                  AND timestamp > $ts - 24 hours)
Database integrity will make this atomic, so only one record can be inserted, and the insertion will only take place if the timespan has been exceeded.
Then immediately check to see if the current session_id() and timestamp are yours. If they are, drip.
SELECT sessionid FROM table
WHERE postcategory = "$postcategory"
AND timestamp = $ts
AND sessionid = "$sid"
The problem occurs with page requests even from the same session (same visitor), but it can also occur with page requests from separate visitors. It works like this:
If you are doing content dripping, then a page request is probably what you intercept with add_action('wp','myPageRequest'). From there, if a scheduled post is due, then you create the new post.
The post takes a little bit of time to write to the database. In that time, a query on get_posts() may not see that new record yet. It may actually trigger your piece of code to create a new post when one has already been placed.
The fix, to force WordPress to flush the write cache, appears to be this:
try {
    $asPosts = array();
    $asPosts = @wp_get_recent_posts(1);
    foreach ($asPosts as $asPost) { break; }
    @delete_post_meta($asPost['ID'], '_thwart');
    @add_post_meta($asPost['ID'], '_thwart', '' . date('Y-m-d H:i:s'));
} catch (Exception $e) {}
$asPosts = array();
$asPosts = @wp_get_recent_posts(1);
foreach ($asPosts as $asPost) { break; }
$sLastPostDate = '';
@$sLastPostDate = $asPost['post_date'];
$sLastPostDate = substr($sLastPostDate, 0, strpos($sLastPostDate, ' '));
$sNow = date('Y-m-d H:i:s');
$sNow = substr($sNow, 0, strpos($sNow, ' '));
if ($sLastPostDate != $sNow) {
    // No post today, so go ahead and post your new blog post.
    // Place that code here.
}
The first thing we do is get the most recent post. But we don't really care whether it's the most recent post or not. All we're getting it for is a single post ID, and then we add a hidden custom field (thus the underscore it begins with) called
_thwart
...as in, thwart the write cache by posting some data to the database that's not too CPU heavy.
Once that is in place, we then use wp_get_recent_posts(1) again so that we can see whether the most recent post has today's date. If not, then we are clear to drip some content in. (Or, if you want to only drip in like every 72 hours, etc., you can change this a little here.)

How to show status of huge process to client (web)

Environment: ASP.NET, C#
I am writing a web program that will export 12 tables to another database. When the user clicks the "Export" button, the export process starts. The thing is, I'd like to show some messages to the client, such as "preparing..., deleting old records..., exporting..., export completed."
But right now, it shows all the status messages at once, after all 12 tables are completed.
Please guide me on how to do this.
Doing long processes like this on a web page is probably not a good idea; you will need to change the timeout settings on pages, etc.
Rather, build a Windows service that does all the work. You can then use the page to initiate the process. The service could update some status value that is read by the page. This way the page does very little work: it initiates the process and polls for status updates. This can be done via simple jQuery/ajax calls.
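As a rough sketch of that status handoff (all names here are hypothetical): the worker, whether a Windows service or a background thread, writes progress messages into a shared, thread-safe store, and the page's polling endpoint reads them back.
using System.Collections.Concurrent;

// Hypothetical shared status store. The export worker calls Set(...) as it
// moves through its steps ("preparing...", "deleting old records...", etc.);
// the page's polling endpoint calls Get(...).
public static class ExportStatusStore
{
    private static readonly ConcurrentDictionary<string, string> Statuses =
        new ConcurrentDictionary<string, string>();

    public static void Set(string taskId, string message)
    {
        Statuses[taskId] = message;
    }

    public static string Get(string taskId)
    {
        string message;
        return Statuses.TryGetValue(taskId, out message) ? message : "Not started";
    }
}
The page that the jQuery/ajax call polls then just returns ExportStatusStore.Get(taskId) for the task it initiated.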
Try this: put this in your C# and call a JavaScript function per process step:
Page.RegisterClientScriptBlock("1","alert('add to table now')");
Use a javascript timer to periodically load content from a status page or service.
The timer is fairly simple:
setTimeout('checkProcessStatus()', waitTime);
The update itself depends on your choice of tech - for instance if using MVC and jQuery:
function checkProcessStatus() {
    // dynamically add the html from the status page to <div id="taskStatusPanel">
    $('#taskStatusPanel').load('export/status/' + taskId);
    // set timer to call this again
    setTimeout('checkProcessStatus()', waitTime);
}
Then the ExportController.Status action would return a simple view that displayed the HTML for the current status.
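The controller itself isn't shown in the answer, but a minimal sketch of it might look like this (the class and the static status field are hypothetical; a real app would track status per task and per user):
using System.Web.Mvc;

public class ExportController : Controller
{
    // Hypothetical in-memory status; the export job updates this as it runs.
    public static volatile string CurrentStatus = "Not started";

    public ActionResult Status(string id)
    {
        // Returns a tiny HTML fragment for $('#taskStatusPanel').load(...)
        return Content("<p>" + Server.HtmlEncode(CurrentStatus) + "</p>", "text/html");
    }
}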
