I have an ASP.NET website to manage projects. I noticed that when I click through to the next project very quickly and repeatedly, it overwrites all of the following projects with the data of the first one. I call a method to save before moving on to the next project. I also use a session variable for the id of the project.
EDIT:
It looks like the server queues up the save calls and the ids, but keeps the values of the first project in the controls.
Am I right?
This is the AJAX call that asks the server for the next project id and stores it in a hidden field:
function NextClick() {
    var tabvalue = $("#<%=TabsToFocus.ClientID%>").val();
    $.ajax({
        type: "POST",
        url: "Projet.aspx/NextProj",
        data: "{tab:'" + tabvalue + "'}",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function (d) {
            if (d.d != "False") {
                $("#<%=hid_selProjetID.ClientID%>").val(d.d);
                var btn = $("#<%=btnClickLstProjet.ClientID%>");
                fillHidden();
                btn.click();
            }
        }
    });
}
And btn.click() triggers this method on the server side to save:
Private Sub Button1_ServerClick(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnClickLstProjet.ServerClick
    If HttpContext.Current.Session.Item("isUser") IsNot Nothing AndAlso HttpContext.Current.Session.Item("isUser") = True Then
        If HttpContext.Current.Session.Item("curProjetID") IsNot Nothing Then
            btnSaveIndicateurs_Click()
            btnSaveEnTete_Click()
            btnSaveGen_Click()
            btnSavePlanif_Click()
        End If
    End If
    HttpContext.Current.Session.Item("curProjetID") = hid_selProjetID.Value
    Response.Redirect("Projet.aspx")
End Sub
Thank you
The very first thing you should do is STOP using session.
Seriously, back away from the session object.
A proper use of session is long-term, fairly unchanging data. Data that has to change on literally every postback belongs in the page itself.
Here is what's happening.
You click on a link to load up the project. The Session variable is being set with the current project id.
You then click on a link to get the next project,
Then quickly click on the link to get the one after that.
The server, meanwhile, is multithreaded. Request #3 essentially interrupted request #2's execution and ran before it, which means your session variable is jacked up.
Why would the 3rd request run before the 2nd? Well, you are executing a number of queries, and it's likely that the queries for request 2 take slightly longer to execute than the ones for request 3.
Solution: Stop using Session.
Why: You cannot predict the order in which IIS is going to respond to requests. IIS is a parallel (not serial) engine and requests might very well happen out of the sequence you think they should.
Finally, the guy who said that session is locked by the first requestor wasn't entirely accurate. It is WRITE-locked, but that only happens once the page starts writing to session. Reads are not locked.
So, when request 3 executes, it is using the ID of request 1 or 2, depending on which one is still active by the time it hits the write code.
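One way to make each request self-contained is to send the project id with the save call itself rather than reading it back out of session on the server. A minimal sketch, assuming a SaveAndNext page method and a projId parameter (both names are made up here):

```javascript
// Build a save request that carries its own project id, so overlapping
// requests can no longer clobber each other through shared session state.
function buildSaveRequest(projId, tab) {
    return {
        type: "POST",
        url: "Projet.aspx/SaveAndNext",   // assumed endpoint name
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        // The id travels in the request body; the server saves against
        // THIS id, never against whatever Session("curProjetID") holds.
        data: JSON.stringify({ projId: projId, tab: tab })
    };
}
```

Usage would be something like `$.ajax(buildSaveRequest($("#<%=hid_selProjetID.ClientID%>").val(), tabvalue));` — each in-flight request then has its own id, no matter how fast the user clicks.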
UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious
cases.
We are still left wondering why we get so many validation requests that are either timeouts or duplicates. Determining this with certainty is likely impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross-posted this to Google Groups, so apologies for spamming the ether for those of you who frequent both sites.
I am currently working on a page, as part of an ASP.NET MVC application, with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server-side validation** of a reCAPTCHA response, I have for a while now been seeing cases where the response has its success property set to false, but with an accompanying empty error code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
The validation has timed out and is no longer valid.
The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have either had the validation be rather old (ranging from 5 minutes to 10 days(!)), or it has been a case of a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp gets issued, and then after 10 days validation is attempted, server side.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];

    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };
        var content = new FormUrlEncodedContent(parameters);

        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;
        GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);

        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            // Actual code omitted.
            // Try to determine the cause of failure:
            // - Look at the googleCaptchaResponse.ErrorCodes array (this has been empty in all 28 cases of "success: false")
            // - Measure time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
            // - Check captchaResponse against a local database of previously used responses to detect double submission
            return false;
        }
    }
}
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate error if your captcha is validated twice.
Save logs to a file in append mode and check whether you are validating a captcha twice.
Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret='.$secret.'&response='.$_POST['g-recaptcha-response']);
file_put_contents("logfile", $verifyResponse, FILE_APPEND);
Now read the content of the logfile created above and check whether any captcha was verified twice.
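If you go the logging route, the duplicate check itself is simple. A sketch in JavaScript, assuming you have reduced each log entry to the submitted response token, one per line:

```javascript
// Return the tokens that appear more than once in the verification log;
// any hit means that token was sent to siteverify at least twice.
function findDuplicates(logLines) {
    var counts = {};
    logLines.forEach(function (token) {
        counts[token] = (counts[token] || 0) + 1;
    });
    return Object.keys(counts).filter(function (t) {
        return counts[t] > 1;
    });
}
```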
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new ReCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal as it increases requests to your server and will blow out any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove. If the user's not moving the mouse, it's very likely they aren't on the page.
Handle it server-side, by catching this scenario. I actually prefer this method the most as it's the most fluid, and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit.
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons, and as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.
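Beyond simply disabling the button, you can wrap the submit handler in a guard so that a second click is ignored until you explicitly allow it again, e.g. after the server responds. A minimal sketch (the handler and reset wiring are up to you):

```javascript
// Wrap a handler so repeated invocations are swallowed until reset()
// is called - for example, from the ajax complete callback.
function onceUntilReset(handler) {
    var pending = false;
    function wrapped() {
        if (pending) {
            return false;            // swallow the double click
        }
        pending = true;
        return handler.apply(this, arguments);
    }
    wrapped.reset = function () {
        pending = false;
    };
    return wrapped;
}
```

With jQuery this could be wired as `$('#submit').on('click', onceUntilReset(doSubmit));`, calling `.reset()` once the response comes back.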
I'm developing a web-application in which I have a constant stream of data that is being received every 5 seconds or so in a java servlet (being read from a file written by another application). I want to push this data onto an html page and get it read in javascript so I can graph it in the d3 library.
At the moment I'm using a javascript function that calls the 'doGet' function of the servlet every 5 seconds. I'm worried this is creating a lot of overhead, or that it could be performed more efficiently.
I know it's also possible to run "response.setIntHeader("Refresh", 5);" from the servlet.
Are there any other better ways?
Thanks in advance for the help!
Short polling is currently probably the most common approach to solving the problem you describe.
If you can cope with a few seconds lag in notification, then short polling is really simple, here is a basic example:
On page load, call this in your JS:
setInterval(checkFor, 30000);
The above will call a function checkFor() every 30 seconds (obviously, you can change the 30 seconds to any length of time - just adjust the 30000 in the above line according to how regular you want users to be updated).
Then, in your checkFor function, just make an ajax call to your server asking if there are any updates. If the server says yes, display the alert using JS; if not (which will most likely be most of the time), do nothing:
function checkFor() {
    $.ajax({
        url: "your/server/url",
        type: "POST",
        success: function (notification) {
            // Check if any notifications are returned - if so, display the alert
        },
        error: function (data) {
            // Handle any error
        }
    });
}
I have a web page that loads a URL and does the processing using
... Server.createobject("Microsoft.XMLDOM")
every 5 minutes. It processes the data and stores it in a database.
Basically, it needs to be in a loop and run every 5 minutes and PARSE the xml.
Is there a better way to do this?
I'm open to suggestions.
TIA
Steve42
There are two ways to do this. One would be to make a call from the client (browser) to the server using JavaScript code, and the other would be to separate that call into a web service and place it in a Windows Service or something similar.
The first option could be something like:
$(document).ready(function () {
    setInterval(functionToCall, 1000 * 60 * 5); // Make a call every five minutes
});

function functionToCall() {
    $.ajax({
        url: 'call to your page',
        type: 'GET'
    }).done(function () {
        alert("Things processed");
    }).fail(function () {
        alert("Error occurred");
    });
}
This is something from top of my mind to give you an idea. It might not be the most ideal solution, but it should give you some guideline.
When I run the code, the thread sleeps and redirects before the Response.Write output ever appears! Why, and how can I make them happen in order?
insertUser.ExecuteNonQuery()
con.Close()
Response.Write("Done successfully ...")
Thread.Sleep(4000)
Response.Redirect("Default.aspx")
A response is a one-time thing in a web application. You can't "respond a little, do something else, and respond some more." This is especially true when you consider something like Response.Redirect() which modifies the headers of the response. (Essentially, Response.Redirect() will entirely "clobber" any content that you've added to the response so that the user will never see it.)
It looks like what you're trying to do here is:
Show the user a message.
Wait a few seconds.
Send the user to another page.
There are a couple of standard ways to accomplish this. You can either respond with a page that includes step 1 and, in client-side code, performs steps 2 and 3, or you can perform step 3 in server-side code and have the second page perform step 1 (and possibly step 2, hiding the message after a few seconds).
For example, let's say you want to show the message on Page A, wait a few seconds, then send the user to Page B. Then in Page A you might include something like this:
<script type="text/javascript">
    $(function () {
        $('#dialog-message').dialog({
            modal: true,
            buttons: {
                Ok: function () {
                    $(this).dialog('close');
                }
            },
            close: function () {
                window.location.href = 'Default.aspx';
            }
        });
    });
</script>
<div id="dialog-message">Done successfully ...</div>
Using jQuery, what this does is show the user a dialog (using the jQuery UI Dialog) with the intended message, and when the user closes the dialog it then performs the redirect.
You can do it using client-side functionality in your code.
Please refer to the following link:
http://social.msdn.microsoft.com/Forums/da/sharepointdevelopmentprevious/thread/087c6b95-fe8d-48ea-85e6-b7fbcb777a5c
The web response reaches the page only after the web request has been completely processed, i.e. you see the response only after the code has finished executing. So your code is executing in the correct order. You can test this by inserting a Response.End() call as shown below:
insertUser.ExecuteNonQuery()
con.Close()
Response.Write("Done successfully ...")
Response.End()
Thread.Sleep(4000)
Response.Redirect("Default.aspx")
I have a shopping-cart application in ASP.NET 2.0 that uses the concept of group buying. My requirement is that when a user checks out a particular product, it should happen at that item's latest price at that time.
Now there is a scenario.
I have a product priced at 50. I check out, and 50 is displayed in my cart. At the same time another user is accessing the product, and based on some business logic we calculate the price. The second user's activity reduces the price to 45, and a trigger then updates all shopping-cart items with this new price.
I want to show this updated price on the first user's front end without a postback, or at least give him a message that the price has changed so he can refresh the page.
I have the following options.
1) The repeater control which shows the cart should be put under an update panel and that update panel should be refreshed after some interval using a timer.
2) use SQL Server notification services and invalidate the cache as soon as the data changes in database.
I do not want to use notification services because I am not caching the data. The problem with the update panel and timer control is that refreshing it is an overhead, and the right refresh interval is hard to determine.
Please tell me a way to accomplish this scenario.
I would generate JavaScript that updates all repeater items (HTML fields with prices) and uses setTimeout() to periodically check with a web service (WCF, ASMX, or ASHX) whether the price has changed. If it has, I would retrieve all prices and update the HTML fields. The user doesn't need to press anything, and you can show him a notice that the price has changed.
With jQuery and JSON for object serialisation this could be easily accomplished.
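The update step can be kept separate from the transport, which makes it easy to test. A sketch of that idea (the id-to-price map shape, the PriceService.asmx endpoint, and the field ids are all assumptions):

```javascript
// Patch the local cart with any prices that changed on the server and
// return the ids that moved, so the UI can highlight just those rows.
function applyPrices(serverPrices, cartPrices) {
    var changed = [];
    for (var id in serverPrices) {
        if (cartPrices.hasOwnProperty(id) && cartPrices[id] !== serverPrices[id]) {
            cartPrices[id] = serverPrices[id];
            changed.push(id);
        }
    }
    return changed;
}

// Polling wiring with jQuery (assumed endpoint name):
// setInterval(function () {
//     $.getJSON("PriceService.asmx/GetPrices", function (prices) {
//         applyPrices(prices, cartPrices).forEach(function (id) {
//             $("#price-" + id).text(prices[id]);  // and show a notice
//         });
//     });
// }, 10000);
```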
This is usually done via ajax from the client (browser) side of things. Perhaps this approach will work for your requirements as well?
Using ExtJs core, you can make an ajax call as follows:
Ext.Ajax.request({
    url: 'ajax_demo/sample.aspx',
    params: { 'requestType': 'check-sql-prices' },
    success: function (response, opts) {
        var obj = Ext.decode(response.responseText);
        // Process the response JSON object.
    },
    failure: function (response, opts) {
        // This writes to the Firebug console.
        console.log('server-side failure with status code ' + response.status);
    }
});
Ext Core also handles "Timed Code Execution", so you could run the check, say, every 30 seconds. Round trip timing is usually less than 100 milliseconds, but that would definitely depend on your "SQL price checking" logic.
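The timed part doesn't need anything Ext-specific; plain setInterval works too. A sketch, where checkPrices stands in for the Ext.Ajax.request call above:

```javascript
// Start polling and hand back a stop() function, so the check can be
// cancelled once the user completes checkout.
function startPolling(checkPrices, intervalMs) {
    var id = setInterval(checkPrices, intervalMs);
    return function stop() {
        clearInterval(id);
    };
}
```

Usage: `var stop = startPolling(checkPrices, 30000);`, then call `stop()` when polling is no longer needed.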