I'm still fairly new to ASP.Net, so forgive me if this is a stupid question.
On page load I display a progress meter, and then do a postback to handle the actual loading of the page. During the postback, based on certain criteria, I disable certain links on the page. However, the links won't disable. I noticed that if I force the links to disable on the first pass (through the debugger), they disable just fine; but at that point I don't yet have the data I need to decide whether to disable them.
Code Behind
If (Not IsCallback) Then
    pnlLoading.Visible = True
    pnlQuote1.Visible = False
Else
    pnlLoading.Visible = False
    pnlQuote1.Visible = True
    ' <Load data from DB and web service>
    ' <Build page>
    If (<Some Criteria>) Then
        somelink.Disable = True
    End If
End If
JavaScript
if (document.getElementById('pnlQuote1') === null) {
    ob_post.post(null, 'PerformRating', ratingResult);
}
ob_post.post is an Obout JavaScript function that performs a normal postback, then calls the server-side method named by the second parameter, followed by a call to the JavaScript function named by the third parameter. The first parameter is the page to post back to; a value of null posts back to the current page.
The postback is working fine and all methods are called in the correct order. The code that gives me trouble is the disabling line in the code-behind above (somelink.Disable = True does not actually disable the link). Again, if I debug and force the disabling of the link to happen on the first pass, it disables. Does anyone know what I might do to get around this?
Thanks,
GRB
Your code example is using the IsCallback check, while the question text talks about a postback. I'd verify that you're using Page.IsPostBack in your code to decide when to turn off the links.
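If ob_post.post really does a full postback as described, a minimal sketch of that check might look like this (shown in C# for illustration; the question's code-behind is VB, and the panel/link names are taken from it):

protected void Page_Load(object sender, EventArgs e)
{
    // IsPostBack is true for a normal (full) postback, which is what ob_post.post performs.
    // IsCallback is only true for ASP.NET client callbacks (ICallbackEventHandler), a different mechanism.
    if (!Page.IsPostBack)
    {
        pnlLoading.Visible = true;
        pnlQuote1.Visible = false;
    }
    else
    {
        pnlLoading.Visible = false;
        pnlQuote1.Visible = true;
        // ...load data, build the page, then, based on the criteria:
        somelink.Enabled = false; // standard WebControls expose Enabled rather than Disable/disabled
    }
}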
I have database that I have multiple orders entered into. Everything seems to be working fine except for a few old entries which will not accept updates/changes to their Fields.
Note: The majority of the Fields are Strings with Possible Values entered via a DropDown Box.
So if I open Order A I can make adjustments just fine and those changes persist even after closing the page and coming back or refreshing.
But if I open Order B, I can make changes via the dropdowns and it looks like they have adjusted, however if I leave the page or refresh all the changes have reverted back.
One piece of info that may be helpful is that each of these orders has at least one Field that contains an entry that is no longer a Possible Value (the original entries were removed/changed per request of the client).
Maybe they are "locked" because of this? Is there a way to look at an error log for a Published app?
I can delete the "corrupt" entries and recreate them (since there are currently only a few), but I would prefer to find a better solution in case this happens again in the future.
Any help would be greatly appreciated.
It's a bug. Such field-level value updates should go through.
As a workaround, you can update the prohibited (no longer possible) values with allowed ones in the model's onSave event, like:
switch (record.Field) {
    case "old_value_1":
        record.Field = "new_value_1";
        break;
    case "old_value_2":
        record.Field = "new_value_2";
        break;
    ...
}
Sorry for the inconvenience.
Each deployment has its own log. Have you tried "App Settings > DEPLOYMENTS > (click on the deployment) > VIEW LOGS"?
UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get that many validation requests that are either timeouts or duplicates. Determining this with certainty is likely to be impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross-posted this to Google Groups, so apologies for spamming the ether for those of you who frequent both sites.
I am currently working on a page in an ASP.NET MVC application with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server side validation** of a reCAPTCHA response, for a while now, I have seen the case of the reCAPTCHA response having its success property set to false, but with an accompanying empty error code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
The validation has timed out and is no longer valid.
The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have either had the validation be rather old (ranging from 5 minutes to 10 days(!)), or it has been a case of a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp gets issued, and then after 10 days validation is attempted, server side.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];
    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };
        var content = new FormUrlEncodedContent(parameters);
        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;
        GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);
        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            // Actual code omitted
            // Try to determine the cause of failure
            // Look at googleCaptchaResponse.ErrorCodes array (this has been empty in all of the 28 cases of "success: false")
            // Measure time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
            // Check reCAPTCHAresponse against local database of previously used reCAPTCHAresponses to detect cases of double submission
            return false;
        }
    }
}
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate error if your captcha is validated twice.
Save logs to a file in append mode and check whether you are validating a captcha twice.
Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret='.$secret.'&response='.$_POST['g-recaptcha-response']);
file_put_contents("logfile", $verifyResponse, FILE_APPEND);
Now read the contents of the logfile created above and check whether the captcha was verified twice.
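Since the question's validation code is C#, the same append-to-a-log idea might look roughly like this there (a sketch; the log file path is an assumption):

// Inside IsVerifiedUser, right after responseContent is read: append each raw
// siteverify response to a log file so repeated validations of the same
// captchaResponse value become visible.
string logLine = DateTime.UtcNow.ToString("o") + "\t" + captchaResponse + "\t" + responseContent + Environment.NewLine;
System.IO.File.AppendAllText(@"C:\logs\recaptcha.log", logLine); // log path is an assumption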
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new ReCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal as it increases requests to your server and will blow out any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove. If the user's not moving the mouse, it's very likely they aren't on the page.
Handle it server-side, by catching this scenario. I actually prefer this method the most as it's the most fluid, and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit.
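As a rough sketch of this server-side approach, assuming an MVC controller action that calls the IsVerifiedUser helper from the question (the service field, model and action names are made up):

[HttpPost]
public ActionResult SubmitForm(ContactFormModel model) // model/action names are hypothetical
{
    string captchaResponse = Request.Form["g-recaptcha-response"];
    if (!_captchaService.IsVerifiedUser(captchaResponse, Request.UserHostAddress)) // _captchaService is hypothetical
    {
        // Treat an expired or re-used CAPTCHA like any other validation error:
        // re-display the form and ask the user to verify again.
        ModelState.AddModelError("", "Your CAPTCHA verification expired or was already used. Please verify again and resubmit.");
        return View(model);
    }

    // ...normal form processing...
    return RedirectToAction("Success");
}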
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons, and as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.
In Adobe Analytics I'm trying to implement link tracking for all links found on a page using this:
$(document).on('click', 'a', function() {
    s.tl(this, 'e', 'external', null, 'navigate');
    return false;
});
I tried to test it using a page like this, but I am seeing extra calls being fired.
The extra calls are likely coming from how you have Adobe Analytics configured. There are a handful of config variables that will cause extra requests depending on how you set them (on their own and/or in relation to each other).
Here is a listing of Adobe Analytics variables for reference. These are the ones for you to look at:
s.trackDownloadLinks - If this is enabled, any standard links with href value ending in value(s) specified in s.linkDownloadFileTypes will trigger a request on click. Generally, this is to enable automatic tracking for links that prompt a visitor to download something (e.g. a pdf file).
s.trackExternalLinks - If this is enabled, any standard links with href NOT matched in s.linkInternalFilters OR matched with s.linkExternalFilters will trigger a request on click. Generally, this is to enable automatic tracking for links you count as visitor navigating off your site(s).
s.linkInternalFilters - If you have either of the above enabled, clicking on links may trigger a request, depending on values here vs. what you enabled above vs. what you have in s.linkExternalFilters. Generally, this should include values that represent links you do NOT want to count as navigating off your site(s).
s.linkExternalFilters - If you have either of the above enabled, clicking on links may trigger a request, depending on values here vs. what you enabled above vs. what you have in s.linkInternalFilters. Generally, you should never set this. It's intended for edge-use-cases for people who know what they are doing and have a complex site eco-system and definitions of what counts as internal vs. external.
s.trackInlineStats - This is for clickmap/heatmap tracking. This may or may not trigger an extra request, depending on how a lot of different stars align.
In addition to these, you may already have some plugins or other custom code that triggers click tracking. For example, there are linkHandler, exitLinkTracker, and downloadLinkTracker plugins that you may have included in your code that may play a part in extra requests being triggered.
Finally, more recent versions of Adobe Analytics code may trigger multiple requests depending on how much data you are trying to send in the request (whereas older versions just truncated the request, which resulted in data loss).
In any case, the long story short here is if you are looking to roll your own custom link tracking, you should make sure the above variables/plugins are removed or otherwise disabled.
But on the note of rolling your own custom link tracking... I'm getting a sense of déjà vu here, like I already made a comment about this relatively recently in another post, over this exact same code... but generally speaking, this is not a good idea:
$(document).on('click', 'a', function() {
    s.tl(this, 'e', 'external', null, 'navigate');
    return false;
});
You are wholesale implementing exit link tracking on every single link of your page, and you are giving them all the same generic "external" label. The native exit link reports are pretty limited and not very useful to begin with, so ideally you should also pop an eVar with the exit URL.
But more importantly.. unless literally every single link on your pages are links that navigate your visitor off-site, this is not going to be useful to you in reports in general, and it's even going to ruin a lot of your reports.
I can't believe (or accept) that you really want to count every link on your pages as an exit link.
I assume s.tl does an AJAX call.
The browser then follows the link to its href; if the link is allowed to be followed immediately, the AJAX call will be interrupted, which seems to be what you are seeing.
You may want to change to
$(document).on('click', 'a', function(e) {
    e.preventDefault();
    s.tl(this, 'e', 'external', null, 'navigate');
});
I found this article when looking to see what s.tl is: https://marketing.adobe.com/developer/forum/general-topic-forum/difference-between-s-t-and-s-tl-function
Take a standard web page with lots of text fields, drop-downs, etc.
What is the most efficient way in WebDriver to fill out the values and then verify that the values have been entered correctly?
You only have to test that the values are entered correctly if you have some JavaScript validation or other magic happening on your input fields. You don't want to test that WebDriver/Selenium itself works correctly.
There are various ways, depending on whether you want to use WebDriver or the older Selenium RC API. Here is a potpourri of the stuff I'm using.
Assert.assertEquals("input field must be empty", "", selenium.getValue("name=model.query"));
driver.findElement(By.name("model.query")).sendKeys("Testinput");
//here you have to wait for javascript to finish. E.g wait for a css Class or id to appear
Assert.assertEquals("Testinput", selenium.getValue("name=model.query"));
With webdriver only:
WebElement inputElement = driver.findElement(By.id("input_field_1"));
inputElement.clear();
inputElement.sendKeys("12");
// here you have to wait for JavaScript to finish, e.g. wait for a CSS class or id to appear
Assert.assertEquals("12", inputElement.getAttribute("value"));
Hopefully, the results of filling out your form are visible to the user in some manner. So you could think along these BDD-esque lines:
When I create a new movie
Then I should see my movie page
That is, your "new movie" steps would do the field entry & submit. And your "Then" would assert that the movie shows up with your entered data.
element = driver.find_element(:id, "movie_title")
element.send_keys 'The Good, the Bad, the Ugly'
# etc.
driver.find_element(:id, "submit").click
I'm just dabbling in this now, but this is what I came up with so far. It certainly seems more verbose than something like Capybara:
fill_in 'movie_title', :with => 'The Good, the Bad, the Ugly'
Hope this helps.
I am using ReportViewer.WebForms in an ASP.NET page. I have the parameter toolbar showing up where I can select a parameter; it does a postback to get the next parameter (dependent on the first), and so forth.
When I click the View Report button there is a postback and the report will display fine.
All this works great.
What I would like to do is set ShowReportBody to false:
ReportViewer.ShowReportBody = False
At this point I would like to grab all the parameters that have been selected by the user and run the Render method to export to a file (of my choosing: maybe Excel, maybe PDF; the format does not really matter, nor is it the topic of this question).
So, my question is, how (or can I) trap the button event of the View Report button? I would like to be able to use the ReportViewer UI in order to capture all the parameters instead of building custom parameters.
I'd also like to rename this, but, again .. another topic :)
You can use the IsPostBack flag and the QueryString.
Example:
Viewer.ShowReportBody = false;
if (IsPostBack)
{
    if (Request.QueryString["Exec"] == "Auto")
    {
        Viewer.ShowReportBody = true;
        ...
    }
}
else
{
    Viewer.ShowReportBody = true;
}
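If the goal is then to capture the chosen parameters and render straight to a file, something along these lines might work once the postback has been detected (a sketch for a server report; GetParameters should reflect the values currently set on the report, and "Excel" is just one example format):

// Types below are in Microsoft.Reporting.WebForms.
// Assumes Viewer.ProcessingMode = ProcessingMode.Remote (a server report).
ReportParameterInfoCollection parameters = Viewer.ServerReport.GetParameters();
foreach (ReportParameterInfo p in parameters)
{
    // p.Name and p.Values hold what the user selected in the toolbar; inspect or log as needed.
}

string mimeType, encoding, extension;
string[] streamIds;
Warning[] warnings;

// Render to Excel (or "PDF", etc.) using the parameters already applied to the report.
byte[] bytes = Viewer.ServerReport.Render("Excel", null, out mimeType, out encoding, out extension, out streamIds, out warnings);

Response.Clear();
Response.ContentType = mimeType;
Response.AddHeader("Content-Disposition", "attachment; filename=report." + extension);
Response.BinaryWrite(bytes);
Response.End();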