Twilio: access POST data on page (ASP.NET)

I am doing my best to get a .NET WebForms Twilio app up and running, but I am stuck on one part that I haven't found an answer to in any of the blog posts and tutorials I have read. I have found numerous Stack Overflow questions, but no real answer yet.
Basically, I have an .aspx page that executes var call = client.InitiateOutboundCall(options);, with the url option set to an .ashx page that contains:
twiml.BeginGather(new {
    action = "http://sydheller.com/default.aspx",
    numDigits = "1"
});
twiml.Say("Please enter your PIN");
twiml.Hangup();
and then in the Page_Load of that default.aspx I have:
if (Request.Form["Digits"] != null)
{
    string digits = Request.Form["Digits"];
    digitsLabel.Text = digits;
}
When I click the button to make the call, I receive the call, enter a digit, and the call hangs up, but nothing happens on my page. I assume I need to use AJAX somehow, but I am not sure how to write an AJAX request that picks up the information the Twilio API posts back to the .ashx page.
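From what I can tell, the Gather action request comes from Twilio's servers rather than from my visitor's browser, which would explain why nothing happens on the page that is already rendered. To show what I am imagining, here is a rough, untested sketch (the getdigits.ashx name and the cache storage are placeholders I made up) where the digits get stashed server-side keyed by the CallSid Twilio sends, and the page polls for them:
using System.Web;

// Sketch only: the Gather callback has to stash the digits somewhere
// the page can fetch them later. CallSid and Digits are form fields
// Twilio includes in its POST.
public class GatherHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string callSid = context.Request.Form["CallSid"];
        string digits = context.Request.Form["Digits"];
        if (callSid != null && digits != null)
        {
            // Hypothetical storage; a database row keyed by CallSid works too.
            HttpRuntime.Cache.Insert("digits-" + callSid, digits);
        }
        context.Response.ContentType = "text/xml";
        context.Response.Write("<Response/>"); // empty TwiML so the call can end
    }

    public bool IsReusable { get { return false; } }
}

// A second handler (a made-up getdigits.ashx) that the page could poll
// with AJAX every few seconds until the digits appear:
public class GetDigitsHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string callSid = context.Request.QueryString["callSid"];
        context.Response.ContentType = "text/plain";
        context.Response.Write((string)HttpRuntime.Cache["digits-" + callSid] ?? "");
    }

    public bool IsReusable { get { return false; } }
}
The page itself would then just need a small JavaScript timer that calls getdigits.ashx every few seconds and writes the result into digitsLabel.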
Does that make sense?
This is driving me crazy, and I have a feeling that it is something not particularly complicated that I just don't know and can't figure out.
I would really appreciate any help anyone could throw my way!
Thank you so much in advance,
Syd

Related

Google reCAPTCHA response success: false, no error codes

UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate". This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get that many validation requests that are either timeouts or duplicates. Determining this with certainty is likely to be impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross-posted this to Google Groups, so apologies for spamming the ether for those of you who frequent both sites.
I am currently working on a page in an ASP.NET MVC application with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server-side validation** of reCAPTCHA responses, I have for a while now been seeing responses with the success property set to false but an accompanying empty error code array.
Most requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
1. The validation has timed out and is no longer valid.
2. The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "success: false, error codes: []" have involved either a rather old validation (ranging from 5 minutes to 10 days(!)), a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case in which a ChallengeTimeStamp gets issued and server-side validation is then attempted 10 days later.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];

    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };

        var content = new FormUrlEncodedContent(parameters);
        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;

        GoogleCaptchaResponse googleCaptchaResponse =
            JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);

        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            // Actual code omitted
            // Try to determine the cause of failure:
            // - Look at the googleCaptchaResponse.ErrorCodes array (this has been
            //   empty in all of the 28 cases of "success: false")
            // - Measure time between googleCaptchaResponse.ChallengeTimeStamp
            //   (which is UTC) and DateTime.UtcNow
            // - Check captchaResponse against a local database of previously used
            //   responses to detect cases of double submission
            return false;
        }
    }
}
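The omitted failure analysis boils down to two checks, roughly like this (simplified; HasSeenRecaptchaResponse stands in for my actual DAL lookup):
// Sketch of the two checks described in the comments above:
TimeSpan age = DateTime.UtcNow - googleCaptchaResponse.ChallengeTimeStamp;
bool timedOut = age > TimeSpan.FromMinutes(2); // reCAPTCHA tokens expire after about two minutes
bool duplicate = _dal.HasSeenRecaptchaResponse(captchaResponse); // hypothetical DAL method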
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate error if your captcha is validated twice.
Save logs to a file in append mode and check whether you are validating a captcha twice.
Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret=' . $secret . '&response=' . $_POST['g-recaptcha-response']);
file_put_contents("logfile", $verifyResponse, FILE_APPEND);
Now read the contents of the logfile created above and check whether any captcha is being verified twice.
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
1. Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new reCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal, as it increases requests to your server and will blow away any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is actively using the page or not.
2. Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove; if the user's not moving the mouse, it's very likely they aren't on the page.
3. Handle it server-side, by catching this scenario (sketched below). I actually prefer this method the most, as it's the most fluid and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit.
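In MVC terms, option 3 can be as small as this (a sketch; IsVerifiedUser is the method from the question, while the model and message are illustrative):
[HttpPost]
public ActionResult Submit(MyFormModel model)
{
    var captchaResponse = Request.Form["g-recaptcha-response"];
    if (!IsVerifiedUser(captchaResponse, Request.UserHostAddress))
    {
        // Treat an expired or duplicated CAPTCHA like any other validation error
        ModelState.AddModelError("captcha", "Your CAPTCHA validation expired. Please verify again.");
        return View(model); // send the user back to the form
    }
    // ...process the submission...
    return RedirectToAction("Success");
}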
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons and, as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.
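In WebForms code-behind, one common way to wire that up looks like this (a sketch; btnSubmit stands in for your actual submit button, and in MVC you would attach equivalent script to the form instead):
protected void Page_Load(object sender, EventArgs e)
{
    // Disable the button on first click, then trigger the postback manually.
    btnSubmit.Attributes["onclick"] =
        "this.disabled = true; " +
        Page.ClientScript.GetPostBackEventReference(btnSubmit, string.Empty) + ";";
    // Required so the (now disabled) button still posts back via __doPostBack
    btnSubmit.UseSubmitBehavior = false;
}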

Check if a webpage is online using VB in asp.net

I have been toying with getting this working for a while now to no avail.
I want to check whether a website is available and display its status in a label. I have no code to share, as I haven't gotten anywhere close. I am using VS 2013, building an ASP.NET website in VB.
I thought I could just ping the website, but after multiple tests from the command line, the website doesn't respond to pings even when it is up.
I know the page is taken offline periodically for updates, as this happened last week, and when it is offline you get "page cannot be displayed". I need to test whether the page is online and return True if it is and False if not.
The simple way is to do an HttpWebRequest and examine the result. If the page doesn't exist, you will get an error that indicates that. I'm not as familiar with WebClient, but that apparently works as well.
This past question gives you examples of both.
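For the WebClient route, a minimal sketch (assuming any successful download counts as "online"; this is C#, like the answer below, and the VB translation is mechanical):
using System;
using System.Net;

class SiteCheck
{
    // DownloadString throws a WebException for DNS failures, timeouts,
    // and 4xx/5xx responses, so "no exception" means the page is up.
    static bool IsOnline(string url)
    {
        using (var client = new WebClient())
        {
            try
            {
                client.DownloadString(url);
                return true;
            }
            catch (WebException)
            {
                return false;
            }
        }
    }

    static void Main()
    {
        Console.WriteLine(IsOnline("http://example.com/") ? "online" : "offline");
    }
}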
You can easily do this using HttpWebRequest.
try
{
    // "siteAddress" is a placeholder for the URL you want to test
    HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create("siteAddress");
    HttpWebResponse myHttpWebResponse = (HttpWebResponse)myHttpWebRequest.GetResponse();
    myHttpWebResponse.Close();
    // No exception was thrown, so the site answered
}
catch (WebException e)
{
    // There is a problem accessing the site
}

Retrieve comments from website using disqus

I would like to write a scraping script to retrieve comments from CNN articles. For example, this article: http://www.cnn.com/2012/01/19/politics/gop-debate/index.html?hpt=hp_t1
I realize that CNN uses Disqus for their comment discussion. As the comment loading is not page-based (i.e., prev page, next page) but dynamic (i.e., you need to click "load next 25"), I have no idea how to retrieve all 5000+ comments for this article.
Any idea or suggestion?
Thanks so much!
I needed to get comments by scraping a page that loaded its Disqus comments via AJAX. Because they were not rendered on the server, I had to call the Disqus API. In the page source, you will need the identifier code:
var identifier = "456643"; // take note of this from the page source;
                           // it is the thread:ident query param in the request below
Also, look in the JS source code to get the page's public key and forum name. Place these in the URL where appropriate.
I used JavaScript (Node.js) to test this, i.e.:
var request = require("request");

var publicKey = "pILMw27bsbJsdfsdQDh9Eh0MzAgFL6xx0hYdsdsdfaIfBHRvLGqFFQ09st";
var disqusUri = "https://disqus.com/api/3.0/threads/listPosts.json?api_key=" + publicKey +
                "&thread:ident=456643&forum=nameOfForumFromSource";

// request's callback signature is (error, response, body)
request(disqusUri, function (err, res, body) {
    if (err) {
        console.log("ERR: " + err);
        return;
    }
    console.log(body);
});
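If you'd rather stay in .NET, the same request is just an HTTP GET; a rough sketch (the key, ident, and forum values are the placeholders from the JavaScript above):
using System;
using System.Net.Http;

class DisqusPull
{
    static void Main()
    {
        var publicKey = "YOUR_PUBLIC_KEY"; // from the page's JS source
        var uri = "https://disqus.com/api/3.0/threads/listPosts.json" +
                  "?api_key=" + publicKey +
                  "&thread:ident=456643&forum=nameOfForumFromSource";
        using (var client = new HttpClient())
        {
            // Raw JSON; the response is paginated, so expect to loop
            // using the cursor data it returns to fetch all 5000+ comments.
            string json = client.GetStringAsync(uri).Result;
            Console.WriteLine(json);
        }
    }
}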
The other option for scraping (besides just fetching the page), which might be less robust (depending on your needs) but will solve your problem, is to use a wrapper around a full-fledged web browser and literally code the usage pattern to extract the relevant data. Since you didn't mention which programming language you know, I'll give three examples: 1) Watir - Ruby, 2) WatiN - IE & Firefox via .NET, 3) Selenium - IE via C#/Java/Perl/PHP/Ruby/Python.
I'll provide a little example using WatiN and C#:
IE browser = new IE();
browser.GoTo("YOUR CNN URL");
var visibleComments = browser.List(Find.ById("dsq-comments"));
// do your scraping thing
Link moreComments = browser.Link(Find.ByClass("dsq-paginate-append-text"));
moreComments.Click();
// wait until the AJAX load has ended by searching for some indicator
browser.WaitUntilContainsText("SOME TEXT");
// do your scraping thing
Notice:
I'm not familiar with Disqus, but it might be a better option to force all the comments to show by looping the Link & Click parts of the code I posted until all the comments are visible, and then scrape the dsq-comments List element, as sketched below.
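That loop might look something like this (untested; Exists is how WatiN reports whether the element is still on the page):
// Keep clicking "load more" until the link disappears, then scrape once.
Link moreComments = browser.Link(Find.ByClass("dsq-paginate-append-text"));
while (moreComments.Exists)
{
    moreComments.Click();
    browser.WaitUntilContainsText("SOME TEXT"); // some indicator the AJAX load finished
    moreComments = browser.Link(Find.ByClass("dsq-paginate-append-text"));
}
var allComments = browser.List(Find.ById("dsq-comments"));
// do your scraping thing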

How to integrate Payseal in ASP.NET

Can anyone please provide me with the details of integrating Payseal (ICICI) into my website? They have given some testing code, but I couldn't understand it. Any code sample would be useful to me.
Thanks in advance.
OK, I found the solution. Just add the DLL that they provide. They have given sample test pages; in those, run the .aspx file "testssl.aspx" after changing the setMerchantDetails call to your information (merchant ID, response page, etc.), like:
objMerchant.setMerchantDetails("00001212", "00001212", "00001212", "", transactionid, "Orderno", "http://localhost/SFAClient/SFAResponse.aspx", "POST", "INR", "INVoiceno", "req.Preauthorization", "1550.00", "GMT+05:30", "Ext1", "Ext2", "Ext3", "Ext4", "Ext5");
That will take you to the ICICI Payseal page, where the customer has to respond. After finishing the payment, it will return directly to the response page you designed (http://localhost/SFAClient/SFAResponse.aspx).
Hope it helps others!

ASP.Net links won't disable if done during postback

I'm still fairly new to ASP.Net, so forgive me if this is a stupid question.
On page load I display a progress meter, after which I do a postback in order to handle the actual loading of the page. During the postback, based on certain criteria, I disable certain links on the page. However, the links won't disable. I noticed that if I force the links to disable on the first pass (through debug), the links disable just fine. However, I don't have the data I need at that time to make the decision to disable.
Code Behind
If (Not IsCallback) Then
    pnlLoading.Visible = True
    pnlQuote1.Visible = False
Else
    pnlLoading.Visible = False
    pnlQuote1.Visible = True
    <Load data from DB and web service>
    <Build page>
    If (<Some Criteria>) Then
        somelink.Disable = True
    End If
End If
JavaScript
if (document.getElementById('pnlQuote1') === null) {
    ob_post.post(null, 'PerformRating', ratingResult);
}
ob_post.post is an Obout JS function that performs a normal postback, then calls the server method named by the second parameter, followed by the JavaScript method named by the third parameter. The first parameter is the page to post back to; a value of null posts back to the current page.
The postback is working fine and all methods are called in the correct order. The code that gives me trouble is the somelink.Disable = True line in the code-behind above; it does not actually disable the link. Again, if I debug and force the disabling of the link to happen on the first pass, it disables. Does anyone know what I might do to get around this?
Thanks,
GRB
Your code example uses the IsCallback check, while the question text talks about the IsPostBack check. I'd verify that you're using Page.IsPostBack in your code to turn off the links.