I'm new to semantic-ui and to javascript as well, so please bear with me. I have a basic form that I'm trying to get working with the built-in form validation semantic-ui provides. This form is part of a web app using node, express, and pug. The structure of the specific form (view) I'm working on looks like this:
Sorry for using a picture but I'm trying to keep this high-level.
As you can see, I have a form (with I believe the requisite classes), a submit button and a script block at the end, which is where I've got the validation code at the moment.
Here's my validation code such as it is:
$('#new-password-form').form({
    on: 'blur',
    fields: {
        password: {
            identifier: 'password',
            rules: [
                {
                    type: 'empty',
                    prompt: 'You must enter a password'
                },
                {
                    type: 'regExp[/^(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.*[!@#\$%\^&\*])(?=.{8,})/]',
                    prompt: 'Your password must be at least 8 characters long, and contain upper and lower case characters, digit(s), and symbol(s)'
                }
            ]
        }
    }
}, {
    onSuccess: function (e) {
        e.preventDefault();
    }
});
The trouble is, validation isn't working. When I click the submit button the form attempts a regular GET submit, as if the JavaScript weren't there at all. Yet it is there: Node shows everything being properly fetched and loaded by the browser, inspecting the rendered HTML shows everything in place, and I see no errors in the console. The page appears to have loaded all the requisite files successfully:
GET /enroll/new 200 31.281 ms - 2652
GET /stylesheets/style.css 200 0.859 ms - 735
GET /jquery/jquery.js 200 1.491 ms - 280364
GET /ui/semantic.min.css 200 1.508 ms - 628536
GET /ui/semantic.min.js 200 2.070 ms - 275709
GET /images/amm.png 200 2.068 ms - 25415
GET /ui/semantic.min.css 200 0.418 ms - 628536
GET /favicon.ico 404 9.768 ms - 1499
So what gives? Can someone tell me what I'm doing wrong? Happy to provide more detail if what I have here isn't enough.
Update:
I extracted the HTML to a flat file and relinked the assets (png, js, css) so there is no server involved at all. The page loads in the browser just fine, and I get the exact same behavior: nothing happens when submit is clicked except the page reloading with GET parameters (the default non-JS behavior, AFAIK). It's making me think something is wrong with jQuery or the JavaScript itself.
Well, I found the problem: a missing ',' at the end of the "type:" line (the regExp rule).
Now validation is working for both the static file and the server version. For what it's worth, coming from other languages where this is not an issue, JavaScript's ubiquitous use of inline functions and data structures nested multiple levels deep is something I've had a very hard time getting used to. It makes it all too easy to miss some critical little piece.
I guess let this be another example of a basic setup of semantic-ui that works... so long as you don't leave off any commas. (I fixed the code above for those who might want to copy/paste.)
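For anyone copy/pasting: if I'm reading the Semantic UI settings docs right (treat this as an untested sketch), the onSuccess callback can also live in the same settings object instead of a second argument, which is one less level of nested literals to drop a comma in:

$('#new-password-form').form({
    on: 'blur',
    fields: {
        // same password rules as in the code above
    },
    onSuccess: function (e) {
        // stop the form from falling back to a regular GET submit
        e.preventDefault();
    }
});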
A user attempted to upload a file that was too large (70MB for a single PDF page) and the system errored out. That is correct and expected behavior; however, in a jQuery AJAX call, response.responseText, instead of containing just the message, held the raw text of an entire HTML page, cut off at a certain point, which I believe coincides with the default style of IIS error pages.
I do not want to increase the file size limit to let the file through, but I do want response.responseText to return just the message (effectively, what's between the <title></title> tags).
I attempted to set breakpoints in the upload.ashx file to see if I could find where this was happening, but it never gets that far (with a normal file, these breakpoints hit). Which is fine; I'm okay with IIS gatekeeping (I imagine if I try to bypass IIS for handling it, the file would get uploaded to the server and then rejected, and I'd lose out on just letting IIS configuration handle this), but I don't want to return an entire page if possible.
To my mind, the obvious resolution is to check whether response.responseText contains DOCTYPE and, if so, scrape what is inside the title tag, but I feel like there may be a more by-the-book way of doing this?
Edit: I did see where someone recommended setting existingResponse="PassThrough" on the httpErrors section of web.config, but when I did, responseText just came back blank and the breakpoints were still never hit, so I don't think this achieves what I'm after.
This probably isn't the best way to handle it, but it seems to work in this case, so I'm running with it. I changed:
error: function (response) {
    alert(response.responseText);
}
to:
error: function (response) {
    // Pull just the contents of the <title> tag out of the IIS error page
    var titleIndex = response.responseText.indexOf('<title>');
    var titleEndIndex = response.responseText.indexOf('</title>');
    var message = response.responseText.substr(titleIndex + 7, titleEndIndex - titleIndex - 7);
    alert(message);
}
which returns "IIS 10.0 Detailed Error - 413.1 - Request Entity Too Large" in this particular instance.
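A slightly more defensive variant (same assumption: the error body is an HTML page, and the browser supports DOMParser for 'text/html') lets the browser do the parsing instead of counting characters:

error: function (response) {
    // Parse the HTML error page and read the <title> text directly;
    // fall back to the raw text if no title is present.
    var doc = new DOMParser().parseFromString(response.responseText, 'text/html');
    var title = doc.querySelector('title');
    alert(title ? title.textContent.trim() : response.responseText);
}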
UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get so many validation requests that are either timeouts or duplicates. Determining this with certainty is likely impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross-posted this to Google Groups, so apologies for spamming the ether for those of you who frequent both sites.
I am currently working on a page in an ASP.NET MVC application with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server side validation** of the reCAPTCHA response, I have for a while now been seeing cases where the response has its success property set to false, but with an accompanying empty error code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
1. The validation has timed out and is no longer valid.
2. The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have either had the validation be rather old (ranging from 5 minutes to 10 days(!)), or it has been a case of a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp is issued and validation is then attempted server-side 10 days later.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];

    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };

        var content = new FormUrlEncodedContent(parameters);
        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;
        GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);

        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            //Actual code omitted
            //Try to determine the cause of failure
            //Look at googleCaptchaResponse.ErrorCodes array (this has been empty in all of the 28 cases of "success: false")
            //Measure time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
            //Check reCAPTCHAresponse against local database of previously used reCAPTCHAresponses to detect cases of double submission
            return false;
        }
    }
}
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate error if your captcha is validated twice.
Save the verify responses to a log file in append mode and check whether you are validating a captcha twice.
Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret='.$secret.'&response='.$_POST['g-recaptcha-response']);
file_put_contents("logfile", $verifyResponse, FILE_APPEND);
Now read the content of the logfile created above and check whether the captcha is verified twice.
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
1. Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new ReCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal as it increases requests to your server and will blow out any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
2. Use a JavaScript timer to notify the user about the page timing out and then refresh (see the sketch after this list). This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove. If the user's not moving the mouse, it's very likely they aren't on the page.
3. Handle it server-side, by catching this scenario. I actually prefer this method the most as it's the most fluid, and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit.
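As a rough sketch of the timer in option 2 (the timings are placeholders; reCAPTCHA v2 responses expire after roughly two minutes):

// Hypothetical sketch: warn the user shortly before the CAPTCHA response
// expires, then reset the widget so they can verify again.
var CAPTCHA_LIFETIME_MS = 2 * 60 * 1000;

setTimeout(function () {
    alert('Your CAPTCHA verification is about to expire. Please verify again before submitting.');
    grecaptcha.reset(); // reCAPTCHA v2 widget API: clears the expired response
}, CAPTCHA_LIFETIME_MS - 15 * 1000);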
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons, and as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.
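A minimal jQuery sketch of that (the form id here is a placeholder):

// Disable the submit button on the first submit so a double-click
// can't post the same reCAPTCHA response twice.
$('#myForm').on('submit', function () {
    $(this).find('button[type=submit], input[type=submit]').prop('disabled', true);
});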
I am pretty new to web scraping, and recently I have been trying to automatically scrape phone numbers from pages like this. I am not supposed to use Selenium or headless browser libraries, and I am trying to find a way to request the phone number directly, say via a web service or any other possible solution, without having to go through the actual button press with Selenium.
I totally understand that it may not even be possible to automatically reveal the phone number in one shot, as it is meant not to be accessible to a nosy newbie web scraper like me; but I would still like to raise the question, for my own information, to get a detailed answer from an expert's point of view.
If I inspect the "Reveal" button's DOM element, it shows some attributes I have never seen before. I have two main questions which I believe could be helpful for newbies like me.
1) Given a set of unknown attributes (i.e. data-q and data-reveal on the button below), how can one find out which scripts on the page actually use them?
2) I googled the button element's attributes (data-q and data-reveal); the only relevant result I could find was this, which for some reason I can't access even when I use a proxy.
Any clue, particularly on the first question, is much appreciated.
Regards,
Below is the href-button code (only the button's visible text survived here):
Reveal
OK, there are several steps to work through before you get to a solution.
Step 1: Open your browser and go to your target page (https://www.gumtree.com/p/vans/2015-ford-transit-custom-2.2tdci-290-l1-h1/1190345514).
Step 2: (Assuming you are using Chrome as your favorite browser) press Ctrl+Shift+I to open DevTools, then select the 'Network' tab.
Step 3: Press the 'Reveal' button on that page, watch the panel carefully, and catch the HTTP request that is sent immediately when you press the button. You can see that the request contains a long string of digits in its Query String Parameters; it is actually a timestamp.
Step 4: You can also see a 'Request Headers' section in that HTTP request; copy the values of referer, user-agent, and x-gumtree-token.
Step 5: Construct your own request (I am a fan of Python, so I am going to show you my example code in Python):
import time
import requests
import json

headers = {
    'referer': 'please enter the value you just copied from that specific request',
    'user-agent': 'please enter the value you just copied from that specific request',
    'x-gumtree-token': 'please enter the value you just copied from that specific request'
}

# The "_" query string parameter is the timestamp seen in step 3
url = 'https://www.gumtree.com/ajax/account/seller/reveal/number/1190345514?_='
current_time = time.time()
current_time = str(current_time)
current_time = current_time.split('.')[0] + current_time.split('.')[1] + '0'
url += current_time

response = requests.get(url=url, headers=headers)
response_result = json.loads(response.content)
phone_number = response_result['data']
I have a number of pages in a WebMatrix Razor ASP.Net site where I have added one line of code:
Response.OutputCache(600);
From reading about it, I had assumed this meant that IIS would cache the HTML produced by the page, serve that HTML for the next 10 minutes, and, after 10 minutes, run the code again when the next request came in.
Now, the page is being fetched as part of a timed jQuery call. The timer code in the client runs every minute. The code there is very simple:
function wknTimer4() {
    $.get('PerfPanel', function(data) {
        $('#perfPanel').html(data);
    });
}
It occasionally appears to cache, but when I look at the number of database queries made during the 10-minute period, I might see well over 100. I know the caching isn't working the way I expect. Does the cache only work for a single session? Is there some other limitation?
Update: it really shouldn't matter whether the client fetches the page through a jQuery call or as straight HTML. If the server is caching, the client's behavior is irrelevant.
Update 2: complete code dumped here. Boring stuff:
@{
    var db = Database.Open("LOS");
    var selectQueryString = "SELECT * FROM LXD_funding ORDER BY LXDOrder";

    // cache the results of this page for 600 seconds
    Response.OutputCache(600);
}
@foreach (var row in db.Query(selectQueryString)) {
    <h1>
        @row.quotes Loans @row.NALStatus, oldest @(NALWorkTime.WorkDays(row.StatusChange, DateTime.Now)) days
    </h1>
}
Your assumptions about how OutputCache works are correct. Can you check Firebug or the Chrome dev tools to look at the outgoing requests hitting your page? If you're using jQuery, people sometimes set the cache property on $.get or $.ajax to false, which causes each request to the page to have a funky trailing query string. I've made the mistake of setting this up globally to fix some issues with jQuery and IE:
http://api.jquery.com/jQuery.ajaxSetup/
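If a global cache: false is in play, every poll goes out with a unique _=<timestamp> query string and never matches the cached URL. As a minimal sketch (using the PerfPanel path from the question), you can re-enable caching for just this call:

// Equivalent to the $.get above, but explicitly allows caching for this
// request even if $.ajaxSetup({ cache: false }) was set globally.
$.ajax({
    url: 'PerfPanel',
    cache: true,
    success: function (data) {
        $('#perfPanel').html(data);
    }
});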
The other thing to look at here is the grouping of DB calls. Are you making a lot of calls within one request? Are you executing a db command in a loop, or within another reader? Code would be helpful in that case.
Good luck, I hope this helps!
This is my first bash at using ExtJS, and after a few hours of struggling some things are working OK, except that I have combo lists I can't filter down to fewer than 2000 items in edge cases. I'm trying to page the lists remotely, but I must be doing something wrong.
My data store and combo look as follows:
var remoteStore = new Ext.data.JsonStore({
    //autoLoad : true,
    url    : 'addition-lists.aspx',
    fields : [{name: 'extension_id'}, {name: 'extension'}],
    root   : 'extensionList',
    id     : 'remoteStore'
});
.
.
xtype         : 'combo',
fieldLabel    : 'Remote',
name          : 'remote',
displayField  : 'extension',
valueField    : 'extension_id',
mode          : 'remote',
//pageSize    : 20,
triggerAction : 'query',
typeAhead     : true,
store         : remoteStore,
anchor        : '95%'
The combo works loading locally, but as soon as I switch to remote it remains blank.
My ASP.NET page returning the JSON is like this:
protected void Page_Load(object sender, EventArgs e)
{
    Response.Clear();
    Response.Write(GetRemote());
}
On remote stores the combo defaults its minChars property to 4, so the query only gets sent after typing 4 chars. Setting minChars almost gives the desired behaviour.
I say almost because even if the item sought by autocomplete is in the current page, a new server query still gets sent, defaulting the selection to the first item in the new page.
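For example, a minimal sketch of the setting (the exact threshold is up to you):

xtype    : 'combo',
mode     : 'remote',
minChars : 1, // send the remote query after 1 character instead of the default 4
// ...rest of the combo config as in the question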
The way you configured your store above, the result from your ASP script should read something like this:
{"extensionList": [
{"extension_id": 1, "extension": "js"},
{"extension_id": 2, "extension": "aspx"}
]}
If it doesn't look like that, your remote store will not find anything.
You can also refer to this question: ExtJS combobox problem in IE.
Several things. First, when doing this:
remoteStore.loadData(<%= GetRemote() %>);
you are not actually making a remote call from your JavaScript. You are echoing the result of calling the GetRemote server function directly into the page at render time, which is probably not what you intend. If GetRemote writes out your combo data (and it's working correctly), then you should be able to use a combo set up for local data. If the intention really is to make a remote call, then you need to remove the server tag and load data via the proxy url, as shown in several examples that ship with Ext.
Another thing: your Page_Load code doesn't actually show how you are loading, formatting, or returning your data. I would suggest viewing the page source and verifying that your server tag is actually being replaced with the data you expect. If/when you switch to a true remote call to load data, you can use Firebug to inspect your XHR calls and verify the data coming down that way.
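As a rough sketch of the difference (assuming the Ext 2.x/3.x JsonStore API used in the question):

// Render-time injection: the server's output is baked into the page source,
// so this is effectively local data.
remoteStore.loadData(<%= GetRemote() %>);

// True remote load: the store requests its url ('addition-lists.aspx')
// at runtime via its proxy, optionally with paging parameters.
remoteStore.load({ params: { start: 0, limit: 20 } });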
You must set a proxy for loading the store remotely, i.e. set a
proxy: new Ext.data.ScriptTagProxy(...)
property. Look at the examples for the exact syntax.
EDIT: Please disregard my note above, since you're using the JsonStore shortcut.
Try to apply all of these properties to your combo:
typeAhead: true,
typeAheadDelay: 500,
triggerAction: 'all',
selectOnFocus:true,
And please do not prefetch records server-side (using loadData). It badly hurts the internal filter: you end up stuck with filtered records from different prefetches.
On the other hand, if you do prefetch all records server-side, why do you still need remote access for your combo at all?