I am having an issue calling Drupal's node.save using MooTools' JSONP. Here is an example of my request:
callback: Request.JSONP.request_map.request_1
method: node.save
sessid: 123123123123123
node: {"type":"blog","title":"New Title","body":"This is the blog body"}
Here is my result:
HTTP/1.0 500 Internal Server Error
I got this working before, but I used AMFPHP and was able to send objects to Drupal. I am assuming this has to do with Drupal expecting an object, but since it is a GET request the node gets transformed into a string. Is there any way of getting around this without hacking the code?
Here is my code:
$('newBlogSubmit').addEvent('click', function()
{
    var node = {
        type : "blog",
        title: "New Title",
        body : "This is the blog body"
    };
    var string = JSON.encode(node);
    string.escapeRegExp(); // note: escapeRegExp() returns a new string, so this has no effect unless reassigned
    var sessID = _sessID;
    DrupalService.getInstance().node_save(string, sessID, drupal_handleBlogSubmit);
});
My Drupal Service JS Code:
// NODE
DrupalService.prototype.node_save = function(node, sessid, callback) {
    var dataObj = {
        method: "node.save",
        sessid: sessid,
        node: node
    };
    DrupalService.getInstance().request(dataObj, callback);
};
// SEND REQUEST AND CALLBACK FUNCTION
DrupalService.prototype.request = function(dataObject, callback) {
    new JsonP('http://myDrupalSite.com/services/json', {
        data: dataObject,
        onComplete: callback
    }).request();
};
I am trying to connect the dots. I'm not too familiar with Drupal, but I would guess all I need to do is turn the string back into an object. Any ideas where I should be looking, or is there an existing patch?
A first question could be why you are using MooTools, since Drupal ships with jQuery and uses it extensively, both in core and throughout contributed modules.
Anyway, I don't know MooTools, so I can't help you there, but if your request is ending in an internal server error, you have a problem in your Drupal code or your JS code. So even if I knew exactly what you were doing, I couldn't tell you the problem without looking at the Drupal code behind your http://myDrupalSite.com/services/json callback.
In general, what you want to make sure is:
You make a POST request, as Drupal will cache GETs, and the semantics of the operation, posting data (the node) to the server, call for POST.
Your data should be sent as POST params, which will make it end up in PHP's $_POST variable (see the sketch after this list).
Your callback should validate the data and act accordingly, creating a node when the data is intact. You don't need session IDs, since the script will share the session the browser already has.
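For instance, a minimal sketch of such a request with the jQuery that ships with Drupal, assuming the same Services endpoint as in the question and that the page is served from the same domain (so a plain POST works and JSONP isn't needed); the exact parameter names depend on your Services configuration:

var node = { type: "blog", title: "New Title", body: "This is the blog body" };
jQuery.post('http://myDrupalSite.com/services/json', {
    method: 'node.save',
    node: JSON.stringify(node) // arrives in $_POST['node']; decode it back into an object server side
}, function(response) {
    console.log(response); // handle the server's response here
});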
I've answered a similar question in detail, which was about altering a field instead of saving a node, but much of the work is still the same. You can take a look at that post, although it uses jQuery rather than MooTools.
Related
I am trying to make some further adjustments to an address text field, but the problem is that it's made with Airtable. I want to get the input of that address and use it to get some data from the Zillow API for the user. How can I do this? I have viewed the source HTML and I only see the Airtable script.
This probably went unanswered for being too vague. I'll try to leave some pointers if anyone stumbles upon this in the future.
Are you the host of the WP page? Do you have access to the Airtable base in question? Is the "frontend" viewable? Airtable's API is pretty well documented, simple as it may be. Whatever you need, if it's from a shared view, it can be fetched with a curl-style request.
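For instance, a minimal sketch of such a fetch in JavaScript; YOUR_BASE_ID, YOUR_TABLE and YOUR_API_KEY are placeholders you would fill in yourself:

// fetch all records from one table via Airtable's REST API
fetch('https://api.airtable.com/v0/YOUR_BASE_ID/YOUR_TABLE', {
    headers: { Authorization: 'Bearer YOUR_API_KEY' }
})
    .then(function(res) { return res.json(); })
    .then(function(data) { console.log(data.records); });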
Other than that, if the base is public, or shared publicly, and particularly if you need this data at a steady rate or in larger quantities, you'd be better off requesting access and collecting the information with a script from the Scripting app. Since ES6, this is as trivial as doing something like:
// inside an Airtable Scripting app block
let query = await base.getTable(cursor.activeTableId).selectRecordsAsync();
let selectAll = query.records.map(rec => rec.name);
let payload;
if (selectAll.length) {
    payload = { records: JSON.stringify(selectAll) };
} else {
    console.error('something went wrong');
}
await remoteFetchAsync('your scraping endpoint', {
    method: 'POST',
    body: JSON.stringify(payload)
});
// rock'n'roll past this point
UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get that many validation requests that are either timeouts or duplicates. Determining this with certainty is likely to be impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross-posted this to Google Groups, so apologies for spamming the ether for those of you who frequent both sites.
I am currently working on a page as part of an ASP.NET MVC application with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server side validation** of a reCAPTCHA response, I have for a while now been seeing cases where the reCAPTCHA response has its success property set to false, but with an accompanying empty error code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
1. The validation has timed out and is no longer valid.
2. The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have either had the validation be rather old (ranging from 5 minutes to 10 days(!)), or it has been a case of a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp gets issued, and then after 10 days validation is attempted, server side.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];

    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };
        var content = new FormUrlEncodedContent(parameters);
        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;
        GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);

        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            // Actual code omitted
            // Try to determine the cause of failure:
            // - Look at the googleCaptchaResponse.ErrorCodes array (this has been empty in all of the 28 cases of "success: false")
            // - Measure time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
            // - Check the reCAPTCHA response against a local database of previously used responses to detect double submission
            return false;
        }
    }
}
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate error if your captcha is validated twice.
Save logs to a file in append mode and check whether you are validating a captcha twice. Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret=' . $secret . '&response=' . $_POST['g-recaptcha-response']);
file_put_contents("logfile", $verifyResponse . PHP_EOL, FILE_APPEND); // one response per line
Now read the content of the logfile created above and check whether the same captcha response is being verified twice.
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
1. Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new reCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal, as it increases requests to your server and will blow away any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
2. Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove; if the user's not moving the mouse, it's very likely they aren't on the page. (See the sketch after this list.)
3. Handle it server-side, by catching this scenario. I actually prefer this method the most, as it's the most fluid and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit.
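For option 2, a minimal sketch of such an inactivity timer, assuming a 2-minute threshold and a hypothetical refreshCaptcha() helper you would supply yourself:

var idleTimer;

function resetIdleTimer() {
    clearTimeout(idleTimer);
    idleTimer = setTimeout(function() {
        // warn the user, then re-render the CAPTCHA widget
        alert('This page has been idle for a while; please verify the CAPTCHA again.');
        refreshCaptcha(); // hypothetical helper that re-renders the widget
    }, 2 * 60 * 1000); // 2 minutes of inactivity
}

// any mouse movement counts as activity and resets the countdown
document.addEventListener('mousemove', resetIdleTimer);
resetIdleTimer();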
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons and, as a result, think they need to double-click submit buttons as well. Some of it is impatience when nothing happens immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click; a sketch follows.
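A minimal sketch of that, assuming a single form with one submit button:

document.querySelector('form').addEventListener('submit', function() {
    // disable the button once the form is submitted so a second click can't fire
    this.querySelector('button[type="submit"], input[type="submit"]').disabled = true;
});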
I'm creating a web scraping bot in PhantomJS. I'm using onResourceReceived to sniff the requests of the site and retrieve them using this simple code:
page.onResourceReceived = function(response) {
    if (response.url.match("XXXXXXX")) {
        console.log(response.url);
    }
};
My problem is that response.url automatically contains a URL-decoded version of the data. I need to check some parameters, but instead of receiving something like this:
xxx.com?...&events=event20%2Cevent4%%2Cevent89%3D7%2Cevent50%2Cevent51%2Cevent52%2Cevent53%2Cevent54%2Cevent55%2Cevent56&...
I get this:
xxx.com?... &events=event20%2Cevent4%2Cevent89%3D7&....
It looks like when %3D is reached it cuts the value and continues to the next property.
Is there a way to access the raw version of this data?
Thanks a lot for the help.
Is there a way to get the previous page location before going to the next page in IronRouter?
Is there an event I can use to fetch this information?
Thanks in advance.
Since Iron Router uses the usual History API, you can just use the plain JS method:
history.go(-1);
or
history.back();
Edit: or, to check the previous path without following it:
document.referrer;
(Note that in a single-page app, document.referrer typically reflects the page that linked you into the app, not the last client-side route, so the hooks approach below is more reliable.)
You can achieve the behavior you want by using hooks.
// onStop hook is executed whenever we LEAVE a route
Router.onStop(function() {
    // register the previous route location in a session variable
    Session.set("previousLocationPath", this.location.path);
});

// onBeforeAction is executed before actually going to a new route
Router.onBeforeAction(function() {
    // fetch the previous route
    var previousLocationPath = Session.get("previousLocationPath");
    // if we're coming from the home route, redirect to contact
    // this is silly, just an example
    if (previousLocationPath == "/") {
        this.redirect("contact");
    }
    // else continue to the regular route we were heading to
    this.next();
});
EDIT : this is using iron:router#1.0.0-pre1
Apologies for bumping an old thread, but it's good to keep these things up to date: saimeunt's answer above is now deprecated, as this.location.path no longer exists in Iron Router. It should now resemble something like the below:
Router.onStop(function() {
    Session.set("previousLocationPath", this.originalUrl || this.url);
});
Or, if you have Session JSON installed (see Session JSON):
Router.onStop(function() {
    Session.setJSON("previousLocationPath", {
        originalUrl: this.originalUrl,
        params: { hash: this.params.hash, query: this.params.query }
    });
});
The only caveat with this is that the first page load will always populate the url fields (this.url and this.originalUrl; there seems to be no difference between them) with the full url (http://...), whilst every subsequent page only logs the relative path, i.e. /home without the root url. I'm unsure whether this is intended behaviour from IR or not, but it is currently a helpful way of determining whether this was a first page load.
I have to use an HTTP GET request in my Angular script, where I have to send some variables to the server.
My question is: if the variable being sent changes somehow, will the request be made again automatically, or do I have to call the request again?
Thanks
Update: code in my controller:
$scope.startDate = "";
$http.get('/Controller/Action', { params: { startDate: $scope.startDate } }) // send startDate as a query string parameter
    .success(function(data) {
        alert(data);
    });
If the value of startDate changes somehow, will the HTTP request be called again, or do I have to place it into a watch?
While the question is unclear, I believe what you are referring to is a $watch set up on a scope property. If you make a normal request, such as this:
$scope.myResource = 'path/to/resource'; // could be used without $scope for this example
$http.get($scope.myResource) // etc
the call is just made once, because that's all it is told to do. If you want it to update when the path "myResource" changes, then do this:
$scope.$watch('myResource', function(newPath) { // watching $scope.myResource for changes
    $http.get(newPath) // etc
});
Now, when the value of $scope.myResource changes, the $http call will be made again, this time requesting the new path.
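Applied to the startDate example from the question, a minimal sketch could look like this (assuming startDate should travel as a query string parameter):

$scope.$watch('startDate', function(newDate) {
    // re-issue the GET request whenever $scope.startDate changes
    $http.get('/Controller/Action', { params: { startDate: newDate } })
        .success(function(data) {
            alert(data);
        });
});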