Dynamic param value calculation depending on params in Paw

I have an API that requires requests to be signed with a hash of the request parameters.
For example, a request has two params, login and password, and I need to add a checksum param calculated from a hash of the login and password fields.
How can I implement this? When I try to calculate it, I get a self-dependency error.
login = test
password = test
somefield = lalala
checksum = md5([login][password][somefield]) <- here is dynamic evaluation

The self-dependency error is shown because Paw tries to evaluate the full URL to get the other parameters. That's probably something that needs to be fixed in Paw.
However, you can simply ignore the warning, as it still works: set the checksum parameter to an MD5 dynamic value whose input references the other parameters.
In your example, the checksum is 8bc22595f820ff1612fd16294c02359a, which is the expected result.
Update: if you want to do that with JavaScript code, here's an example.
function evaluate(context) {
    var url = context.getCurrentRequest().url;
    var query = url.split('?')[1];
    var fragments = query.split('&');
    var login, password, somefield;
    for (var i = 0; i < fragments.length; i++) {
        var keyvalue = fragments[i].split('=');
        if (keyvalue[0] == "login") {
            login = keyvalue[1];
        } else if (keyvalue[0] == "password") {
            password = keyvalue[1];
        } else if (keyvalue[0] == "somefield") {
            somefield = keyvalue[1];
        }
    }
    // you can now compute whatever hash you want with these values
    // the self-dependency warning will still be shown, but the script works
    return "" + login + "-" + password + "-" + somefield;
}
To calculate MD5 hashes in JS, you'll need to include a third-party library. That is most easily (and most cleanly) done via npm. See how we manage dependencies in other extensions: https://github.com/LuckyMarmot/Paw-PythonRequestsCodeGenerator
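For illustration, here is a sketch of the same script computing the checksum directly, assuming you bundle an npm md5 module with a custom extension as described above (the require call is an assumption, not something Paw provides out of the box):

// Sketch only: assumes a bundled npm "md5" module.
var md5 = require('md5');

function evaluate(context) {
    var url = context.getCurrentRequest().url;
    var query = url.split('?')[1];
    var fields = {};
    var fragments = query.split('&');
    for (var i = 0; i < fragments.length; i++) {
        var keyvalue = fragments[i].split('=');
        fields[keyvalue[0]] = keyvalue[1];
    }
    // concatenate the values in the order the API expects, then hash
    return md5('' + fields.login + fields.password + fields.somefield);
}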

Related

How to maintain a session with a .aspx server while web scraping through pagination?

I am unable to maintain a session with a .aspx server. I am trying to scrape data by paginating through search results, but the server keeps telling me "The Results have expired. Please resubmit the search." I have tried maintaining cookies, so I don't think that is the problem, unless I somehow did it wrong.
I have to navigate through by first making a GET request to the following URL:
https://www.wandsworth.gov.uk/planning-and-building-control/search-planning-applications/
The following is the code I use to make the requests.
First, these are all my requires:
const cheerio = require('cheerio');
const url = require('url');
const rp = require('request-promise');
const ss = require('string-similarity');
const tc = require('tough-cookie');
Here is how I make my request
var options = {
    uri: 'https://www.wandsworth.gov.uk/planning-and-building-control/search-planning-applications/',
    transform: function(body){ return cheerio.load(body) },
    method: 'GET'
};
var $ = await rp(options);
Now I extract the information I need in order to make a successful POST request, and I use the 'string-similarity' package to find the select option that most closely matches my input address.
// Extract selectable elements
var obj_collection = $('#cboStreetReferenceNumber')[0].children;
var collection = []; // array of inner strings for each select element
// Push innerHTML strings to collection
for (let i = 0; i < obj_collection.length; i++) {
    try {
        collection.push(obj_collection[i].children[0].data);
    } catch (e) {
        collection.push('');
    }
}
// Find the best match for our given address
var matches = ss.findBestMatch(address, collection);
var cboStreetReferenceNumber =
    obj_collection[matches.bestMatchIndex].attribs.value;
// These are used to verify us
var __VIEWSTATE = $('#__VIEWSTATE')[0].attribs.value;
var __VIEWSTATEGENERATOR = $('#__VIEWSTATEGENERATOR')[0].attribs.value;
var __EVENTVALIDATION = $('#__EVENTVALIDATION')[0].attribs.value;
var cboMonths = 1;
var cboDays = 1;
var csbtnSearch = 'Select';
var rbGroup = 'rbNotApplicable';
// Modify options
options.uri = $('#M3Form')[0].attribs.action;
options.method = 'POST';
options.form = {
    cboStreetReferenceNumber,
    __VIEWSTATE,
    __VIEWSTATEGENERATOR,
    __EVENTVALIDATION,
    cboMonths,
    cboDays,
    csbtnSearch,
    rbGroup
};
options.followAllRedirects = true;
options.resolveWithFullResponse = true;
delete options.transform;
Now with these options, I'm ready to make my request to page 1 of the data I'm looking for.
// method: #POST
// link: "Planning Explorer"
var body = await rp(options);
var $ = cheerio.load(body.body);
console.log(body.request);
var Referer = 'https://planning1.wandsworth.gov.uk' + body.req.path;
// Note: the XMLLoc query parameter below points at a session-scoped temp XML file,
// which is presumably why the results expire between requests.
var scroll_uri = 'https://planning1.wandsworth.gov.uk/Northgate/PlanningExplorer/Generic/StdResults.aspx?PT=Planning%20Applications%20On-Line&PS=10&XMLLoc=/Northgate/PlanningExplorer/generic/XMLtemp/ekgjugae3ox3emjpzvjtq045/c6b04e65-fb83-474f-b6bb-2c9d4629c578.xml&FT=Planning%20Application%20Search%20Results&XSLTemplate=/Northgate/PlanningExplorer/SiteFiles/Skins/Wandsworth/xslt/PL/PLResults.xslt&p=10';
options.uri = scroll_uri;
delete options.form;
delete options.followAllRedirects;
delete options.resolveWithFullResponse;
options.method = 'GET';
options.headers = {};
options.headers.Referer = Referer;
options.transform = function(body) {
    return cheerio.load(body);
};
var $ = await rp(options);
Once I get the next page, I am given a table with 10 items, plus pagination links if more than 10 items matched my POST request.
This all goes fine until I try to paginate to page 2. The resulting HTML body tells me that my search has expired and that I need to resubmit it. That means going back to step 1 and submitting the POST request again, which always brings me back to page 1 of the pagination.
Therefore, I need to somehow find a way to maintain a session with this server while I 'scroll' through its pages.
I am using Node.js and request-promise to make my requests.
I have already tried maintaining cookies between requests.
Also, __VIEWSTATE shouldn't be the problem, because the request for page 2 is a GET request.
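For reference, this is roughly what "maintaining cookies between requests" looks like with request-promise and a tough-cookie jar (a sketch of the technique, not a fix for the expiry problem):

// Sketch: share one cookie jar across all requests so the ASP.NET
// session cookie survives between calls.
const rp = require('request-promise');
const jar = rp.jar(); // backed by tough-cookie

const options = {
    uri: 'https://www.wandsworth.gov.uk/planning-and-building-control/search-planning-applications/',
    jar: jar, // pass the same jar to every request
    method: 'GET'
};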
I was able to find a workaround by using the headless browser Puppeteer to maintain a session with the server. However, I still do not know how to solve this problem with raw requests.
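A minimal sketch of that Puppeteer approach (the option value, search-button selector, and next-page selector are assumptions; adjust them for the real results page):

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    // The browser keeps cookies and the server-side session alive between navigations.
    await page.goto('https://www.wandsworth.gov.uk/planning-and-building-control/search-planning-applications/');
    // Fill in and submit the search form (the option value is hypothetical).
    await page.select('#cboStreetReferenceNumber', 'someStreetValue');
    await Promise.all([
        page.waitForNavigation(),
        page.click('input[name="csbtnSearch"]')
    ]);
    // Paginate within the same browser session.
    let nextLink = await page.$('a[title="Go to next page"]'); // hypothetical selector
    while (nextLink) {
        await Promise.all([page.waitForNavigation(), nextLink.click()]);
        // ...scrape the current results page here...
        nextLink = await page.$('a[title="Go to next page"]');
    }
    await browser.close();
})();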

Create a dynamic URL parameter that includes every other URL parameter

I'm trying to create a signature for the JW Platform API. It requires a URL parameter named api_signature that is a SHA1 digest of every other URL parameter.
I tried creating the parameter as a SHA1 digest dynamic value with a JS Script as its input. Here's the basic code from the JS Script:
function evaluate(context)
{
    var request = context.getCurrentRequest()
    var params = request.getUrlParameters()
    var sbs = ''
    for (var key in params) {
        if (key === 'api_signature')
            continue;
        if (sbs.length > 0)
            sbs += '&'
        sbs += encodeURIComponent(key) + '=' + encodeURIComponent(params[key])
    }
    return sbs + '<API SECRET GOES HERE>'
}
But with this I get the warning:
JS Script cannot be used in URL Parameter Value because this creates a self-dependency.
How can I get around this?
The reason this warning appears is that request.getUrlParameters() has to evaluate all the URL parameters of the request, including api_signature, and therefore has to call the evaluate method of the JS Script, creating an infinite recursion. This is detected and blocked at the first iteration (the content of the offending URL parameter is replaced by '').
In theory, one could avoid this issue by using request.getUrlParameters(true), which returns a non-evaluated DynamicString for each URL parameter. In practice, however, the detection is a bit too aggressive and it will still raise the warning.
Another solution is to use request.getUrlParametersNames() and request.getUrlParameterByName(name) together. Again, the detection is too aggressive and a warning is raised.
Both of these solutions should in theory work without raising a warning, and we are working on a fix for this issue in Paw 3.0.13. In any case, although a warning is raised, the script still functions normally.
Example with request.getUrlParameters(true)
function evaluate(context)
{
    var request = context.getCurrentRequest()
    var params = request.getUrlParameters(true)
    var sbs = []
    for (var key in params) {
        if (key === 'api_signature')
            continue;
        sbs.push(encodeURIComponent(key) + '=' +
            encodeURIComponent(params[key].getEvaluatedString()))
    }
    return sbs.join('&') + '<API SECRET GOES HERE>'
}
Example with request.getUrlParametersNames()
function evaluate(context)
{
    var request = context.getCurrentRequest()
    var params = request.getUrlParametersNames()
    var sbs = []
    for (var param of params) {
        if (param === 'api_signature')
            continue;
        sbs.push(encodeURIComponent(param) + '=' +
            encodeURIComponent(request.getUrlParameterByName(param)))
    }
    return sbs.join('&') + '<API SECRET GOES HERE>'
}
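Both scripts above return the signature base string rather than the hash; the intent is to feed the JS Script into a SHA1 digest dynamic value, as the question describes. If you would rather hash in-script, something like the following could work, assuming you bundle an npm sha1 module with a custom extension (the require call is an assumption, not something Paw provides out of the box):

// Sketch only: assumes a bundled npm "sha1" module.
var sha1 = require('sha1')

function evaluate(context)
{
    var request = context.getCurrentRequest()
    var params = request.getUrlParametersNames()
    var sbs = []
    for (var param of params) {
        if (param === 'api_signature')
            continue;
        sbs.push(encodeURIComponent(param) + '=' +
            encodeURIComponent(request.getUrlParameterByName(param)))
    }
    // hash the signature base string plus the shared secret
    return sha1(sbs.join('&') + '<API SECRET GOES HERE>')
}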

POST data empty (or missing) when I receive the POST back from the TPV provider

I'm trying to implement the Redsys payment service on my .NET website.
The payment is successful (the data is sent by POST), but when the POST comes back to my website (to confirm the payment) and I try to retrieve the values with:
string value = Request.Form["name"];
the value is always empty.
I tried counting the keys with Request.Form.Keys.Count, and it always gives me zero.
The vendor documentation says the variables can be collected with Request.Form["name"]. I called them to ask why I don't get the data, and they don't know why, so I'm desperate.
What could the cause be?
I have reviewed the headers I get from the server (with Request.Headers) and they show HttpMethod: GET, RequestType: GET, and ContentLength: 0. My bank tells me they send a POST, not a GET, so it's a mystery. Maybe the error is because the notification is sent from an HTTPS site to an HTTP one?
You are receiving a POST without parameters.
The parameters are in the content of the call.
You should read the content and get the values of each parameter:
[System.Web.Http.HttpPost]
public async Task<IHttpActionResult> PostNotification()
{
    // Read the raw request body; the POST parameters arrive in the content.
    string body = "";
    await Request.Content.ReadAsStreamAsync().ContinueWith(x =>
    {
        var result = "";
        using (var sr = new StreamReader(x.Result))
        {
            result = sr.ReadToEnd();
        }
        body += result;
    });

    // body now holds the form-encoded parameters, e.g. "name=value&other=value".
    var values = System.Web.HttpUtility.ParseQueryString(body);
    string name = values["name"];

    return Ok();
}
In body you can read the parameters (their order can vary).

How can you get a URL for every piece of data?

I am building an IM platform on Firebase, and I would like every user to get a URL that directs them to their chat room.
http://chatt3r.sitecloud.cytanium.com/
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script src="https://cdn.firebase.com/v0/firebase.js"></script>
<script>
var hash; // global in case it's needed elsewhere
var myRootRef;
$(function() {
    hash = window.location.hash.replace("#", "");
    myRootRef = new Firebase("http://xxxx.firebaseio.com");
    var postingRef;
    if (hash) {
        // user has been given a hashed URL from their friend, so set the Firebase root to that place
        console.log(hash);
        postingRef = new Firebase("http://xxxx.firebaseio.com/chatrooms/" + hash);
    } else {
        // there is no hash, so the user is looking to share a URL for a new room
        console.log("no hash");
        postingRef = new Firebase("http://xxxx.firebaseio.com/chatrooms/");
        // push for a unique ID for the chatroom
        postingRef = postingRef.push();
        // exploit this unique ID to provide a unique URL hash for your app
        window.location.hash = postingRef.toString().slice(34);
    }
    // listener
    // will pull all old messages up once bound
    postingRef.on("child_added", function(data) {
        console.log(data.val().user + ": " + data.val().message);
    });
    // later:
    postingRef.push({"message": "etc", "user": "Jimmybobjimbobjimbobjim"});
});
</script>
</head>
That's working for me locally. You need to change xxxx to whatever URL yours is at, and adjust the .slice() index to match however long that first part of the URL is.
Hashes.
If I understand your question correctly, you want to be able to share a URL that lets anyone who clicks it join the same chatroom.
I did this for a Firebase application I made once. The first thing you need is the .push() method. Push the room to Firebase, then use toString() to get the URL of the pushed ref. Some quick JS string manipulation - window.location.hash = pushRef.toString().slice(x) - where x is wherever you want to snip the URL - and window.location.hash will set the hash for you. Add the hash to the sharing URL, and then for the next step:
You will want a hash listener to check whether there is already a hash when you open the page, so $(window).bind('hashchange', function() { UpdateHash(); }) goes into a doc.ready function, and then...
function UpdateHash() {
    // assign the hash to a global hash variable
    global_windowHash = window.location.hash.replace("#", "");
    if (global_windowHash) {
        // there was already a hash
        chatRef = new Firebase("[Your Firebase URL here]" + "/chatrooms/" + global_windowHash);
        // chatRef is the global that you append the chat data to, and listen from.
    } else {
        // there wasn't a hash, so you can auto-create a new one here, in which case:
        chatRef = new Firebase("[Your Firebase URL here]" + "/chatrooms/");
        chatRef = chatRef.push();
        window.location.hash = chatRef.toString().slice(x);
    }
}
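For completeness, the doc.ready wiring mentioned above might look like this (a sketch, assuming jQuery is loaded):

$(function() {
    UpdateHash(); // handle a hash that is already present on page load
    $(window).bind('hashchange', function() { UpdateHash(); });
});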
I hope that helps (and works :P ). If there are any questions or problems, then just ask!

Can't get digest auth to work with Node.js

I'm trying to get a simple (!) digest authentication working with Node.js against an API from gathercontent.com.
Everything seems to be working except I still get a "Wrong credentials" response that looks like this:
{ success: false, error: 'Wrong Credentials!' }
The code looks like this:
var https = require('https'),
    qs = require('querystring'),
    apikey = "[my api key goes in here]",
    pwd = "[my password goes in here]",
    crypto = require('crypto');
module.exports.apiCall = function () {
    var options = {
        host: 'abcdefg.gathercontent.com',
        port: 443,
        path: '/api/0.1/get_pages_by_project/get_me',
        method: 'POST',
        headers: {
            "Accept": "application/json",
            "Content-Type": "application/x-www-form-urlencoded"
        }
    };
    var req = https.request(options, function (res) {
        res.on('data', function (d) {
            var creds = JSON.parse(d);
            var parsedDigest = parseDigest(res.headers['www-authenticate']);
            console.log(parsedDigest);
            var authopts = {
                host: 'furthercreative.gathercontent.com',
                port: 443,
                path: '/api/0.1/get_pages_by_project/get_me',
                method: 'POST',
                headers: {
                    "Accept": "application/json",
                    "Content-Type": "application/x-www-form-urlencoded",
                    "Authorization": getAuthHeader(parsedDigest, apikey, parsedDigest['Digest realm'], pwd)
                }
            };
            console.log(authopts);
            console.log('\n\n\n');
            var req2 = https.request(authopts, function (res2) {
                console.log("statusCode: ", res2.statusCode);
                console.log("headers: ", res2.headers);
                res2.on('data', function (d2) {
                    var result = JSON.parse(d2);
                });
            });
            req2.end();
        });
    });
    req.write('id=1234');
    req.end();
    req.on('error', function (e) {
        console.error(e);
    });
};
function parseDigest(s) {
    var parts = s.split(',');
    var obj = {};
    var nvp = '';
    for (var i = 0; i < parts.length; i++) {
        nvp = parts[i].split('=');
        obj[nvp[0]] = nvp[1].replace(/"/gi, '');
    }
    return obj;
}
function getAuthHeader(digest, apikey, realm, pwd) {
    var md5 = crypto.createHash('md5');
    var s = '';
    var nc = '00000001';
    var cn = '0a4f113b';
    var HA1in = apikey + ':' + realm + ':' + pwd;
    md5 = crypto.createHash('md5');
    md5.update(HA1in);
    var HA1out = md5.digest('hex');
    var HA2in = 'POST:/api/0.1/get_pages_by_project/get_me';
    md5 = crypto.createHash('md5');
    md5.update(HA2in);
    var HA2out = md5.digest('hex');
    md5 = crypto.createHash('md5');
    var respIn = HA1out + ':' + digest.nonce + ':' + nc + ':' + cn + ':' + digest.qop + ':' + HA2out;
    md5.update(respIn);
    var resp = md5.digest('hex');
    s = ['Digest username="', apikey, '", ',
        'realm="', digest['Digest realm'], '", ',
        'nonce="', digest.nonce, '", ',
        'uri="/api/0.1/get_pages_by_project/get_me", ',
        'cnonce="', cn, '", ',
        'nc="', nc, '", ',
        'qop="', digest.qop, '", ',
        'response="', resp, '", ',
        'opaque="', digest.opaque, '"'].join('');
    return s;
}
I'd try to curl it, but I'm not sure how!
Any help appreciated!
I see a couple of issues potentially related to your problem. It's hard to tell which ones are the actual culprits without knowing anything about GatherContent's implementation. If you pasted an example of their WWW-Authenticate header, it would be much easier to provide specific help.
So I'm speculating about the actual cause, but here are some real problems that you should address anyway, to conform to the spec (i.e. to protect your code from breaking in the future because the server starts doing things slightly differently):
In the Authorization headers you are creating, remove the double quotes around nc, and maybe also around qop.
I don't know what qop value GatherContent is using. If it's auth-int, you'd also have to append the hashed HTTP body to HA2; see section 3.2.2.3 of the spec. Furthermore, they might specify a comma-separated list of qop values for you to choose from, or the server might not send a qop value at all (i.e. the most basic form of HTTP digest auth), in which case your implementation would violate the spec, as you then aren't allowed to send a cnonce, nc, etc.
You try to get the realm via parsedDigest['Digest realm'], i.e. you assume that the realm is the first attribute after the initial Digest keyword. That might or might not be the case, but you should not rely on it; modify your parseDigest function to strip off the string "Digest " before splitting the rest (see the sketch after this list).
The way you use parsedDigest, you assume that Digest is always capitalized that way, and that realm, nonce, etc. are always lowercase. According to the spec, these are all case-insensitive.
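Here's a sketch of a parseDigest along those lines (illustrative only; I haven't tested it against GatherContent's responses):

// Strips the leading "Digest " keyword and lowercases attribute names,
// since both are case-insensitive per the spec; also survives '=' inside values.
function parseDigest(header) {
    var obj = {};
    var s = header.replace(/^digest\s+/i, '');
    var parts = s.split(',');
    for (var i = 0; i < parts.length; i++) {
        var idx = parts[i].indexOf('=');
        var name = parts[i].slice(0, idx).trim().toLowerCase();
        var value = parts[i].slice(idx + 1).trim().replace(/^"|"$/g, '');
        obj[name] = value;
    }
    return obj;
}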
A couple of unrelated issues:
Does the server really force you to use Digest authentication? This is HTTPS, so you might as well do Basic authentication; it's way easier and, over HTTPS, just as safe. (Answering myself here, after checking out GatherContent: Basic auth is apparently not possible.)
As mentioned in my comment on your question, the cnonce should be random for every request; in particular, you shouldn't copy and paste it from Wikipedia, which makes you more vulnerable (not an issue here though, as all data goes over SSL anyway in your case).
Regarding how to curl it - try this:
curl --data 'id=1234' --digest --user "apikey:pwd" https://abcdefg.gathercontent.com:443/api/0.1/get_pages_by_project/get_me
It's Peter from GatherContent.
The first, pretty obvious thing would be to use just get_me instead of get_pages_by_project/get_me; you are mixing two different things in the latter. get_me doesn't require any parameters sent via POST, so you can drop them.
Also, please make sure that your password is always a lowercase x.
Does it change anything?
Edit:
For anyone interested, here are our API docs:
http://gathercontent.helpjuice.com/questions/26611-How-do-I-use-the-API
The express-auth module supports multiple authentication schemes, including HTTP Digest. See: https://github.com/ciaranj/express-auth
Another excellent option is passport: https://github.com/jaredhanson/passport
The http-digest examples in the two modules tend to focus on establishing authentication for your Node.js application, rather than forwarding the authentication request to a third party. However, you should be able to make it work with a little noodling.
If pressed, I would use passport. The examples offered are a lot clearer and better documented.
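For example, a minimal server-side sketch with passport-http's DigestStrategy (the username check and password lookup are placeholders):

// Sketch: HTTP Digest with passport-http; digest needs the plaintext
// password (or a precomputed HA1) on the server side.
var passport = require('passport');
var DigestStrategy = require('passport-http').DigestStrategy;

passport.use(new DigestStrategy({ qop: 'auth' },
    function (username, done) {
        if (username !== 'apiuser') { return done(null, false); }
        return done(null, { name: username }, 'secret-password');
    }
));

// Later, on an Express route:
// app.post('/notify', passport.authenticate('digest', { session: false }), handler);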
Hope that helps...
I would recommend you use mikeal's request module; it makes this a lot easier and cleaner.
Sadly, request does not have support for HTTP auth yet, so you would just have to set the Authorization header yourself.
Try urllib; it will work with both simple and digest auth.
See this example:
https://stackoverflow.com/a/57221051/8490598
