Trouble entering POST parameters into url for ".do" page - http

I'm doing some heavy web scraping using Python. In some cases, POST data is sent not through a form submit but through some JavaScript, which I cannot interact with via this approach. To circumvent this, I've been appending the names and values for the POST requests to the URL and then visiting that URL.
This method was working fine until I came across a site that used this kind of structure: [sitename].com/?[pagename].do/. I admit total ignorance about this .do extension, though some light searching tells me that it has to do with Struts and a Java-based backend. In this case it seems to be a way of dynamically generating a table; I'm trying to filter the results of that table. What I want to enter is something like [sitename].com/?[pagename].do?[name]=[value]&[name]=[value], but this doesn't work, nor does it even seem like it should work. I attempted it using several variations in syntax. It seems like something I don't quite understand is going on here.
I wish I could direct you to the actual site, but unfortunately I cannot due to the sensitive nature of the project. Let me know, though, if there's any additional information that would be helpful in providing an answer. Thanks in advance.
Edit: This is not really a "my code isn't working" question, as it's the underlying functionality that I would like to emulate in my code which is troubling me, but I'll do my best to get grittier. I'm contractually bound not to share the names of the sites that we're studying, but I will try to model the problem. I am hoping that someone with some familiarity with the back-end activity that sends this .do page to the browser will be able to shed some light.
import urllib
import urllib2
#
## case 1: a site that i have success in scraping
url = 'http://[sitename]/[pagename]'
values = {'s' : '40', 'pg' : '1'}
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
the_page = response.read()
print the_page #i get the filtered data that i am looking for
#
## case 2: the site that poses a problem for the encoding of post parameters
url = 'http://[sitename]/?[pagename].do/' # this site uses a .do file to generate
# the content i want to filter. note that the page name is preceded by ?.
values = {'s' : '40', 'pg' : '1'}
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
the_page = response.read()
print the_page # i am taken back to the root of the site,
# the same result i would get if i entered nonsense
# post parameters that did not correspond with actual control names.
Here, also, is an example of some JavaScript on the page that accomplishes what I'd like to do with my scraper:
function page_next(id) {
    $("#loading").fadeIn("normal");
    $.post("/?dumps.do/", {s: id, pg: 2},
        function(data) {
            var content = $(data).find('#dumps');
        }
    );
}

I don't know what site you are parsing, but this: [sitename].com/?[pagename].do/ is not something I would call default Struts behaviour, assuming it's indeed a Struts application.
Having a .do extension was indeed something Struts used to use as a request mapping, but in that case the URL should be [sitename].com/[pagename].do, not [sitename].com/?[pagename].do/.
In the second form, the action is in fact a parameter in a query string. This is why this syntax is broken: [sitename].com/?[pagename].do?[name]=[value]&[name]=[value]. You want to send a query string to the action but the action itself is a parameter in the query string.
But that's not the issue. The issue is that the site is doing something with that parameter and expects to receive its data in a certain way, a way you were not able to reverse engineer.
Assuming again that this is a Struts application, Struts uses a front controller to intercept all action.do URLs and then uses the action to invoke a particular class in the application, a class that is mapped to that particular action. The format for this should be [sitename].com/[pagename].do. That would be similar to having, say, [sitename].com/[pagename].php.
But having the action as a parameter makes me think that the site has a different front controller (not that of Struts) that is taking the parameter from the query string and passing it downstream to the Struts framework.
There could be a lot of reasons for having this funky way of handling requests, including making it harder for others to scrape the site, although this one seems fairly straightforward:
$.post("/?dumps.do/", {s: id, pg: 2}, ...
Have you tried doing a POST to the root of the application with the action in the query string?
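For what it's worth, here is a minimal sketch of that idea using the question's own urllib2 setup, essentially replaying the page's $.post call. The X-Requested-With header is my assumption, not something taken from the site: jQuery adds it to $.post requests, and some front controllers only dispatch a request as an AJAX call when it is present.
import urllib
import urllib2
# Replay the site's own $.post("/?dumps.do/", {s: id, pg: 2}) call:
# POST the form fields to the root, leaving the .do action in the query string.
url = 'http://[sitename]/?dumps.do/'
data = urllib.urlencode({'s': '40', 'pg': '2'})
req = urllib2.Request(url, data)
req.add_header('X-Requested-With', 'XMLHttpRequest')  # assumption: mimic jQuery's AJAX marker
response = urllib2.urlopen(req)
print response.read()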

Related

Python 3.5 requests for crawling

I have a coding problem regarding Python 3.5 web crawling.
I'm trying to use requests.get to extract the real link from 'http://www.baidu.com/link?url=ePp1pCIHlDpkuhgOrvIrT3XeWQ5IRp3k0P8knV3tH0QNyeA042ZtaW6DHomhrl_aUXOaQvMBu8UmDjySGFD2qCsHHtf1pBbAq-e2jpWuUd3'. An example of the code is below:
import requests
response = requests.get('http://www.baidu.com/link?url=ePp1pCIHlDpkuhgOrvIrT3XeWQ5IRp3k0P8knV3tH0QNyeA042ZtaW6DHomhrl_aUXOaQvMBu8UmDjySGFD2qCsHHtf1pBbAq-e2jpWuUd3')
c = response.url
I expected that 'c' would be 'caifu.cnstock.com/fortune/sft_jj/tjj_yndt/201605/3787477.htm'. (I removed http:// from the link as I can't post two links in one question.)
However, it doesn't work; it keeps returning the same link I put in.
Can anyone help with this? Many thanks in advance.
#
Thanks a lot to Charlie.
I have found the solution. I first use .content.decode to read the response body, but that is mixed up with a lot of irrelevant info. I then use .findall to extract the redirect URL, which is the first URL that appears in the response. Then I use requests.get to retrieve the info. Below is the code:
import re
import requests
url = 'http://www.baidu.com/link?url=ePp1pCIHlDpkuhgOrvIrT3XeWQ5IRp3k0P8knV3tH0QNyeA042ZtaW6DHomhrl_aUXOaQvMBu8UmDjySGFD2qCsHHtf1pBbAq-e2jpWuUd3'
rep1 = requests.get(url)
cont = rep1.content.decode('utf-8')
extract_cont = re.findall('"([^"]*)"', cont)  # pull every quoted string out of the page
redir_url = extract_cont[0]                   # the redirect target happens to be the first one
rep = requests.get(redir_url)
You may consider looking into the response headers for a 'location' header.
response.headers['location']
You may also consider looking at the response history, which contains a response object for each redirect in the chain:
response.history
Your sample URL doesn't redirect, though; the response is a 200, and the page then uses a JavaScript window.location change. The requests library won't follow this type of redirect.
<script>window.location.replace("http://caifu.cnstock.com/fortune/sft_jj/tjj_yndt/201605/3787477.htm")</script>
<noscript><META http-equiv="refresh" content="0;URL='http://caifu.cnstock.com/fortune/sft_jj/tjj_yndt/201605/3787477.htm'"></noscript>
If you know you will always be using this one service, you could parse the response, maybe using regex.
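For this particular pattern, a minimal sketch (my own illustration, not code from the original answer) could pull the target out of the window.location.replace(...) call and then follow it manually:
import re
import requests

response = requests.get('http://www.baidu.com/link?url=ePp1pCIHlDpkuhgOrvIrT3XeWQ5IRp3k0P8knV3tH0QNyeA042ZtaW6DHomhrl_aUXOaQvMBu8UmDjySGFD2qCsHHtf1pBbAq-e2jpWuUd3')
match = re.search(r'window\.location\.replace\("([^"]+)"\)', response.text)
if match:
    final = requests.get(match.group(1))  # fetch the real destination
    print(final.url)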
If you don't know what service will always be used and also want to handle every possible situation, you might need to instantiate a WebKit instance or something and somehow try to determine when it finally finishes. I'm sure there's a page load complete event which you could use, but you still might have pages that do a window.location change after the page is loaded using a timer. This will be very heavyweight and still not cover every conceivable type of redirect.
I recommend starting with writing a special handler for each type of edge case and fallback on a default handler that just looks at the response.url. As new edge cases come up, write new handlers. It's kind of the 'trial and error' approach.
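A rough sketch of that handler-chain structure (the names and the single JavaScript handler are mine, purely for illustration):
import re
import requests

def js_redirect(response):
    # Edge case: pages that "redirect" via window.location.replace(...)
    m = re.search(r'window\.location\.replace\("([^"]+)"\)', response.text)
    return m.group(1) if m else None

def resolve(url, handlers=(js_redirect,)):
    response = requests.get(url)
    for handler in handlers:   # try each edge-case handler in turn
        target = handler(response)
        if target:
            return target
    return response.url        # default handler: whatever URL requests ended up on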

Send query string parameter from a non-web application

OK, I've been struggling with this for too long.
I need to call my website from a rough VB.net app. The only thing I need is to attach a query string parameter to the calling URL, so I can distinguish the pages to be shown to different VB app users.
So I want to click a button and launch that website, giving this parameter.
First I was struggling with adding the System.Web libraries. Now I can't use Request/Response.QueryString either.
I tried getting some example help from this post, but as I said before, I cannot make use of Request.QueryString as I cannot import it.
I am stuck here:
Process.Start("http://localhost:56093/WebSite1?id=")
I need to attach a query string parameter to the url and then open the website with that url.
Can someone just give me a sample code for my problem.
Query parameters are parsed by the web server/http handler off the URL you use to call the page. They consist of key and value pairs that come at the end of the URL. Your code is nearly there. Say you needed to pass through the parameters:
ID = 1234
Page = 2
Display = Portrait
Then you'd turn them into a URL like this:
http://localhost:56093/WebSite1?ID=1234&Page=2&Display=Portrait
Therefore in your code you'd have:
Process.Start("http://localhost:56093/WebSite1?ID=1234&Page=2&Display=Portrait");

Is it valid to combine a form POST with a query string?

I know that in most MVC frameworks, for example, both query string params and form params will be made available to the processing code, and usually merged into one set of params (often with POST taking precedence). However, is it a valid thing to do according to the HTTP specification? Say you were to POST to:
http://1.2.3.4/MyApplication/Books?bookCode=1234
... and submit some update like a change to the book name whose book code is 1234, you'd be wanting the processing code to take both the bookCode query string param into account, and the POSTed form params with the updated book information. Is this valid, and is it a good idea?
Is it valid according to the HTTP specifications?
Yes.
Here is the general syntax of an HTTP URL as defined in those specs:
http_URL = "http:" "//" host [ ":" port ] [ abs_path [ "?" query ]]
There are no additional constraints on the form of the http_URL. In particular, the HTTP method used (POST, GET, PUT, HEAD, ...) does not add any restriction on the URL format.
When using the GET method: the server can consider that the request body is empty.
When using the POST method: the server must handle the request body.
Is it a good idea?
It depends on what you need to do. I suggest this link explaining the ideas behind GET and POST.
I can imagine situations where it is handy to always have some parameters, such as the user language, in the query part of the URL.
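To make this concrete, here is a minimal client-side sketch (Python's requests library, my own illustration; the form field name is made up): the query string carries the book code while the form body carries the update.
import requests

# bookCode travels in the query string, the update travels in the form-encoded body.
response = requests.post(
    'http://1.2.3.4/MyApplication/Books',
    params={'bookCode': '1234'},           # -> .../Books?bookCode=1234
    data={'name': 'A Better Book Title'},  # hypothetical form field
)
print(response.status_code)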
I know that in most MVC frameworks, for example, both query string params and form params will be made available to the processing code, and usually merged into one set of params (often with POST taking precedence).
Any competent framework should support this.
Is this valid
Yes. The POST method in HTTP does not impose any restrictions on the URI used.
is it a good idea?
Obviously not, if the framework you are going to use is still clue-challenged. Otherwise, it depends on what you want to accomplish. The major use case (redirection of a data subset to a new POST target) has been irretrievably broken by browser implementations (all mechanically following the broken lead of Mosaic/Netscape), so the considerations here are mostly theoretical.

Nesting HTTP GET parameters (request within a request)

I want to call a JSP with GET parameters within the GET parameter of a parent JSP. The URL for this would be http://server/getMap.jsp?lat=30&lon=-90&name=http://server/getName.jsp?lat1=30&lon1=-90
getName.jsp will return a string that goes in the name parameter of getMap.jsp.
I think the problem here is that &lon1=-90 at the end of the URL will be given to getMap.jsp instead of getName.jsp. Is there a way to distinguish which GET parameter goes to which URL?
One idea I had was to encode the second URL (e.g. = -> %3D and & -> %26) but that didn't work out well. My best idea so far is to allow only one parameter in the second URL, comma-delimited. So I'll have http://server/getMap.jsp?lat=30&lon=-90&name=http://server/getName.jsp?params=30,-90 and leave it up to getName.jsp to parse its variables. This way I leave the & alone.
NOTE - I know I can approach this problem from a completely different angle and avoid nested URLs altogether, but I still wonder (for the sake of knowledge!) if this is possible or if anyone has done it...
This has been done a lot, especially with ad-serving technologies and URL redirects.
An encoded URL should work just fine, though. You need to completely encode it; any online URL encoder will generate this for you.
So this:
http://server/getMap.jsp?lat=30&lon=-90&name=http://server/getName.jsp?lat1=30&lon1=-90
becomes this: http://server/getMap.jsp?lat=30&lon=-90&name=http%3A%2F%2Fserver%2FgetName.jsp%3Flat1%3D30%26lon1%3D-90
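If you would rather not encode it by hand, the standard library can do it. Python is shown here just to illustrate the idea (in JSP/Java the equivalent is java.net.URLEncoder.encode):
import urllib

inner = 'http://server/getName.jsp?lat1=30&lon1=-90'
outer = 'http://server/getMap.jsp?lat=30&lon=-90&name=' + urllib.quote(inner, safe='')
print outer
# http://server/getMap.jsp?lat=30&lon=-90&name=http%3A%2F%2Fserver%2FgetName.jsp%3Flat1%3D30%26lon1%3D-90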
I am sure that JSP has a function for this; look for "urlencode". Your JSP will then see the contents of the GET variable "name" as the decoded string: "http://server/getName.jsp?lat1=30&lon1=-90"

How to submit a query to an .aspx page in Python

I need to scrape query results from an .aspx web page.
http://legistar.council.nyc.gov/Legislation.aspx
The URL is static, so how do I submit a query to this page and get the results? Assume we need to select "all years" and "all types" from the respective dropdown menus.
Somebody out there must know how to do this.
As an overview, you will need to perform four main tasks:
to submit request(s) to the web site,
to retrieve the response(s) from the site
to parse these responses
to have some logic to iterate in the tasks above, with parameters associated with the navigation (to "next" pages in the results list)
The HTTP request and response handling is done with methods and classes from Python's standard library modules urllib and urllib2. The parsing of the HTML pages can be done with the standard library's HTMLParser or with other modules such as Beautiful Soup.
The following snippet demonstrates the requesting and receiving of a search at the site indicated in the question. This site is ASP-driven, and as a result we need to ensure that we send several form fields, some of them with 'horrible' values, as these are used by the ASP logic to maintain state and to authenticate the request to some extent. The requests have to be sent with the HTTP POST method, as this is what is expected from this ASP application. The main difficulty is identifying the form fields and associated values which ASP expects (getting pages with Python is the easy part).
This code is functional, or more precisely, was functional, until I removed most of the VSTATE value, and possibly introduced a typo or two by adding comments.
import urllib
import urllib2
uri = 'http://legistar.council.nyc.gov/Legislation.aspx'
# The HTTP headers are useful to simulate a particular browser (some sites deny
# access to non-browsers, i.e. bots, etc.). We also need to pass the content type.
headers = {
'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.13) Gecko/2009073022 Firefox/3.0.13',
'Accept': 'text/html,application/xhtml+xml,application/xml; q=0.9,*/*; q=0.8',
'Content-Type': 'application/x-www-form-urlencoded'
}
# We group the form fields and their values in a list (any
# iterable, actually) of name-value tuples. This helps
# with clarity and also makes them easier to encode later.
formFields = (
# the viewstate is actually 800+ characters in length! I truncated it
# for this sample code. It can be lifted from the first page
# obtained from the site. It may be ok to hardcode this value, or
# it may have to be refreshed each time / each day, by essentially
# running an extra page request and parse, for this specific value.
(r'__VSTATE', r'7TzretNIlrZiKb7EOB3AQE ... ...2qd6g5xD8CGXm5EftXtNPt+H8B'),
# following are more of these ASP form fields
(r'__VIEWSTATE', r''),
(r'__EVENTVALIDATION', r'/wEWDwL+raDpAgKnpt8nAs3q+pQOAs3q/pQOAs3qgpUOAs3qhpUOAoPE36ANAve684YCAoOs79EIAoOs89EIAoOs99EIAoOs39EIAoOs49EIAoOs09EIAoSs99EI6IQ74SEV9n4XbtWm1rEbB6Ic3/M='),
(r'ctl00_RadScriptManager1_HiddenField', ''),
(r'ctl00_tabTop_ClientState', ''),
(r'ctl00_ContentPlaceHolder1_menuMain_ClientState', ''),
(r'ctl00_ContentPlaceHolder1_gridMain_ClientState', ''),
#but then we come to fields of interest: the search
#criteria the collections to search from etc.
# Check boxes
(r'ctl00$ContentPlaceHolder1$chkOptions$0', 'on'), # file number
(r'ctl00$ContentPlaceHolder1$chkOptions$1', 'on'), # Legislative text
(r'ctl00$ContentPlaceHolder1$chkOptions$2', 'on'), # attachment
# etc. (not all listed)
(r'ctl00$ContentPlaceHolder1$txtSearch', 'york'), # Search text
(r'ctl00$ContentPlaceHolder1$lstYears', 'All Years'), # Years to include
(r'ctl00$ContentPlaceHolder1$lstTypeBasic', 'All Types'), #types to include
(r'ctl00$ContentPlaceHolder1$btnSearch', 'Search Legislation') # Search button itself
)
# these have to be encoded
encodedFields = urllib.urlencode(formFields)
req = urllib2.Request(uri, encodedFields, headers)
f= urllib2.urlopen(req) #that's the actual call to the http site.
# *** here would normally be the in-memory parsing of f
# contents, but instead I store this to file
# this is useful during design, allowing to have a
# sample of what is to be parsed in a text editor, for analysis.
try:
    fout = open('tmp.htm', 'w')
    fout.writelines(f.readlines())
    fout.close()
except IOError:
    print('Could not open output file\n')
That's about it for getting the initial page. As said above, one would then need to parse the page, i.e. find the parts of interest and gather them as appropriate, and store them to file/database/wherever. This job can be done in many ways: using HTML parsers, XSLT-type technologies (indeed after parsing the HTML to XML), or even, for crude jobs, simple regular expressions. Also, one of the items one typically extracts is the "next" info, i.e. a link of sorts that can be used in a new request to the server to get subsequent pages.
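As a sketch of that parsing step, here is roughly what it could look like with Beautiful Soup (my illustration, not part of the original answer; the table id is only a guess based on the form-field names above, so inspect the saved page for the real one):
from bs4 import BeautifulSoup

soup = BeautifulSoup(open('tmp.htm').read(), 'html.parser')

# The id is a guess derived from the ASP field names above; adjust it after
# inspecting tmp.htm in a text editor or browser.
grid = soup.find('table', id='ctl00_ContentPlaceHolder1_gridMain')
if grid is not None:
    for row in grid.find_all('tr'):
        cells = [td.get_text(strip=True) for td in row.find_all('td')]
        if cells:
            print cells  # one result row's worth of text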
This should give you a rough flavor of what "long hand" HTML scraping is about. There are many other approaches to this, such as dedicated utilities, scripts in Mozilla's (Firefox) GreaseMonkey plug-in, XSLT...
Most ASP.NET sites (the one you referenced included) will actually post their queries back to themselves using the HTTP POST verb, not the GET verb. That is why the URL is not changing as you noted.
What you will need to do is look at the generated HTML and capture all their form values. Be sure to capture all of the form values, as some of them are used for page validation and without them your POST request will be denied.
Other than the validation, an ASPX page, as far as scraping and posting goes, is no different from other web technologies.
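A sketch of that "capture all the form values" step (my own illustration, using Beautiful Soup with the same hedges as above): fetch the page once with GET, copy every hidden input into your POST data, then overlay your own search criteria.
import urllib
import urllib2
from bs4 import BeautifulSoup

uri = 'http://legistar.council.nyc.gov/Legislation.aspx'

# 1. GET the page once and harvest every hidden field (__VIEWSTATE, __EVENTVALIDATION, ...).
soup = BeautifulSoup(urllib2.urlopen(uri).read(), 'html.parser')
fields = dict((inp.get('name'), inp.get('value', ''))
              for inp in soup.find_all('input', attrs={'type': 'hidden'})
              if inp.get('name'))

# 2. Overlay the search criteria; these names are taken from the form-field list above.
fields['ctl00$ContentPlaceHolder1$lstYears'] = 'All Years'
fields['ctl00$ContentPlaceHolder1$lstTypeBasic'] = 'All Types'
fields['ctl00$ContentPlaceHolder1$btnSearch'] = 'Search Legislation'

# 3. POST everything back to the same URL.
response = urllib2.urlopen(urllib2.Request(uri, urllib.urlencode(fields)))
print response.read()[:500]  # first 500 characters of the result page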
Selenium is a great tool to use for this kind of task. You can specify the form values that you want to enter and retrieve the html of the response page as a string in a couple of lines of python code.
Using Selenium you might not have to do the manual work of simulating a valid post request and all of its hidden variables, as I found out after much trial and error.
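For reference, a minimal Selenium sketch of that idea (my own illustration; the element ids are guesses derived from the form-field names in the first answer, and the older find_element_by_* API matches the era of this answer):
from selenium import webdriver
from selenium.webdriver.support.ui import Select

driver = webdriver.Firefox()
driver.get('http://legistar.council.nyc.gov/Legislation.aspx')

# Pick the dropdown values; the ids are assumptions, check the page source.
Select(driver.find_element_by_id('ctl00_ContentPlaceHolder1_lstYears')).select_by_visible_text('All Years')
Select(driver.find_element_by_id('ctl00_ContentPlaceHolder1_lstTypeBasic')).select_by_visible_text('All Types')
driver.find_element_by_id('ctl00_ContentPlaceHolder1_btnSearch').click()

html = driver.page_source  # the post-back result, ready to be parsed
driver.quit()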
The code in the other answers was useful; I never would have been able to write my crawler without it.
One problem I did come across was cookies. The site I was crawling was using cookies to log session id/security stuff, so I had to add code to get my crawler to work:
Add this import:
import cookielib
Init the cookie stuff:
COOKIEFILE = 'cookies.lwp' # the path and filename that you want to use to save your cookies in
cj = cookielib.LWPCookieJar() # This is a subclass of FileCookieJar that has useful load and save methods
Install CookieJar so that it is used as the default CookieProcessor in the default opener handler:
cj.load(COOKIEFILE)
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)
To see what cookies the site is using:
print 'These are the cookies we have received so far :'
for index, cookie in enumerate(cj):
    print index, ' : ', cookie
This saves the cookies:
cj.save(COOKIEFILE) # save the cookies
"Assume we need to select "all years" and "all types" from the respective dropdown menus."
What do these options do to the URL that is ultimately submitted?
After all, it amounts to an HTTP request sent via urllib2.
To find out how to select '"all years" and "all types" from the respective dropdown menus', do the following.
Select '"all years" and "all types" from the respective dropdown menus'
Note the URL which is actually submitted.
Use this URL in urllib2.
