Wireshark showing POST parameters in different locations in the Follow TCP Stream window - HTTP

I'm running some tests on an app and I've stumbled upon an odd thing that happens when I sniff the traffic between me and the app server using Wireshark:
When I make the POST request through the app's HTML, it looks like this:
But when I request the same thing using the Chrome extension Postman, it looks like this:
Why are the parameters now shown at the top of the request? What changed here?
I'm trying to figure out why it works in the first case but refuses to work in the second; that's why I need to investigate every little thing.
Edit:
I wrote a short HTML page to illustrate this, and the second behavior happens here as well:
...
<form action="http://x.x.x.x/page.cgi?id=1726931735&host_name=blah" method="post">
....

Postman sends an empty body by default; you need to enable the rows of key-value fields for them to be added to the request. Postman does not read the form on the page, so its parameters must be entered into the extension's fields. Otherwise Postman sends an empty request, just as you see.
You need to enter the fields in the Body tab in Postman:
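For comparison, here is a minimal sketch of the two request shapes using Python's requests library; the URL and field names are placeholders based on the example form:

import requests

url = "http://x.x.x.x/page.cgi"

# Form-style POST: the parameters travel in the request body,
# so Wireshark shows them at the bottom of the stream.
requests.post(url, data={"id": "1726931735", "host_name": "blah"})

# Query-string POST: the parameters travel in the URL on the request line,
# so Wireshark shows them at the top and the body is empty.
requests.post(url, params={"id": "1726931735", "host_name": "blah"})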

It's very odd. The first one is correct. The second uses GET semantics but performs a POST with no content. I'd try something else instead of this Chrome extension.

Related

Web scraping from https://ngodarpan.gov.in/index.php/home/statewise_ngo/61/35

Is it possible for me to scrape the data from the pop-up that appears after clicking the link? The website is https://ngodarpan.gov.in/index.php/home/statewise_ngo/61/35
Of course it's possible, it's just a table with pagination.
But you'd better check the legal side before scraping a website, especially a governmental one.
Yes, you have to follow exactly what the browser does. Watch the network traffic in your browser's developer tools.
First, you have to send a request to https://ngodarpan.gov.in/index.php/ajaxcontroller/get_csrf in order to get a token like this: {"csrf_token":"0d1c59184c7df788dc4b8759f6da40c6"}
Then send another POST request to https://ngodarpan.gov.in/index.php/ajaxcontroller/show_ngo_info. As parameters you have to pass csrf_test_name, which equals the csrf_token, and id, which is found in the onclick attribute of each link.
You will get JSON as the response; just parse it as you need.
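A minimal sketch of that two-step flow with Python's requests library; the id value is hypothetical and should come from a link's onclick attribute:

import requests

BASE = "https://ngodarpan.gov.in/index.php"
session = requests.Session()

# Step 1: fetch the CSRF token, e.g. {"csrf_token": "0d1c59..."}
token = session.get(BASE + "/ajaxcontroller/get_csrf").json()["csrf_token"]

# Step 2: POST the token plus the NGO id scraped from the onclick attribute
resp = session.post(
    BASE + "/ajaxcontroller/show_ngo_info",
    data={"csrf_test_name": token, "id": "12345"},  # hypothetical id
)
print(resp.json())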

Web scraping a tricky ASP.NET page

The overall goal is to perform a search on the following webpage http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx with a container value of CMAU1173561. I have tried two approaches: PHP's cURL extension and Python's mechanize. The PHP approach involves performing a POST submit using the input fields found on the page (NOTE: these are really ugly on the ASP.NET page). The returned page does not contain any of the search results. The second approach involves using Python's mechanize module. In this approach I load the page, select the form, then change the text field ctl00$ContentPlaceBody$TextSearch to the container value. When I load the response, again no search results.
I am at a real dead end. Any help would be appreciated, because as it stands my next step is to become an ASP.NET expert, which I would prefer not to.
The source of that page is pretty scary (giant ViewState, tables all over the place, inline CSS, styles that look like they were copied from Word).
Regardless, an ASP.NET form still passes the same raw data to the server as any other form (though this is abstracted away from the developer).
It's very possible that you are missing the cookies which go along with the request. If the search page (or any part of the site) uses session state, the ASP.NET session cookie must be included in the request. You will be able to tell it by its name (it contains "asp.net" and "session").
I assume that you have used a tool like Firebug or Chrome to view the complete outgoing request when the page is submitted. From my quick test, it looks like the request may be performed with a GET, not a POST. I submitted a form, looked at the request, and pasted the URL into a new browser window.
Example: http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx?ContNum=CMAU1173561&T=57201202648
This may be all you need to do.
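A minimal sketch of replaying that search as a plain GET with Python's requests library; requests.Session() keeps any ASP.NET session cookies set on the first visit, and the T value is assumed to be a token copied from a real submission:

import requests

session = requests.Session()

# Initial visit so the server can set its session cookie(s)
session.get("http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx")

# Replay the search as a GET with the container number in the query string
resp = session.get(
    "http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx",
    params={"ContNum": "CMAU1173561", "T": "57201202648"},
)
print(resp.status_code)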

Using Selenium: how to modify or inject into HTTP POST data?

Please note this question is related to Selenium.
Before an HTML form submit, i.e. selenium.click("//button[@type='submit']");
I want to inject a name-value pair at the native level into the HTTP POST sent back to the server, e.g.
Change the HTTP POST from:
POSTDATA=register=true&accountType=customer
To:
POSTDATA=register=true&accountType=customer&mynewfield=true
Working with Selenium commands, it's not obvious how to intercept and modify what is posted back to the server.
Any ideas how to achieve the desired result in Selenium, or with something that can be called from Selenium? Kindly appreciated, NJ
In theory you could use JavaScript or jQuery to alter the page. For example, using jQuery you could add a hidden form element with a default or pre-set value that will then be passed upon form submission. (If I understand your question right, you're emulating Tamper Data?)
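A minimal sketch of that idea using Selenium's Python bindings: inject a hidden input into the form before clicking submit, so the extra name-value pair rides along in the POST body. The page URL and the assumption of a single form on the page are hypothetical:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://example.com/register")  # hypothetical page

# Add <input type="hidden" name="mynewfield" value="true"> to the form
driver.execute_script("""
    var form = document.querySelector('form');   // assumes one form on the page
    var input = document.createElement('input');
    input.type = 'hidden';
    input.name = 'mynewfield';
    input.value = 'true';
    form.appendChild(input);
""")

# Submit as usual; the POST body now includes mynewfield=true
driver.find_element(By.XPATH, "//button[@type='submit']").click()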

Erratic requests arising from the client when using a custom view engine in ASP.NET MVC

I have spent about 7 hours trying to figure this out and have gotten nowhere.
This is what my Fiddler trace looks like:
I have two routes, shown below, that are registered for this page.
<route name="DummyResultsWithMarketStateNames" url="DummyResults/state-{statename}/market-{marketname}/page-{page}/{action}"
       controller="DummyResults" action="Show"/>
<route name="DummyResultsWithMarketId" url="DummyResults/market-{marketid}/page-{page}/{action}"
       controller="DummyResults" action="Show"/>
For this URL, the first route matches and it goes to the right action. However, the client sends another request a second later in which it removes the last parameter 'page-1' and replaces it with 'none'. I've traced for XHRs and there are none. I'm not sure if this is an issue with the MVC framework itself, but how would that translate into a request from the client? Also, I'm getting different behavior with different browsers (IE trace above). Has anyone encountered such strange behavior? I'd be happy to provide more info if you'd like.
UPDATE:
I set up the site on IIS and eliminated all image, CSS, and script requests. I still end up with multiple requests. The original DummyResults page seems to be working now after I removed the .htc files. However, I have another page (screenshot below) that is not cooperating. Should I add IgnoreRoutes for certain extensions? This is driving me nuts! Pardon the 'bleep' on the image (IP reasons). PS: I set up another site for serving all static resources.
Q: Should I add IgnoreRoutes for certain extensions?
A: Of course! By default the WCF extension "*.svc" is ignored. The first thing I add on a new site, for instance, is the ignore rule for favicon.ico.
RouteTable.Routes.IgnoreRoute("*.svc");
RouteTable.Routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
RouteTable.Routes.IgnoreRoute("favicon.ico");

Screen Scraping - how to get AJAX-based filtered data

I am working on screen scraping. It's easy when the filtering is in the query string, but the problem is with AJAX-based filtering.
E.g., here is a sample URL:
When you open this page, enter a hotel name, and click Go, the AJAX filter runs and shows the results accordingly; if you click Next Page, it shows the next records, also via AJAX.
Please suggest how to handle these kinds of issues when doing screen scraping.
Thanks a lot.
You may want to try two Firefox add-ons: Firebug and Tamper Data.
The "Console" window of Firebug shows the AJAX request and response.
You can then write scripts using the PHP/cURL library to mimic the request.
Do an HTTP request as you normally would for any link or form submit, but use the URL that the AJAX call uses. Sometimes you may need to read the JavaScript source to determine how the URL is built.
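A minimal sketch of replaying an AJAX filter request directly with Python's requests library; the endpoint and parameter names are hypothetical and should be copied from the request shown in Firebug:

import requests

resp = requests.post(
    "http://example.com/hotels/filter",        # hypothetical AJAX endpoint
    data={"hotel_name": "Hilton", "page": 2},  # hypothetical filter params
    headers={"X-Requested-With": "XMLHttpRequest"},  # many AJAX endpoints check this
)
print(resp.text)  # often JSON or an HTML fragment to parse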
