I am trying to create a JMeter test case for article creation in Drupal 8. I can add steps for the other navigation, but when I click the Create Article button after entering values in the form fields, JMeter receives an HTTP 200 response and the article is not created.
If I perform the same steps in a browser, I get an HTTP 303 response and the article is created successfully.
I found this in the request headers of the POST request sent when clicking the Create Article button. I suspect it might be the reason the Drupal server is not accepting the request, because I am not sure how this dynamic ID "JJPKbuyIinQT5mQZ" is generated.
Is it generated by the browser? If so, how do I reproduce that in JMeter?
Is it generated by the server? If so, I don't see this token in the previous response, unlike form_token.
This dynamic ID is the so-called multipart boundary. JMeter generates it automatically for you as long as you tick the Use multipart/form-data for POST box on the HTTP Request sampler.
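To illustrate, a multipart POST body looks roughly like the following; the path, field name, and boundary value here are only illustrative, and the random token after boundary= is the dynamic ID you are seeing:

POST /node/add/article HTTP/1.1
Content-Type: multipart/form-data; boundary=JJPKbuyIinQT5mQZ

--JJPKbuyIinQT5mQZ
Content-Disposition: form-data; name="title[0][value]"

My test article
--JJPKbuyIinQT5mQZ--

The two sides only need the boundary to agree within a single request, which is why it can change on every request without the server caring.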
Other things to consider:
Don't forget to add an HTTP Cookie Manager, otherwise you will not even be able to log in.
Correlate form_build_id and form_token. You can do this with the CSS/JQuery Extractor (see the sketch after this list).
Correlate changed. You can generate a Unix timestamp like 1532969982 with the __groovy() function, for example: ${__groovy(Math.round(System.currentTimeMillis() / 1000),)}
Correlate created[0][value][date]. You can do this with the __time() function, for example: ${__time(yyyy-MM-dd,)}
Correlate created[0][value][time]. You can do this with the same __time() function, for example: ${__time(HH:mm:ss,)}
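As an illustration, the extractors for the two hidden fields could be configured roughly like this, assuming the standard Drupal 8 markup where both are rendered as hidden inputs (the reference names are arbitrary):

CSS/JQuery Extractor (form_build_id)
    Reference Name: form_build_id
    CSS/JQuery expression: input[name=form_build_id]
    Attribute: value
    Match No.: 1

CSS/JQuery Extractor (form_token)
    Reference Name: form_token
    CSS/JQuery expression: input[name=form_token]
    Attribute: value
    Match No.: 1

The extracted values can then be sent in the POST request as ${form_build_id} and ${form_token}.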
That's probably it; the other recorded values should be fine to use as they are.
I'm trying to fetch data from a website (https://gesetze.berlin.de/bsbe/search). Using Firefox, I've taken a look at the network analysis. Usually I just play with the parameters of the POST request to see how I might influence the server's response. But when I simply re-send the request (making no changes at all), I get an HTTP 500 response, and the server's answer contains the message security_notAuthenticated.
Can anyone explain this behaviour? The request is sent from the same PC and the same browser in the same session, and there is no login function on that website. Pictures are shown below.
Picture 1 - Code 200
Picture 2 - Code 500
The response security_notAuthenticated indicates that your way of repeating the request omits some authentication-related information.
When I repeat the request using Mozilla Firefox's "Resend" or "Edit and Resend" function, the Cookie header is not sent with the request. Although it appears in the editable header list when using "Edit and Resend", it is missing from the request that is actually sent. I'm not sure whether this is a feature or a bug.
When using Firefox's "Use as Fetch in Console" function instead, the Cookie header is included automatically, and you still have the ability to change the headers and the body. The Fetch API is a web standard, and some introductory material about fetch can be found on MDN.
If you want to do custom requests in the browser, fetch is a good option.
In other environments and languages you usually use some HTTP client (just search the web for "...your language... http request" or similar, and you will find something).
I'm trying to scrape some pages on a website that uses ASPX forms. The forms involve adding details of people by updating the server (one person at a time) and then proceeding to a results page that shows information regarding the specified people. There are 5 steps to the process:
Hit the login page (the site is HTTPS) by sending a POST request with my credentials. The response will contain cookies that will be used to validate all subsequent requests.
Hit the search criteria page by sending a GET request (no parameters). The only purpose of this is to discover the __VIEWSTATE and __EVENTVALIDATION tokens in the HTML response to be used in the next step.
Update the server with a person. This involves hitting the same webpage as in step 2, but using a POST request with form parameters that correspond to the form controls on the page for adding person details and their values. The form parameters include the __VIEWSTATE and __EVENTVALIDATION tokens obtained from the previous step. The server response will include a new __VIEWSTATE and __EVENTVALIDATION. This step can be repeated using the new __VIEWSTATE and __EVENTVALIDATION, or I can proceed to the next step.
Signal to the server that all people have been added. This involves hitting the same page as the previous 2 steps by sending a POST request with form parameters that correspond to the form controls on the page for signalling that all people have been added. The server response will simply be 25|pageRedirect||/path/to/results.aspx|.
Hit the search results page specified in the redirect response from the previous step by sending a GET request (no parameters - cookies are enough). The server response will be the HTML that I need to scrape.
If I follow the process manually with any browser, filling in the form controls and clicking the buttons etc. (testing with just one person), I get to the results page and the results are fine. If I do this programmatically from an application running on my machine, the search results HTML is ultimately wrong (the page returns valid HTML, but there are no results compared with the browser version, and some values are null where they should not be).
I've run this using a Java application with Apache HttpClient handling the requests. I've also tried it using a Ruby script with Mechanize handling the requests. I've set up a proxy using Charles to intercept and examine all 5 HTTPS requests. Using Charles, I've scrutinized the raw requests (headers and body) and compared the requests made by a browser with the requests made by the application(s). They are all identical (except for the VIEWSTATE / EVENTVALIDATION values and session cookie values, which I would expect to differ).
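For reference, steps 2 and 3 look roughly like the following in the Java version (a simplified sketch; the URL, the control names, and the regex-based hidden-field extraction are placeholders rather than my actual code):

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.BasicCookieStore;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;

public class AspxFlowSketch {

    // Pull the value of an ASP.NET hidden field (e.g. __VIEWSTATE) out of the page HTML.
    static String hiddenField(String html, String name) {
        Matcher m = Pattern.compile("id=\"" + name + "\" value=\"([^\"]*)\"").matcher(html);
        return m.find() ? m.group(1) : "";
    }

    public static void main(String[] args) throws Exception {
        BasicCookieStore cookies = new BasicCookieStore(); // filled in by the login POST (step 1)
        CloseableHttpClient client = HttpClients.custom().setDefaultCookieStore(cookies).build();
        String searchUrl = "https://example.com/path/to/search.aspx"; // placeholder URL

        // Step 2: GET the search page and capture the current state tokens.
        String page = EntityUtils.toString(client.execute(new HttpGet(searchUrl)).getEntity());
        String viewState = hiddenField(page, "__VIEWSTATE");
        String eventValidation = hiddenField(page, "__EVENTVALIDATION");

        // Step 3: POST one person's details back, echoing the tokens from step 2.
        List<NameValuePair> form = new ArrayList<>();
        form.add(new BasicNameValuePair("__VIEWSTATE", viewState));
        form.add(new BasicNameValuePair("__EVENTVALIDATION", eventValidation));
        form.add(new BasicNameValuePair("surnameField", "aSurname"));     // placeholder control names
        form.add(new BasicNameValuePair("firstnameField", "aFirstname"));
        HttpPost post = new HttpPost(searchUrl);
        post.setHeader("X-MicrosoftAjax", "Delta=true");
        post.setEntity(new UrlEncodedFormEntity(form, StandardCharsets.UTF_8));
        String response = EntityUtils.toString(client.execute(post).getEntity());
        // The response carries new __VIEWSTATE / __EVENTVALIDATION values for the next person (or step 4).
    }
}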
A few additional points about the programmatic attempts:
The login step returns successful data, and the cookies are valid (otherwise the subsequent requests would all fail)
Updating the server with a person (step 3) returns successful responses, in that they are the same as would be returned from interaction using a browser. I can only assume this must mean the server is updating successfully with the person added.
A custom header, X-MicrosoftAjax: Delta=true, is being added to the requests in step 3 (just as the browser requests do)
I don't own or have access to the server I'm scraping
Given that my application requests are identical to the browser requests that succeed, it baffles me that the server is treating them differently somehow. I can't help but feel that this is an ASP.NET forms issue that I'm overlooking. I'd appreciate any help.
Update:
I went over the raw requests again a bit more methodically, and it turns out I was missing something in the form parameters of the requests. Unfortunately, I don't think it will be of much use to anyone else, because it seems to be specific to this particular ASP server's logic.
The POST request that notifies the server that all people have been added (step 4) requires two form parameters specifying the county and address of the last person added to the search. I was including these form parameters in my request, but their values were empty strings. I assumed the browser request was just picking up these values because, when the user hits the Continue button on the form, those controls still hold the values of the last person added. I figured they wouldn't matter and forgot about them, but I was wrong.
It's a peculiar issue that I should have caught the first time. I can't complain though, I am scraping a site after all.
Review the Charles logs again. It is possible that the search results and other content are coming back via Ajax, and that your Java/Ruby apps are not actually performing all of the requests/responses that happen with the browser. Look for any POST or GET requests in between the requests you are already duplicating. Also bear in mind that if the search results are populated via JavaScript, your client apps will not handle that, since neither HttpClient nor Mechanize executes scripts.
I'm trying to manipulate an ASP.NET form on a site that uses the AJAX Control Toolkit. The site is only accessible to valid logins, and I do have a valid account. It consists of a search page with a form. Each time a submit button is clicked on the form, the server is updated using the values of some text fields on the form, and then the VIEWSTATE and EVENTVALIDATION tokens are updated from the server's response, ready for the next request.
I'm using HttpClient in Java to do this. I suspect there's something I'm not doing correctly with regard to interacting with ASPX forms in general.
When I hit the main search page for the first time (cookies validate my login with the server), I get the HTML for the search page back and extract the VIEWSTATE and EVENTVALIDATION tokens for the next request. I've examined the exact form fields and values that need to be sent to the server in the POST by looking at the Chrome developer tools after making a request on the site manually. I've replicated them exactly as they should be, inserting the VIEWSTATE and EVENTVALIDATION appropriately.
But the response I get back from the server is not what it should be. What I get back is just the same HTML for the main search page that I get the first time I hit the webpage. The form data I'm using looks like this:
ctl00$ScriptManager1:ctl00$ContentPlaceHolder1$UpdatePanel1|ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$acceptButton
ctl00_ContentPlaceHolder1_TabContainer1_ClientState:{"ActiveTabIndex":0,"TabState":[true,true]}
__EVENTTARGET:
__EVENTARGUMENT:
__LASTFOCUS:
__VIEWSTATE:<token extracted from first page hit>
__VIEWSTATEENCRYPTED:
__EVENTVALIDATION:<token extracted from first page hit>
ctl00$ContentPlaceHolder1$LabelFee:0
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$RadioButtonList1:Person
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$snameText:aSurname
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$HiddenField1:
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$fnameText:aFirstname
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$dayFromTextBox:01
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$monthFromTextBox:January
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$yearFromTextBox:2001
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$dayToTextBox:01
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$monthToTextBox:January
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$yearToTextBox:2008
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$DropDownList1:aCity
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$PropText:
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel2$RefText:
__ASYNCPOST:true
ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$acceptButton:Accept
I've also tried replicating the headers that the Chrome developer tools show, so my request includes the same Content-Type, Host, Origin, Referer, User-Agent (for my browser) and every other header, including X-MicrosoftAjax: Delta=true.
I know there are a lot of moving parts here, but I've intentionally not described how I'm actually making the POST request with the HttpClient library, because I don't want to complicate the question any further or alienate anyone who doesn't know Java but knows ASP.NET. I'd like to know if there's an ASP.NET issue I'm not addressing, but I can post the Java code if necessary.
Edit:
I've checked the debugging info that HttpClient is outputting just before sending the request, and the form data is being added properly as multi-part form data. The headers are all there too.
This answer is a long shot, but I've seen weirder things.
You mention this header:
X-MicrosoftAjax: Delta=true
I did some deep googling and found that this is often shown as all lower case in dumps of Ajax and UpdatePanel POST requests:
x-microsoftajax: Delta=true
See here and here.
Could it be as simple as not casing the header correctly?
I eventually got this working. The problem was not with ASP.NET at all; it was a problem with how Java (specifically HttpClient) was sending the request. I was using HttpClient to build the request as multipart form data, but after using Fiddler to analyse and compare the requests sent by my application and by the actual web page (see the edited part of this question for more details), I found my app's request was structured very differently.
The real website request had the form options embedded in the request body as what looked like a URL-encoded query string. My request was a series of entries in the request body where each option was wrapped in its own Content-Type and Content-Disposition headers. The requests succeeded after changing the POST to add the parameters like this:
request.setEntity(new UrlEncodedFormEntity(paramList));
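For completeness, here is roughly how paramList is built and attached in that version (a sketch; the token values and URL are illustrative placeholders):

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.message.BasicNameValuePair;

public class UrlEncodedPostSketch {
    public static void main(String[] args) {
        String viewState = "token-from-previous-response";        // placeholder
        String eventValidation = "token-from-previous-response";  // placeholder

        // Each form control becomes a simple name/value pair...
        List<NameValuePair> paramList = new ArrayList<>();
        paramList.add(new BasicNameValuePair("__VIEWSTATE", viewState));
        paramList.add(new BasicNameValuePair("__EVENTVALIDATION", eventValidation));
        paramList.add(new BasicNameValuePair(
                "ctl00$ContentPlaceHolder1$TabContainer1$TabPanel1$snameText", "aSurname"));

        // ...and the entity serialises them as key1=value1&key2=value2 with
        // Content-Type: application/x-www-form-urlencoded, instead of separate
        // multipart parts each wrapped in Content-Disposition headers.
        HttpPost request = new HttpPost("https://example.com/search.aspx");  // placeholder URL
        request.setEntity(new UrlEncodedFormEntity(paramList, StandardCharsets.UTF_8));
    }
}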
So I have a webpage ("http://data.terapeak.com/verify/") and I don't see any & parameters in the URL, so I'm not sure how to post data to it. I need to do this via an HTTP request rather than a browser control. I am creating a double-threaded batch searching program. I have already built this successfully using a single browser control, but that won't allow for multi-threading, at least with my current knowledge, because even when creating a new instance of the existing frmBrw it requires me to set the thread apartment to single. If I set it to single, I can't have it send the data to the Excel sheet that both threads need to access. I hope this is clear... The basic question is: how can I log into this form via an HTTP request?
This isn't going to be easy to answer without further details; however, I suspect you'll need to provide the variables via an HTTP POST request.
Can you successfully log in to this page in your browser? If so, run a proxy tool such as Fiddler and check the HTTP requests your browser makes to the server. You should see the form variables being passed over. You then need to mimic this in code.
How to: Send Data Using the WebRequest Class
Hope this gets you started
While looking at the code in "petclinic", part of the Spring 3.0 samples, I noticed the following lines:
<c:choose>
<c:when test="${owner.new}"><c:set var="method" value="post"/></c:when>
<c:otherwise><c:set var="method" value="put"/></c:otherwise>
</c:choose>
In this discussion on SO, it seems that PUT should be used for "create/update" and POST for "updates".
Which is right?
What is the impact of using post for "create" and put for "update"?
Note: According to the HTTP/1.1 spec quoted in the referenced SO discussion, the code given above seems to have the correct behavior.
Both POST and PUT have well-defined behavior as per the HTTP spec.
The result of a POST request should be a new resource that is subordinate to the request URL; the response should contain a Location header with the URL of the newly created resource.
The result of a PUT should be an update of the resource at the request URL. If there is no existing resource at the request URL, a new one can be created.
The confusion arises from the fact that POST is also used with forms as a mechanism to pass the form data. The most common implementation of forms is to post back to the same URL at which the form page is located, giving the false impression that the POST operation is used for an update. However, in this particular usage the form page is not the resource.
With all this in mind, here's the correct (in my opinion of course :-)) usage:
POST should be used to create new resources when:
- the new resource is subordinate to an existing resource
- the resource identity/URL is not known at creation time
PUT should be used to update an existing resource at a well-known URL. It can be used to create a resource at a well-known URL as well; however, it helps to think about that scenario differently: if the resource URL is known before the PUT request is made, this can be treated the same as the resource at that location already existing but being empty.
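To make the distinction concrete, a minimal pair of exchanges might look like this (the URLs, payloads, and status codes are only illustrative). Creating a new owner, where the server picks the URL:

POST /owners HTTP/1.1
Content-Type: application/json

{"name": "Jane Doe"}

HTTP/1.1 201 Created
Location: /owners/42

Updating that owner at its now well-known URL:

PUT /owners/42 HTTP/1.1
Content-Type: application/json

{"name": "Jane D. Doe"}

HTTP/1.1 200 OK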
It's quite simple:
POST allows anything to happen, and it isn't restricted to creating "subordinate" resources, but allows the client to "provide a block of data ... to a data-handling process" (RFC 2616 sec 9.5). POST means "Here's that data you asked for just now"
PUT is used as an opposite of GET. The usual flow is that you GET a resource, modify it somehow, and then you PUT it back at the same URI that you got it from. PUT means "Please store this file at this URI".
The uniformity of PUT (which is to store a file) allows intermediaries (e.g. caches) to invalidate any cached responses they might have at that exact URI (since they know that it's about to change). The uniformity of PUT also allows clients (that understand this) to modify a resource by first retrieving it (GET) and then send a modified copy back (PUT). It also allows clients to retry on a network failure, due to PUT's idempotency.
Side note: Using PUT to create resources is dubious. While it's possible within the spec, I don't see it as a good idea, just as using POST to perform searches isn't a good idea, just as tunneling SOAP over HTTP isn't a good idea. AtomPub explicitly states that PUT isn't used to create atom entries.
POST's ubiquity comes from the fact that HTML defines <form> elements that result in POSTing an application/x-www-form-urlencoded entity, with which the recipient can do anything it pleases, including:
creating subordinate resources (usually signalled by a 201 response and a Location header)
creating a completely different resource (again, usually a 201 response and a Location header)
creating many subordinate and/or unrelated resources (perhaps with a simple response indicating the URIs of the created resources)
doing nothing except returning a response (e.g. 200 or 302) (a case where perhaps GET should have been used)
modifying the resource that received the POST itself (returning or redirecting back to the updated resource).
deleting one or more resources.
any combination of the above.
The only ones who know what will happen in a POST request are the user who initiated it (by clicking the huge "yes, I confirm deleting my Facebook profile" button) and the server that's handling it. To the rest of the world the request is opaque and doesn't mean anything other than "this URI is being passed some data".
So the answer to your question is that both POST and PUT can be used for both create and update.
POST is often used to create resources (as in AtomPub section 9.2)
PUT semantics fit well for modifying resources (as in AtomPub section 9.3)
POST may be used to modify resources (like a web form that edits your profile)
PUT can technically be used to create resources (although I advise against it)