We are implementing support for PunchOut/OCI, which follows this basic flow:
Our procurement system opens a new tab/window to an external webshop, and in our request we append a parameter called HOOK_URL. When the user finishes, the external site redirects to the HOOK_URL and performs a POST. Example here:
http://help.sap.com/saphelp_crm20c/helpdata/en/30/67483936dd7607e10000000a11402f/content.htm
That system is a black box for us: we are basically just supposed to retrieve the POST, process the form data into whatever information we need, and send it on its way through our business logic.
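To make "retrieve the POST" concrete: on our side this can be as small as one servlet mapped to the HOOK_URL path that reads whatever form fields the shop sends. A minimal sketch (the class name and the logging are illustrative only):

```java
import java.io.IOException;
import java.util.Map;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint the procurement system exposes as the HOOK_URL
// (mapped in web.xml, since Servlet 2.5 has no annotations).
public class HookUrlServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // The webshop POSTs its form data here; grab every field it sent.
        Map<String, String[]> form = req.getParameterMap(); // raw Map in Servlet 2.5
        for (Map.Entry<String, String[]> field : form.entrySet()) {
            // Placeholder: hand each field to the real business logic.
            System.out.println(field.getKey() + " = " + field.getValue()[0]);
        }
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}
```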
So I need to build a simple test application that can play the role of the external webshop, and I need our system to be able to send and receive.
We are using Servlet 2.5, JSF 2.0, and CDI, but I'm guessing I'll be needing a good old servlet or two for this purpose.
So far what I've got is:
Procurement system performs window.open and sends the user to the test system.
Test system presents a very basic HTML page that posts to a simple servlet, which redirects to the HOOK_URL.
Procurement system gets the response.
But what I can't figure out how to do nicely is actually performing the POST. When I receive the response from the test system it's a totally new request. And must I use servlets?
I have tried to follow some guides, but the examples map too poorly to my case. It must be a POST, per the specification.
cheers
Solved by using Apache HttpClient and URL connections.
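For anyone else stuck on the same step: the server-side POST with Apache HttpClient 4.x looks roughly like this (the URL and form fields are placeholders for the real HOOK_URL and its data):

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.http.HttpResponse;
import org.apache.http.NameValuePair;
import org.apache.http.client.HttpClient;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicNameValuePair;

public class HookUrlPoster {
    public static void main(String[] args) throws Exception {
        // In the real test shop this URL is whatever arrived in the HOOK_URL parameter.
        HttpPost post = new HttpPost("http://procurement.example/hook");

        List<NameValuePair> form = new ArrayList<NameValuePair>();
        form.add(new BasicNameValuePair("ITEM", "test-item")); // placeholder fields
        form.add(new BasicNameValuePair("QUANTITY", "1"));
        post.setEntity(new UrlEncodedFormEntity(form, "UTF-8"));

        HttpClient client = new DefaultHttpClient();
        HttpResponse response = client.execute(post);
        System.out.println(response.getStatusLine()); // e.g. HTTP/1.1 200 OK
    }
}
```

Note this is the server posting on the user's behalf; the alternative in the OCI spec is to render an HTML form whose action is the HOOK_URL and let the browser submit it.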
Related
I was wondering if there is a way to see what parameters or other information are passed when submitting a form on another website for which you don't have any of the server code.
Here is the page I am trying to debug - https://umbc.t2hosted.com/cit/index.aspx.
When I put information into the fields and submit it, no data is added to the URL like there would be in a regular GET request. Is there any tool that can help me find out what parameters are actually passed, so that I can simulate user requests with a program?
Thank you in advance for your help.
You can use a debugging proxy such as Fiddler to see all the data that is sent from your machine to the website when doing the query.
This will let you see the HTTP messages sent from your browser to the website. Once you've seen and understood how the messages are sent, it should be relatively easy to reproduce them with another program.
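Once the proxy has shown you the method, URL, and body, replaying the request from code is mechanical. A rough sketch with plain HttpURLConnection (the form body is a placeholder for whatever you captured; ASP.NET pages typically also require hidden fields such as __VIEWSTATE to be echoed back):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReplayPost {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://umbc.t2hosted.com/cit/index.aspx");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // Substitute the exact key/value pairs you saw in the proxy capture.
        String body = "field1=value1&field2=value2";
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();

        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```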
I have an ASP.NET MVC and Web API application.
I'm a little bit confused about HTTP POST and HTTP PUT:
when to use which, and what are the pros and cons of each?
I have gone through many blogs, but found no solid explanation of what is designed for what.
Use POST when you have to create a completely new record from scratch.
Use PUT when you have to update an existing record in your database.
Here are the differences between PUT and POST:
`POST is not idempotent` -->
Running the same POST operation again and again will create a new instance every time you call it.
`PUT is idempotent` -->
Calling PUT again and again will produce the same result.
So POST is not idempotent, while PUT is idempotent.
`There is also PATCH` -->
Use PATCH when you have to update only a few properties of your model, in other words partial updates.
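Client-side, the three verbs differ only in method and payload. A sketch with Java 11's java.net.http client against a hypothetical API (the URLs and JSON bodies are made up):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VerbDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // POST to the collection: run it twice and you get two new users (not idempotent).
        HttpRequest create = HttpRequest.newBuilder(URI.create("https://api.example/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Ann\"}"))
                .build();

        // PUT the full resource: run it twice and user 42 ends up in the same state (idempotent).
        HttpRequest replace = HttpRequest.newBuilder(URI.create("https://api.example/users/42"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"Ann\",\"role\":\"admin\"}"))
                .build();

        // PATCH only the fields that change (partial update).
        HttpRequest patch = HttpRequest.newBuilder(URI.create("https://api.example/users/42"))
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString("{\"role\":\"admin\"}"))
                .build();

        client.send(create, HttpResponse.BodyHandlers.discarding());
        client.send(replace, HttpResponse.BodyHandlers.discarding());
        client.send(patch, HttpResponse.BodyHandlers.discarding());
    }
}
```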
Put simply (no pun intended):
POST is usually used to CREATE new objects.
PUT is usually used to UPDATE existing objects.
Using the correct HTTP verbs allows you to publish a cleaner API and removes the need to encode intent into the endpoint (URL). For example, compare:
Using the correct verbs:
GET api/user/12345
POST api/user/12345
PUT api/user/12345
DELETE api/user/12345
Hacking the endpoint:
GET api/user/12345
POST api/user/12345/create
POST api/user/12345/update
POST api/user/12345/delete
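Server-side, verb-based routing keeps one flat resource path. A sketch of the idea using JAX-RS annotations (UserResource and its return values are hypothetical; ASP.NET Web API expresses the same mapping with [HttpGet], [HttpPut], etc.):

```java
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// One path, four verbs -- no /create, /update, /delete suffixes needed.
@Path("api/user/{id}")
public class UserResource {
    @GET
    public String read(@PathParam("id") long id)   { return "load user " + id; }

    @POST
    public String create(@PathParam("id") long id) { return "create user " + id; }

    @PUT
    public String update(@PathParam("id") long id) { return "update user " + id; }

    @DELETE
    public String remove(@PathParam("id") long id) { return "delete user " + id; }
}
```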
I think the only cons of using PUT etc. are that not all developers are familiar with them, and some third-party software may not support them, or at least not as easily as the more familiar verbs like GET and POST.
For example, I had a problem a few weeks ago when a proxy was placed in front of an API just before it went live, and the proxy didn't support the HTTP PUT verb (maybe a config issue, but we didn't have access to the proxy to fix it), so we had to tweak the API and change PUT to POST at the last minute, which also meant changing the clients (mobile apps) that were using it.
With my recent development work, I need a way to determine whether the current response received is from the cache or whether the server has sent a fresh response. This is because there is some JavaScript code that needs to be executed for every fresh response, NOT for every fresh user.
You may agree that showing the JavaScript code that will be executed on every fresh response wouldn't add anything meaningful to my question, since it's totally irrelevant to the way a server response is sent.
So, is there any way to differentiate whether the response came from the cache or is a fresh copy sent by the server?
You should consider creating a custom OutputCacheProvider that extends the built-in cache provider used in MVC.
Some links that might help:
MSDN Article: Building and Using Custom OutputCache Providers in ASP.NET
Creating a Custom Output Cache Provider in ASP.NET 4
Custom Output Caching with MVC3 and .NET 4.0 – Done Properly!
Within your provider, you can use the same functionality as the regular output cache provider, and in the Get() method you can add something to the item returned from the cache to indicate that it was in fact retrieved from cache (you will want to experiment with this, making sure that you only add it to the items you want, and in a way that doesn't mess up the output).
I've been trying (unsuccessfully, I might add) to scrape a website created with the Microsoft stack (ASP.NET, C#, IIS) using Python and urllib/urllib2. I'm also using cookielib to manage cookies. After spending a long time profiling the website in Chrome and examining the headers, I've been unable to come up with a working solution to log in. Currently, in an attempt to get it to work at the most basic level, I've hard-coded the encoded URL string with all of the appropriate form data (even View State, etc..). I'm also passing valid headers.
The response that I'm currently receiving reads:
29|pageRedirect||/?aspxerrorpath=/default.aspx|
I'm not sure how to interpret the above. Also, I've looked pretty extensively at the client-side code used in processing the login fields.
Here's how it works: you enter your username/password and hit a 'Login' button. Pressing the Enter key also simulates this button press. The input fields aren't in a form. Instead, there are a few onClick events on said Login button (most of which are just for aesthetics), but the one in question handles validation. It does some rudimentary checks before sending the data off to the server side. Based on the web resources, it definitely appears to be using .NET AJAX.
When logging into this website normally, you request the domain with a POST containing form data for your username and password, among other things. Then some sort of URL rewrite or redirect takes you to a content page at url.com/twitter. When attempting to access url.com/twitter directly, it redirects you to the main page.
I should note that I've decided to leave the URL in question out. I'm not doing anything malicious, just automating a very monotonous check once every reasonable increment of time (I'm familiar with compassionate screen scraping). However, it would be trivial to associate my StackOverflow account with that account in the event that it didn't make the domain owners happy.
My question is: I've been able to successfully log in and automate services in the past, none of which were .NET-based. Is there anything different that I should be doing, or maybe something I'm leaving out?
For anyone else that might be in a similar predicament in the future:
I'd just like to note that I've had a lot of success with a Greasemonkey-style user script in Chrome to do all of my scraping and automation. I found it to be a lot easier than Python + urllib2 (at least for this particular case). The user scripts are written in 100% JavaScript.
When scraping a web application, I use either:
1) Wireshark, or
2) a logging proxy server (one that logs headers as well as payload).
I then compare what the real application does (in this case, how your browser interacts with the site) with the scraper's logs. Working through the differences will bring you to a working solution.
What are some good automated tools for load testing (stress testing) web applications, that do not use record and replay of HTTP network packets?
I am aware that there are numerous load testing tools on the market that record and replay HTTP network packets. But these are unsuitable for my purpose, for two reasons:
The HTTP packet format changes very often in our application (e.g. when we optimize an AJAX call). We do not want to adapt all test scripts just because there is a slight change in the HTTP packet format.
Our test team should not need to know any internals of our application to write their test scripts. A tool that replays HTTP packets, however, requires the team to know the format of HTTP requests and responses, so that they can adapt details of the replayed HTTP packets (e.g. the user name).
The automated load testing tool I am looking for should be able to let the test team write "black box" test scripts such as:
Invoke web page at URL http://... .
First, enter XXX into text field XXX.
Then, press button XXX.
Wait until response has been received from web server.
Verify that text field XXX now contains the text XXX.
The tool should be able to simulate up to several thousand users, and it should work with web applications built on ASP.NET and AJAX.
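For illustration, that kind of black-box script maps almost one-to-one onto a Selenium WebDriver scenario; a sketch (the URL, element IDs, and expected text are made up):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class BlackBoxScript {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            // Invoke web page at URL ...
            driver.get("http://app.example/form");
            // Enter XXX into text field XXX.
            driver.findElement(By.id("amount")).sendKeys("100");
            // Press button XXX and let the page reload.
            driver.findElement(By.id("submit")).click();
            // Verify that text field XXX now contains the text XXX.
            String actual = driver.findElement(By.id("result")).getAttribute("value");
            if (!"100".equals(actual)) {
                throw new AssertionError("expected 100 but got " + actual);
            }
        } finally {
            driver.quit();
        }
    }
}
```

A WebDriver instance drives one real browser, so for the several-thousand-user requirement a scenario like this would typically be handed to a load tool rather than run as thousands of browsers.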
I've found JMeter to be pretty helpful. It also has recording functionality for capturing use cases, so you don't have to specify each GET/POST manually; instead you "click through" the use case once and then let JMeter repeat it.
http://jmeter.apache.org/
A license for it can be expensive (if you don't have MSDN), but Visual Studio 2010 Ultimate edition has a great set of load and stress testing tools that do what you describe. You can try it out for free for 90 days here.
TestMaker by PushToTest.com can run recorded scripts such as Selenium scripts, as well as tests written in many different languages like HTML, Java, Ruby, Groovy, .NET, VB, PHP, etc. It has a common reporting infrastructure, and you can generate load in your own test lab or in cloud testing environments like EC2 for virtual test labs.
They provide free webinars on using open source testing tools on a monthly basis, and there is one next Tuesday.
http://www.pushtotest.com
There are a few approaches; I've been in situations, however, where I've had to roll my own load generating utilities.
As far as your test script is concerned, it involves:
sending a GET request to http://form entry page (only checking if a 200 response is given)
sending a POST request to http://form submit page with pre-generated key/value pairs for text XXX and performing a regexp check on the response
Unless your web page involves complex AJAX, there is no need to "simulate a button press"; that is taken care of by the POST request.
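If you do roll your own, those two steps are only a few lines per virtual user. A crude sketch (URLs, form body, thread count, and the success pattern are all placeholders):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.regex.Pattern;

public class TwoStepLoadTest {
    static final Pattern SUCCESS = Pattern.compile("Thank you"); // placeholder check

    public static void main(String[] args) {
        ExecutorService users = Executors.newFixedThreadPool(50); // 50 virtual users
        for (int i = 0; i < 50; i++) {
            users.execute(new Runnable() {
                public void run() {
                    try {
                        // Step 1: GET the form entry page, expect a 200.
                        HttpURLConnection get = (HttpURLConnection)
                                new URL("http://app.example/form").openConnection();
                        if (get.getResponseCode() != 200)
                            throw new IllegalStateException("GET failed");

                        // Step 2: POST pre-generated key/value pairs, regexp-check the response.
                        HttpURLConnection post = (HttpURLConnection)
                                new URL("http://app.example/submit").openConnection();
                        post.setRequestMethod("POST");
                        post.setDoOutput(true);
                        OutputStream out = post.getOutputStream();
                        out.write("field=XXX".getBytes("UTF-8"));
                        out.close();
                        InputStream in = post.getInputStream();
                        String bodyText = new Scanner(in, "UTF-8").useDelimiter("\\A").next();
                        if (!SUCCESS.matcher(bodyText).find())
                            throw new IllegalStateException("POST check failed");
                    } catch (Exception e) {
                        e.printStackTrace(); // a real harness would count failures instead
                    }
                }
            });
        }
        users.shutdown();
    }
}
```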
Given that your test consists of just a 2-step process there should be several automated load packages that could do this.
I've previously used httperf for load testing a large website: it can simulate a session consisting of several requests and can simulate a large number of users (i.e. sessions) simultaneously. For example, if your website generated a session cookie from the home page you could make that the first request, httperf would then use that cookie for subsequent requests, until it had finished doing the list of requests supplied.
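For a concrete starting point, a single-URL httperf run looks like this (counts and rate are whatever load profile you need; multi-request sessions go in a file passed via --wsesslog):

```
httperf --server www.example.com --port 80 --uri /index.html \
        --num-conns 1000 --num-calls 1 --rate 50 --timeout 5
```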
What about http://watin.sourceforge.net/?