I want to write some tests that fire off HTTP requests to make sure I can log in to my app, view some pages, etc.
Am I better off using Apache HTTP client or WebDriver?
Thanks
I have used both, and I personally prefer WebDriver, because it is more powerful (and easy to use, IMO).
HttpClient won't be able to click a button, run JavaScript, or perform other browser functions.
However, if you are looking to make a bunch of HTTP requests, Apache HTTP client will perform them faster.
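For a concrete sense of the WebDriver side, here is a minimal login smoke-test sketch. The URL, element ids, and expected page title are hypothetical, and it assumes a ChromeDriver binary is available on the PATH:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical URL and element ids -- replace with your app's.
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();

            // Crude success check: the page after login has "Dashboard" in its title.
            if (!driver.getTitle().contains("Dashboard")) {
                throw new AssertionError("Login did not reach the dashboard");
            }
        } finally {
            driver.quit();
        }
    }
}
```

In a real suite you would put this in a JUnit/TestNG test rather than a main method, but the driver calls are the same.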
I would like to be able to monitor all the HTTP requests being made by a web page in an automated testing scenario.
I know how to drive browsers with Selenium.
Is there some kind of proxy that can be interacted with programmatically? What would help is something that can be told to start recording all HTTP requests and then told to stop.
I believe Firefox has some proxy settings that can be driven from Selenium, but Chrome is the highest-priority browser for testing.
I have heard of BetaMax, but I think it is more about simulating and replaying REST calls than about monitoring traffic programmatically.
Take a look at Hoverfly; it has a mode where it acts as a proxy. I haven't used it, but I believe you can replay whatever the proxy records when the requests are re-sent. And yes, there is an API.
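If you go the proxy route, wiring Chrome to it from Selenium is a one-liner on the options. A minimal sketch, assuming a capture proxy such as Hoverfly is already running locally (localhost:8500 is an assumption; use whatever address your proxy reports, and note that HTTPS capture also requires trusting the proxy's CA certificate):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class ProxiedChrome {
    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        // Route all of Chrome's traffic through the local capture proxy.
        options.addArguments("--proxy-server=http://localhost:8500");

        WebDriver driver = new ChromeDriver(options);
        try {
            driver.get("https://example.com/"); // this request now flows through the proxy
        } finally {
            driver.quit();
        }
    }
}
```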
I am trying to understand how HTTP works, and I can't figure out at which level the HTTP protocol is implemented. Is it at the OS level, or does it depend on where I want to use the protocol? For example, if I want to use it from C, must I implement it in C as a library and only then use it?
HTTP runs on top of TCP, and TCP is implemented in the network stack of your OS.
The HTTP protocol is used between a client and a server. What a client sends is what a server receives, and vice versa. HTTP was designed for the server to simply sit and wait for requests (possibly including data), and then respond (possibly including data).
All web servers implement the server side of HTTP. In terms of applications (let's use the term "application" to mean "client", although some might say the server is an application too), the client side of HTTP is most commonly implemented in an application like a browser, but command-line applications like curl and wget also implement an HTTP client. For a language such as Python there is an HTTP server implementation in the standard library, and there are libraries such as requests which handle the client side of HTTP, so the Python author only has to worry about the higher-level problem of which HTTP requests to make.
So the answer is: HTTP is not implemented in the OS, it is implemented in applications, some client-side, some server-side.
For your C application you will either have to implement HTTP yourself (which doesn't sound like fun to me, but would be a good way of understanding HTTP, I suppose) or, with much less stress and a much better chance of predictable, correct-ish behaviour, use a library if you can find one.
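To see what "implemented in the application" means in practice, here is a minimal sketch that speaks HTTP by hand over a plain TCP socket (in Java for brevity; the equivalent C program would use socket/connect/write/read, and in both cases the OS only provides the TCP connection, while the HTTP text is entirely the application's job):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawHttpGet {
    public static void main(String[] args) throws Exception {
        // The OS gives us a TCP connection; everything HTTP is text we write ourselves.
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            out.write(("GET / HTTP/1.1\r\n"
                     + "Host: example.com\r\n"
                     + "Connection: close\r\n"
                     + "\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Print whatever comes back: status line, headers, then the body.
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                System.out.write(buffer, 0, n);
            }
            System.out.flush();
        }
    }
}
```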
Suppose I am logged in and connected to a website in Firefox (or any other browser), so I can make download requests in the browser. Now suppose I want wget or curl to use Firefox's connection. Is there a way to use Firefox as a system-wide proxy for ports 443 and 80? Here is a usage scenario: this would be interesting for a download manager; if the requests are proxied through and made by the browser, all the credentials stored in the browser could be used.
So the browser would receive the request on port 443 and replicate or forward it. "Proxy" and "forwarding" are probably not the right words in this context.
I am not aware of any feature of Firefox (or any other mainstream browser) that would really allow it to be used as some kind of proxy, sorry.
You cannot somehow "use the connection Firefox already has": HTTP is a stateless protocol, and there is no permanent connection between client and server that another program could attach to. Each HTTP request is sent as its own request/response exchange (a connection may be kept alive for a while, but it is managed by the browser and not exposed to other programs).
However, something similar might be "half possible" using a crude workaround:
What you can try is simply starting a new instance of the browser for each request you want to make. In reality this does not start a new instance, but reuses an already running instance and typically opens a new tab in it. That way you can "remote control" your already started browser in a primitive way and trigger downloads, if and only if the URL you specify results in a download. All of that depends on the browser settings, though; for example, downloads are stored as files in your local file system, from which you then have to read the payload again.
None of this is particularly efficient or convenient, which is why it probably does not make much sense. Instead you should create a simple script for such communication; the effort for that is not that high.
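As a rough illustration of that crude workaround, here is a sketch that hands a URL to an already running Firefox instance (the firefox command being on the PATH and the URL are assumptions; the resulting file ends up wherever the browser's download settings say):

```java
import java.io.IOException;

public class BrowserDownload {
    public static void main(String[] args) throws IOException {
        // Hypothetical download URL -- if Firefox is already running, this opens
        // a new tab in that instance, using its cookies and credentials.
        String url = "https://example.com/file.zip";
        new ProcessBuilder("firefox", url).inheritIO().start();
        // The payload then has to be collected from the browser's download
        // directory; this program never sees the HTTP response itself.
    }
}
```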
CGI programs typically get a single HTTP request.
HTTP/1.1 supports persistent connections, whereby multiple HTTP requests/responses are made without closing the connection.
Is there a way for a CGI program (or similar mechanism) to handle multiple HTTP requests/responses on the same connection?
I am using Apache httpd.
Keep-alives are one of the higher-level HTTP features that are dealt with wholly by the web server. They are out of scope for CGI applications themselves.
Accessing CGI scripts through Apache mod_cgi works with keep-alive for me. The browser re-uses the same TCP connection to fetch the page and then resources referred to by it, without the scripts in question having to do anything special.
If you mean you would like to have the same CGI process handle one request and then the next (instead of the process ending and a new one being spawned), then I'm afraid that's not possible. The web server will intercept keep-alives and make them look like single requests before your scripts can do anything about it. (If you want to do that to improve performance, consider a different gateway interface, such as FastCGI or language-specific options like WSGI.)
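One way to see this boundary is a CGI program that prints its own process id: reloading it over a single kept-alive connection shows a new pid for every request, because the server reuses the TCP connection but still spawns a fresh process each time. A sketch (in Java to stay consistent with the other examples here; real CGI scripts are more commonly shell, Perl, or Python, and under mod_cgi you would invoke this via a small wrapper script):

```java
public class CgiPid {
    public static void main(String[] args) {
        // REQUEST_METHOD is set by the web server per the CGI spec.
        String body = "Handled by process " + ProcessHandle.current().pid()
                    + " for one " + System.getenv("REQUEST_METHOD") + " request\n";
        // CGI output: headers, a blank line, then the body -- then the process exits.
        System.out.print("Content-Type: text/plain\r\n");
        System.out.print("Content-Length: " + body.getBytes().length + "\r\n");
        System.out.print("\r\n");
        System.out.print(body);
    }
}
```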
SCGI sounds exactly like what you want. It is similar to FastCGI but a simpler solution to implement (the S stands for Simple :)).
I need to track HTTP/URL requests and redirects from a Windows Forms application using C#. It should handle both IE and Firefox. I'm not sure whether Fiddler is open source, but if I'm not mistaken it's written in .NET. Sample code or online articles on how to listen for HTTP/URL requests and redirects would be appreciated.
Thanks!
Fiddler works as a standard HTTP proxy. There is no magic here; see the HTTP protocol for details. In both IE and Firefox you need to set Fiddler (or your custom program) as the proxy, and then the browser will use it for all outgoing requests. The proxy is responsible for forwarding each request to the correct server and returning the response. Proxies are typically used for 1) caching, 2) controlling access (and avoiding firewalls), and 3) debugging.
See also Open Source Proxy Library for .Net for a .NET proxy library (just from quick googling... I have no experience with it).
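The question asks about C#, but the proxy mechanics are language-neutral. As a sketch of the idea (in Java to stay consistent with the other examples here; the 127.0.0.1:8888 address is Fiddler's usual default and an assumption), any request routed through the proxy shows up in its traffic log, which is exactly what the browser does once you set the proxy in its settings:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ViaLocalProxy {
    public static void main(String[] args) throws IOException {
        // Assumption: an HTTP proxy such as Fiddler is listening on 127.0.0.1:8888.
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("127.0.0.1", 8888));

        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/").openConnection(proxy);
        System.out.println("Status via proxy: " + conn.getResponseCode());
        conn.disconnect();
    }
}
```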
You'd probably be interested in the new FiddlerCore library: http://fiddler.wikidot.com/fiddlercore