Fiddler AutoResponder: add-latency rule not working

Using Fiddler, I am trying to use the AutoResponder to add a rule so that when I hit my web service URL, which is:
http://uummas09:28020/RestfulRetekService/ItemWebService.json?action=keywordSearch&username=StockOnHandPortlet&sessionId=P_ISomGc6U5_433Vh3ApmwI&keywords=Green&itemStatus=A
Fiddler adds a latency of 50000 milliseconds (50 seconds). But I am having trouble getting Fiddler to do that for me. Here is how I've tried to set up the rule in Fiddler.
The rule is specified as...
EXACT:http://uummas09:28020/RestfulRetekService/ItemWebService.json?action=keywordSearch&username=StockOnHandPortlet&sessionId=P_ISomGc6U5_433Vh3ApmwI&keywords=Green&itemStatus=A
My first question is: how can I wildcard the URL in the rule so that it does not consider the query string?
Also, I tried to get a rule to work for a simple URL, i.e. I set a rule for
EXACT:http://www.google.com.au
but it still did not work for me. Can someone point out what I might be doing wrong?
thanks

To expand on @EricLaw's answer, to enable the AutoResponder:
Check Enable automatic responses to tell Fiddler that you want it to respond to requests.
Check Unmatched requests passthrough to tell Fiddler that any requests not matched by your rules should be passed through to the server as usual.
Your rule won't cause a delay on its own, because a rule with a blank action is never matched. If all you want is to add a delay, use *delay:1000 (the value is in milliseconds) as the response instead of a file path. Alternatively, you can get the latency to kick in by typing something like *action (which isn't a real action) so that the rule matches and the delay is applied.
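For the google.com.au test above, the rule would look something like this in the Rule Editor (one possible sketch; *delay takes a value in milliseconds):

Rule:    EXACT:http://www.google.com.au/
Action:  *delay:50000

The trailing slash matters for EXACT: rules, because the browser actually requests http://www.google.com.au/ and EXACT: must match the full URL Fiddler sees.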

For those who are trying to delay HTTPS traffic: make sure Fiddler is set to decrypt (capture) HTTPS traffic and that you trust its root certificate, otherwise the AutoResponder never sees the decrypted URLs to match against.

You haven't checked the box at the top-left, Enable Automatic Responses, so none of your rules run.
To create a rule that ignores the query string, remove EXACT: from the front of the rule and delete everything after the ?. Without the EXACT: prefix, Fiddler treats the rule as a simple "contains" match.
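For the web service URL from the question, that leaves something like:

Rule:    http://uummas09:28020/RestfulRetekService/ItemWebService.json
Action:  *delay:50000

Any request to that path now matches regardless of the query string, and the *delay action adds the 50-second latency.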

Related

HTTP to HTTPS issues

I have a question; I am a bit confused and don't really understand why this is happening.
I have a website which works well over HTTP. When I force a redirect to HTTPS, something breaks: even if I replace all the URLs in my code, only GET requests work. Does anybody have an idea why this is happening?
I also have an admin part of the website. Logging in to the admin works, but making any further requests from it does not: when I try to POST or DELETE I receive a 401 error, even though I am logged in and have set the token correctly...
So the bottom line is:
Over HTTPS, the website works and shows all the resources from the DB, and I can log in to the admin, but I cannot POST or DELETE.
Over HTTP, everything works.
I am in huge need of advice or ideas.
thanks.
From my experience you cannot serve mixed content, so my first suggestion is to reference all of your scripts/dependencies without a hard-coded scheme, i.e. change script src="https://blahblah" to script src="//blahblah". The point is to stick consistently to one serving source, so that's the first thing I'd check (also look at the browser console logs; they often give hints as to what failed).
Secondly, I am unsure how the server handles traffic that isn't HTTPS; possibly there's a rule in .htaccess or some other form of redirection trying to force the call over HTTPS, so the plain HTTP call fails? These are all steps in debugging: you need to troubleshoot and play a process of elimination. First, though, I'd make sure everything is served from // or https://. While on HTTP I would look at the console logs for clues, but even more so I would force a redirect to use HTTPS exclusively (as most sites do now).
Check for mixed-content issues first, though; this is something that can have a multitude of solutions depending on what is actually causing it.
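As a rough illustration of both suggestions (the host name is a placeholder, and the .htaccess sketch assumes Apache with mod_rewrite enabled):

<!-- protocol-relative reference: the browser picks http: or https: to match the page -->
<script src="//cdn.example.com/js/app.js"></script>

# .htaccess sketch: push every plain-HTTP request onto HTTPS so the site is served from a single scheme
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]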

Is Response.Redirect(Request.Url.AbsolutePath) Always "Safe"?

I need to redirect back to the current page minus any query arguments.
I just found Request.Url.AbsolutePath, which looks like it provides just the ticket to pass to Response.Redirect().
It seems to work on my dev machine okay. Does anyone know of any potential problems redirecting to the value of this property? It's hard to confirm it's "safe" in all cases.
It could be a problem if you have "rewritten" the URL internally. For example, the user requests "/team.aspx" but internally you transfer execution or rewrite the URL as "/page.aspx?id=137".
Personally, I prefer to use Request.RawUrl (which is always the original, local URL) and strip the query string from that.
Getting rid of the host part of the request is not an issue, because an HTTP redirect can be issued to an absolute path ("/foo/bar") and the browser will preserve the protocol, port and hostname.
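A rough sketch of that approach (a hypothetical ASP.NET page code-behind, using only Request.RawUrl and Response.Redirect as described above):

using System;
using System.Web.UI;

public partial class TeamPage : Page   // hypothetical code-behind class
{
    protected void RedirectWithoutQuery()
    {
        // Request.RawUrl is the URL the client actually asked for, so an internal
        // rewrite target such as "/page.aspx?id=137" is not leaked back to the browser.
        string rawUrl = Request.RawUrl;                     // e.g. "/team.aspx?sort=name"
        int queryStart = rawUrl.IndexOf('?');
        string pathOnly = queryStart >= 0 ? rawUrl.Substring(0, queryStart) : rawUrl;
        Response.Redirect(pathOnly);                        // "/team.aspx"; the browser keeps scheme, host and port
    }
}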
I would use Request.Url.OriginalString.
AbsolutePath gets rid of the host part of the URL.
Take a look at this: http://wdevs.blogspot.com/2009/03/url-properties-of-request-to-aspnet.html
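To make the difference concrete, for a hypothetical request to http://example.com/team.aspx?id=137 the properties come out roughly like this:

// Request.Url.OriginalString -> "http://example.com/team.aspx?id=137"  (scheme, host and query kept)
// Request.Url.AbsolutePath   -> "/team.aspx"                           (host and query stripped)
// Request.RawUrl             -> "/team.aspx?id=137"                    (path and query as the client sent them, before any rewriting)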

Get the final destination after WP_Http redirects (WordPress)

I'm making some requests to an API via WordPress, and the API uses SSL connections if they're turned on in the API settings. I'd like to determine whether SSL is turned on or off without having to ask the user whether SSL is enabled on their account, and the API does a good job of redirecting, meaning:
If I access http://api/endpoint and SSL is turned on, I'm redirected to https://api/endpoint
If I access https://api/endpoint and SSL is turned off, I'm redirected to http://api/endpoint
Now what I'd like to do is see whether a redirect happened or not and record that in my options, so that subsequent requests are fired at the correct URL without any redirections.
So my question is: is there a way to determine the final destination after firing a WP_Http->request() when the request is being redirected?
I can't see any info about that in the response arrays; I only get to see the final response, but I have no idea what URL it came from. What I can do is set the redirection parameter to 0 and catch the "max redirects allowed" error, but that's not bullet-proof, since I still don't know whether the redirect went from HTTP to HTTPS or simply to another page under HTTP.
I hope this all makes sense, let me know if you have any ideas.
Thanks!
~ K
Check $response['headers'] - it may contain a 'location' key.
It all depends on the HTTP library you are using.
See the class-http.php file (WP 3.0.1):
line 1393, the http_api_curl action - the cURL handle is available directly there to catch anything.
For fopen:
check lines 887-888 and the $http_response_header variable.
Also, try overriding the processHeaders() function, as it has access to the raw HTTP headers.
The WP_Http class processes the headers and removes all but the last one. So you could do what jetdog described above: check the original URL and compare it to the returned $response['headers']['location']. If it is different, then you know it redirected.

Tamper with first line of URL request, in Firefox

I want to change the first line of my HTTP request, modifying the method and/or the URL.
The (excellent) Tamper Data Firefox plugin allows a developer to modify the headers of a request, but not the URL itself. The latter is what I want to be able to do.
So something like...
GET http://foo.com/?foo=foo HTTP/1.1
... could become ...
GET http://bar.com/?bar=bar HTTP/1.1
For context, I need to tamper with (i.e. make correct) an erroneous request from Flash, to see if the error can be corrected by fixing the URL.
Any ideas? It sounds like something that may need to be done at the proxy level, in which case, any suggestions?
Check out Charles Proxy (multiplatform) and/or Fiddler2 (Windows only) for more client-side solutions - both of these run as a proxy and can modify requests before they get sent out to the server.
If you have access to the webserver and it's running Apache, you can set up some rewrite rules that will modify the URL before it gets processed by the main HTTP engine.
For those coming to this page from a search engine, I would also recommend the Burp Proxy suite: http://www.portswigger.net/burp/proxy.html
Although more specifically targeted towards security testing, it's still an invaluable tool.
If you're trying to intercept the HTTP packets and modify them on the way out, then Tamper Data may be the route you want to take.
However, if you want minute control over these things, you'd be much better off simulating the entire browser session using a utility such as curl.
Curl: http://curl.haxx.se/
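For example, a rough sketch of replaying the corrected request from above with curl (the header values are placeholders for whatever your Flash client actually sends):

# -v shows the outgoing request line and headers, -H adds a header, -A sets the User-Agent
curl -v "http://bar.com/?bar=bar" \
     -H "Referer: http://foo.com/" \
     -A "Mozilla/5.0 (compatible; test-client)"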

Are there any safe assumptions to make about the availability of a URL?

I am trying to determine if there is a way to check the availability of a potentially large list of URLs (> 1,000,000) without having to send a GET request to every single one.
Is it safe to assume that if http://www.example.com is inaccessible (as in the connection to the server fails or the DNS request for the domain fails), or I get a 4XX or 5XX response, then anything from that domain will also be inaccessible (e.g. http://www.example.com/some/path/to/a/resource/named/whatever.jpg)? Would a 302 response (say for whatever.jpg) be enough to invalidate the first assumption? I imagine subdomains should be considered distinct, as http://subdomain.example.com and http://www.example.com may not direct to the same IP?
I seem to be able to think of a counter example for each shortcut I come up with. Should I just bite the bullet and send out GET requests to every URL?
Unfortunately, no: you cannot infer anything from 4xx, 5xx, or any other codes.
Those codes are for individual pages, not for the server. It's quite possible that one page is down and another is up, or one has a 500 server-side error and another doesn't.
What you can do is use HEAD instead of GET. That retrieves the headers for the page but not the page content. This saves time server-side (because it doesn't have to render the page) and for yourself (because you don't have to buffer and then discard the content).
Also, I suggest you use keep-alive to accelerate responses from the same server. Many HTTP client libraries will do this for you.
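A rough, self-contained sketch of the HEAD-plus-keep-alive idea using .NET's HttpWebRequest (the URL is a placeholder; in a real run you would loop over the list and let the framework reuse pooled connections per host):

using System;
using System.Net;

class UrlChecker
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://www.example.com/some/path/whatever.jpg");
        request.Method = "HEAD";              // headers only, no message body
        request.KeepAlive = true;             // reuse the connection for the next URL on this host
        request.AllowAutoRedirect = false;    // a 3xx response is returned as-is rather than followed

        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("{0} -> {1}", request.RequestUri, (int)response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // 4xx/5xx arrive here with a response attached; DNS or connection failures have none.
            var httpResponse = ex.Response as HttpWebResponse;
            Console.WriteLine("{0} -> {1}", request.RequestUri,
                httpResponse != null ? ((int)httpResponse.StatusCode).ToString() : ex.Status.ToString());
        }
    }
}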
A failed DNS lookup for a host (e.g. www.example.com) should be enough to invalidate all URLs for that host. Subdomains or other hosts would have to be checked separately though.
A 4xx code might tell you that a particular page isn't available, but you couldn't make any assumptions about other pages from that.
A 5xx code really won't tell you anything. For example, it could be that the page is there, but the server is just too busy at the moment. If you try it again later it might work fine.
The only assumption you should make about the availability of a URL is that "getting a URL can and will fail".
It's not safe to assume that a subdomain request will fail when a parent one does, mainly because in between your two requests your network connection can go up, down or generally misbehave. It's also possible for the domains to change between requests.
Even ignoring all internet connection issues, you are still dealing with live web sites that can and will change constantly. What is true now might not be true in five minutes, when they decide to alter their page structure or change the way they display a particular page. Your best bet is to assume any GET can fail.
This may seem like an extreme viewpoint, but these events will happen, and how you handle them will determine the robustness of your program.
First, don't assume anything based on a single page failing. I have seen many cases where IIS will continue to serve static content but be unable to serve any dynamic content.
You have to treat each hostname as unique: you cannot assume subdomain.example.com and example.com point to the same IP, and even if they do, there is no guarantee that they are the same site. IIS, again, has host headers that allow you to run multiple sites on a single IP address.
If the connection to the server actually fails, then there's no reason to check URLs on that server. Otherwise, you can't assume anything.
In addition to what everyone else is saying, use HEAD requests instead of GET requests. They function the same, but the response doesn't contain the message body, so you save everyone some bandwidth.