How can I get the source of an HTTP request?

The Referer header does not always provide the full URL of the site spawning the HTTP request, and I would like to know if there is any way to figure out the source URL of the site that is making the request.
I am currently using OWASP ZAP as a proxy, but I am unable to trace some of the HTTP requests back to the source site because of the incomplete Referer header.

Try searching for the full URL in the ZAP Search tab. If that doesn't work, try searching for just the path.
If the URL is generated by JavaScript then that might not work.
Depending on how you are exploring the app, you may be able to go back through the history and narrow it down by a process of elimination, but that could take a while...
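If searching inside ZAP itself doesn't get you there, another option is to export the proxy history (for example as a HAR file) and search it offline for the path, printing the Referer of each match. A minimal PHP sketch, assuming a hypothetical history.har export and a placeholder path:

<?php
// Search an exported HAR file for entries whose URL contains a given path and
// print the Referer header of each match, to help work back to the page that
// triggered the request. 'history.har' and the path below are placeholders.
$har    = json_decode(file_get_contents('history.har'), true);
$needle = '/path/you/are/looking/for';

foreach ($har['log']['entries'] as $entry) {
    $url = $entry['request']['url'];
    if (strpos($url, $needle) === false) {
        continue;
    }
    $referer = '(no Referer sent)';
    foreach ($entry['request']['headers'] as $header) {
        if (strcasecmp($header['name'], 'Referer') === 0) {
            $referer = $header['value'];
        }
    }
    echo $url . "\n    referred by: " . $referer . "\n";
}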

Related

When I make a request to a server, can I see all the requests made by that server to another server?

I need to know which requests a webpage sends. Basically, the site I call calls another service/API/URL, receives the data (probably within JavaScript), and shows it to me. Can I see all the calls it makes?
Edit: a concrete example:
On this site (http://www.flickriver.com/lenses/nikon/) you can choose a lens; at that moment, the page sends a request to Flickr and gets all the data. But in Chrome Developer Tools I could not see this request.
Here is a screenshot of the GET requests. I have looked through them but could not see any request to Flickr.
The first is the request to the page, and the sixth one is already the picture request, where it requests the picture by its ID. So one of the four requests in between should contain a request to the external source that returns the picture ID, or am I missing something?
And what if the backend makes this request? Would I still be able to see it in developer tools?
No, of course you cannot see the calls made by one server to another server. Why would you expect to be able to do that? Those calls have nothing to do with the browser; the browser knows nothing about them. The browser knows only about requests that it itself initiated, and devtools can only report on requests made by the browser. If there were in fact some way to spy on the requests made by one server to another, it would be a gaping security hole.
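To make that concrete: when a page fetches data through its own backend, the browser only ever sees a request to that page's own origin, and the outbound call to the third-party API happens entirely on the server. A rough PHP sketch of such a pass-through endpoint; the script name, parameter, and API URL are made up for illustration:

<?php
// proxy.php (hypothetical): the browser requests /proxy.php?id=123 from this
// site, and the request to the external API below is made by the server, so it
// never shows up in the browser's developer tools.
$id   = urlencode($_GET['id'] ?? '');
$data = file_get_contents('https://api.example.com/photos/' . $id); // placeholder API

header('Content-Type: application/json');
echo $data;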

How do I generate a 403 error when someone tries to access a particular page

I may be barking up completely the wrong tree here, but what I would like to do is protect my .js files by having them return a 403 Forbidden HTTP status if someone tries to access them directly via HTTP. I use them to support my index.html page but would like them to remain hidden.
The helpdesk guys at my ISP basically say they don't know if it's possible, but that it may be something you could do with a web.config file (which is not something I have used before).
Any help at all would be gratefully received - I am a bit out of my comfort zone with this one.
I would like to […] protect my .js files by having them return a 403 Forbidden HTTP status if someone tries to access them directly via HTTP.
Please note that if you include some resource, for example a script via the <script> tag in HTML or an image via the <img> tag, the browser does nothing more than run another HTTP request to get that resource. The whole communication already happens over HTTP.
While a browser may include additional details in its HTTP request when requesting such resources, such as the Referer header, it is not required to do so. So if you check for the Referer header, be advised that you may lock out valid clients that do not send it in their requests.
Also note that this will not give you any protection whatsoever. Anyone can construct HTTP headers when requesting things, so "faking" requests your server would allow (because it thinks they are correct) is not a problem at all. And even without that, every resource you tell the client to use to make your website work will be downloaded by the client, and after that the client can do whatever it wants with it: cache it on disk, or let the user look at it again without running another request.
So if you want to do this to protect your code, forget about it and make it easier for everyone by not adding an ineffective protection. Code you put on the web can be made difficult to read, but if you want the user to see the end result, you hand out your code in the same step.
In PHP you can send the status with:
header("HTTP/1.0 403 Forbidden");

Circular redirect path detected and wrong Open Graph data displayed

When sharing the following URL on Facebook:
www.magicsoftware.com
you will get outdated information. Facebook refers to the site (magicsoftware.com/en) and takes all the information from its cache.
I tried to clear the cache by going to the debugger:
https://developers.facebook.com/tools/debug/og/object?q=www.magicsoftware.com
But that didn't help much.
Does anyone have an idea what I can do?
P.S. - if you check the debugger link, you will see that two critical errors are listed under "Errors That Must Be Fixed":
Could Not Follow Redirect: URL requested a HTTP redirect, but it could not be followed.
Circular Redirect Path: Circular redirect path detected (see 'Redirect Path' section for details).
What does that mean?
Your server is issuing a redirect to the same URL that was visited, based on some condition; according to my tests, any request that comes without an Accept-Language header gets redirected.
Compare the response with the Accept-Language header and without any headers.
The Facebook linter doesn't seem to pass this header while crawling your Open Graph meta tags, and it hangs due to the redirect loop.
You should avoid that redirect (or at least provide some fallback) so that the Facebook linter can collect updated data and refresh the cached version.
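One way to see this behavior for yourself is to request the page once with and once without an Accept-Language header and compare the status codes and Location headers. A rough sketch using PHP's curl extension; the header value is only an example:

<?php
// Fetch just the response headers for a URL, optionally sending Accept-Language,
// and report the status code and any Location header, without following redirects.
function checkRedirect(string $url, ?string $acceptLanguage): void
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_NOBODY, true);          // headers only
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // do not follow redirects
    if ($acceptLanguage !== null) {
        curl_setopt($ch, CURLOPT_HTTPHEADER, ['Accept-Language: ' . $acceptLanguage]);
    }
    $headers = curl_exec($ch);
    $status  = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    $location = '(none)';
    if (preg_match('/^Location:\s*(.+)$/mi', $headers, $m)) {
        $location = trim($m[1]);
    }
    echo ($acceptLanguage ?? 'no Accept-Language') . ": HTTP $status, Location: $location\n";
}

checkRedirect('http://www.magicsoftware.com/', 'en-US,en;q=0.8');
checkRedirect('http://www.magicsoftware.com/', null);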
The same thing is happening to me now. I have no redirect in place, but I am getting the message "there was an error following the redirect path" when using the debugger on this URL (http://www.mmaid.co/cleaning-services/offers/coupons/social-discount.php). I will give it time and see if it fixes itself.
I found the solution myself - and it's only patience :)
Facebook just needs time to clear its cached files. So the solution is simply to enter your URL in the Facebook Debugger and then wait; Facebook will refresh the cache for this URL automatically.

Find Site from HTTP Request

Is there a way to go through a series of requests and see which pages they are coming from? I am capturing all HTTP requests sent from my PC and am trying to see if there is a way to find the main request. For example, if a page has images on it, when the image requests are sent, is there a way to tell which page the images are being requested from, using just the HTTP requests? I don't know if I explained this well enough, so please ask any questions. I don't know if there is a way to do this, but I hope there is. Thanks!
If you're using Windows: Fiddler.

Tamper with first line of URL request, in Firefox

I want to change the first line of my HTTP request (the request line), modifying the method and/or URL.
The (excellent) TamperData Firefox plugin allows a developer to modify the headers of a request, but not the URL itself. The latter is what I want to be able to do.
So something like...
GET http://foo.com/?foo=foo HTTP/1.1
... could become ...
GET http://bar.com/?bar=bar HTTP/1.1
For context, I need to tamper with (i.e. correct) an erroneous request from Flash, to see if the error can be fixed by correcting the URL.
Any ideas? It sounds like something that may need to be done at the proxy level. In which case, suggestions?
Check out Charles Proxy (multiplatform) and/or Fiddler2 (Windows only) for more client-side solutions - both of these run as a proxy and can modify requests before they get sent out to the server.
If you have access to the webserver and it's running Apache, you can set up some rewrite rules that will modify the URL before it gets processed by the main HTTP engine.
For those coming to this page from a search engine, I would also recommend the Burp Proxy suite: http://www.portswigger.net/burp/proxy.html
Although more specifically targeted towards security testing, it's still an invaluable tool.
If you're trying to intercept the HTTP packets and modify them on the way out, then TamperData may be the route you want to take.
However, if you want minute control over these things, you'd be much better off simulating the entire browser session using a utility such as curl.
Curl: http://curl.haxx.se/
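If you go the curl route, the replay can also be scripted. A rough sketch using PHP's curl extension, reusing the URLs from the example above; the method and headers are stand-ins for whatever the captured request actually contained:

<?php
// Replay a captured request against a corrected URL, with a chosen method and
// copied-over headers. Everything below is illustrative.
$ch = curl_init('http://bar.com/?bar=bar');     // corrected URL
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'GET'); // or POST, PUT, ...
curl_setopt($ch, CURLOPT_HTTPHEADER, [
    'User-Agent: Mozilla/5.0',                  // mimic the original client
    'Referer: http://foo.com/',                 // headers copied from the capture
]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

echo "HTTP $status\n" . substr($body, 0, 200) . "\n";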
