How do I generate a 403 error when someone tries to access a particular page - http

I may be barking up completely the wrong tree here, but what I would like to do is protect my .js files by having them return a 403 Forbidden HTTP error status page if someone tries to access them directly via HTTP. I use them to support my index.html page, but I would like them to remain hidden.
The helpdesk guys at my ISP basically say they don't know if it's possible, but that it may be something you could do with a web.config file (which is not something I have used before).
Any help at all would be gratefully received; I am a bit out of my comfort zone with this one.

I would like to […] protect my .js pages by having them return a 403 Forbidden http error status page if someone tries to access them directly via http.
Please note that if you include some resource, for example a script via the <script> tag in HTML or an image via the <img> tag, the browser does nothing more than run another HTTP request to get that resource. The whole communication already happens over HTTP.
While a browser may include additional details in its HTTP request when fetching such resources, like the Referer header, it is definitely not required to do so. So if you check for the Referer header, be advised that you may lock out valid clients which do not send it in their requests.
Also note that this will not give you any protection whatsoever. One can simply construct HTTP headers when requesting things, so “faking” requests your server would allow (because it thinks they are correct) is not a problem at all. And even without that, every resource you tell the client to use to make your website work will be downloaded by the client. After that, the client can do whatever it wants with it: cache it on disk, or let the user look at it again without having to run another request.
So if you want to do this to protect your code, just forget about it, and make it easier for everyone by not adding a less-than-optimal protection. Code you put on the web can be made difficult to read, but if you want the user to see the end result, you also hand out your code in the same step.

In PHP you can do this with:
header("HTTP/1.0 403 Forbidden");

Related

HTTP to HTTPS issues

I have a question and I am a bit confused; I don't really understand why this is happening.
I have a website which works well over HTTP. When I force a redirect to HTTPS, something happens: even if I replace all the URLs in my code, only GET requests will work. Does anybody have any idea why this is happening?
I also have an admin part of the website. I can log into the admin, but I can't make any requests from it: I try to POST or DELETE but I receive a 401 error, even though I am logged in and have set the token correctly...
So the bottom line is:
Over HTTPS, the website works, it shows all the resources from the database, and I can log into the admin, but I cannot POST or DELETE.
Over HTTP, everything works.
I am in huge need of advice or ideas.
Thanks.
In my experience you cannot serve mixed content, so my first suggestion is to reference all your scripts/dependencies without a hard-coded scheme, i.e. change script src="https://blahblah" to script src="//blahblah", making sure you stick consistently to one serving source; that's the first thing I'd check (also look at the console logs, they often give hints as to what failed).
Secondly, I am unsure how the server handles non-HTTPS traffic; possibly there's a rule in .htaccess or some other form of redirection trying to force the call via HTTPS, so plain HTTP fails? These are all debugging steps: you need to troubleshoot and work by process of elimination. First, though, I'd make sure everything is served from // or https://; when on HTTP I would look at the console logs for clues, but even more so I would force a redirect to use HTTPS exclusively (as most sites do now).
Check for mixed-content issues first, though; this is something that can have a multitude of solutions depending on what is actually causing the issue.
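If you do decide to force HTTPS exclusively, here is a minimal sketch of that kind of rule, assuming Apache with mod_rewrite (the exact setup is an assumption, not something from the question):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

One related thing worth checking: if the admin's POST/DELETE calls are themselves being caught by a redirect like this, some clients will not resend the request body or auth headers after the redirect, which can produce exactly the 401-on-POST symptom described above; pointing those calls at the https URLs directly avoids the extra hop.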

Is it bad practice to serve a 403 error page for an application-level policy?

Say I have a website that allows anyone to log in through OAuth or similar, but only allows certain users to create or modify content. Should they somehow request the page for creating a new post, I'll do a check and redirect them if they don't have the appropriate permissions.
Is it considered acceptable to redirect to the "403 Error" page in this situation? There was no actual HTTP response with a 403 status code, and there was no database- or server-level query that failed; it was just my business logic. Am I misappropriating the idea of HTTP status codes if I serve a 403 error page with a specific explanatory message?
You are free to do so, but I think that if you want to expose an API you should use an actual 403 response, because it carries meaning that the client can handle nicely.
If you want to display a page to the client and will be using a redirect, you lose that meaning of the "403".
Isn't it better to just redirect them to an explanation page without involving the "403" code? Or better yet, redirect them to a more helpful place, like the sign-up page if that is what they have to do to make a post, or back to the original page with a floating message.
We want to help the user get closer to their goals instead of confusing them with technical error codes.
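A rough PHP sketch of both options (a real 403 for API clients, a friendlier redirect for browser users); $user, its can() method, the $isApiRequest flag, and the /signup path are hypothetical placeholders, not anything from the question:

<?php
// Hypothetical permission check: a real application will have its own
// user object, role check, and way of telling API from browser requests.
if (!$user->can('create_post')) {
    if ($isApiRequest) {
        // API clients get a real 403 they can handle programmatically.
        http_response_code(403);
        header('Content-Type: application/json');
        echo json_encode(['error' => 'You do not have permission to create posts.']);
    } else {
        // Browser users get sent somewhere more helpful instead.
        header('Location: /signup?reason=permission');
    }
    exit;
}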
There is often a lot of discussion about this very topic and it comes down to the following choices:
a 5xx? Of course not. This is not a server error.
a 400? Not really, it wasn't a malformed request.
a 401? Probably not; despite its name, 401 is about authentication (missing or invalid credentials), not application-level permissions. If your user has already logged in but has the wrong role, and you want to let them know, then use something else.
a 404? Perhaps, as the server can't find the resource for this particular user, but if you want to tell the user "well such a resource is available but you can't have it because you lack permissions" then go with something else.
a 403? Actually, this one makes a lot of sense. Here is the definition from the RFC
403 Forbidden: The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity. If the server does not wish to make this information available to the client, the status code 404 (Not Found) can be used instead.
In your question you mention your intention to redirect the user. If you are building a RESTful web service, just return the 403. If you are building an entire web app, you can catch the 403 and redirect...
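In PHP, returning the 403 with an explanatory body (as the RFC excerpt above suggests) could look roughly like this; the message text is only an example:

<?php
// Send a real 403 and describe the reason for the refusal in the entity,
// as the RFC excerpt recommends.
http_response_code(403);
header('Content-Type: text/html; charset=utf-8');
echo '<h1>403 Forbidden</h1>';
echo '<p>Your account does not have permission to create or edit posts.</p>';
exit;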

Can the actual index page be determined when requesting a directory?

When requesting http://example.com (using something like cURL), is there any way to determine what the actual page on the server side is? Is it /index.php, /index.html, /index.asp?
This is a completely client-side question.
There is definitely no guaranteed way; there may not even be a default page on the server at all. Although there's usually some sort of page, script, or template associated with a given URL, it can be buried under several layers of framework that make the name not very useful information anyway. You might be able to glean some extra info from the HTTP response headers, but that's pretty much all you get on the client side.
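Inspecting those headers is about all you can do. A quick sketch using PHP's get_headers() (curl -I works just as well); example.com is a placeholder:

<?php
// Retrieve the response headers and look for hints such as Server or
// X-Powered-By. None of this reliably reveals which index file, if any,
// actually handled the request.
$headers = get_headers('http://example.com/', true);
print_r($headers);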

Are there any safe assumptions to make about the availability of a URL?

I am trying to determine whether there is a way to check the availability of a potentially large list of URLs (more than 1,000,000) without having to send a GET request to every single one.
Is it safe to assume that if http://www.example.com is inaccessible (as in unable to connect to the server, or the DNS request for the domain fails), or if I get a 4XX or 5XX response, then anything from that domain will also be inaccessible (e.g. http://www.example.com/some/path/to/a/resource/named/whatever.jpg)? Would a 302 response (say for whatever.jpg) be enough to invalidate the first assumption? I imagine subdomains should be considered distinct, as http://subdomain.example.com and http://www.example.com may not point to the same IP?
I seem to be able to think of a counterexample for each shortcut I come up with. Should I just bite the bullet and send out GET requests to every URL?
Unfortunately, no: you cannot infer anything from 4xx, 5xx, or any other codes.
Those codes are for individual pages, not for the server. It's quite possible that one page is down while another is up, or that one has a 500 server-side error and another doesn't.
What you can do is use HEAD instead of GET. That retrieves the response headers for the page but not the body. This saves time on the server side (it doesn't have to send the content) and on yours (you don't have to buffer and then discard the content).
I also suggest you use keep-alive to speed up responses from the same server; many HTTP client libraries will do this for you.
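A minimal PHP/cURL sketch of both suggestions, using HEAD requests and reusing a single handle so keep-alive connections to the same host are reused; the URLs are placeholders:

<?php
// Check a list of URLs with HEAD requests, reusing one curl handle so
// connections to the same host stay open (keep-alive).
$urls = [
    'http://www.example.com/',
    'http://www.example.com/some/path/to/a/resource/named/whatever.jpg',
];

$ch = curl_init();
curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD: headers only, no body
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return output instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // report redirects as results
curl_setopt($ch, CURLOPT_TIMEOUT, 10);

foreach ($urls as $url) {
    curl_setopt($ch, CURLOPT_URL, $url);
    if (curl_exec($ch) === false) {
        echo "$url -> failed: " . curl_error($ch) . "\n";
        continue;
    }
    echo "$url -> " . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
}
curl_close($ch);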
A failed DNS lookup for a host (e.g. www.example.com) should be enough to invalidate all URLs for that host. Subdomains or other hosts would have to be checked separately though.
A 4xx code might tell you that a particular page isn't available, but you couldn't make any assumptions about other pages from that.
A 5xx code really won't tell you anything. For example, it could be that the page is there, but the server is just too busy at the moment. If you try it again later it might work fine.
The only assumption you should make about the availability of a URL is that getting a URL can and will fail.
It's not safe to assume that a subdomain request will fail when a parent one does, mainly because in between your two requests your network connection can go up, down, or generally misbehave. It's also possible for the domains themselves to change between requests.
Even ignoring all internet connection issues, you are still dealing with live websites that can and will change constantly. What is true now might not be true in five minutes, when they decide to alter their page structure or change the way they display a particular page. Your best bet is to assume that any GET can fail.
This may seem like an extreme viewpoint, but these events will happen, and how you handle them will determine the robustness of your program.
First, don't assume anything based on a single page failing. I have seen many cases where IIS will continue to serve static content but be unable to serve any dynamic content.
You have to treat each hostname as unique; you cannot assume subdomain.example.com and example.com point to the same IP, and even if they do, there is no guarantee they are the same site. IIS, again, has host headers that allow you to run multiple sites on a single IP address.
If the connection to the server actually fails, then there's no reason to check URLs on that server. Otherwise, you can't assume anything.
In addition to what everyone else is saying, use HEAD requests instead of GET requests. They function the same, but the response doesn't contain the message body, so you save everyone some bandwidth.

Is it safe to redirect to the same URL?

I have URLs of the form http://domain/image/⟨uuid⟩/42x42/some_name.png. The Web server (nginx) is configured to look for a file /some/path/image/⟨uuid⟩/thumbnail_42x42.png, and if it does not exist, it sends the URL to the backend (Django via mod_wsgi) which then generates the thumbnail. Then the backend emits a 302 redirect to exactly the same URL that was requested by the client, with the idea that upon this second request the server will notice the thumbnail file and send it directly.
The question is, will this work with all the browsers? So far testing has shown no problems, but can I be sure all the user agents will interpret this as intended?
Update: Let me clarify the intent. Currently this works as follows:
The client requests a thumbnail of an image.
The server sees the file does not exist, so it forwards the request to the backend.
The backend creates the thumbnail and returns 302.
The backend releases all the resources, letting the server share the newly generated file to current and subsequent clients.
Having the backend serve the newly created image is worse for two reasons:
Two ways of serving the same data must be created;
The server is much better at serving static content. What if the client has an extremely slow link? The backend is not particularly fast nor memory-efficient, and keeping it in memory while spoon-feeding the client can be wasteful.
So I keep the backend working for the minimum amount of time.
Update²: I’d really appreciate some RFC references or opinions from people with experience across lots of browsers. All these affirmative answers are pleasant, but they look somewhat groundless.
If it doesn't work, the client is broken. Most clients will follow redirects up to some maximum number, so yes, it should be fine, unless your backend fails to generate the thumbnail for some reason (in which case the client would keep redirecting until it hits that limit).
You could instead change the URLs to something like http://domain/djangoapp/generate_thumbnail and have that return the thumbnail with the proper Content-Type, and so on.
Yes, it's fine to redirect to the same URI as you were at previously.
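For reference, the fallback described in the question maps naturally onto nginx's try_files; this is only a sketch, and the root path, URL pattern, and backend address are assumptions rather than the asker's actual configuration:

# Serve the pre-generated thumbnail if it is already on disk ...
location ~ ^/image/(?<uuid>[^/]+)/(?<size>\d+x\d+)/ {
    root /some/path;
    try_files /image/$uuid/thumbnail_$size.png @thumbnailer;
}

# ... otherwise hand the request to the backend, which generates the file
# and answers with a 302 back to the same URL.
location @thumbnailer {
    proxy_pass http://127.0.0.1:8000;
}

How the backend is actually reached (proxy_pass, uwsgi_pass, or something else) depends on the deployment.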
