This is about classic ASP.
I have found something serious: a few seconds after I visit some pages (ASP, txt, HTML) on my web site, my broadband provider visits the same pages.
Do you know any ASP code to stop that?
I know I could add ASP code to block the broadband provider's IP addresses, but I do not know all of them.
Any suggestions would be much appreciated.
Guess what would happen if you blocked the IP range of your own internet provider: your own access to your web pages would be blocked too.
Your ISP could be accessing your pages automatically with a robot of some kind in order to build a cache.
In that case, what you can probably do is inspect the User-Agent header of each visitor and only serve pages to requests that appear to come from an actual browser such as Chrome or Firefox.
But keep in mind that you will also be blocking legitimate robots such as search engine crawlers like Googlebot.
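Since the question is about classic ASP, a rough sketch of such a User-Agent check might look like the following. The browser names matched here are just examples, and the header is trivially forged, so treat this as a speed bump rather than real security:

```asp
<%
' Sketch: only serve the page if the User-Agent looks like a common browser.
' Note: this header can be faked, and it will also block search engine crawlers.
Dim ua
ua = LCase(Request.ServerVariables("HTTP_USER_AGENT"))
If InStr(ua, "chrome") = 0 And InStr(ua, "firefox") = 0 _
   And InStr(ua, "safari") = 0 Then
    Response.Status = "403 Forbidden"
    Response.Write "Access denied."
    Response.End
End If
%>
```

You would include this at the top of each page you want to protect, e.g. via a server-side include.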
Anyway, if you have something like an admin section that should only be accessible to you, I would suggest securing it with sessions, cookies and/or an SSL connection.
Good Luck...
If I set up a simple web server online (eg nginx), and generate a very large random string (such that it is unguessable), and host that endpoint on my domain, eg
example.com/<very-large-random-string>
would I be safe in say, hosting a webapp at that endpoint with no authentication to store my personal information (like a scratch-pad or notes kind of thing)?
I know Google Docs does this; is there anything special one has to do (again, e.g. for nginx) to prevent someone from getting a list of all available pages?
I guess what I'm asking is: is there any way for a malicious actor to find out that such a page exists, preferably irrespective of which web server I use?
I'd be pretty alarmed if my online bank started using this system, but it should give you a basic level of security. Bear in mind that this is security through obscurity, which is rather frowned upon and will immediately turn into no security whatsoever the moment someone discovers the hidden URL.
To prevent this from happening, you will need to take a few precautions:
Install an SSL certificate on your server, and always access the URL via https, never via http (otherwise the URL path will be sent in plain view and visible to everyone along the way).
Make sure your secure document contains no outgoing links. This includes not only hyperlinks (<a href="...">) but also embedded images, stylesheets, scripts, media files and so on. Otherwise the URL will be leaked to other domains via the Referer request headers.*1
(A bit of a no-brainer, but) make sure there are also no inbound links to this page. Although they aren't so common now, web hosts used to generate automatic "web stats" pages showing the traffic to each web domain. Some content management systems generate a site map automatically. This would be just as bad.
Disable directory browsing on your server. In other words, make sure that someone who visits the directory level above your hidden directory isn't presented with a list of subdirectories.
Bear in mind that the URL will always be visible in your address bar and browser history, and possibly in other places like your browser's cookie jar. Your browser will probably provide the rest of the URL by auto-complete when someone types the domain into your address bar.
*1: Actually, your browser will only send a Referer header when you access other https pages, but still...
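Since the asker mentioned nginx specifically, here is a minimal sketch of the server-side precautions above (https only, no directory listings). The certificate paths and domain are placeholders:

```nginx
# Sketch: force https and disable directory listings for the hidden path.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;   # never serve the URL over plain http
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/example.com.key;   # placeholder path

    root /var/www/example;
    autoindex off;   # no directory listings (this is also nginx's default)
}
```

Note that `autoindex off` is the default in nginx, so directory listing is only a risk if it was explicitly enabled somewhere.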
I've come across a medical provider website that serves its pages over aspx. This provider has new client forms within this same aspx page. I contacted the vendor that built the website asking why they aren't using https. They assured me they are using https encryption within the iframes.
My question: Is this response total BS?
It seems to me that a very simple way to attack this website would be to spoof the site with my own aspx page that redirects traffic to me. Without https, the browser has no way to verify the site's identity, so nobody would be able to tell whether they were on my website or the actual one.
This is all HIPAA protected info (in the US) that's transmitted, so there are laws about how it must be protected. It seems that the contractor is being pretty negligent, but maybe I'm missing something.
FYI, I'm not posting the website on purpose because I don't want to invite hacking something I think is insecure.
Without knowing how the iframes are used, it's hard to assess the security issues the site may have.
But it sounds like they may be gathering the new clients' info on insecure forms and then posting it to an https endpoint. As Troy Hunt explains in this article, this is not a secure practice.
Obviously as you already allude to, without https, a man-in-the-middle attack could easily post the complete form to an attacker site without the user knowing as the integrity and/or origin of the page are not guaranteed.
Even if they are serving the form in an iframe over https, if the containing page is served over http, the iframe can be replaced by a man-in-the-middle attack.
First let me explain the problem:
I have a little portal that any user on the internet can access. This portal is responsible only for authenticating the user against a DB. If the user is validated, the portal shows a list of links that redirect to multiple web sites (these sites are written in various languages such as PHP, ASP.NET and Java). If the user enters the URL of one of these web sites directly in the browser, the user can still access it. I want the sites to be viewable only when the request comes from the portal, not when the URL is typed directly into the browser.
I have a local server with IIS 6, and the portal and websites are all on this server.
Can anyone help?
Thanks in advance.
Gabe
If possible, host those applications as virtual directories under your authentication application and restrict access to authenticated users only; that should solve the problem.
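For the classic ASP pages specifically, a per-page guard could look like the sketch below, assuming the portal sets a session flag on login and the sites live under the same IIS application so the session is shared. (This would not help the PHP or Java sites, which don't share ASP session state; the flag name and portal URL are hypothetical.)

```asp
<%
' Sketch: the portal sets Session("Authenticated") = True after login;
' every protected ASP page includes this check and bounces anonymous
' visitors back to the portal's login page.
If Session("Authenticated") <> True Then
    Response.Redirect "/portal/login.asp"   ' placeholder portal URL
End If
%>
```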
I don't know if you are able to do this, but you could try this with an ISA/Forefront server.
You can configure it to do the redirecting for you if someone enters the website URL directly. That way users will need to authenticate themselves, and you can let ISA or Forefront handle the authentication part.
This is implemented a lot for OWA, but can also be used for other purposes (I've done this for several SharePoint solutions).
Of course you do need an extra server, licences and all that stuff.
I don't know how you could pull something like this off with only IIS. Perhaps with some IIS modules, but I haven't got any experience with those, so I don't know for sure.
I notice that some sites are copying the content of one of my client's sites using automated agents. I want to detect their requests and show them a captcha to stop them from copying the site content.
Is there any way to detect them?
This is a complex problem and a game of cat and mouse. To make it slightly difficult:
Ban IPs that hit the site repeatedly; a normal user would not request ALL the pages
Ban public proxies; lists are available with a quick Google search
Redirect any request from banned IPs/proxies to a captcha page
Typically an "automated agent" accesses a lot of data in a short period, much more than a typical user. You would need to set up something to track the IP addresses of all users, see whether any IP stands out, and block it.
Of course, this is made more difficult by proxies, dynamic IPs, etc.
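Assuming the site is classic ASP (as elsewhere in this thread), a crude per-IP request counter can be kept in the Application object. This is only a sketch: the counter lives in memory, resets whenever the application recycles, never expires, and the threshold of 100 is arbitrary:

```asp
<%
' Sketch: count requests per client IP in the Application object and
' redirect heavy hitters to a captcha page. Not production-ready:
' no expiry, resets on app recycle, and proxies share one IP.
Dim ip, key, hits
ip = Request.ServerVariables("REMOTE_ADDR")
key = "hits_" & ip
Application.Lock
If IsEmpty(Application(key)) Then
    Application(key) = 1
Else
    Application(key) = Application(key) + 1
End If
hits = Application(key)
Application.Unlock
If hits > 100 Then
    Response.Redirect "captcha.asp"   ' placeholder captcha page
End If
%>
```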
I have a directory with my media files and I need them not to be displayed on other sites.
The server doesn't support .htaccess because it uses nginx.
How can I enable hotlink protection for my files?
Thank you.
The easiest way would be to check the Referer header in the HTTP request. Basically, if that header does not contain a URL from your site, then the request could be hotlinking.
This approach has the following problems:
The Referer header can be forged -> hotlinking still works
Not all user agents send the Referer header -> a legitimate user might not get the content.
You could also set a cookie while the user is browsing your site, and check for the existence of that cookie when the user accesses the streaming content.
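In nginx the Referer check described above is built in via the referer module. A minimal config fragment might look like this (the domain and path are placeholders for your own):

```nginx
# Sketch: block hotlinking of files under /media/ using nginx's
# built-in referer check. "none" allows requests with no Referer
# (direct visits), "blocked" allows Referers stripped by proxies.
location /media/ {
    valid_referers none blocked mysite.example *.mysite.example;
    if ($invalid_referer) {
        return 403;
    }
}
```

Whether to include `none` is a trade-off: dropping it blocks browsers that don't send a Referer, but keeping it means a hotlinker can get through by simply not sending one.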
The details may be dated, but if you decide to go the referrer route, Igor gives an example of referrer mapping for image hotlink protection that might be useful here: http://nginx.org/pipermail/nginx/2007-June/001082.html
If you are using memcached, you could also store client IP addresses for a time and only serve your streaming media if an unexpired client IP is found in the cache. The client IP gets cached during normal browsing, ensuring that anyone viewing your streaming content has also recently been visiting your site.
On my HostGator site, they used nginx as a proxy in front of Apache (nginx + Apache); maybe that will help you. Also, if you have access to the logs and you see a lot of traffic from one IP, I would investigate, and if it points to a site, block that other web server's IP. PHP's file_get_contents isn't stopped by .htaccess or anything else I know of besides blocking the IP.