I read about "HTTP persistent connection" but somehow I don't seem to understand what "persistent" means in this context.
Could you elaborate?
It means the server doesn't close the socket once it's finished pushing out the response (so the length of the response has to be otherwise indicated, via headers or chunking), so the client can make other requests on the same socket. A web page often requests several other pieces (images, CSS, scripts, ...) on the same server as the page itself, so reusing the socket for some of those further requests to the same server can reduce overall latency compared to closing the original socket and opening new ones for all the follow-on requests.
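To make that concrete, here is a minimal C# sketch (example.com is just a placeholder host). HttpClient speaks HTTP/1.1 with keep-alive by default and pools connections, so the second request can reuse the TCP socket opened by the first:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class KeepAliveDemo
{
    static async Task Main()
    {
        // One HttpClient, one connection pool: after the first response the
        // TCP (and TLS) connection stays open and is reused for the second
        // request to the same host, avoiding a fresh handshake.
        using var client = new HttpClient();

        var first = await client.GetAsync("https://example.com/");
        Console.WriteLine((int)first.StatusCode);

        var second = await client.GetAsync("https://example.com/favicon.ico"); // reuses the pooled socket
        Console.WriteLine((int)second.StatusCode);
    }
}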
All the discussion till now has been from the browser side of things. The browser first requests the actual page, parses it, and finds all the other resources it needs before it can render that page. The browser then requests these resources and other dependent resources one by one. Maintaining a persistent connection is very efficient here, as the overhead of creating and destroying connections is avoided.
Now from the web server side of things, a persistent connection would be one that allows it to "push" content to the web browser. Plain HTTP doesn't support this, so there are a few workarounds in JavaScript where the page is basically refreshed after a while.
You can see this trick being used by many web-based email providers, which continuously check in the background for new mail. This gives the impression that when a new mail arrives, the server "pushes" the notification to the web browser, but in fact it is the web browser that keeps checking the server for new mail.
One more point: we don't actually see any page refresh, because of another trick that allows only specific parts of the page to be refreshed by the request (hint: AJAX).
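The same polling pattern, sketched in C# rather than the browser JavaScript described above (the /check-mail endpoint is purely hypothetical):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class MailPoller
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // The browser version does exactly this with a JavaScript timer and an
        // AJAX request: ask the server "anything new?" every few seconds and
        // update only the relevant part of the page when the answer changes.
        while (true)
        {
            string reply = await client.GetStringAsync("https://mail.example.com/check-mail"); // hypothetical endpoint
            if (reply.Contains("\"newMail\": true"))
                Console.WriteLine("Looks like a push, but it is really the client asking.");

            await Task.Delay(TimeSpan.FromSeconds(10)); // poll interval
        }
    }
}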
I think this is about switching between http and https for the site in the browser. If you had https:// before and your .htaccess file now serves the site over http, this problem can be created via the Yoast plugin's one-page crawl. Don't worry, it is not an important error. For hackers, though, this can be a way in: if your SSL connection is left empty, they can attach their own page or domain to your SSL connection,
e.g. you have http://www.example.com, and when you browse https://www.example.com in the browser some other link opens under your site's domain.
The solution is to always use the full address for your website: to protect your website against hackers, use SSL and serve the https:// version of your pages.
Then this problem should never be seen on any test site or page.
Is there any way to recognize (by processing HTTP packets or filtering TCP connections) whether several requests belong to the opening of one URL or another?
Let me try to explain in more detail.
When we open any page in the browser, it also issues various requests to download images, resources and scripts. I'd like to know which set of requests was triggered by opening a given site (call it the main site).
I can look at the Referer header, but in that case how do I distinguish a request for a resource from a request for a different page whose link was clicked on the main site? In both cases the Referer will be the same.
I suspect this problem cannot be solved, but I hope I'm mistaken. Or perhaps you can offer some workaround.
If you are in control of the site, set a cookie or a URL parameter and check if it exists in subsequent requests.
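A rough ASP.NET-style sketch of that idea (the cookie name is made up; note that a link clicked on the main page would carry the same cookie as a resource request, so this only tells you which page view a request followed):

using System;
using System.Web;

public static class PageViewTagger
{
    // Call this when serving the "main" page: tag the browser with a per-view id.
    public static void TagMainPage(HttpContext ctx)
    {
        string viewId = Guid.NewGuid().ToString("N");
        ctx.Response.Cookies.Add(new HttpCookie("page-view-id", viewId));
    }

    // Call this for any later request: it returns the id of the page view the
    // request followed, or null if the browser never received a tagged page.
    public static string GetPageViewId(HttpContext ctx)
    {
        HttpCookie cookie = ctx.Request.Cookies["page-view-id"];
        return cookie != null ? cookie.Value : null;
    }
}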
I'm using Response.Redirect to serve media files, but don't want people to see the direct url to the files nor the subdomain (host). Is it possible to fake a 'get', and hide host and referer?
Use a Server.Transfer to transfer the request processing to another page.
When you use the Transfer method, the state information for all the
built-in objects are included in the transfer. This means that any
variables or objects that have been assigned a value in session or
application scope are maintained. In addition, all of the current
contents for the Request collections are available to the .asp file
that is receiving the transfer.
Server.Transfer acts as an efficient replacement for the
Response.Redirect method. Response.Redirect specifies to the browser
to request a different page. Because a redirect forces a new page
request, the browser makes two requests to the Web server, so the Web
server handles an extra request. IIS 5.0 introduced a new function,
Server.Transfer, which transfers execution to a different ASP page on
the server. This avoids the extra request, resulting in better overall
system performance, as well as a better user experience.
Since the browser doesn't make another request, the URL is completely hidden from the browser, but it still receives the file that would have been served by your redirect URL.
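A hedged sketch of what that could look like in an ASP.NET code-behind (Download.aspx and ServeMedia.aspx are made-up page names):

using System;
using System.Web.UI;

public partial class Download : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Instead of Response.Redirect("/media/real-location/file.mp4"), hand
        // processing to another page on the server. The browser keeps showing
        // /Download.aspx and never learns the real location, but still receives
        // whatever ServeMedia.aspx writes to the response.
        Server.Transfer("~/ServeMedia.aspx", true); // true = preserve form/query string
    }
}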
What you want is not possible - for a simple reason: To have the client download the file directly from another source, you need to communicate the information about the location to the client in some way: If the client doesn't know the location, it can't download from there.
Whatever you try in the way of obfuscation, if it is decodable for the client browser, it is decodable for a human being armed with firebug.
I'm thinking about my website architecture, which uses HTTPS. I now have a CDN server hosting images, CSS and other static files.
The website itself uses HTTPS to secure sensitive customer data. Will using static images loaded from, for example, 'http://cdn.example.com/images/test.jpg' on the website 'https://www.example.com' pop up a "Loading insecure data" message?
So: loading external NOT SECURED data on a SECURED website.
Will this cause a popup warning "Loading insecure data, continue?"?
Thx!
Yes.
If a page is loaded over HTTPS then every resource it uses should also be loaded over HTTPS.
Otherwise a man-in-the-middle could replace images with misleading ones (or ones that exploit buffer overflow issues in browsers to execute code) and scripts with ones that do different things (such as leak data to the third party).
You have to load every resource over https to get rid of that warning. You can either move the resources to your server that supports encryption, or link to an external resource over https.
If you really want to load http content in an https page, you can follow this method: a backend handler is in charge of downloading the required content and exposing it through self-forged links that include a hash. The security issue is then fixed and the content becomes accessible through https.
Dealing with HTTP content in HTTPS pages
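Very roughly, the handler side of that approach could look like this in ASP.NET (the handler name, query parameters and signature check are simplified placeholders; the linked article describes the real scheme):

using System.Net;
using System.Web;

// e.g. /proxy.ashx?url=...&hash=... : fetch an http:// resource server-side and
// re-serve it over the page's own https connection, so the browser itself never
// makes a mixed-content request.
public class InsecureContentProxy : IHttpHandler
{
    public void ProcessRequest(HttpContext ctx)
    {
        string url = ctx.Request.QueryString["url"];
        string hash = ctx.Request.QueryString["hash"];

        // Only fetch URLs your own code signed when generating the link,
        // otherwise this becomes an open proxy. IsValidSignature is a placeholder
        // for e.g. an HMAC of the url with a server-side secret.
        if (string.IsNullOrEmpty(url) || !IsValidSignature(url, hash))
        {
            ctx.Response.StatusCode = 403;
            return;
        }

        using (var web = new WebClient())
        {
            byte[] data = web.DownloadData(url);
            ctx.Response.ContentType = "application/octet-stream";
            ctx.Response.BinaryWrite(data);
        }
    }

    private static bool IsValidSignature(string url, string hash)
    {
        return !string.IsNullOrEmpty(hash); // placeholder check only
    }

    public bool IsReusable { get { return true; } }
}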
I did this recently.
I have a Raspberry Pi running nginx and PHP.
I use HTTPS to handle requests from the web to the PHP code, which in turn sends http requests to my local network to assemble the page. Works well.
A browser sends a GET request for a static web page to a server. The server sends back an HTTP OK response with the HTML page in the HTTP body. By looking at the Content-Length field, or by looking for the terminating chunk or some other delimiter for other encodings, the browser can know whether it has received the web page and, subsequently, all its embedded objects (images etc.). Is it correct to say that in this case the browser always knows when a web page has completely loaded and that it will see no further network traffic?
Now if the page is dynamic (let's say Facebook or Gmail), where you might receive notifications or parts of the page get updated using AJAX or JavaScript running in the background, the browser should likewise know when the page has loaded. But what if the server is pushing some updates to the client? Is it possible in this scenario for the browser to know when it has received the full update?
So, is there any scenario in which a browser doesn't know when it has fully received the data (static or dynamic) it has requested from a web server or push-based updates the server is forwarding to it?
For the static case I can only imagine one such scenario: when Content-Length is not set. The server is not required to send it.
Potentially, of course, in a page containing scripts, one could also have other scenarios where the script loads bits and pieces one by one with delays (including the AJAX scenario you mentioned). This way the browser would not know in advance either. In such a case it would know "for the moment" that the page has loaded completely, but the next action from the script would invalidate that assertion again.
You do not need AJAX to get into a situation where not all elements of the page are loaded even after the page itself has loaded. A little JavaScript is all you need (it's been a while since I last worked with JS, so there might be some syntax errors):
<img id="dyn_image" src="/not_clicked.gif">
<input type="button" onclick="document.getElementById('dyn_image').src='/clicked.gif'">
There are cases where the server uses some kind of push technology, for example Comet. In this case a request (generally an Ajax request) is sent without receiving any response (obviously no HTTP headers either), but leaving the TCP connection open. This may take a long time, but it can still be considered a sub-case of Ajax calls.
The other case is HTML5's WebSocket technology. In a WebSocket the server side can push data to the client side without explicit request from the client side.
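A minimal C# client-side sketch of that push channel (the wss://example.com/updates endpoint is made up):

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class PushListener
{
    static async Task Main()
    {
        using var ws = new ClientWebSocket();
        await ws.ConnectAsync(new Uri("wss://example.com/updates"), CancellationToken.None);

        var buffer = new byte[4096];
        // The server may send a message at any moment; there is no request/response
        // pairing, so the client cannot know in advance when, or whether, more
        // data will arrive on this connection.
        while (ws.State == WebSocketState.Open)
        {
            WebSocketReceiveResult result =
                await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            if (result.MessageType == WebSocketMessageType.Close)
                break;
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
        }
    }
}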
These two can be combined, so the answer to your question is: yes, there can be cases where you cannot tell whether the network traffic is over or not. What they have in common is that the client side must keep a channel open to the server.
I'm working on a web site which contains sections that need to be secured by SSL.
I have the site configured so that it runs fine when it's always in SSL; I see the SSL padlock in IE7/IE8/Firefox/Safari/Chrome.
To implement the SSL switching, I created a class that implemented IHTTPModule and wired up HTTPApplication.PreRequestHandlerExecute.
I go through some custom logic to determine whether or not my request should use SSL, and then I redirect. I have to deal with two scenarios:
Currently in SSL and request doesn't require SSL
Currently not in SSL but request requires SSL
I end up doing the following (where ctx is HttpContext.Current and pathAndQuery is ctx.Request.Url.PathAndQuery):
// SSL required and current connection is not SSL
if (requestRequiresSSL & !ctx.Request.IsSecureConnection)
ctx.Response.Redirect("https://www.myurl.com" + pathAndQuery);
// SSL not required but current connection is SSL
if (!requestRequiresSSL & ctx.Request.IsSecureConnection)
ctx.Response.Redirect("http://www.myurl.com" + pathAndQuery);
The switching back and forth now works fine. However, when I go into SSL mode, FireFox and IE8 warns me that my request isn't entirely encrypted.
It looks like my module is short circuiting my request somehow, would appreciate any thoughts.
I would suspect that when you determine which resources require encryption and which do not, you are not including the images, or some headers and footers, or even CSS files, if you use any.
Because you always drop SSL for such content, it can happen that part of the page (the main HTML) requires SSL, but the subsequent request for an image on that page does not.
The browser is warning you, that some parts of the page were not delivered using SSL.
I would check whether the request is for HTML, and only then drop SSL if needed. Otherwise, keep it the way it is (most probably images and the like are referenced with relative paths rather than a full-blown URL).
I.e., if you have:
<html>
<body>
Some content...
<img src="images/someimage.jpg">
</body>
</html>
and you request this page using SSL, but your evaluation of requestRequiresSSL does not take the images into account as secured resources, the image will be requested over http, not https, and you will see the warning.
Make sure that when a resource is requested and you evaluate requestRequiresSSL, you check the referrer and whether the resource is an image (or other non-HTML content), and only drop SSL for HTML:
// SSL not required but current connection is SSL, and this is a page rather than an image/CSS/script
if (!requestRequiresSSL && ctx.Request.IsSecureConnection && isHtmlContent)
    ctx.Response.Redirect("http://www.myurl.com" + pathAndQuery);
To determine isHtmlContent (assuming you serve images from a disk location rather than from a database, etc.), just check the resource filename's extension (.aspx, .asmx, .ashx, .html, etc.).
That way, if the connection is encrypted but the resource itself is not HTML and is not marked as requiring encryption, you will not drop the encryption.
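One hedged way to compute isHtmlContent in that module, assuming resources can be classified by file extension:

using System.IO;
using System.Web;

static class SslSwitchHelpers
{
    // Treat only page/handler extensions as "HTML content"; images, CSS and
    // scripts then keep whatever scheme the page that referenced them used.
    // The extension list here is just the common one from the answer above.
    public static bool IsHtmlContent(HttpRequest request)
    {
        string extension = Path.GetExtension(request.Url.AbsolutePath).ToLowerInvariant();
        return extension == ".aspx"
            || extension == ".asmx"
            || extension == ".ashx"
            || extension == ".html"
            || extension.Length == 0; // extensionless routes, if any
    }
}

In the module you would then set bool isHtmlContent = SslSwitchHelpers.IsHtmlContent(ctx.Request); before the redirect check.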
I highly recommend using this (free / open source) component to do what you're trying:
http://www.codeproject.com/KB/web-security/WebPageSecurity_v2.aspx
Any content that is not normally handled by .NET (such as plain HTML and most graphic files) will not execute the HttpModule, because it doesn't go through .NET.
Your best bet is to just handle this at the IIS level. See the following for info on how to configure your server.
http://www.jameskovacs.com/blog/HowToAutoRedirectToASSLsecuredSiteInIIS.aspx
I highly recommend this product:
http://www.e2xpert.com/web/Http-Https-Switch.aspx
It is professional and easy to use. It comes with a powerful configuration tool, by which just one click can finish the entire configuration for you.
Just use SSL throughout your site, for all pages and for all images/scripts/stylesheets. That just makes everything oh-so-simple. IE and Firefox will no longer complain, you will no longer have crazy modules trying to guess whether any given request should be redirected, etc.
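If you go that way, the module becomes trivial. A minimal sketch using the same IHttpModule machinery as the question, with no per-request guessing:

using System;
using System.Web;

public class ForceHttpsModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            if (!ctx.Request.IsSecureConnection)
            {
                // Send every plain-http request to its https equivalent.
                string url = "https://" + ctx.Request.Url.Host + ctx.Request.Url.PathAndQuery;
                ctx.Response.Redirect(url, true);
            }
        };
    }

    public void Dispose() { }
}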
For the average user it's nearly impossible to make an informed decision when the only thing Firefox vaguely tells them is, "Parts of the page you are viewing were not encrypted before being transmitted over the Internet." This is about as helpful as the "something's wrong" engine light, and in fact it tells them only after their information has been transferred.
At the least, this message should be accompanied by a list giving the URL, the type of content (images, JavaScript, CSS) and what it means to the user. BTW, I get this message when using Gmail.
Until that happens, as others have stated, your code should work once you determine the unsecured elements. You can then use Firebug (http://getfirebug.com) to check the content being delivered over the connection.