I'm building an ASP.NET web app with lots of AJAX. How much of a performance cost is there in running the entire app over HTTPS? What about caching? Are scripts and CSS cached when served over HTTPS? If they're initially served from HTTP, are they available on the HTTPS pages?
Thanks
"In my experience, servers that are heavy on dynamic content tend to be impacted less by HTTPS because the time spent encrypting (SSL-overhead) is insignificant compared to content generation time."
"Making lots of short requests over HTTPS will be quite a bit slower than HTTP, but if you transfer a lot of data in a single request, the difference will be insignificant."
So there should be no major performance hit on your server, but requests will take a little longer because each new connection requires a TLS handshake (requests reusing a keep-alive connection skip it). For more information, you should read:
HTTP vs HTTPS performance
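If you want to measure the handshake cost against your own server, curl can break down where the time goes on a single request; a quick sketch (the URL is a placeholder, and time_appconnect marks the point at which the TLS handshake completed):

    curl -o /dev/null -s -w "connect: %{time_connect}s  TLS done: %{time_appconnect}s  total: %{time_total}s\n" https://example.com/

Running the same URL over http:// and https:// shows how much of the total is the handshake; later requests on a keep-alive connection avoid it.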
As the title says, I currently have a website with HTTPS on sensitive pages and cached HTTP on the rest for speed.
I know HTTP/2 is faster than HTTPS. What I don't know is whether HTTP/2 is faster than regular unencrypted HTTP.
Would I see performance improvements if I encrypted everything with SSL and enabled HTTP/2, compared to using HTTP without encryption but with caching?
It completely depends on your site.
However, saying that, there is practically zero noticeable speed penalty for using HTTPS nowadays - unless you are on 20-year-old hardware or serving huge content (e.g. you are a streaming site like Netflix or YouTube). In fact, even YouTube switched to HTTPS for practically all their users: https://youtube-eng.blogspot.ie/2016/08/youtubes-road-to-https.html?m=1
There is a small initial connection delay (typically 0.1 of a second), but after that there is practically no delay, and if you are on HTTP/2 then the gains it will give to most sites will more than make up for the tiny, unnoticeable delay that HTTPS might add.
In fact, if you already have some of your site on HTTPS, then one of two things is happening: either your HTTP pages use HTTPS resources (e.g. common CSS shared by both HTTP and HTTPS pages), in which case you are already paying this connection delay even on HTTP pages, or you are serving those resources over both protocols and making your HTTPS users download them again when they switch.
You can test the differences with a couple of sample sites on my blog, which should give you some indication: https://www.tunetheweb.com/blog/http-versus-https-versus-http2/ - it is a response to the https://www.httpvshttps.com website, which I feel doesn't explain this as well as it should.
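For what it's worth, turning on HTTP/2 once you already have HTTPS is usually a one-line server change; a minimal sketch for nginx 1.9.5 or later (host name and certificate paths are placeholders):

    server {
        # Browsers only negotiate HTTP/2 over TLS, so it goes on the HTTPS listener.
        listen 443 ssl http2;
        server_name example.com;

        ssl_certificate     /etc/ssl/certs/example.com.crt;
        ssl_certificate_key /etc/ssl/private/example.com.key;
    }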
I have a WordPress website, and I am getting more and more errors on it day by day. So I would like to change it from plain HTTP to HTTPS. Can you please explain whether this would be useful and make my website more secure?
If your website's address does not start with https, it means you do not have an SSL certificate installed. Most modern web browsers treat any website without SSL as insecure. This may be one of the reasons for your website's issues.
I found the following answers to the question of the difference between HTTP and HTTPS:
Difference between HTTP and HTTPS
To learn about the reasons for using SSL, follow the link given below:
Reasons for using SSL
10,000ft view...
HTTP is an unencrypted protocol for sending and retrieving data from servers in a web browser (among other uses). HTTPS is the same protocol but wrapped in SSL, a security layer that encrypts communications between the browser and the server. This is what banks and other websites use to ensure your data (like financial info) is protected when sent to your browser and cannot be read by someone on the same network.
Check out articles like this and google the topic for more info.
Hope this helps.
I am trying to solve an issue with uploads to our web infrastructure.
When a user uploads media to our site, it is proxied (via our Web Proxy tier) to a Java backend with a limited number of threads. When a user has a slow connection or a large upload, this holds one of the Java threads open for a long period of time, reducing overall capacity.
To mitigate this I'd like to implement an 'upload proxy' which will accept the entire HTTP POST data of the upload and, only when it has received all of the data, quickly proxy that POST to the Java backend, pushing the problem of the upload thread being held open onto an HTTP proxy.
Initially I found Apache Traffic Server has a 'buffer_upload' plugin, but it seems a bit bleeding edge and has no support for regex in URLs, although it would solve most of my issues.
Does anyone know a proxy product that would be able to do what I am suggesting (aside from Apache Traffic Server)?
I see that Nginx has fairly detailed buffer settings for proxying, but it doesn't seem (from the docs/explanations) to wait for the whole POST before opening a backend connection/thread. Do I have this right?
Cheers,
Tim
Actually, nginx always buffers requests before opening a connection to the backend. It is possible to turn off response buffering using the proxy_buffering directive, or by setting an X-Accel-Buffering response header for per-response control of buffering.
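A minimal sketch of what this looks like in an nginx location block, with a hypothetical /upload path and upstream name; newer nginx versions (1.7.11+) also expose the request-buffering behaviour explicitly via proxy_request_buffering:

    location /upload {
        # nginx reads the whole request body (into memory or a temp file)
        # before it opens the connection to the backend.
        client_max_body_size    100m;       # assumed upload size limit
        proxy_request_buffering on;         # the default; shown for clarity

        # Response buffering can be switched off here...
        proxy_buffering off;
        # ...or per response, by the backend sending "X-Accel-Buffering: no".

        proxy_pass http://java_backend;     # hypothetical upstream
    }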
I've recently come across an HTTP web accelerator called Varnish. From what I've read, Varnish speeds up delivery of a website by optimizing every step of HTTP communication with the HTTP server, using a reverse-proxy configuration.
My question is: if you have a website whose caching mechanism is configured all the way down to static HTML files, how much more of an effect will Varnish have? Does a reverse proxy cut down the work performed by the HTTP server to process the request? If you have everything extensively cached on the server side (HTTP headers, ETags, Expires headers, database caching, fragment and page caching), what more will an HTTP accelerator do to improve on this?
Firstly, we should differentiate between two different types of caching that go on in a normal web system: HTTP caching and server-side caching.
HTTP caching is controlled by HTTP headers, notably as you point out ETag and the various expiry mechanisms (including Expires and various aspects of Cache-Control). This is all covered in RFC 2616 (HTTP), section 13, and allows HTTP caches to return a response to an HTTP request from a client without having to go back to the origin server. In effect, the HTTP caching mechanism allows another machine between client and server to act as if it's the server, in certain cases. This is actually what varnish is doing, as we'll see in a minute; another common use that many people are familiar with is when ISPs provide an HTTP cache within their network, that can generally respond faster to their subscribers (and so improve perceived performance) than the origin servers outside their network.
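As a concrete (made-up) illustration, a cacheable response might carry headers like these; any HTTP cache along the way can use them to answer repeat requests itself, or to answer a conditional request with a 304 Not Modified instead of resending the body:

    HTTP/1.1 200 OK
    Content-Type: text/css
    Content-Length: 18342
    Cache-Control: public, max-age=86400
    Expires: Thu, 01 Jan 2026 00:00:00 GMT
    Last-Modified: Wed, 31 Dec 2025 12:00:00 GMT
    ETag: "5e1f-60a3b2c1"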
Server-side caching includes database caching, and fragment and page caching, which are really all just ways of the web server avoiding doing some expensive operation (say, a database query, or rendering a particular piece of a template) by doing it once then keeping the result in a cache for a while.
I said earlier that varnish was an HTTP cache, which means that straight away it's able to be more efficient than a web server serving even a static file. Consider what a web server has to do:
parse the HTTP request
map the URI (and any relevant request headers, such as Accept-Encoding) onto a file
pull up information about the file to build the HTTP headers in the response; these are known as entity headers (RFC 2616 section 7.1, which include things such as Content-Length, Content-Type and the Expires and Last-Modified headers used in HTTP caching)
figure out what additional response headers (RFC 2616 section 6.2; these include ETag and Vary, both important parts of HTTP caching) and general header fields (RFC 2616 section 4.5) are needed
write the HTTP status line and headers out to the network
write the file's contents out to the network
By comparison, varnish is upstream of all of this, so all it has to do is:
parse the HTTP request
map the URI (and any relevant request headers) onto an entry in its internal cache
see if there's an entry; if there is, write it to the network; the HTTP headers will have been stored in the cache
If there isn't an entry, varnish has to do a little more work:
connect to a web server behind it that will run through all the steps 1-6 in the first list to generate a response
write the response to the network, including all the HTTP headers
store the response in its cache
In particular, because the HTTP headers and entity body (the entire response) can be cached by varnish, if it can serve out of its cache it has less work to do. When you start generating the response dynamically on your server, the difference can become even more pronounced: say you have a page that takes 5 seconds to generate but is the same for everyone hitting your site; varnish should be able to serve that in at most milliseconds out of the cache (plus whatever time it takes to get the response across the network to the HTTP client), and it has a neat mechanism (the grace period) so it can keep on doing so while hitting the backend server only once to refresh the cached version of the page.
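For reference, the grace period is a one-line setting in VCL; a minimal sketch for Varnish 4 or later (the one-hour value is just an example):

    sub vcl_backend_response {
        # Keep serving the cached object for up to an hour past its TTL
        # while a single background fetch refreshes it from the backend.
        set beresp.grace = 1h;
    }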
Of course, you can introduce server-side caching to improve the speed with which your web server can process a request, but if you have a response you can cache in varnish it's generally going to be faster to do that. (There are various things that are hard to cache in varnish, particularly if you're using cookies or have pages that change depending on which user is looking at them. While it's possible to continue using varnish in these cases, unless you need really incredible speed, as far as I'm aware most people start optimising those cases using server-side caching and other techniques before hitting up varnish.)
(Note that varnish can also edit headers and indeed data going in and out of the cache, which complicates things. But the main points still stand, and even while editing things on the fly varnish can be incredibly fast.)
I am managing a web application that dynamically flips between HTTP and HTTPS depending on the page. I want to get rid of a ton of extra code used to flip between HTTP and HTTPS, but I want to understand any implications before I continue.
Is there any advantage to serving part of a site over HTTP instead of HTTPS?
Of course there is some performance drop when using HTTPS, but it is not significant unless you have an extremely busy server. See
HTTP vs HTTPS performance
HTTP is not a secure protocol and anyone can intercept the transmitted data in cleartext (e.g. session cookies, passwords, credit card numbers, sexual fetishes). If you can, you should provide consistent HTTPS service throughout.
That said, because of the way the public/private key handshake works, the client first looks up the IP address, then negotiates the secure connection, and only then sends the HTTP query. Traditionally the server therefore could not tell which certificate to present when several sites shared one IP address, so you could only use HTTPS on a server where you had complete and sole control over the IP address, and could not deploy it on name-based virtual hosts (shared hosting). Modern clients and servers support SNI, which largely removes this limitation.
(Since you already have a partial HTTPS solution, I imagine that's not a problem for you, though.)
The other downside is that the secure handshake and later encryption require computing resources, so that if you have bazillions of connections, you may feel quite a hit on your server performance. That's for you to consider, though.
Short form: If you have a dedicated IP address and enough computing resources, always and exclusively use HTTPS.
Using HTTP is faster than HTTPS, obviously, since you do not have the SSL handshake overhead during connection establishment or the extra encryption/decryption delay.
If you only need parts of your web site to be secure (e.g. you just need to encrypt the login credentials), then it makes sense to keep the redirection code so that the interaction after that is faster over plain-text HTTP.
If there are many areas of your site that need to be secure, then you could make measurements with HTTPS used throughout and see if the performance is significantly affected.
If you see no significant performance issues (or the performance is acceptable), then you could simplify your software design, remove the redirection logic between HTTP and HTTPS, and use HTTPS everywhere.
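If you do end up on HTTPS everywhere, the per-page flipping logic typically collapses to one permanent redirect at the web-server level; for example, a minimal nginx sketch (host name is a placeholder - the equivalent can be done in IIS/ASP.NET or Apache):

    server {
        listen 80;
        server_name example.com;
        # Send every plain-HTTP request to its HTTPS equivalent.
        return 301 https://$host$request_uri;
    }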
One of the differences between HTTP and HTTPS is that with HTTPS, you lose the ability to have intermediaries (caches, proxies, etc.) between the client and server do anything useful with requests and responses, because the content is encrypted. From a security point of view, this is a good thing because it prevents intermediaries from snooping on or tampering with traffic. On the other hand, you reduce the opportunities for dealing with things like scalability, performance and evolvability.