I was previously using http://domainname.com. I ran into a security issue, so I moved to https://domainname.com.
The panel used to load very quickly, but after converting to https:// it is very slow.
Is there a performance difference between HTTP and HTTPS?
Please give me some suggestions on this.
Thanks
Each transaction over SSL carries some extra overhead for encryption; however, the real killer is latency.
For plain HTTP traffic, each request needs two complete round trips, but over SSL there are at least four. Although bandwidth has increased massively in recent years, latency has not changed much. The only practical solution is to be closer to the server.
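If you want to see the latency cost for yourself, one rough way is to time a bare TCP connect against a full TLS handshake to the same host. A minimal sketch using only the Python standard library; example.com is a placeholder for your own domain:

    import socket
    import ssl
    import time

    HOST = "example.com"  # placeholder: substitute your own domain
    PORT = 443

    # Time DNS lookup + TCP connection setup (one network round trip).
    start = time.perf_counter()
    sock = socket.create_connection((HOST, PORT), timeout=5)
    tcp_ms = (time.perf_counter() - start) * 1000

    # Time the TLS handshake layered on top of that same connection.
    context = ssl.create_default_context()
    start = time.perf_counter()
    tls_sock = context.wrap_socket(sock, server_hostname=HOST)
    tls_ms = (time.perf_counter() - start) * 1000

    print(f"TCP connect:   {tcp_ms:.1f} ms")
    print(f"TLS handshake: {tls_ms:.1f} ms ({tls_sock.version()})")
    tls_sock.close()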
So I'm running a static landing page for a product/service I'm selling, and we're advertising using AdWords & similar. Naturally, page load speed is a huge factor here to maximize conversions.
Pros of HTTP/2:
Headers are compressed (HPACK), reducing per-request overhead.
Server Push lets the server send resources before they are requested, which has MANY benefits such as replacing base64 inline images, sprites, etc.
Multiplexing over a single connection significantly improves load time.
Cons of HTTP/2:
Mandatory TLS (in practice, since browsers only support HTTP/2 over TLS), which can slow down the initial page load.
So I'm torn. On one hand, HTTP/2 has many improvements. On the other, maybe it would be faster to keep avoiding the TLS overhead and continue using base64/sprites to reduce requests.
The total page size is ~1MB.
Would it be worth it?
The performance impact of TLS on modern hardware is negligible. Transfer times will most likely be network-bound. It is true that additional network round-trips are required to establish a TLS session but compared to the time required to transfer 1MB, it is probably negligible (and TLS session tickets, which are widely supported, also save a round-trip).
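Session resumption is easy to check from the client side: save the session from a first connection and offer it on a second one. A minimal sketch with the Python standard library; example.com is a placeholder, and whether resumption succeeds depends on the server:

    import socket
    import ssl

    HOST = "example.com"  # placeholder: substitute your own domain
    PORT = 443

    context = ssl.create_default_context()

    # First connection: full handshake. The tiny request gives a TLS 1.3
    # server a chance to deliver its session ticket after the handshake.
    with context.wrap_socket(socket.create_connection((HOST, PORT)),
                             server_hostname=HOST) as first:
        first.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode()
                      + b"\r\nConnection: close\r\n\r\n")
        first.recv(1024)
        session = first.session

    # Second connection: offer the saved session to skip a round trip.
    with context.wrap_socket(socket.create_connection((HOST, PORT)),
                             server_hostname=HOST, session=session) as second:
        print("session reused:", second.session_reused)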
The evidence is that improving load speed is definitely worth the effort (see the business case for speed).
The TLS requirement is a pain, and it is unfortunate that the browser vendors are insisting on it, as there is nothing in HTTP/2 that prevents plain text. For a low-load system, where CPU cost is not the limiting factor, TLS essentially costs you one RTT (round-trip time on the network).
HTTP/2, and especially HTTP/2 push, can save you many RTTs and thus can be a big win even with the TLS cost. But the best way to determine this is to try it for your page. Make sure you use an HTTP/2 server that supports push (e.g. Jetty), otherwise you don't get all the benefits. Here is a good demo of push with SPDY (which uses the same mechanism as HTTP/2):
How many HTTP requests do these 1000 kB require? With a page that large, I don't think it matters much for the end-user experience. TLS is here to stay, though... I don't think you should avoid it just because it may slow your site down. If you do it right, it won't slow your site down.
Read more about SSL not being slow anymore: https://istlsfastyet.com/
Mandatory TLS doesn't slow down page loads if it's SPDY/3.1 or HTTP/2 based, because both support multiplexing request streams. Only non-SPDY, non-HTTP/2 TLS would be slower than non-HTTPS.
Check out https://community.centminmod.com/threads/nginx-spdy-3-1-vs-h2o-http-2-vs-non-https-benchmarks-tests.2543/, which clearly illustrates why SPDY/3.1 and HTTP/2 over TLS are faster for overall page loads. HTTP/2 allows multiplexing over several hosts at the same time, while SPDY/3.1 allows multiplexing per host.
The best thing to do is test both non-HTTPS and HTTP/2 or SPDY/3.1 over HTTPS and see which is best for you. Since you have a static landing page, that makes testing much easier. You can do something similar to the page at https://h2ohttp2.centminmod.com/flags.html, where HTTP/2, SPDY, and non-HTTPS are set up on the same server so you can test all combinations and compare them.
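For a quick client-side comparison, something like the following works, assuming the third-party httpx library with its HTTP/2 extra installed (pip install "httpx[http2]"); the URL is just an example:

    import time
    import httpx  # assumed dependency: pip install "httpx[http2]"

    URL = "https://h2ohttp2.centminmod.com/flags.html"  # or your landing page

    def timed_fetch(http2):
        # A fresh client per run so each one pays its own connection setup.
        with httpx.Client(http2=http2) as client:
            start = time.perf_counter()
            response = client.get(URL)
            elapsed = (time.perf_counter() - start) * 1000
            print(f"{response.http_version}: {elapsed:.0f} ms, "
                  f"{len(response.content)} bytes")

    timed_fetch(http2=False)  # HTTP/1.1 over TLS
    timed_fetch(http2=True)   # HTTP/2 over TLS (negotiated via ALPN)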
Drupal 6.15 and memcache running on a RHEL 5.4 server. The memcache miss percentage is 32%, which I think is high. What can be done to improve it?
Slightly expanded form of the comment below.
A cache hit ratio will depend on a number of factors, such as:
Cache size
Cache timeout
Cache clearing frequency
Traffic
Using memcached is most beneficial when you have a high number of hits on a small amount of content. That way the cache is built quickly and then used frequently, giving you a high hit ratio.
If you don't get that much traffic, cache items will go stale and will need to be re-cached.
If you have traffic going to a lot of different content then the cache can either get full, or go stale before it is used again.
memcached is only something you really need if you are having, or anticipating, scalability issues. It is not buggy, but it adds another layer to the application which needs to be monitored and configured.
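If you want to watch the hit ratio directly rather than relying on a module report, memcached's stats command exposes get_hits and get_misses. A minimal sketch against the default host and port:

    import socket

    def memcached_hit_ratio(host="127.0.0.1", port=11211):
        """Compute the get hit ratio from memcached's 'stats' output."""
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(b"stats\r\n")
            data = b""
            while not data.endswith(b"END\r\n"):
                chunk = sock.recv(4096)
                if not chunk:
                    break
                data += chunk
        stats = {}
        for line in data.decode().splitlines():
            parts = line.split()
            if len(parts) == 3 and parts[0] == "STAT":
                stats[parts[1]] = parts[2]
        hits = int(stats.get("get_hits", 0))
        misses = int(stats.get("get_misses", 0))
        total = hits + misses
        return hits / total if total else 0.0

    print(f"hit ratio: {memcached_hit_ratio():.1%}")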
I'm new to web security.
Why would I want to use HTTP and then switch to HTTPS for some connections?
Why not stick with HTTPS all the way?
There are interesting configuration improvements that can make SSL/TLS less expensive, as described in this document (apparently based on work from a team from Google: Adam Langley, Nagendra Modadugu and Wan-Teh Chang): http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html
If there's one point that we want to communicate to the world, it's that SSL/TLS is not computationally expensive any more. Ten years ago it might have been true, but it's just not the case any more. You too can afford to enable HTTPS for your users.
In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.
If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more.
One false sense of security when using HTTPS only for login pages is that you leave the door open to session hijacking (admittedly, it's still better than sending the username/password in the clear); this has recently been made easier to do (or at least more popular) by tools such as Firesheep, although the problem itself has been around for much longer.
Another problem that can slow down HTTPS is the fact that some browsers might not cache content retrieved over HTTPS, so they would have to download it again (e.g. background images for the sites you visit frequently).
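One mitigation is to send explicit caching headers so the browser knows an HTTPS response is safe to cache. A minimal, hypothetical sketch using Python's standard http.server; the max-age value is just an example, and in a real deployment this would sit behind your TLS terminator:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CachingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"static content"
            self.send_response(200)
            # Explicitly mark the response as cacheable, even over HTTPS.
            self.send_header("Cache-Control", "public, max-age=86400")
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), CachingHandler).serve_forever()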
This being said, if you don't need transport security (preventing attackers from seeing or altering the data that's exchanged, in either direction), plain HTTP is fine.
If you're not transmitting data that needs to be secure, the overhead of HTTPS isn't necessary.
Check this SO thread for a very detailed discussion of the differences.
HTTP vs HTTPS performance
Mostly performance reasons. SSL requires extra (server) CPU time.
Edit: However, this overhead is becoming less of a problem these days; some big sites have already switched to HTTPS by default (e.g. GMail - see Bruno's answer).
And one more thing that is no less important: the firewall. Don't forget that HTTPS is usually served on port 443.
In some organizations, that port is not open on the firewall or transparent proxies.
HTTPS can be very slow, and unnecessary for things like images.
I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags....) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load.
I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night.
Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution built by configuring open-source technologies.
Thanks
I've used Squid before to reduce load on dynamically-created RSS feeds, and it worked quite well. It just takes some careful configuration and tuning to get it working the way you want.
Using a primed cache server is an excellent idea (I've done the same thing using wget and Squid). However, it is probably unnecessary in this scenario.
It sounds like your data is fairly static and the problem is server load, not network bandwidth. Generally, the problem exists in one of two areas:
Database query load on your DB server.
Business logic load on your web/application server.
Here is a JSP-specific overview of caching options.
I have seen huge performance increases by simply caching query results. Even adding a cache with a duration of 60 seconds can dramatically reduce load on a database server. JSP has several options for in-memory cache.
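The poster's stack is JSP, but the idea is the same in any language. As a minimal sketch (in Python, purely for illustration), a 60-second query-result cache can be as simple as a decorator; run_query here is a hypothetical stand-in for the real database call:

    import time
    from functools import wraps

    def ttl_cache(seconds=60):
        """Cache a function's results for a fixed number of seconds."""
        def decorator(func):
            store = {}  # args -> (expiry timestamp, result)

            @wraps(func)
            def wrapper(*args):
                now = time.monotonic()
                entry = store.get(args)
                if entry and entry[0] > now:
                    return entry[1]        # still fresh: skip the database
                result = func(*args)
                store[args] = (now + seconds, result)
                return result
            return wrapper
        return decorator

    @ttl_cache(seconds=60)
    def run_query(sql):
        # Hypothetical stand-in for the real database call.
        print("hitting the database for:", sql)
        return [("row",)]

    run_query("SELECT * FROM products")  # first call hits the database
    run_query("SELECT * FROM products")  # within 60 s: served from the cache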
Another area available to you is output caching. This means that the content of a page is created once, but the output is used multiple times. This reduces the CPU load of a web server dramatically.
My experience is with ASP, but the exact same mechanisms are available on JSP pages. In my experience, with even a small amount of caching you can expect a 5-10x increase in max requests per sec.
I would use tiered caching here; deploy Squid as a reverse proxy server in front of your app server as you suggest, but then deploy a Squid at each client site that points to your origin cache.
If geographic latency isn't a big deal, then you can probably get away with just priming the origin cache like you were planning to do and then letting the remote caches prime themselves off that one based on client requests. In other words, just deploying caches out at the clients might be all you need to do beyond priming the origin cache.
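For the priming pass itself, a brute-force traversal just needs to route every URL through the cache once. A minimal sketch in Python; the proxy address and URL list are placeholders for your Squid instance and your site's URLs:

    import urllib.request

    # Placeholders: your Squid proxy and the URLs you want to prime.
    PROXY = "http://cache.example.com:3128"
    URLS = [
        "http://app.example.com/",
        "http://app.example.com/products",
        "http://app.example.com/about",
    ]

    # Route every request through the cache so it stores the responses.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": PROXY})
    )

    for url in URLS:
        try:
            with opener.open(url, timeout=30) as response:
                response.read()  # pull the full body through the cache
                print(f"primed {url} ({response.getcode()})")
        except OSError as exc:
            print(f"failed {url}: {exc}")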
I have a web application where the client will be running off a local server (i.e. - requests will not be going out over the net). The site will be quite low traffic and so I am trying to figure out if the actual de-compression is expensive in this type of a system. Performance is an issue so I will have caching set up, but I was considering compression as well. I will not have bandwidth issues as the site is very low traffic. So, I am just trying to figure out if compression will do more harm than good in this type of system.
Here's a good article on the subject.
On pretty much any modern system with a solid web stack, compression will not be expensive, but it seems to me that you won't be gaining any positive effects from it whatsoever, no matter how minor the overhead. I wouldn't bother.
When you measured the performance, how did the numbers compare? Was it faster when you had compression enabled, or not?
I have used compression where users were running over a wireless 3G network at various remote locations. Compression made a significant difference to bandwidth usage in that case.
For users running locally, and with bandwidth not an issue, I don't think it is worth it.
For cacheable resources (.js, .html, .css files), I don't think it adds much once the browser has cached them.
But for non-cacheable resources (e.g. JSON responses), I think it makes sense.
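If in doubt, measure it: compressing a representative payload once tells you both the CPU cost and the size saving. A minimal sketch using the Python standard library; the JSON payload is made up for illustration:

    import gzip
    import json
    import time

    # Hypothetical JSON response, roughly the kind of payload in question.
    payload = json.dumps([{"id": i, "name": f"item-{i}", "price": i * 0.5}
                          for i in range(5000)]).encode()

    start = time.perf_counter()
    compressed = gzip.compress(payload, compresslevel=6)
    compress_ms = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    gzip.decompress(compressed)
    decompress_ms = (time.perf_counter() - start) * 1000

    print(f"original:   {len(payload)} bytes")
    print(f"compressed: {len(compressed)} bytes "
          f"({len(compressed) / len(payload):.0%} of original)")
    print(f"compress: {compress_ms:.2f} ms, decompress: {decompress_ms:.2f} ms")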