Regarding HTTP compression with gzip or deflate in an ASP.NET page

Actually, I don't know anything about HTTP compression. I searched Google and found plenty of code for implementing HTTP compression in an ASP.NET web site. I just need to know what the advantage of using HTTP compression is: does it reduce the page size, and does it help the page download to the client PC much faster?
I use one HTTP module which implements HTTP compression; the code is at the links below.
enter link description here
enter link description here
Just go to the links and tell me: if I use those techniques, will my page size be reduced and will the page load faster? Looking for advice.
Thanks

Piskvor is right; it's a simple answer to find by searching.
If you are using IIS 7.5 (maybe 7.0 as well, I am not 100% sure) then it is simple to turn on HTTP compression through the server. If you turn on static and dynamic compression, you will see that your responses are compressed.
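For reference, a minimal web.config sketch for switching this on in IIS 7.x (an illustrative snippet, not your exact configuration; dynamic compression also requires the Dynamic Content Compression module to be installed on the server):

    <configuration>
      <system.webServer>
        <!-- Compress static files and dynamically generated responses -->
        <urlCompression doStaticCompression="true" doDynamicCompression="true" />
      </system.webServer>
    </configuration>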
Benefits include much smaller transmission size - so it will be sent much faster.
There is a tradeoff: you get a smaller page size but higher CPU overhead (as the web server has to do more work to send the file), but for modern web servers this is more than reasonable.
If you are not using IIS 7+, then you will have to implement the solutions as specified.
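For the module route, the typical shape of such an HttpModule is sketched below (an illustrative example only, not the code behind the links above; the class name is made up). It wraps the response stream in a GZip or Deflate filter when, and only when, the client advertises support in its Accept-Encoding header:

    using System;
    using System.IO.Compression;
    using System.Web;

    // Minimal sketch of a compression HttpModule.
    public class CompressionModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                string acceptEncoding = ctx.Request.Headers["Accept-Encoding"] ?? string.Empty;

                if (acceptEncoding.Contains("gzip"))
                {
                    // Compress the outgoing response on the fly and tell the browser how to undo it.
                    ctx.Response.Filter = new GZipStream(ctx.Response.Filter, CompressionMode.Compress);
                    ctx.Response.AppendHeader("Content-Encoding", "gzip");
                }
                else if (acceptEncoding.Contains("deflate"))
                {
                    ctx.Response.Filter = new DeflateStream(ctx.Response.Filter, CompressionMode.Compress);
                    ctx.Response.AppendHeader("Content-Encoding", "deflate");
                }
            };
        }

        public void Dispose() { }
    }

The module still has to be registered in web.config (under system.webServer/modules in IIS 7 integrated mode, or system.web/httpModules in classic mode) before it takes effect.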
Does that help at all?

Related

Simulating a remote website locally for testing

I am developing a browser extension. The extension works on external websites we have no control over.
I would like to be able to test the extension. One of the major problems I'm facing is displaying a website 'as-is' locally.
Is it possible to display a website 'as-is' locally?
I want to be able to serve the website exactly as-is locally for testing. This means I want to simulate the exact same HTTP data, including iframe ads, etc.
Is there an easy way to do this?
More info:
I'd like my system to behave as close to the remote website as possible. I'd like to be able to run a fetch command, for example, which would let me go to the site in my browser (without an internet connection) and get exactly the same thing I would get otherwise (including content that does not come from a single domain, Google ads, etc.).
I don't mind using a virtual machine if this helps.
I figured this would be quite a useful thing for testing, especially when I have a bug I need to reliably reproduce on sites that have many random factors (which ads show, etc.).
As was already mentioned, caching proxies should do the trick for you (BTW, this is the simplest solution). There are quite a lot of different implementations, so you just need to spend some time selecting a proper one (in my experience, Squid is a good choice). Anyway, I would like to highlight two other interesting options:
Option 1: Betamax
Betamax is a tool for mocking external HTTP resources such as web services and REST APIs in your tests. The project was inspired by the VCR library for Ruby. Betamax aims to solve these problems by intercepting HTTP connections initiated by your application and replaying previously recorded responses.
Betamax comes in two flavors. The first is an HTTP and HTTPS proxy that can intercept traffic made in any way that respects Java’s http.proxyHost and http.proxyPort system properties. The second is a simple wrapper for Apache HttpClient.
BTW, Betamax has a very interesting feature for you:
Betamax is a testing tool and not a spec-compliant HTTP proxy. It ignores any and all headers that would normally be used to prevent a proxy caching or storing HTTP traffic.
Option 2: Wireshark and replay proxy
Grab all the traffic you are interested in using Wireshark and replay it. I would say it is not that hard to implement the required replay tool yourself, but you can use an available solution called replayproxy.
Replayproxy parses the HTTP streams from .pcap files, opens a TCP socket on port 3128 and listens as an HTTP proxy, using the extracted HTTP responses as a cache while refusing all requests for unknown URLs.
Such an approach gives you full control and a bit-for-bit precise simulation.
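If part of your testing drives HTTP directly (rather than through the browser), pointing a client at that proxy is trivial. A minimal sketch in C#, assuming the replay proxy from above is listening on localhost:3128 and example.com stands in for the site under test:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ReplayProxyDemo
    {
        static async Task Main()
        {
            // Route every request through the local replay proxy so responses
            // come from the recorded capture instead of the live site.
            var handler = new HttpClientHandler
            {
                Proxy = new WebProxy("http://localhost:3128"),
                UseProxy = true
            };

            using (var client = new HttpClient(handler))
            {
                string html = await client.GetStringAsync("http://example.com/");
                Console.WriteLine(html.Length);
            }
        }
    }

A browser can be pointed at the same proxy through its normal proxy settings, which is how you would test the extension itself.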
I don't know if there is an easy way, but there is a way.
You can set up a local webserver, something like IIS, Apache, or minihttpd.
Then you can grab the website contents using wget (it has an option for mirroring). Many browsers also have a "save whole web page" option that will grab everything, such as images.
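A hedged sketch of such a wget invocation (the URL is a placeholder; adjust the flags to taste), which mirrors the pages along with the images, CSS and scripts they need and rewrites links so they work locally:

    wget --mirror --page-requisites --convert-links --adjust-extension http://example.com/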
Ads will most likely come from remote sites, so you may have to manually edit those lines in the HTML to either not reference the actual ad-servers, or set up a mock ad yourself (like a banner image).
Then you can navigate your browser to http://localhost to visit your local website, assuming port 80 which is the default.
Hope this helps!
I assume you want to serve a remote site that's not under your control. In that case you can use a proxy server and have it cache every response aggressively. However, this has its limits: first of all, you will have to visit every page you intend to use through this proxy (with a browser, for example); second, you will not be able to emulate form processing.
Alternatively you could use a spider to download all content of a certain website. Depending on the spider software, it may even be able to download JavaScript-built links. You then can use a webserver to serve that content.
The service http://www.json-gen.com provides mocks for HTML, JSON and XML via REST. This way, you can test your frontend separately from the backend.

Is a CDN helping the server in terms of performance and RAM?

I'm planning to move my website files onto a CDN. I'm running 4 Drupal websites and 1 WordPress site, and I was thinking of using Amazon CloudFront.
I have some questions:
Will the CDN system help my server in terms of performance and RAM?
I'm using http://www.webpagetest.org to check the performance of the website, and 83% of the requests come from images. The rest is split between HTML, CSS, JS and other content. These are the other results:
F = First Byte Time
A = Keep-alive Enabled
F = Compress Text
C = Compress Images
A = Cache static content
X = CDN detected
Is it possible, using Amazon CloudFront, to put a website that sits inside a sub-folder on the CDN?
Basically I want to test it in a non-production site.
My server is an R310 quad-core Xeon 2.66 GHz with 4 GB of RAM.
Thanks in advance
The answer would have to be much longer to cover this properly, I think, but in simple terms, a well-managed CDN can help make your site faster.
4GB of RAM is not bad for a normal web site.
There are 3 main reasons to use a CDN that I can think of:
1. To deliver static content faster using nearby servers.
2. To stop the browser from sending cookies with each GET request.
3. To take some of the load off Apache.
1 - I haven't used CloudFront, but I have used some Akamai servers, and they do make a difference. They simply serve content from a different, nearby server, so file loading is relatively fast. But don't forget that this adds additional DNS lookups if the user is loading the site for the first time after a DNS cache clean-up.
2 - I think you know about the cookieless-domain problem. If you host your site at example.com and your images at URLs like example.com/image.png, the browser has to send the cookie data with each of those requests. Cookies are usually ~100 bytes, but with many assets this is worth considering. If you move the assets to an example-data.com domain, the browser will not send cookies to assets at that location. Faster pages.
3 - Reduced web server load is the other benefit. Your server will get fewer requests (mainly HTML requests), and images and other assets will be served from another server.

Enable dynamic compression in an app on a gigabit LAN?

I have a LAN with 1000 clients and speeds of 1 Gbps.
One application hosted in IIS 7.5.
Fact: A one-megabyte response is transferred between the server and the client in no more than 30 milliseconds. The connection is very fast.
Fact: Some clients have older PCs (Windows XP, IE7, Pentium 4).
I think that dynamic compression is not needed in this case, because the problem is not the bandwidth but the clients' computer performance.
Do you recommend disabling compression?
My pages have a lot of JavaScript. On every post I refresh the page with JavaScript, AJAX and JSON. In some cases, when the HTML is too big, the browser gets a little unresponsive. I think that compression is causing this problem.
Any comments?
A useful scenario for compression is when you have to pay for bandwidth and would like to speed up the download of large pages, but it creates a bit of work for the client, which has to decompress the data before rendering it.
Turn it off.
You don't need it for serving pages over a high-speed LAN.
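On IIS 7.5 this is a one-attribute change; a minimal web.config sketch (illustrative only, static compression is left as it was):

    <configuration>
      <system.webServer>
        <!-- LAN clients are CPU-bound rather than bandwidth-bound, so skip compressing dynamic responses -->
        <urlCompression doDynamicCompression="false" />
      </system.webServer>
    </configuration>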
I definitely don't think you need the compression. But you are shooting in the dark here -- get yourself an HTTP debugger, such as the one included in Google Chrome, and see which parts of the pages are slow.

Will HTTP Compression (GZip or deflate) on a low traffic site actually be beneficial?

I have a web application where the client will be running off a local server (i.e. requests will not be going out over the net). The site will be quite low traffic, so I am trying to figure out whether the actual decompression is expensive in this type of system. Performance is an issue, so I will have caching set up, but I was considering compression as well. I will not have bandwidth issues as the site is very low traffic. So, I am just trying to figure out if compression will do more harm than good in this type of system.
Here's a good article on the subject.
On pretty much any modern system with a solid web stack, compression will not be expensive, but it seems to me that you won't be gaining any positive effects from it whatsoever, no matter how minor the overhead. I wouldn't bother.
When you measured the performance, how did the numbers compare? Was it faster when you had compression enabled, or not?
I have used compression where users were running over a wireless 3G network at various remote locations. Compression made a significant difference to the bandwidth usage in that case.
For users running locally, and with bandwidth not an issue, I don't think it is worth it.
For cacheable resources (.js, .html, .css files), I think it doesn't make sense once the browser has cached those resources.
But for non-cacheable resources (e.g. JSON responses) I think it does make sense.

Issues with HTTP Compression?

We are investigating the use of HTTP compression on an application being served up by JBoss. After making the setting change in the Tomcat SAR, we are seeing compression of about 80%. This is obviously great, but I want to be cautious... before implementing this system-wide, has anyone out there encountered issues using HTTP compression?
A couple points to note for my situation.
We have full control over the browser - so the whole company uses IE6/7
The app is internal only
During load testing, our app server was under relatively small load - the DB was our bottleneck
We have control over client machines and they all get a spec check (decent processor/2GB RAM)
Any experiences with this would be much appreciated!
Compression is not considered exotic or bleeding edge and (fwiw) I haven't heard of or run into any issues with it.
Compression on the fly can increase CPU load on the server. If at all possible, pre-compressing static resources and caching compressed dynamic responses can combat that.
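As a rough illustration of the pre-compression idea (a sketch only, in C# to match the rest of this page; the folder and extensions are made up), a build step can write .gz copies of static files once so the server never has to compress them per request:

    using System;
    using System.IO;
    using System.IO.Compression;

    class PreCompressStatics
    {
        static void Main()
        {
            // Walk the static content folder and write a .gz sibling for each text asset,
            // so the web server can hand out the pre-compressed copy instead of gzipping per request.
            string root = "./static";                    // hypothetical content folder
            string[] extensions = { ".js", ".css", ".html" };

            foreach (string file in Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories))
            {
                if (Array.IndexOf(extensions, Path.GetExtension(file).ToLowerInvariant()) < 0)
                    continue;

                using (FileStream source = File.OpenRead(file))
                using (FileStream target = File.Create(file + ".gz"))
                using (var gzip = new GZipStream(target, CompressionMode.Compress))
                {
                    source.CopyTo(gzip);
                }
            }
        }
    }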
It's just a really good idea all the way around. It will add a slight CPU load to your server, but that's usually not your bottleneck. It will make your pages load faster, and you'll use less bandwidth.
As long as you respect the client's Accept-Encoding header properly (i.e. don't serve compressed files to clients that can't decompress them), you shouldn't have a problem.
Oh, and remember that deflate is faster than gzip.
