Is there any service that can be used to load test a website on a CDN? I want to make sure our website can still run without problems even under high-volume traffic, for example a DDoS.
I suppose the target service should be able to generate a huge number of concurrent connections and a large amount of bandwidth.
If there are any reference sites or reports, please point me to them.
Thanks all.
Check out http://loadimpact.com/ and https://www.blitz.io/
Or you can always use tools like siege and ab
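If you want a feel for what those tools are doing, here is a minimal Node.js sketch that fires batches of concurrent requests and reports status codes and timings. The target URL and the concurrency numbers are placeholders, and for real tests the dedicated tools above are far better instrumented:

```js
// loadtest-sketch.js - toy concurrent-request generator (illustration only;
// use siege/ab or a hosted service for real load testing).
const https = require('https');

const TARGET = 'https://www.example.com/'; // placeholder target URL
const CONCURRENCY = 100;                   // simultaneous requests per batch
const BATCHES = 10;                        // number of batches to run

function hit(url) {
  return new Promise((resolve) => {
    const started = Date.now();
    https.get(url, (res) => {
      res.resume();                        // drain the body so the socket can be reused
      res.on('end', () => resolve({ status: res.statusCode, ms: Date.now() - started }));
    }).on('error', () => resolve({ status: 'error', ms: Date.now() - started }));
  });
}

(async () => {
  for (let b = 0; b < BATCHES; b++) {
    const results = await Promise.all(
      Array.from({ length: CONCURRENCY }, () => hit(TARGET))
    );
    const ok = results.filter((r) => r.status === 200).length;
    const avg = results.reduce((sum, r) => sum + r.ms, 0) / results.length;
    console.log(`batch ${b + 1}: ${ok}/${CONCURRENCY} OK, avg ${avg.toFixed(0)} ms`);
  }
})();
```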
Sails.js requires setup to handle scaling horizontally, and there are multiple ways to do this. I'm not sure I have done it correctly, given the poor performance I'm seeing during load testing. Please confirm whether I understand things and have done the setup correctly.
I've created a load balancer on the Google Cloud platform to handle distributing requests across the instances. Nginx is often mentioned for this, but I understand Google's load balancer does all I need in this regard. Note that I use session affinity: Client IP.
I've set up config/session.js to use express-mysql-session, so MemoryStore is not used.
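Roughly, the config/session.js I mean looks like the sketch below; the connection values are placeholders and the exact option names may differ between Sails and express-mysql-session versions:

```js
// config/session.js - rough sketch: back sessions with MySQL instead of MemoryStore
module.exports.session = {
  secret: 'replace-with-your-session-secret',   // placeholder
  adapter: 'express-mysql-session',             // the connect-compatible MySQL store

  // Connection options passed through to express-mysql-session (placeholders):
  host: '127.0.0.1',
  port: 3306,
  user: 'sails_sessions',
  password: 'secret',
  database: 'sessions'
};
```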
I haven't set up anything in config/sockets.js. My project doesn't use live chat etc. with socket.io; all requests go to Waterline for data from the DB. But if this is an issue, please point me to a way to do this with a MySQL DB, not Redis (or memory).
I use pm2 to keep the app alive and to distribute processing across the cores of an instance.
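Concretely, I start the app with a pm2 ecosystem file along these lines (the app name and entry script are placeholders), using pm2 start ecosystem.config.js:

```js
// ecosystem.config.js - pm2 process file: cluster mode forks one worker per CPU core
module.exports = {
  apps: [{
    name: 'my-sails-app',          // placeholder app name
    script: 'app.js',              // Sails entry point
    exec_mode: 'cluster',          // run workers under Node's cluster module
    instances: 'max',              // one process per available core
    env: { NODE_ENV: 'production' }
  }]
};
```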
Those are the main factors I've found regarding horizontal scaling with Sails.js.
I am developing a browser extension. The extension works on external websites we have no control over.
I would like to be able to test the extension. One of the major problems I'm facing is displaying a website 'as-is' locally.
Is it possible to display a website 'as-is' locally?
I want to be able to serve the website exactly as-is locally for testing. This means I want to simulate the exact same HTTP data, including iframe ads, etc.
Is there an easy way to do this?
More info:
I'd like my system to behave as closely to the remote website as possible. For example, I'd like to run a command, fetch, which would then let me go to the site in my browser (with the internet off) and get exactly the same thing I would otherwise (including content that doesn't come from a single domain: Google ads, etc.).
I don't mind using a virtual machine if this helps.
I figured this would be quite useful for testing, especially when I have a bug I need to reliably reproduce on sites that have many random factors (which ads show, etc.).
As was already mentioned, caching proxies should do the trick for you (BTW, this is the simplest solution). There are quite a lot of different implementations, so you just need to spend some time selecting a proper one (in my experience, Squid is a good choice). Anyway, I would like to highlight two other interesting options:
Option 1: Betamax
Betamax is a tool for mocking external HTTP resources such as web services and REST APIs in your tests. The project was inspired by the VCR library for Ruby. Betamax aims to solve these problems by intercepting HTTP connections initiated by your application and replaying previously recorded responses.
Betamax comes in two flavors. The first is an HTTP and HTTPS proxy that can intercept traffic made in any way that respects Java’s http.proxyHost and http.proxyPort system properties. The second is a simple wrapper for Apache HttpClient.
BTW, Betamax has a very interesting feature for you:
Betamax is a testing tool and not a spec-compliant HTTP proxy. It ignores any and all headers that would normally be used to prevent a proxy caching or storing HTTP traffic.
Option 2: Wireshark and replay proxy
Grab all the traffic you are interested in using Wireshark and replay it. I would say it is not that hard to implement the required replaying tool yourself, but you can also use an available solution called replayproxy. Replayproxy parses HTTP streams from .pcap files, opens a TCP socket on port 3128, and listens as an HTTP proxy, using the extracted HTTP responses as a cache while refusing all requests for unknown URLs.
Such an approach provides you with full control and a bit-for-bit precise simulation.
I don't know if there is an easy way, but there is a way.
You can set up a local webserver, something like IIS, Apache, or minihttpd.
Then you can grab the website's contents using wget (it has an option for mirroring). Many browsers also have a "save whole web page" option that will grab everything, including images.
Ads will most likely come from remote sites, so you may have to manually edit those lines in the HTML to either not reference the actual ad-servers, or set up a mock ad yourself (like a banner image).
Then you can navigate your browser to http://localhost to visit your local website, assuming port 80 which is the default.
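If you'd rather not install IIS or Apache just for this, a tiny Node.js static server over the mirrored folder does the same job. A rough sketch, where the folder name and port are placeholders:

```js
// serve-mirror.js - minimal static server for a wget-mirrored copy of a site
const http = require('http');
const fs = require('fs');
const path = require('path');

const ROOT = path.resolve('mirror');   // placeholder: folder produced by wget / "save whole page"
const TYPES = { '.html': 'text/html', '.js': 'text/javascript', '.css': 'text/css',
                '.png': 'image/png', '.jpg': 'image/jpeg', '.gif': 'image/gif' };

http.createServer((req, res) => {
  // Map the request path onto the mirrored directory, defaulting to index.html
  let file = path.join(ROOT, decodeURIComponent(req.url.split('?')[0]));
  if (!path.extname(file)) file = path.join(file, 'index.html');

  fs.readFile(file, (err, body) => {
    if (err) { res.writeHead(404); res.end('Not found'); return; }
    res.writeHead(200, { 'Content-Type': TYPES[path.extname(file)] || 'application/octet-stream' });
    res.end(body);
  });
}).listen(8080, () => console.log('Mirror available at http://localhost:8080'));
```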
Hope this helps!
I assume you want to serve a remote site that's not under your control. In that case you can use a proxy server and have that server cache every response aggressively. However, this has its limits. First of all, you will have to visit every site you intend to use through this proxy (with a browser, for example); second, you will not be able to emulate form processing.
Alternatively you could use a spider to download all content of a certain website. Depending on the spider software, it may even be able to download JavaScript-built links. You then can use a webserver to serve that content.
The service http://www.json-gen.com provides mocks for HTML, JSON and XML via REST. This way, you can test your frontend separately from the backend.
I'd like to be able to see which web pages I'm serving on my Classic ASP site and how much data is sent out, in preparation for starting to use GZip compression on the server. I'm running Windows Server 2003.
Is there a tool/utility/script that can watch or log traffic and tell me the bytes going out?
Diodeus is right in saying that you need a web log analyzer.
My current web host uses SmarterStats, which has a large range of customisable reports available and is very good for looking at things like traffic volume, as it'll visualise it all in the browser for you.
If you are running your own server then you can get a free edition which can be used with just one website - http://www.smartertools.com/smarterstats/free-web-analytics-seo-software.aspx
You need a log analyzer for IIS. Webtrends used to be quite popular. I used it a dog's life ago. Most use Google Analytics these days, but it's a different beast and tracks traffic, not data transfer volume. You really need to look at the server logs for that.
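If you do end up digging through the IIS logs yourself, tallying the sc-bytes column (make sure that field is enabled in the W3C logging options) gives you the outbound volume. A rough Node.js sketch, with the log file name as a placeholder:

```js
// bytes-out.js - sum the sc-bytes column of a W3C-format IIS log file
const fs = require('fs');
const readline = require('readline');

const LOG = 'ex091001.log';    // placeholder: a file from the IIS LogFiles folder
let bytesIdx = -1;             // position of the sc-bytes column
let total = 0;

const rl = readline.createInterface({ input: fs.createReadStream(LOG) });

rl.on('line', (line) => {
  if (line.startsWith('#Fields:')) {
    // The #Fields: directive names the columns for the lines that follow
    bytesIdx = line.replace('#Fields:', '').trim().split(/\s+/).indexOf('sc-bytes');
    return;
  }
  if (line.startsWith('#') || bytesIdx === -1) return;   // skip other directives
  const n = parseInt(line.split(/\s+/)[bytesIdx], 10);
  if (!Number.isNaN(n)) total += n;
});

rl.on('close', () => console.log(`Total bytes sent to clients: ${total}`));
```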
I'm putting together my deployment plan for a major deployment next week (basically taking over a site).
I've never had to deploy to multiple web servers before.
Do I need to copy the files to each web server, or is there a tool which will do this for me?
I have to supply the IP address to some 3rd-party vendors; which IP do I give them, since there are four separate servers?
Please check this thread, hope this will help you: What method do you use to deploy ASP.Net applications to the wild?
I would have expected that there would be a load balancer which would spread the traffic between the servers. In that case you would give out the IP address of the external interface of the load balancer.
For updates in this scenario I would typically take one server out of the load balancer's loop, then update that server and test that it works OK; then, if you have 4 servers, take another out and update/test that one. Then switch the load balancer so that the 2 updated servers are live and the other 2 are offline, update/test those two, and put them back into the loop so they're live. Your update is then complete with no downtime. Of course, I'd typically do this during a period of low traffic where possible.
Whether you do this using some sort of automatic script or manually would depend on what systems you have in place and how often you would expect to make updates.
It's worth saying that Microsoft have since released a couple of tools to help with this:
http://www.iis.net/download/webdeploy
http://www.iis.net/download/WebFarmFramework
I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags....) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load.
I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night.
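For what it's worth, the priming pass itself can be as simple as fetching every known URL through the cache front end so it stores the responses. A rough sketch of what I have in mind, with the cache host and the URL list as placeholders:

```js
// prime-cache.js - brute-force cache warm-up: fetch every known URL via the cache
const http = require('http');

const CACHE_HOST = 'cache.example.com';   // placeholder: the Squid / mod_disk_cache front end
const CACHE_PORT = 80;
const PATHS = ['/', '/products', '/products/1', '/about'];   // in practice, crawled or exported

function warm(p) {
  return new Promise((resolve) => {
    http.get({ host: CACHE_HOST, port: CACHE_PORT, path: p }, (res) => {
      res.resume();   // the body itself doesn't matter, only that the cache now holds it
      res.on('end', () => resolve(`${res.statusCode} ${p}`));
    }).on('error', (e) => resolve(`ERR ${p} (${e.message})`));
  });
}

(async () => {
  // Sequential on purpose, so the warm-up itself doesn't hammer the origin server
  for (const p of PATHS) console.log(await warm(p));
})();
```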
Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution that is just a configuration of open-source technologies.
Thanks
I've used Squid before to reduce load on dynamically-created RSS feeds, and it worked quite well. It just takes some careful configuration and tuning to get it working the way you want.
Using a primed cache server is an excellent idea (I've done the same thing using wget and Squid). However, it is probably unnecessary in this scenario.
It sounds like your data is fairly static and the problem is server load, not network bandwidth. Generally, the problem exists in one of two areas:
Database query load on your DB server.
Business logic load on your web/application server.
Here is a JSP-specific overview of caching options.
I have seen huge performance increases by simply caching query results. Even adding a cache with a duration of 60 seconds can dramatically reduce load on a database server. JSP has several options for in-memory cache.
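The pattern itself is tiny; here is a rough sketch in JavaScript just to show the shape (the same idea applies to whichever in-memory cache you pick on the JSP side), using the 60-second lifetime mentioned above:

```js
// query-cache.js - memoize expensive query results for a short TTL
const TTL_MS = 60 * 1000;          // 60-second lifetime, as discussed above
const cache = new Map();           // key -> { expires, value }

async function cachedQuery(key, runQuery) {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value;     // fresh: skip the DB entirely
  const value = await runQuery();                            // stale or missing: hit the DB once...
  cache.set(key, { expires: Date.now() + TTL_MS, value });   // ...and reuse the result for 60 s
  return value;
}

// Hypothetical usage: requests arriving while the entry is fresh share one DB round-trip.
// cachedQuery('top-products', () => db.query('SELECT ... FROM products LIMIT 10'));
```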
Another area available to you is output caching. This means that the content of a page is created once, but the output is used multiple times. This reduces the CPU load of a web server dramatically.
My experience is with ASP, but the exact same mechanisms are available on JSP pages. In my experience, with even a small amount of caching you can expect a 5-10x increase in max requests per sec.
I would use tiered caching here; deploy Squid as a reverse proxy server in front of your app server as you suggest, but then deploy a Squid at each client site that points to your origin cache.
If geographic latency isn't a big deal, then you can probably get away with just priming the origin cache like you were planning to do and then letting the remote caches prime themselves off that one based on client requests. In other words, just deploying caches out at the clients might be all you need to do beyond priming the origin cache.