Are a page's images counted as requests in Dynamic IP Restrictions? (IIS 7)

Our site has performance issues, and spambots make them worse, so we decided to configure Dynamic IP Restrictions to allow only 5 concurrent requests per IP. My concern is that a single page may trigger many concurrent requests because it contains many images (we have around 20 images per page), so will those be blocked? Are images counted as requests in Dynamic IP Restrictions?

I found the answer: yes, each image is counted as a request.

We had to switch on Dynamic IP Restrictions after a brute force attack on our website. We started with the default numbers.
After performing a 'hard' refresh (Ctrl+F5) in my browser, our homepage was half covered in broken images! A request for a single ASPX page can trigger thirty image requests and several CSS/JS requests too, all happening within a few milliseconds and all from the same IP.
Your settings need to allow for this. Sadly, this also means attackers get more of a chance.
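For reference, the thresholds live in the dynamicIpSecurity section of web.config. This is only a sketch using the schema of the built-in IIS 8 module; the IIS 7 Dynamic IP Restrictions extension exposes equivalent concurrent-request and request-rate settings, and the numbers below are placeholders rather than recommendations:
<system.webServer>
  <security>
    <dynamicIpSecurity>
      <denyByConcurrentRequests enabled="true" maxConcurrentRequests="40" />
      <denyByRequestRate enabled="true" maxRequests="200" requestIntervalInMilliseconds="1000" />
    </dynamicIpSecurity>
  </security>
</system.webServer>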

Related

Localised IP assignment + DDoS prevention + Google billing

We are very new to Google Cloud and still learning. I have a few questions.
First: can I assign localised IP addresses to virtual instances? For example, serve one website from a German IP range and another website from an Italian IP range. Is this possible on Google Cloud, and where is the best place to start?
Second: we suffered a DDoS attack on Google Cloud, and resource usage spiked while we were under attack. Will Google charge an extreme price for that peak period, or will billing be normal?
The second question leads to a third: we use Cloudflare for our domains. Is there a reliable way to prevent DDoS attacks on Google Cloud?
I appreciate your time and answers.
To your first point, are you after finding the shortest path between your users and wherever you serve your content? If that's the case, you can simply put a load balancer in front of your backend services within Google Cloud, with a global public forwarding IP address, and the service itself will take care of routing the traffic to the nearest group of machines available. Here is an example of an HTTP(S) Load Balancer setup.
Or is localization what you are trying to achieve? In that case I'd rely on more standard ways of handling the language of choice, such as browser settings (or user account settings, if they exist) or the Accept-Language header. This is a valuable resource from LocalizeJS.
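For instance, a bare-bones PHP sketch of picking a locale from the Accept-Language header (the supported-locale list and the default are just examples, and quality values are ignored for brevity):
// Choose the first browser-requested language that we support.
$supported = ['de', 'it', 'en'];   // example list
$default   = 'en';
$header    = $_SERVER['HTTP_ACCEPT_LANGUAGE'] ?? '';
$locale    = $default;
foreach (explode(',', $header) as $part) {
    $lang = strtolower(substr(trim(explode(';', $part)[0]), 0, 2));
    if (in_array($lang, $supported, true)) {
        $locale = $lang;
        break;
    }
}
// $locale can now drive which localized version you render or redirect to.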
Lastly, if you are determined to have multiple versions of your application deployed for the different languages you support, you could still have an intermediate service that determines the source of the request using IP-based lookups and redirects the user to the appropriate version. That said, this is a more traditional approach; in a world of client applications that are responsive and localized on the spot, the extra hop/redirect could annoy some users.
To your second point, there are a number of protections already built into some Google Cloud services to help you protect your applications and machines in different ways. On the DDoS front, you can benefit from policies and protections on the CDN side, where you get cache- and scaling-based preventive measures.
In addition to that, if you have a load balancer in front of your content, you can benefit from protections at layers 3, 4 and 7 of the OSI model. That includes typical HTTP floods, SYN floods, port exhaustion and NTP amplification attacks.
What this means is that in many of these situations your infrastructure will not even notice the attack, as it will be mitigated before it reaches your infrastructure (and therefore you will not be billed for it). That said, I have heard of and experienced situations in which these protections did not act in a timely fashion, or were not triggered at all. In those scenarios your system may need to handle the extra load itself. However, especially when the attack was obviously malicious and documented as something Google Cloud is supposed to handle, there is a chance to make your case with Google and get some support on the matter.
A bit more on that here.
Hope this is helpful.

Is there any IP range for a certain country?

We are in a business where we need to block visitors from certain areas or countries, and we want to show a 403 error page when a visitor comes from one of those areas.
What we can do now is, on every request, get the visitor's IP address and look up the country for that IP using a third-party service like Telize or ipapi.co, and if it is from that country, stop and show the error page.
The problem is that this check runs for every visitor, and making a curl call on every request will definitely slow down our website.
Is there any way to get the country for an IP address without using a third-party service or a curl request, i.e. without slowing down our website?
In case it helps: we are using PHP and the Symfony 3 framework on a VPS, and speed and performance are very important to us.
At the moment we want to block visitors from Cameroon. Is there an IP range assigned to Cameroon?
You can use the MaxMind GeoIP library for PHP.
The idea is that you download a database (which is just a file) containing geographical information for all the IPs in the world. Since the database sits on your server and you query it through the library, it won't slow down your site; in fact, getting the country code for an IP is so fast that the performance impact is negligible.
The database is updated regularly, so you can periodically re-download it to stay up-to-date. You can get details about the downloadable databases here.
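A minimal sketch of what that lookup might look like with the GeoIP2 PHP library and a locally downloaded GeoLite2-Country database (the database path here is just an example):
<?php
// composer require geoip2/geoip2
require 'vendor/autoload.php';

use GeoIp2\Database\Reader;
use GeoIp2\Exception\AddressNotFoundException;

$reader = new Reader('/usr/local/share/GeoIP/GeoLite2-Country.mmdb');

try {
    $record = $reader->country($_SERVER['REMOTE_ADDR']);
    if ($record->country->isoCode === 'CM') {   // CM = Cameroon
        http_response_code(403);
        exit('Access denied');
    }
} catch (AddressNotFoundException $e) {
    // IP not found in the database; decide whether to allow or deny by default.
}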
You can generate a .htaccess deny file for Cameroon's IP ranges at https://www.ip2location.com/free/visitor-blocker and block them at the .htaccess level, which will be much faster.

What's the max # of concurrent connections / HTTP requests per sec I should make to a given domain?

I'm downloading a full catalog's worth of static image content (a million-plus images, all legal) from various web servers.
I want to download the images efficiently, but I'm wondering what per-domain limits I should place on the number of concurrent connections and the time between connection attempts, to avoid being blacklisted by DoS-protection tools and other rate limiters.
The keyword I needed to search for was "web crawler politeness"; that turned up some useful articles that answer the question quite well:
Typical politeness factor for a web crawler?
http://blog.mischel.com/2011/12/20/writing-a-web-crawler-politeness/
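To make the idea concrete, here is a rough PHP sketch of the politeness pattern those articles describe: throttle requests per host and identify yourself. The URL list, the two-second delay and the user-agent string are only placeholders:
<?php
$urls        = ['http://example.com/img/1.jpg', 'http://example.com/img/2.jpg'];
$lastRequest = [];   // host => timestamp of the last request to that host
$minDelay    = 2;    // seconds to wait between hits to the same host

foreach ($urls as $url) {
    $host = parse_url($url, PHP_URL_HOST);
    $wait = ($lastRequest[$host] ?? 0) + $minDelay - microtime(true);
    if ($wait > 0) {
        usleep((int) ($wait * 1e6));   // respect the per-host delay
    }
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERAGENT      => 'MyImageFetcher/1.0 (contact@example.com)',
    ]);
    $data = curl_exec($ch);
    curl_close($ch);
    $lastRequest[$host] = microtime(true);
    // ... write $data to disk here ...
}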

Which is the fastest way to load images on a webpage?

I'm building a new site, and during the foundation stage I'm trying to work out the best way to load images. Browsers can only load 2-6 items (images/CSS/JS) concurrently from a given host. Through the grapevine I've heard of various methods, but no definitive answer on which is actually faster.
Relative URLs:
background-image: url(images/image.jpg);
Absolute URLs:
background-image: url(http://site.com/images/image.jpg);
Absolute URLs (with sub-domains):
background-image: url(http://fakecdn.site.com/images/image.jpg);
Will a browser recognize my "fakecdn" subdomain as a different domain and load images from it concurrently in a separate thread?
Do images referenced in an @import-ed CSS file load in a separate thread?
The HTTP/1.1 spec suggests that clients should not open more than two connections to a given server:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.
So, if you are loading many medium-sized images, it may make sense to spread them across separate FQDNs so that the two-connection limit is not the bottleneck. For small images, the overhead of a new socket connection to each FQDN may outweigh the benefit. Similarly, for large images, the client's network bandwidth may be the limiting factor.
If the images are always displayed, then using a data URI may be fastest, since no separate connection is required and the images can be included in the stream in the order they are needed.
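As a concrete illustration of the data-URI approach, a small image can be base64-encoded straight into the markup or stylesheet from PHP (the file name here is made up):
<?php
$png = file_get_contents('images/icon.png');
$uri = 'data:image/png;base64,' . base64_encode($png);
echo '<img src="' . $uri . '" alt="icon">';
// The same string also works in CSS: background-image: url(data:image/png;base64,...);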
However, as always with optimizing for performance, profile first!
See
Wikipedia - data uri
For lots of small images, social media icons being a good example, you'll also want to look into combining them into a single sprite map. That way they'll all load in the same request, and you just have to do some background-positioning when using them.
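For example, with a hypothetical sprites.png holding 16x16 icons laid out side by side:
.icon { display: inline-block; width: 16px; height: 16px; background-image: url(images/sprites.png); }
.icon-twitter { background-position: 0 0; }
.icon-facebook { background-position: -16px 0; }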

Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags....) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load.
I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night.
Has anyone done this? Is it a good idea? Are there other technologies I should investigate? I could program this myself, but I would prefer a solution built by configuring open-source technologies.
Thanks
I've used Squid before to reduce load on dynamically-created RSS feeds, and it worked quite well. It just takes some careful configuration and tuning to get it working the way you want.
Using a primed cache server is an excellent idea (I've done the same thing using wget and Squid). However, it is probably unnecessary in this scenario.
It sounds like your data is fairly static and the problem is server load, not network bandwidth. Generally, the problem exists in one of two areas:
Database query load on your DB server.
Business logic load on your web/application server.
Here is a JSP-specific overview of caching options.
I have seen huge performance increases from simply caching query results. Even a cache with a duration of 60 seconds can dramatically reduce load on a database server, and JSP has several options for in-memory caching.
Another area available to you is output caching. This means that the content of a page is created once, but the output is used multiple times. This reduces the CPU load of a web server dramatically.
My experience is with ASP, but the exact same mechanisms are available on JSP pages. In my experience, with even a small amount of caching you can expect a 5-10x increase in max requests per sec.
I would use tiered caching here; deploy Squid as a reverse proxy server in front of your app server as you suggest, but then deploy a Squid at each client site that points to your origin cache.
If geographic latency isn't a big deal, then you can probably get away with just priming the origin cache like you were planning to do and then letting the remote caches prime themselves off that one based on client requests. In other words, just deploying caches out at the clients might be all you need to do beyond priming the origin cache.
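If you do prime the origin cache by brute-force traversal, something as simple as fetching the site with wget through the Squid proxy can do it (the proxy address and site URL below are placeholders):
http_proxy=http://localhost:3128 wget --mirror --delete-after --no-verbose http://www.example.com/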
