I am writing a master's thesis on security threats at the application level.
I am currently writing about the identification of security threats at the application level, but I have run into an issue with DDoS attacks. I know I can monitor network traffic to detect unusual activity against my application, but that is not layer seven, right? I am confused about how to detect a DDoS attack on my web server within layer 7. Is there a way to do so, and which detection approaches should I look into?
Foreword: I don't really know much about security or encryption, or how they work.
I am developing a small server for a game that uses ENet, which doesn't support higher-level features such as security (e.g. the equivalent of SSL/TLS in the TCP world) in order to, as I believe its authors put it, maintain simplicity and embeddability. Supposing that this game requires at least some reasonable degree of security and authentication (e.g. logging in and such), what would be a good approach?
We are very new to Google Cloud and still learning.
I have two questions in mind.
First:
Can I create localised IP addresses for virtual instances? For example, I'd like to serve one web site from a German IP range and another web site from an Italian IP range.
Where is the best place to start, and is this possible in the cloud at all?
Second:
We had a DDoS attack while running under the cloud, and resource usage peaked during the attack. Will Google charge an extreme price for that peak time, or will billing be normal?
The second question brings up a third one:
We use Cloudflare for our domains. Is there a reliable way to prevent DDoS attacks under Google Cloud?
I appreciate your time and answers.
To your first point, are you after finding the shortest path between your users and wherever you serve your content? If that's the case, you can simply put a load balancer in front of your backend services within Google Cloud, with a global public forwarding IP address, and the service itself will take care of directing the traffic to the nearest group of machines available. Here is an example of an HTTP(S) Load Balancer setup.
Or is localization what you are trying to achieve? In that case I'd rely on more standard ways of determining the user's preferred language, such as browser settings (or user account settings, if they exist) or the Accept-Language header. This is a valuable resource from LocalizeJS.
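For example, here is a minimal sketch of picking a language from the Accept-Language header (ASP.NET/C# purely for illustration, and the supported-language list is made up); ASP.NET exposes the header through Request.UserLanguages:

using System;
using System.Web;

// Sketch only: the supported languages below are placeholders.
// Request.UserLanguages reflects the Accept-Language header sent by
// the browser, ordered by the client's preference.
public static class LanguagePicker
{
    public static string PickLanguage(HttpRequest request)
    {
        string[] supported = { "en", "de", "it" };
        string[] preferred = request.UserLanguages ?? new string[0];

        foreach (string lang in preferred)
        {
            // Entries look like "de-DE;q=0.8"; keep only the primary tag.
            string tag = lang.Split(';')[0].Split('-')[0].Trim();
            if (Array.IndexOf(supported, tag) >= 0)
                return tag;
        }
        return "en"; // fall back to a default language
    }
}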
Lastly, if you are determined to deploy multiple versions of your application for the different languages you support, you could still have an intermediate service that determines the source of the request using an IP-based lookup and redirects the user to the appropriate version. That said, my feeling is that this is a more traditional approach; in a world of client applications that are responsive and localized on the spot, the extra hop/redirect could annoy some users.
To your second point, a number of protections are already built into some Google Cloud services to help you protect your applications and machines in different ways. On the DDoS front, you can benefit from policies and protections on the CDN side, where you get cache- and scaling-based preventive measures.
In addition to that, if you put a load balancer in front of your content, you can benefit from protections at layers 3, 4 and 7 of the OSI model. That includes typical HTTP floods, SYN floods, port exhaustion and NTP amplification attacks.
What this means is that in many of these situations your infrastructure will not even notice the potential attack, as it will be mitigated before it reaches you (and therefore you will not be billed for it). That said, I have heard of and experienced situations in which these protections did not act in a timely fashion, or were not triggered at all. In those scenarios your system may need to handle the extra load itself. However, especially in events where the attack was obviously malicious and documented as something Google Cloud was supposed to handle, there is a chance to make a case with Google and get some support on the topic.
A bit more on that here.
Hope this is helpful.
Can't find this issue anywhere...
Using ASP.NET 3.5, I have 3 web servers in a web farm, using ASP.NET State Server (on a separate server).
All pages use session state (they both read and update the session).
Issue: my pages are prone to DoS attacks, and it is easy to trigger: just go to any page and hold down the F5 key for 30-60 seconds, and the requests will pile up on all web servers.
I read that if you make multiple requests that use the session, each one LOCKS the session, so the other requests have to wait to acquire the same user's session; this waiting is ultimately what causes the denial of service.
Our solution has been pretty primitive: from preventing master pages and custom controls from calling session (only the page itself is allowed to), to adding JavaScript that disables the F5 key.
I just realized that ASP.NET with session state is prone to such DoS attacks!
Has anyone faced a similar issue? Is there a global/elegant solution? Please do share.
Thanks
Check this:
Dynamic IP Restrictions:
The Dynamic IP Restrictions Extension for IIS provides IT Professionals and Hosters a configurable module that helps mitigate or block Denial of Service Attacks or cracking of passwords through Brute-force by temporarily blocking Internet Protocol (IP) addresses of HTTP clients who follow a pattern that could be conducive to one of such attacks. This module can be configured such that the analysis and blocking could be done at the Web Server or the Web Site level.
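For illustration only, and not the actual Dynamic IP Restrictions module, the idea of temporarily rejecting clients that exceed a request rate can be sketched as an ASP.NET HttpModule (the threshold, window and class name are made up, and it assumes .NET 4+ for ConcurrentDictionary):

using System;
using System.Collections.Concurrent;
using System.Web;

// Sketch only: a crude per-IP request-rate check, mimicking the idea
// behind Dynamic IP Restrictions. Real deployments should rely on the
// IIS module or on upstream protection instead of application code.
public class SimpleRateLimitModule : IHttpModule
{
    private class Counter
    {
        public DateTime WindowStart;
        public int Count;
    }

    private static readonly ConcurrentDictionary<string, Counter> Counters =
        new ConcurrentDictionary<string, Counter>();

    private static readonly TimeSpan Window = TimeSpan.FromSeconds(1);
    private const int MaxRequestsPerWindow = 30;

    public void Init(HttpApplication application)
    {
        application.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        string ip = context.Request.UserHostAddress ?? "unknown";

        Counter counter = Counters.GetOrAdd(ip, _ => new Counter
        {
            WindowStart = DateTime.UtcNow
        });

        lock (counter)
        {
            // Start a fresh counting window once the old one expires.
            if (DateTime.UtcNow - counter.WindowStart > Window)
            {
                counter.WindowStart = DateTime.UtcNow;
                counter.Count = 0;
            }
            counter.Count++;

            if (counter.Count > MaxRequestsPerWindow)
            {
                // Reject the request early with 429 Too Many Requests.
                context.Response.StatusCode = 429;
                context.ApplicationInstance.CompleteRequest();
            }
        }
    }

    public void Dispose() { }
}

Such a module would still need to be registered in web.config, and as the next excerpt argues, this kind of filtering is usually better done earlier in the chain than at the web server.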
Also, check this:
DoS Attack:
Most sites/datacenters will control (D)DoS attacks via hardware, not software: firewalls, routers, load balancers, etc. It is not efficient or desirable to have this at the application level in IIS. I don't want bloat like this slowing down IIS.
Also, DDoS prevention is a complex setup; there are even dedicated hardware boxes just to deal with it, with their own rules and analysis that take a lot of processing power.
Look at your web environment's infrastructure, see how it is set up and what protection your hardware provides, and if it is a problem, look at dedicated hardware solutions. You should block DDoS attacks as early as possible in the chain, not at the end at the web server level.
Well, the most elegant solution has to be implemented at the network level.
Since it is "nearly" impossible to differentiate a DDoS attack from valid session traffic, you need a learning algorithm running over the network traffic; most enterprise-level web applications need a DDoS defender at the network level. Those are quite expensive but more robust solutions for DDoS. You can ask your datacenter whether they have DDoS-defender hardware; if they do, they can put your server's traffic behind the device.
Two of the main competitors in this market:
http://www.arbornetworks.com/
http://www.riorey.com/
We had the same issue at work. It's not solved yet, but two workarounds we were looking at were:
Changing the session state provider so that it doesn't lock the session, if your application logic would allow this (see the sketch after this list).
Upgrading the session state server so that it was faster (SQL 2016 in-memory session state for example). This makes it a little harder for users to cause issues and means your app should recover faster.
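A related mitigation along the same lines, sketched here only as an illustration, is to avoid taking the exclusive session lock for requests that never write to the session: in Web Forms that is the EnableSessionState="ReadOnly" page directive, and for a handler it is the IReadOnlySessionState marker interface (the handler below is made up):

using System.Web;
using System.Web.SessionState;

// Sketch only: implementing IReadOnlySessionState instead of
// IRequiresSessionState asks ASP.NET for a non-exclusive, read-only
// session lock, so parallel requests from the same user are no longer
// serialized by the session store.
public class ReportHandler : IHttpHandler, IReadOnlySessionState
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // The session can be read here, but writes are not persisted.
        string userName = context.Session["UserName"] as string;
        context.Response.Write("Hello, " + (userName ?? "guest"));
    }
}

Pages marked ReadOnly behave the same way, which is often enough to stop the F5 pile-up on read-mostly pages.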
Rackspace and Amazon do handle UDP, but Azure does not, and neither does GAE (the platform most similar to Azure).
I am wondering what the expected benefits of this restriction are. Does it help with fine-tuning the network? Does it ease load balancing? Does it help secure the network?
I suspect the reason is that UDP traffic has neither a defined lifetime nor a defined packet-to-packet relationship. This makes it hard to load balance and hard to manage: when you don't know how long to hold the path open, you end up using timers, which is a problem for some NAT implementations too.
There's another angle not really explored here so far. UDP traffic is also a huge source of security problems, specifically DDoS attacks.
By blocking all UDP traffic, Azure can more effectively mitigate these attacks. Nearly all large-bandwidth attacks, which are by far the hardest to deal with, are amplification attacks of some sort, and most often UDP based. Allowing that traffic past the border of the network greatly increases the likelihood of service disruption, regardless of QoS assurances.
A second facet of the same story is that by blocking UDP they prevent people from hosting insecure DNS servers, and thus prevent Azure from being the source of these large-scale amplification attacks. This is actually a very good thing for the internet overall, as I'd think the connectivity of Azure's data centers is significant. By contrast, I've had servers in AWS send non-stop UDP attacks to our datacenter for months on end, and I could not get the abuse team to respond.
The only thing that comes to my mind is that maybe they wanted to avoid their cloud being accessed through an unreliable transport protocol.
Along with scalability, reliability is one of the key aspects of Azure. For example, SQL Azure and Azure Storage data is always replicated in at least three places, and roles with at least two instances have 99.95% uptime in their SLA.
Of course, despite its partial unreliability, UDP has its use cases, some of them enumerated in the comments from the feature voting site, but maybe those use cases are not a target for the Azure platform.
I'm about to start writing a web app (ASP.NET/IIS7) which will be accessible over the internet. It will be placed behind a firewall which accepts HTTP and HTTPS.
The previous system, which we are going to replace, doesn't let the web server talk directly to a database; instead, the web server makes highly specialized web service calls (through a second firewall which only allows this kind of call) to a separate app server, which then goes to the DB to operate on the data.
I have worked on many systems in my day, but this is the first one that has taken security this seriously. Is this a common setup? My first thought was to use Windows Authentication in the connection string on the web server, make the user a restricted DB user (one that can only view and update its own data), and then allow DB access through the inner firewall as well.
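For concreteness, a minimal sketch of the kind of connection I have in mind (the server and database names are made up):

using System.Data.SqlClient;

// Sketch only: "app-db" and "Ordering" are placeholder names.
// Integrated Security=SSPI makes the web server connect as its Windows
// (service) account, which would be the restricted, view/update-only user.
public static class Db
{
    public static SqlConnection OpenRestrictedConnection()
    {
        const string connectionString =
            "Server=app-db;Database=Ordering;Integrated Security=SSPI;";

        var connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }
}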
Am I naïve? It seems like I will have to do a lot of data mapping if we use the current setup for the new system.
Edit: The domain of this app is online ordering of goods (business to business). Users (businesses) log in, enter what they can deliver in any given time period, view previous transaction history, view projected demand for goods, etc. No actual money is exchanged through this system, but it provides the information on which goods are available for sale, which is input data for the ordering system.
This type of arrangement (DMZ with a web server, communicating through a firewall with an app server, communicating through a firewall with a DB) is very common in certain types of environments, especially in large transactional systems (online corporate banking, for example).
There are very good security reasons for doing this, the main one being that it will slow down an attack on your systems. The traditional term for it is Defence in Depth (or Defense if you are on that side of the water).
Reasonable security assumption: your webserver will be continually under attack
So you stick it in a DMZ and limit the types of connection it can make by using a firewall. You also limit the webserver to just being a web server - this reduces the number of possible attacks (the attack surface)
2nd reasonable security assumption: at some point a zero-day exploit will be found that will get to your web server and allow it to be compromised, which could lead to an attack on your user/customer database.
So you have a firewall limiting the number of connections to the application server.
3rd reasonable security assumption: zero-days will be found for the app server, but the odds of finding zero-days for the web and app servers at the same time are reduced dramatically if you patch regularly.
So if the value of your data/transactions is high enough, adding that extra layer could be essential to protect yourself.
We have an app that is configured similarly. The interface layer lives on a web server in the DMZ, the DAL is on a server inside the firewall, and a web service bridges the gap between them. In conjunction with this we have an authorization manager inside the firewall which exposes another web service that is used to control what users are allowed to see and do within the app. This app is one of our main client data tracking systems, and it is accessible to our internal employees and outside contractors. It also deals with medical information, so it falls under the HIPAA rules. So while I don't think this setup is particularly common, it is not unheard of, particularly with highly sensitive data or in situations where you have to deal with audits by a regulatory body.
Any reasonably scalable, reasonably secure, conventional web application is going to abstract the database away from the web machine using one or more service and caching tiers. SQL injection is one of the leading vectors for penetration/hacking/cracking, and databases often tend to be one of the more complex, expensive pieces of the overall architecture/TCO. Using service tiers allows you to move logic out of the DB, to employ out-of-process caching, to shield the DB from injection attempts, etc. You get better, cheaper, more secure performance this way. It also allows for greater flexibility when it comes to upgrades, redundancy or maintenance.
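As a hedged illustration of that injection point (the table, column and method names below are made up), the service/DAL tier is where queries get parameterized so that user input is treated as data rather than executable SQL:

using System.Data.SqlClient;

// Sketch only: "Orders", "Total" and "OrderId" are placeholder names.
// The service tier builds parameterized commands, so the caller-supplied
// orderId can never be interpreted as SQL text.
public static class OrderService
{
    public static decimal GetOrderTotal(string connectionString, int orderId)
    {
        const string sql = "SELECT Total FROM Orders WHERE OrderId = @orderId";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@orderId", orderId);
            connection.Open();
            // Assumes the order exists; a real service would handle nulls.
            return (decimal)command.ExecuteScalar();
        }
    }
}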
Configuring the user's access rights seems like a more robust solution to me. Your data access layer should have some security built in as well. Adding this additional layer could end up being a performance hit, but it really depends on what mechanism you're using to move data from "WebServer1" to "WebServer2". Without more specific information in that regard, it's not possible to give a more solid answer.