Open-source lightweight HIDS for use on production servers - networking

Requirement
I want to secure my production VMs on AWS; these VMs host critical web applications and can see around 500 Mbps of traffic during peak hours. I am already using the mod_security WAF, but I am not very happy with it.
Here is what I am thinking:
What if I use Snort in a lightweight configuration to monitor only HTTP traffic (this would be behind SSL termination) and use open-source XSS and SQLi rules to add an additional layer of protection? The number of rules will be > 100.
By the time traffic hits my VMs it will be unencrypted. Moreover, since I would be running Snort on the same host, there won't be much of a semantic gap (a WAF has an edge over an IPS since it builds richer application-layer context and can detect layer 7 attacks more accurately). Is this understanding correct?
I can spare around 200 MB of memory and can take about a 10% overhead on CPU performance.
Is Snort the best bet here? I looked at Suricata, which seems to be easier on CPU but harder on memory. Please let me know if this makes sense at all. I want to stick to open-source solutions.
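For a rough sense of what "monitor only HTTP traffic on the host itself" looks like in practice, here is a minimal Python sketch using scapy. It is an illustration of the idea only, not Snort; the port-80 filter and the toy SQLi/XSS patterns are assumptions, and real open-source rule sets (Snort Community, ET Open) are far more thorough.

```python
# Illustration of on-host, HTTP-only inspection (not Snort; scapy-based sketch).
# Assumes traffic is already decrypted (behind SSL termination) and on port 80.
import re
from scapy.all import sniff, IP, TCP, Raw  # requires scapy and root privileges

# Naive example patterns; real rule sets are much richer than this.
PATTERNS = {
    "sqli": re.compile(rb"(union\s+select|or\s+1=1|sleep\s*\()", re.IGNORECASE),
    "xss":  re.compile(rb"(<script|onerror\s*=|javascript:)", re.IGNORECASE),
}

def inspect(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        for name, pattern in PATTERNS.items():
            if pattern.search(payload):
                print(f"[{name}] suspicious request from {pkt[IP].src}")

# The BPF filter keeps the kernel from handing us anything but HTTP-port traffic,
# which is what keeps the CPU/memory footprint small.
sniff(filter="tcp dst port 80", prn=inspect, store=False)
```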

Related

Need to set up an RTMP stream from our server with multicast

I have a client with an audience of 1-2 thousand viewers, with streams every day and roughly the same number of concurrent viewers.
I've got a server set up for their website etc., but I am in the process of figuring out the best way to stream with OBS onto that server, and then re-distribute that stream to clients (as an embed on the website).
Now, from the calculations I did, that kind of concurrent viewership is very problematic, as it forces you into a 10 Gbit link - which is very expensive - and I would ideally like to fit within 1-2 Gbps, if possible.
A friend of mine recommended looking into "multicast", which supposedly uses MUCH less bandwidth than regular live-streaming options. Is multicast doable? I've had an NGINX live stream set up on my server by a friend before, but never looked into the config or whether multicast is supported within that. Are there any other options? What would you recommend?
Also, this live stream isn't a high-profit/organisation type of deal, so any pre-made services just don't make sense, as they would easily cost 40+ dollars per stream, which is just too much for my client.
Thank you for any help!
Tom
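A quick sanity check of that bandwidth calculation, with the per-viewer bitrate as an assumed figure (a 1080p stream typically runs somewhere around 3-6 Mbps):

```python
# Rough unicast bandwidth estimate; 4.5 Mbps per viewer is an assumed bitrate.
viewers = 2000
bitrate_mbps = 4.5
total_gbps = viewers * bitrate_mbps / 1000
print(f"{total_gbps:.1f} Gbps")  # -> 9.0 Gbps, i.e. a 10 Gbit link at the origin
```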
Rather than multicast, P2P is a more practical solution on the Internet - to save money, rather than bandwidth.
In particular, for HTML5 browsers it's possible to use WebRTC DataChannels to transport the P2P data.
Multicast, on the other hand, does not work across Internet routers.
Multicast works by sending a single stream across the network to edge points where clients can 'join' the multicast to get an individual stream for them.
It requires that the network supports multicast protocols and the edges align with your users.
It is typically used when an operator has their own IP network for service like IPTV, rather than for services over the internet.
For your scenario, you would usually use an origin server and a CDN - this will usually reduce the load on your own server, as the video will be cached on the network and multiple users can access the same 'chunks' of the video.
You can see an AWS example for on-demand video here - other vendors and cloud providers have solutions too, so this is just an example:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/tutorial-s3-cloudfront-route53-video-streaming.html
You can find more complex on-demand and live-streaming tutorials as well, but they are likely more than you need: https://aws.amazon.com/cloudfront/streaming/
Exploring P2P may be an option also, as Winton suggests - some CDNs may also leverage P2P technology internally.

Choosing between software load balancer and hardware load balancer

I wonder if there are any situations where one would prefer a software load balancer over a hardware load balancer, or vice versa. I've played around with F5, A10, Nginx, and HAProxy briefly, and the only marginal difference I was able to notice was the price, apart from slightly better API documentation etc. So my question is:
Are there any particular use cases where one would prefer software load balancers over hardware load balancers, or vice versa?
Feel free to cite your experience: where you preferred one over the other, and the rationale you used to make that decision.
PS: I have read 5 reasons to prefer S/W load balancers over H/W load balancers and didn't find the explanations there very compelling.
EDIT: Regarding my use case, I'll be needing a lot of load balancers to secure/load-balance a large number of apps. The design decision should therefore be able to cope with an exponentially increasing number of apps behind it (it should be easily scalable). I'm not looking at a 10- or 50-app load balancer, but at a solution with tens of thousands of apps behind load balancers. It would also be great if you could specifically point out features where hardware outweighs software, or vice versa. For example, with a hardware load balancer's FPGA services one can do SSL offloading and achieve a performance gain of X, given that one has more than Y apps behind it, etc.
There isn't going to be a single answer to this question, as it will always depend on your application requirements and your compliance obligations. Companies like F5, A10, and Citrix offer services that expand well past basic load balancing and offer features that plain load balancers just cannot touch.
If you're JUST looking for LB services and maybe some SSL bridging or offloading, here are some benefits:
Hardware: Offers hardware-accelerated SSL offloading and bulk encryption thanks to FPGAs. This also depends on which cipher suites you plan to use. With hardware you're usually placing the appliance in front of hundreds of applications, or you're using it because the devices may be certified firewalls and you have additional compliance requirements.
Software: If you just need basic LB, HAProxy/Nginx are an easy choice for basic LB services and even some SSL services. Support is mixed if you're not paying for it, as you have to rely only on community examples.
However, if you have mixed environments and maybe already have 1 vendor in play, that can help decide. All of the hardware vendors offer virtual appliances and have automation tools to help with elastic environments so really it ends up being "Will you only ever need LB services or will you end up having to tack on more later"?
The F5/A10/Citrix ADCs in the cloud still offer more features in a single platform than having to spin up segregated services (think firewall/load balancing/web firewall/global load balancing/fraud prevention/analytics/access management).
Updated 6/21/2017:
Hardware: People are buying hardware solutions not to proxy 1 or 2 applications but 100 or 200, or even 1,000 or 2,000 applications in their data centers (on site or colocated). For these cases it's about performance and services beyond LB. That includes security needs and app protection that are not baked into HAProxy and Nginx.
ADC vendors' software solutions: You have a third option, because F5/A10/Citrix also sell virtual appliances allowing you to run the same software in Azure/AWS/Google or in VMware... you get the idea. This is unique because you can have hardware in your colocation facility and virtual appliances in your cloud solution from the same vendor - and, as a bonus for your admins, the same support escalation point.
HAProxy/Nginx software: This goes back to the original statement: if you're talking about an LB-only solution and price is a concern, this is your way to go. The feature sets are more limited than the ADC/security solutions above, but they do LB just fine. It can become a bit cumbersome managing hundreds of apps, so you'll have to rely on your dev team a bit more to make sure they're isolating environments OR are REALLY good at automation.
The decision comes down to: will you only ever need load balancers? If yes, then HAProxy/Nginx. If you need more features to load balance AND protect your app, then ADC software solutions are the way to go.
If you need reliable performance and cannot justify dedicating one VM per host to achieve it, then hardware ADCs are the way to go.
For transparency, I work on the DevCentral team at F5, so I would love to say go hardware, but if you don't need it, don't do it. It's going to come down to your application requirements.
The follow up question is what is your application and requirements for a load-balancer?
Generally hardware LBs have fixed performance and hardware acceleration to assist with SSL offload. Software or virtual performance can fluctuate under increased load, and then you can run into performance bugs, but it's easier to deploy and scale.
Another question to look into is whether you will need to modify or redirect traffic based on content - for example, rewriting or filtering traffic. If yes, then you may need a full-proxy LB.
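To make "basic LB services" a bit more concrete, here is a toy round-robin TCP proxy sketch in Python. It is purely illustrative: the backend addresses and listen port are made up, and anything production-grade would be HAProxy/Nginx or an ADC, not something like this.

```python
import asyncio
import itertools

# Hypothetical backend pool; in practice HAProxy/Nginx or an ADC would own this job.
BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]
pool = itertools.cycle(BACKENDS)

async def pipe(reader, writer):
    # Copy bytes one way until the sending side closes.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    host, port = next(pool)  # naive round-robin backend selection
    backend_reader, backend_writer = await asyncio.open_connection(host, port)
    # Shuttle bytes in both directions until either side closes.
    await asyncio.gather(pipe(client_reader, backend_writer),
                         pipe(backend_reader, client_writer))

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```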

Why do some Internet providers block certain ports?

We published our game on a Russian server, and 1% of players couldn't connect to the server on a 46xx port through raw TCP, while they could load its HTML page (through HTTP). Most of these people live in Germany, Israel....
Why is this so? What policy decisions lie behind it? We discovered that for these users such ports (which are unassigned in the IANA registry) are closed. Does this mean that these people cannot run Steam (and, therefore, play all the games you can buy through it), play WoW, or play many other modern games that use TCP on 4xxx ports?
Thank you.
ISPs have been known to filter certain ports for various reasons. Users should complain loudly to them (or switch) in order to send a signal that such is not to be tolerated. You can encourage them to do so but of course that doesn't solve your problem (or really answer your question).
Common reasons are:
- trying to block BitTorrent traffic
- limit bandwidth usage (largely related to previous reason)
- security (mistaken)
- control (companies often don't want employees goofing off)
The easiest thing for you to do is run your game over port 443 (perhaps as an alternate). That's HTTPS and so will not generally be blocked. However, because HTTPS is encrypted, there's no way to inspect the stream to know whether it's web traffic or something else, and thus you can run any data stream (encrypted or not) that you wish over it.
That's precisely correct. In fact, every public web site would by default block all ports except the ones on which it expects traffic it actually wants.
This is the reason many applications try to encapsulate their traffic over port 80, which can't be blocked as long as someone wants HTTP traffic to run.
They simply don't want any application that they haven't approved to run through their servers. If you have a sensitive server exposed to the public, you surely won't want anyone to use your machine for apps you don't allow. A common reason is applications that eat up bandwidth, such as BitTorrent, eDonkey, and Gnutella, as well as streaming, VoIP, and other high-bandwidth-consuming apps.
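As a concrete illustration of the earlier suggestion to fall back to port 443, here is a client-side sketch; the host name and port numbers are placeholders, and port 443 here carries the game's own protocol rather than HTTPS.

```python
import socket

# Hypothetical game server and ports: 46xx is the preferred port,
# 443 the fallback for networks that filter uncommon ports.
HOST = "game.example.com"
PORTS = [4660, 443]

def connect_with_fallback(host=HOST, ports=PORTS, timeout=5.0):
    last_error = None
    for port in ports:
        try:
            sock = socket.create_connection((host, port), timeout=timeout)
            print(f"connected on port {port}")
            return sock
        except OSError as exc:  # refused, filtered, or timed out
            last_error = exc
    raise ConnectionError(f"all ports blocked or unreachable: {last_error}")
```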

What are the benefits of not supporting UDP in a cloud network? (thinking of the Windows Azure case)

Rackspace and Amazon do handle UDP, but Azure and GAE (the platform most similar to Azure) do not.
I am wondering what the expected benefits of this restriction are. Does it help with fine-tuning the network? Does it ease load balancing? Does it help to secure the network?
I suspect the reason is that UDP traffic has neither a defined lifetime nor a defined packet-to-packet relationship. This makes it hard to load balance and hard to manage - when you don't know how long to hold the path open, you end up using timers; this is a problem for some NAT implementations too.
There's another angle not really explored here so far. UDP traffic is also a huge source of security problems, specifically DDoS attacks.
By blocking all UDP traffic, Azure can more effectively mitigate these attacks. Nearly all large-bandwidth attacks, which are by far the hardest to deal with, are amplification attacks of some sort, and most often UDP based. Allowing that traffic past the border of the network greatly increases the likelihood of service disruption, regardless of QoS assurances.
A second facet of the same story is that by blocking UDP they prevent people from hosting insecure DNS servers, and thus prevent Azure from being the source of these large-scale amplification attacks. This is actually a very good thing for the internet overall, as I'd think the connectivity of Azure's data centers is significant. To contrast this, I've had servers in AWS send non-stop UDP attacks at our datacenter for months on end, and I could not successfully get the abuse team to respond to it.
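As a rough worked example of why amplification matters, the packet sizes below are ballpark figures for a spoofed DNS "ANY" query against an open resolver, not measured values.

```python
# Ballpark DNS amplification arithmetic; sizes are illustrative assumptions.
query_bytes    = 64      # small spoofed UDP query sent to an open resolver
response_bytes = 3000    # large "ANY" response delivered to the victim
amplification  = response_bytes / query_bytes

attacker_uplink_mbps = 100
victim_traffic_gbps  = attacker_uplink_mbps * amplification / 1000
print(f"~{amplification:.0f}x amplification -> ~{victim_traffic_gbps:.1f} Gbps at the victim")
# ~47x amplification -> ~4.7 Gbps at the victim
```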
The only thing that comes to my mind is that maybe they wanted to avoid their cloud being accessed through an unreliable transport protocol.
Along with scalability, reliability is one of the key aspects of Azure. For example, SQL Azure and Azure Storage data is always replicated in at least three places, and roles with at least two instances have a 99.95% uptime SLA.
Of course, despite its partial unreliability, UDP has its use cases, some of them enumerated in the comments from the feature voting site, but maybe those use cases are not a target for the Azure platform.

P2P network games/apps: Good choice for a "battle.net"-like matching server

I'm making a network game (1v1) where in-game it's P2P - no need for a game server.
However, for players to be able to "find each other", without the need to coordinate in another medium and enter IP addresses (similar to the modem days of network games), I need to have a coordination/matching server.
I can't use regular web hosting because:
- The clients will communicate over UDP.
- Therefore I'll need to do UDP hole punching to be able to get through NAT (a rough sketch of that step appears below).
- That would require the server to talk UDP and know the client's IP and port.
- AFAIK, with regular web hosting (PHP/etc.) I can only get the client's IP address and can only communicate over TCP (HTTP).
Options I am currently considering:
- Use a hosting solution where my program can accept UDP connections. (Any recommendations?)
- UDPonNAT seems to do this, but it uses GTalk and requires each client to have a GTalk account, which probably makes it an unsuitable solution.
Any ideas? Thanks :)
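For what it's worth, here is a rough sketch of the hole-punching step referred to above, assuming both peers have already learned each other's public IP and port from the matching server; the addresses, message contents, and retry count are all assumptions.

```python
import socket

def punch_and_connect(local_port, peer_addr, attempts=10):
    """Very simplified UDP hole punch: both peers run this at the same time,
    each using the public (ip, port) of the other as learned from the
    matching server."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))
    sock.settimeout(1.0)

    for _ in range(attempts):
        # The outbound packet opens (or refreshes) a mapping in our own NAT;
        # once both sides have done this, packets start getting through.
        sock.sendto(b"punch", peer_addr)
        try:
            data, addr = sock.recvfrom(1024)
            if addr[0] == peer_addr[0]:
                print(f"hole punched, talking to {addr}")
                return sock
        except socket.timeout:
            continue
    raise RuntimeError("hole punching failed (e.g. symmetric NAT on one side)")

# Example usage with a made-up peer endpoint obtained from the matching server:
# game_sock = punch_and_connect(50000, ("203.0.113.7", 41641))
```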
First, let me say that this is well out of my realm of expertise, but I found myself very interested, so I've been doing some searching and reading.
It seems that the most commonly prescribed solution for UDP NAT traversal is to use a STUN server. I did some quick searches to see if there are any companies that will just straight-up provide you with a STUN hosting solution, but if there even were any, they were buried in piles of ads for simple web hosting.
Fortunately, it seems there are several STUN servers that are already up and running and free for public use. There is a list of public STUN servers at voip-info.org.
In addition, there is plenty more information to be had if you explore SO questions tagged "nat".
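If you're curious what talking to one of those public STUN servers actually involves, here is a minimal RFC 5389 Binding Request sketch in Python. It is only a sketch: stun.l.google.com:19302 is just one commonly cited public server, error handling is omitted, and servers that reply with a plain MAPPED-ADDRESS instead of XOR-MAPPED-ADDRESS are not handled.

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def stun_public_endpoint(server=("stun.l.google.com", 19302), local_port=54321):
    # Binding Request header: type=0x0001, length=0, magic cookie, 96-bit transaction ID.
    txn_id = os.urandom(12)
    request = struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, txn_id)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))
    sock.settimeout(2.0)
    sock.sendto(request, server)
    data, _ = sock.recvfrom(2048)

    # Walk the attributes after the 20-byte header, looking for XOR-MAPPED-ADDRESS (0x0020).
    pos = 20
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack("!HH", data[pos:pos + 4])
        value = data[pos + 4:pos + 4 + attr_len]
        if attr_type == 0x0020:
            port = struct.unpack("!H", value[2:4])[0] ^ (MAGIC_COOKIE >> 16)
            ip_int = struct.unpack("!I", value[4:8])[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", ip_int)), port
        pos += 4 + attr_len + ((4 - attr_len % 4) % 4)  # attributes are 4-byte aligned
    return None

# print(stun_public_endpoint())  # -> ("your.public.ip.addr", mapped_port)
```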
I don't see any other choice than to have a dedicated server running your code. The other solutions you propose are, shall we say, less than optimal.
If you start small, virtual hosting will be fine. Costs are pretty minimal.
Rather than a full-blown dedicated server, you could just get a cheap shared hosting service and have the application interface with a PHP page, which in turn interfaces with a MySQL database backend.
For example, Lunarpages has a $3/month starter package that includes 5 GB of space and 50 GB of bandwidth. For something this simple, that's all you should need.
Then you just have your application poll the web page for the list of games, and submit a POST request in order to add their own game to the list.
Of course, this method requires learning PHP and MySQL if you don't already know them. And if you do it right, you can have the PHP page enter a sort of infinite loop to keep the connection open and just feed updates to the client, rather than polling the page every few seconds and wasting a lot of bandwidth. That's way outside the scope of this answer though.
Oh, and if you're looking for something absolutely free, search for a free PHP host. Those exist too! Even with an ad-supported host, your app could just grab the page and ignore the ads when you parse the list of games. I know that T35 used to be one of my favorites because their free plan doesn't track space or bandwidth (it limits the per-file size, to prevent their service being used as a media share, but that shouldn't be a problem for PHP files). But of course, I think in the long run you'll be better off going with a paid host.
Edit: T35 also says "Free hosting allows 1 domain to be hosted, while paid offers unlimited domain hosting." So you can even just pay for a domain name and link it to them! I think in the short term, that's your best (cheapest) bet. Of course, this is all assuming you either know or are willing to learn PHP in order to make this happen. :)
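On the client side, the poll-and-register loop described above might look roughly like this; the URL, field names, response format, and the use of the requests library are all assumptions about a hypothetical PHP endpoint.

```python
import time
import requests  # third-party: pip install requests

# Hypothetical endpoint served by the cheap PHP host described above.
MATCH_URL = "http://example.com/games.php"

def register_game(host_name, udp_port):
    # POST our own game so other clients can find it.
    requests.post(MATCH_URL, data={"host": host_name, "port": udp_port}, timeout=5)

def poll_games(interval=5):
    # Poll the page for the current list of open games
    # (assumed format: one "ip:port" entry per line).
    while True:
        response = requests.get(MATCH_URL, timeout=5)
        games = [line.split(":") for line in response.text.splitlines() if ":" in line]
        print("open games:", games)
        time.sleep(interval)
```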
There's nothing that every net connection will support. STUN is probably good; UPnP can work for this too.
However, it's rumored that most firewalls can be enticed to pass almost anything through UDP port 53 (DNS). You might have to argue with the OS about your access to that port though.
Also, check out SIP; it's another protocol designed for this sort of thing. With the popularity of VoIP, there may be decent built-in support for it in more firewalls.
If you're really committed to UDP, you might also consider tunneling it over HTTP.
How about you break the problem into two parts: make a game-matcher client (distinct from the game itself) which can communicate via HTTP with your cheap/shared web host. All gamers who want to use the game-matching function use this. The game-matcher client then launches the actual game with the correct parameters (IP, etc.) after obtaining the info from your server.
The game will then use the standard way to UDP-punch through NAT, etc., as per your network code. The game doesn't actually need to know anything about the matcher client or matcher server - in the true sense of P2P (like torrents: once you can obtain your peers' IPs, you can even disconnect from the tracker).
That way, your problems become smaller.
An intermediate solution between hosting your own dedicated server and a strictly P2P networking environment is the gnutella model. In that model, there are superpeers that act like local servers, having known IP addresses and being connected to (and thus having knowledge of) more clients than a typical peer. This still requires you to run at least one superpeer yourself, but it gives you the option to let other people run their own superpeers.
