Add latency and restrict bandwidth to a cgroup with tc - networking

I would like to use the net_cls controller of Linux cgroups to restrict the bandwidth of a cgroup AND add latency to a network link.
Following this tutorial, bandwidth limiting works fine, but I wonder how I can also add latency for that process. I know that there is the netem qdisc, but I do not know how it interacts with the htb qdisc that is specified in the tutorial. So far I have not been able to attach any latency to that cgroup.
What is the correct way to go here?
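For what it's worth, a common pattern is to chain netem underneath the htb class that the cgroup's traffic is classified into: htb shapes the rate, and a netem leaf delays whatever that class emits, so the two do not conflict. A minimal sketch, assuming an htb setup like the tutorial's (the device, classid, rate, and delay are placeholders):

    # Rate-limited htb class (as in the tutorial)
    tc qdisc add dev eth0 root handle 1: htb
    tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit

    # Attach netem *under* the htb class to add latency on top of the rate limit
    tc qdisc add dev eth0 parent 1:10 handle 10: netem delay 100ms

    # Classify traffic from the net_cls cgroup into class 1:10
    tc filter add dev eth0 parent 1: protocol ip prio 10 handle 1: cgroup
    echo 0x00010010 > /sys/fs/cgroup/net_cls/mygroup/net_cls.classid

Note that net_cls.classid is the hex form of the tc classid (0x00010010 = 1:10), so the cgroup's packets land in the htb class and then pass through the netem qdisc below it.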

Related

Is it "okay" to host a small wordpress blog on one AWS EC2 Instance without load balancers/beanstalk?

This is a very simple question for those with the knowledge, but I'm a newbie.
In essence, I just need to know if it would be considered okay to run a small, approx. 700 visitors/day Bitnami WordPress blog on just one t2.medium EC2 instance (without any auto-scaling or Beanstalk).
Am I at risk of it crashing? What stats should I monitor to be aware of potential dangers? Sorry for the basic nature of these questions, but this is all new to me.
tl;dr: It might be "okay", but it's not ideal.
If your question is motivated by:
Initial setup time - Load balancing and auto-scaling cost some time up front, but they work out cheaper (more time-efficient) over time.
Cost - Auto-scaling spins down instances that aren't being used, which reduces cost.
Minimal setup for a great user experience - The goal of a great AWS setup is to ensure that capacity matches demand.
Am I at risk of it crashing?
Possibly, yes. If you average 700 visitors, the risk is traffic spikes, i.e. many visitors hitting at the same time. It also depends on what your peak visitor count is, which could vary widely from the average (or not).
What stats should I monitor to be aware of potential dangers?
Monitor the usage on high-traffic days (e.g. public-holiday sales)
Set up billing alerts
Set up the right metrics:
See John Rotenstein's SO answer:
CPU Utilization is not always the right measure to use -- your application might only be able to handle a limited number of connections, it might be squeezed on RAM and the types of requests might vary too.
You can use normal monitoring tools, or you can write something that pushes metrics to Amazon CloudWatch, so that you go beyond the basic CPU and Network metrics that CloudWatch normally provides. You could even use the Load Balancer's Latency metric to trigger scaling when the application slows down (custom code required).
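(As a concrete illustration of pushing a custom metric -- not taken from the quoted answer; the namespace and metric name here are made up:)

    aws cloudwatch put-metric-data \
        --namespace "MyBlog" \
        --metric-name ActiveConnections \
        --value 42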
I'd start with:
Two or more instances - to deal with instance redundancy (an instance going down)
Several t2.small instances rather than one t2.medium - several small instances can work out to be more cost-efficient in some use cases.
Add auto-scaling - automatically spin up or down instances based on minimum and maximum counts
Load balancing - to re-route users from unhealthy to healthy instances, and to keep all of the spun-up instances working as evenly as possible (rather than a single instance handling 80% of the workload while the others bludge).
You can always reduce your instance count later, once monitoring shows what you actually need.
In my opinion, with 700 visitors a day, the safer option would be to run a load-balanced/auto-scaling environment on Elastic Beanstalk with at least 2 instances. The problem with running just one instance is that, yes, you are at great risk of crashing if you get an increase in traffic, and if the instance goes down you will have no fallback.
You can easily set up CloudWatch monitoring on NetworkIn and NetworkOut to get a sense of the number of requests your site is receiving and serving, and set up CPU usage monitoring as well. The trade-off with running a load-balanced environment over a single-instance environment is that the cost might increase significantly as you introduce other things into your environment, such as the load balancer itself. If you do introduce a load balancer, consider reducing the instance size to maybe a t2.small, which could help keep the cost down.
It actually depends; the scope of this question is wide, and you have multiple options here.
You can serve that many visitors, or even more, from a single EC2 instance if your application allows it. You can also consider caching if your app needs it.
You can put the instance in an Auto Scaling group, so that if you ever need more resources you can scale horizontally.
You can also add a load balancer later on. You just need to add user data to the launch configuration attached to your Auto Scaling group, so that when an instance comes up it automatically registers itself with your load balancer.
For monitoring, you can check the request metrics in CloudWatch for the ELB. Keep an eye on your CPU and trigger a scale-out policy once it reaches a particular threshold.
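As a rough sketch of that last point with the AWS CLI (the group name, threshold, and ARN are placeholders):

    # Scale-out policy: add one instance; this returns a policy ARN
    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name my-blog-asg \
        --policy-name scale-out \
        --adjustment-type ChangeInCapacity \
        --scaling-adjustment 1

    # Trigger the policy when average CPU stays above 70% for 10 minutes
    aws cloudwatch put-metric-alarm \
        --alarm-name blog-cpu-high \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --dimensions Name=AutoScalingGroupName,Value=my-blog-asg \
        --statistic Average --period 300 --evaluation-periods 2 \
        --threshold 70 --comparison-operator GreaterThanThreshold \
        --alarm-actions <policy-arn-from-above>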

How do I configure OpenSplice DDS for 100,000 nodes?

What is the right approach to use to configure OpenSplice DDS to support 100,000 or more nodes?
Can I use a hierarchical naming scheme for partition names, so "headquarters.city.location_guid_xxx" would prevent packets from leaving a location, and "company.city*" would allow samples to align across a city, and so on? Or would all the nodes know about all these partitions just in case they wanted to publish to them?
The durability services will choose a master when they come up. If one durability service is running on a Raspberry Pi in a remote location over a 3G link, what is to prevent it from trying to become the master for "headquarters" and crashing?
I am experimenting with durability settings such that a remote node uses location_guid_xxx, but for the "headquarters" cloud server I use a "Headquarters" scope.
On the remote client I might do this:
<Merge scope="Headquarters" type="Ignore"/>
<Merge scope="location_guid_xxx" type="Merge"/>
so a location won't be master for the universe, but can a durability service within a location still be master for that location?
If I have 100,000 locations does this mean I have to have all of them listed in the "Merge scope" in the ospl.xml file located at headquarters? I would think this alone might limit the size of the network I can handle.
I am assuming that this product will handle this sort of Internet of Things scenario. Has anyone else tried it?
Considering the scale of your system, I think you should seriously consider the use of Vortex Cloud (see these slides: http://slidesha.re/1qMVPrq). Vortex Cloud will allow you to scale your system better as well as deal with NATs/firewalls. Besides that, you'll be able to use TCP/IP to communicate from your Raspberry Pi to the cloud instance, thus avoiding any problems related to NATs/firewalls.
Before getting to your durability question, there is something else I'd like to point out. If you try to build a flat system with 100K nodes, you'll generate quite a bit of discovery information. Besides generating some traffic, this will take up memory in your end applications. If you use Vortex Cloud instead, we play tricks to limit the discovery information. To give you an example, if you have a data-writer matching 100K data-readers, with Vortex Cloud the data-writer would only match one endpoint, reducing the discovery information by a factor of 100K!
Finally, concerning your durability question, you could configure some durability services as alignees only. In that case they will never become master.
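If it helps, this is controlled per namespace in the durability section of ospl.xml; a rough sketch from memory of the OpenSplice deployment guide (double-check the element and attribute names against your version):

    <DurabilityService name="durability">
      <NameSpaces>
        <NameSpace name="defaultNamespace">
          <Partition>*</Partition>
        </NameSpace>
        <!-- aligner="false" makes this service an alignee only:
             it receives historical data but never becomes master -->
        <Policy nameSpace="defaultNamespace" durability="Durable"
                aligner="false" alignee="Initial"/>
      </NameSpaces>
    </DurabilityService>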
HTH.
A+

How to limit Bandwidth for file transfer/udp to a percentage with mikrotik

The scenario is a network where multimedia file transfers are very common.
I have some web applications on that network, and I want to create a rule, maybe on the Mikrotik router, to prevent the web applications from slowing down while a file transfer is occurring.
Is that possible, and how? Maybe by creating a rule that limits UDP bandwidth?
Your description of the problem is quite general, but maybe this will point you toward a solution.
If you want to slow down some connections, you should use queues. When using queues, you can also configure BURST, which effectively limits long-running connections (short transfers get full speed, long ones are throttled once the burst allowance is used up). Useful in advanced queue configurations are mark-packet and mark-connection.
Sometimes it is better to use something like rate limiting in the web server itself, but it all depends on the situation.
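To make the queue approach concrete, a sketch in RouterOS syntax (the marks, rate, and chain are placeholders to adapt):

    # Mark bulk UDP connections, then the packets belonging to them
    /ip firewall mangle add chain=forward protocol=udp \
        action=mark-connection new-connection-mark=bulk-udp passthrough=yes
    /ip firewall mangle add chain=forward connection-mark=bulk-udp \
        action=mark-packet new-packet-mark=bulk-udp passthrough=no

    # Cap the marked traffic so the web applications keep headroom
    /queue tree add name=bulk-udp-limit parent=global \
        packet-mark=bulk-udp max-limit=5M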

Rerouting Application Network Traffic at the Data Link Layer

Consider the following situation:
You have an application you are testing, but in order to test the networking functionality of said program, you are required to run multiple instances of it and have them communicate with one another.
Possible solutions are:
- Run software on individual machines connected by WAN or LAN.
- Run the software on virtual machines, all on the same computer.
I do not want to use either of these methods (the reasoning is irrelevant). I want to know if there is a way I can reroute network transmissions from the test application (ideally in any programming language) such that I can run multiple instances of the same software on one computer and have each behave as if it were the only instance running on that computer.
In other words, I want to be able to code the application so that each instance listens on the same "listening" port (since only one instance will be running on each computer in production). Then, I want to know if I can reroute the network requests at a lower level than the application so that they do not interfere with each other (clash over the same port number).
Essentially, I want to build a virtual environment which only redirects the network calls (whereas a virtual machine takes far more resources and involves far more machinery). Is this possible, and how might I approach this problem?
Thank you!
UPDATE: This is a more accurate idea of what I want to accomplish:
Basically, I want to program another application which TRANSPARENTLY redirects bind requests to available ports and manages which applications are bound where... So from the application's perspective, all the instances are bound to port 1000, but in reality this other application is automatically managing which instance is bound where and avoiding potential conflicts. I feel like this could be accomplished with Windows hooks, but I'm not sure how you could implement this?
As far as I know, there is no sane way to multiplex the same port on the same network device. At the very minimum, you will need to choose one of the following:
Run each instance of your program on a different port
Create multiple virtual network interfaces
The first choice is easy and may be the one I would choose. The second one is closer to what you are looking for, but it would be a true PITA to set up -- you can look into VirtualBox and its host-only networks for inspiration. If you are writing things on Linux, you might look into pipes and chrooting, but you'll spend more time setting up this environment than writing your software.
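For the second option on Linux, the usual trick is to give each instance its own local address so every instance can bind the same port; a sketch (the interface names and addresses are made up):

    # One dummy interface / address per application instance
    ip link add dummy0 type dummy
    ip addr add 10.0.0.1/24 dev dummy0
    ip link set dummy0 up

    ip link add dummy1 type dummy
    ip addr add 10.0.0.2/24 dev dummy1
    ip link set dummy1 up

    # Instance A binds 10.0.0.1:1000, instance B binds 10.0.0.2:1000 --
    # same port, no clash, and each believes it owns "the" listening port.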

Sending broadcast with Chrome Extensions

I'm coding an extension for a customer. One of the requirements is that the extension also works offline, because internet services are not that reliable; my customer's business can't stop, but it can deal with "stale" data, so that's a nice trade-off I guess.
Therefore, I want to build some kind of distributed cache into the extension to synchronize local data among the N nodes that are connected and running the same application, and then synchronize with the real database hosted on the internet.
In order to achieve that, I imagined I would need to send a network broadcast and listen for incoming broadcasts; then every node that starts running my application will broadcast its IP address and become available as a new node for the distributed cache. Failover is very important here.
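(For reference, the announcement broadcast itself is only a few lines in plain Java, which the question considers below; the port and payload here are made up:)

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class NodeAnnouncer {
        public static void main(String[] args) throws Exception {
            byte[] payload = "NODE_HELLO".getBytes("UTF-8");
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);
                // Announce this node to everyone on the local subnet
                socket.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getByName("255.255.255.255"), 4446));
            }
        }
    }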
I googled some possibilities I initially thought of, but I guess none of them will work. The first was to do it just with HTTP; the second was to use Google Native Client to write C++ code that could run network code and thus do the broadcast, but it has limitations. Right now I'm thinking of using Java applets, but I don't really know if they have limitations related to networking, or if Chrome extensions have any limitations with Java applets.
Any ideas on how to do it? Using some of the stuff I suggested or another approach?
You could create an NPAPI extension, which would not be restricted by Chrome at all.
