I am working on a POC to check whether we can duplicate Redis traffic coming from a single host to multiple destinations.
Any help will be appreciated.
After going through a lot of utilities and documentation, the only thing that comes close to what I wanted to achieve is
https://github.com/mudasirmirza/redis-migrate-tool
The only caveat is that there must be enough memory to run one BGSAVE.
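For reference, the tool is driven by a small plain-text config file. A sketch along the following lines (section and key names are from memory of the project's README, and the addresses are placeholders, so verify against the repo before using) points one source at one target, which suggests that duplicating traffic to several destinations means running one instance of the tool per destination:

[source]
type: single
servers:
 - 127.0.0.1:6379

[target]
type: single
servers:
 - 127.0.0.1:6380

[common]
listen: 0.0.0.0:8888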
My question concerns gRPC clients using the C core, specifically C++.
I've been debugging one of our servers, and I noticed that a certain two-client flow worked when the clients were launched from separate processes (two separate console windows) but not from within an automated test case (which runs within a single process). The flow in question involves two "clients" (channels, basically) which are alive at the same time and concurrently issue requests to the same server.
Digging further, I discovered that in the working flow the server received requests from two different ip:port combinations: 127.0.0.1:xxxx and 127.0.0.1:yyyy. In the failing scenario, however, both requests come from the same ip:port.
I create a separate channel for every client, so this behavior confused me. I have a couple of questions:
1. Does gRPC share ports between channels in the same process like this? If not, then I have to imagine there's a bug in my code.
2. If yes to (1), is there any way to avoid this port reuse?
I do see the "grpc.so_reuseport" option in the channel's metadata, and note that it is enabled by default. This seems more related to servers than clients (though perhaps I'm making an arbitrary distinction), but I'll disable it and try things out.
EDIT: The so_reuseport option doesn't do anything, but I am on Windows, so I should have expected that anyway :/ I also found a related question without any answers here.
EDIT 2: The discussion on this question seems promising. Will try it out and report back.
This is mentioned/explained in the question I linked in EDIT 2, but the answer is to use
auto metadata = grpc::ChannelArguments();
metadata.SetInt(GRPC_ARG_USE_LOCAL_SUBCHANNEL_POOL, 1);
and pass it to grpc::CreateCustomChannel.
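For completeness, here is a minimal sketch of how the pieces fit together; the helper name, target address, and insecure credentials are placeholders for illustration:

#include <grpcpp/grpcpp.h>
#include <memory>
#include <string>

// Give this channel its own subchannel pool so it does not share
// connections (and therefore client ports) with other channels in the process.
std::shared_ptr<grpc::Channel> MakeIsolatedChannel(const std::string& target) {
  grpc::ChannelArguments args;
  args.SetInt(GRPC_ARG_USE_LOCAL_SUBCHANNEL_POOL, 1);
  return grpc::CreateCustomChannel(
      target,
      grpc::InsecureChannelCredentials(),  // swap in real credentials as needed
      args);
}

// Usage: auto channel = MakeIsolatedChannel("localhost:50051");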
I want to preface this by saying that I've never taken a networking class, but I'm learning on the job. I have only a basic grasp of things like TCP/IP networking, so if you think that will hinder my attempt at this, let me know.
The task at hand is this: I have an OpenStack network with a bunch of nodes that can communicate with each other, all running CentOS virtual machines (just for simplicity's sake) with applications running on top of them. The task is basically to find a way to monitor the ping of every node and, whenever something goes wrong, send some kind of message (probably over HTTP) that reports what happened. The logic of checking for the actual latency problems isn't what I'm struggling with; it's finding the best structure to complete this task.
I'm thinking of using Nagios and setting up a distributed monitoring system. Basically, my plan is to install Nagios on each node after writing my plugin (unless one is already offered or exists); each node would simply ping everything else in the network once it is set up, and the other nodes would ping it once they detect that it has joined the network. I'm not sure exactly how scalable this is: if the number of nodes increases a lot, would having every node ping every other node actually be a good thing? Could it end up putting a lot of stress on the network?
Is this a bad idea? I know a more efficient solution would be one where every node is still checked, but not every node has to be pinged by every other node. Visualizing it as a graph, it would be an undirected graph with just one path connecting each pair of nodes rather than every possible pair having an edge between them. But I don't know if this is the level I should be thinking about it at or not.
In short, what I'm asking is: how would one go about setting up a ping monitoring system between a bunch of OpenStack nodes?
Let me know if this question makes sense. Thanks.
Still not entirely sure what you're trying to accomplish with this setup, but the Nagios setup you're describing sounds messy and likely won't cover what you need. I'd look at building Packetbeat into the provisioning of each of your hosts and then shipping that data off to Elasticsearch. That way you can watch your actual application-level traffic and response times. https://www.elastic.co/products/beats/packetbeat
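As a rough illustration, a packetbeat.yml along these lines would sniff traffic on each node and ship it to Elasticsearch; the host name and ports are placeholders, and the exact keys should be checked against the Packetbeat docs for your version:

packetbeat.interfaces.device: any   # capture on all interfaces
packetbeat.protocols:
- type: icmp                        # round-trip times for plain pings
  enabled: true
- type: http                        # application-level request/response times
  ports: [80, 8080]
output.elasticsearch:
  hosts: ["elasticsearch.example.internal:9200"]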
I know this is a popular question, and I have read all the topics about it, but I want to settle the matter for myself.
Goal: detect whether the user is behind a proxy.
Reason: if the user is using a proxy, don't show geo-targeted advertising. I need a boolean result.
Possible approaches:
1. Use a database of proxy IPs (for example, MaxMind);
2. Check the Connection: keep-alive header, since cheap proxies don't use persistent connections, but all modern browsers do;
3. Check other popular headers;
4. Use JS to detect a web proxy by comparing the browser host with the real host.
Questions:
1. Can you recommend a database? I read about MaxMind, but some people wrote that it is not effective.
2. Is checking the Connection header okay?
3. Maybe I missed something?
PS: Sorry for my English... I'm still learning it.
Option 1 you suggested is the best choice. Proxy detection can be time-consuming and complicated.
Since you mentioned MaxMind and your concern about its effectiveness: there are other APIs available, such as GetIPIntel. It's free and very simple to use. It goes beyond simple blacklists and uses machine learning and probability algorithms to determine a probability value, which makes it very accurate.
Option 2 doesn't hurt to implement unless you get a lot of false positives. Options 3 and 4 should not be used alone because they are very easy to get around: all browser actions can be automated, and just because someone is using a proxy does not mean they're not using a real browser.
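If it helps, here is a minimal sketch (not any provider's official client) of calling such a scoring service from C++ with libcurl. The endpoint URL is a placeholder you would replace from your provider's documentation, and the response is assumed to be a bare probability value:

#include <curl/curl.h>
#include <iostream>
#include <string>

// Append the HTTP response body into a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

// Returns the probability (0.0 - 1.0) that the IP is a proxy, or -1.0 on error.
double proxy_score(const std::string& ip) {
    // Placeholder endpoint: substitute the real URL and query parameters.
    const std::string url = "https://proxy-score.example.com/check?ip=" + ip;

    CURL* curl = curl_easy_init();
    if (!curl) return -1.0;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    if (rc != CURLE_OK) return -1.0;

    try {
        return std::stod(body);  // assumes the API returns a plain number
    } catch (...) {
        return -1.0;
    }
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    double score = proxy_score("203.0.113.7");  // TEST-NET address for illustration
    std::cout << (score >= 0.95 ? "likely proxy" : "probably not a proxy") << "\n";
    curl_global_cleanup();
}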
The best way is definitely to use an API. You could use the database from MaxMind, but then you need to keep downloading that database and rely on them to keep the data up to date. And, as you said, there are questions about the accuracy of MaxMind's data.
Personally, I would recommend you try https://proxycheck.io which, full disclosure, is my own site. You get full access to everything for free: premium proxy detection and blocking with 1,000 daily queries.
You can evaluate the IP2Proxy database, which is updated daily. It detects open proxies, web proxies, Tor, and VPNs. https://www.ip2location.com/database/px2-ip-proxytype-country
Checking the Connection header is inaccurate for proxy types such as VPNs.
Header checks are easily defeated; newer generations of proxies work around older generations of detection methods.
In our experience, the best method of proxy detection is an accurate blacklist.
What is the right approach to configuring OpenSplice DDS to support 100,000 or more nodes?
Can I use a hierarchical naming scheme for partition names, so "headquarters.city.location_guid_xxx" would prevent packets from leaving a location, and "company.city*" would allow samples to align across a city, and so on? Or would all the nodes know about all these partitions just in case they wanted to publish to them?
The durability services will choose a master when they come up. If one durability service is running on a Raspberry Pi in a remote location over a 3G link, what is to prevent it from trying to become the master for "headquarters" and crashing?
I am experimenting with durability settings such that a remote node uses location_guid_xxx, but the "headquarters" cloud server uses a Headquarters scope.
On the remote client I might do this:
<Merge scope="Headquarters" type="Ignore"/>
<Merge scope="location_guid_xxx" type="Merge"/>
so a location won't be master for the universe, but can a durability service within a location still be master for that location?
If I have 100,000 locations, does this mean I have to have all of them listed in the "Merge scope" in the ospl.xml file located at headquarters? I would think this alone might limit the size of the network I can handle.
I am assuming that this product will handle this sort of Internet of Things scenario. Has anyone else tried it?
Considering the scale of your system, I think you should seriously consider using Vortex Cloud (see these slides: http://slidesha.re/1qMVPrq). Vortex Cloud will allow you to scale your system better as well as deal with NATs/firewalls. Besides that, you'll be able to use TCP/IP to communicate from your Raspberry Pi to the cloud instance, thus avoiding any problems related to NATs/firewalls.
Before getting to your durability question, there is something else I'd like to point out. If you try to build a flat system with 100K nodes, you'll generate quite a bit of discovery information. Besides generating traffic, this will consume memory in your end applications. If you use Vortex Cloud instead, we play tricks to limit the discovery information. To give you an example, if you have a data writer matching 100K data readers, with Vortex Cloud the data writer would only match one endpoint, thus reducing the discovery information by a factor of 100K!
Finally, concerning your durability question, you could configure some durability services as alignee-only. In that case they will never become master.
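For what it's worth, in ospl.xml that alignee-only role is expressed through the durability service's namespace policies, along the lines of the sketch below. The element and attribute names here are from memory and the namespace/partition values are placeholders, so verify them against the OpenSplice deployment guide; setting aligner="false" is what I believe keeps a node from ever becoming master for that namespace:

<DurabilityService name="durability">
  <NameSpaces>
    <NameSpace name="locationData">
      <Partition>location_guid_xxx</Partition>
    </NameSpace>
    <!-- aligner="false": this durability service only receives alignment data -->
    <Policy nameSpace="locationData" durability="Durable"
            alignee="Initial" aligner="false"/>
  </NameSpaces>
</DurabilityService>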
HTH.
A+
Assume you have two groups of servers: the first dedicated to the front end, the other dedicated to processing information from the front ends. What is the best way to transfer data from the front-end servers to the processing servers?
I tried different techniques on small amounts of data:
I tried dumping data into files and retrieving them from the processing servers... that's OK and very safe, because you never lose your data, but it uses a lot of disk write capacity.
I also tried sockets, which were very cool.
But honestly, I still don't know what the best way is to handle a huge data stream between servers.
Can someone point me in the right direction?
I would say the best option is to use a persistent queue, like RabbitMQ. That way, if the receiving servers go down, your transfer is not lost; it will simply continue when the receiving server pulls the data off the queue.
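As a rough sketch of the producer side in C++, here is what publishing from a front server could look like using the SimpleAmqpClient library (chosen just as an example AMQP client; the host, queue name, and payload are placeholders, and the method signatures should be double-checked against the library version you use):

#include <SimpleAmqpClient/SimpleAmqpClient.h>

int main() {
    // Connect to the broker that sits between the front and processing tiers (placeholder host).
    AmqpClient::Channel::ptr_t channel = AmqpClient::Channel::Create("rabbitmq.internal");

    // A durable, non-exclusive queue so messages survive broker restarts
    // and can be consumed by any processing server.
    channel->DeclareQueue("front_to_processing",
                          /*passive=*/false, /*durable=*/true,
                          /*exclusive=*/false, /*auto_delete=*/false);

    // Publish one payload; processing servers drain the queue whenever they are up.
    auto message = AmqpClient::BasicMessage::Create("payload from front server");
    channel->BasicPublish("", "front_to_processing", message);
    return 0;
}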