Test harness software for networking failures

During integration testing it is important to simulate various kinds of low-level networking failure to ensure that the components involved handle them properly. Some socket connection examples (from the Release It! book by Michael Nygard) include:
- connection refused
- remote end replies with SYN/ACK but never sends any data
- remote end sends only RESET packets
- connection established, but remote end never acknowledges receiving packets, causing endless retransmissions
and so forth.
It would be useful to simulate such failures for integration testing involving web services, database calls and so forth.
Are there any tools available that can create failure conditions of this specific sort (i.e. socket-level failures)? One possibility, for instance, would be some kind of dysfunctional server that exhibits different kinds of failure on different ports.
EDIT: After some additional research, it looks like it's possible to handle this kind of thing using a firewall. For example, iptables has options that let you match packets (either randomly, according to a configurable probability, or on an every-nth-packet basis) and then drop them. So I am thinking that we might set up our "nasty server" with firewall rules configured on a port-by-port basis to create the kind of nastiness we want to test our apps against. I would be interested to hear thoughts about this approach.
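For instance, rules along these lines (a sketch only; the port numbers are arbitrary) would make one port randomly lossy and another drop every fifth packet, using iptables' statistic match:
iptables -A INPUT -p tcp --dport 8001 -m statistic --mode random --probability 0.10 -j DROP
iptables -A INPUT -p tcp --dport 8002 -m statistic --mode nth --every 5 --packet 0 -j DROP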

Bane is built for this purpose, and is described as:
Bane is a test harness used to test your application's interaction with other servers. It is based upon the material from Michael Nygard's "Release It!" book as described in the "Test Harness" chapter.
(Edit 2021): a more developed tool for testing behavior with different network issues is toxiproxy
Toxiproxy is a framework for simulating network conditions. It's made specifically to work in testing, CI and development environments, supporting deterministic tampering with connections, but with support for randomized chaos and customization. Toxiproxy is the tool you need to prove with tests that your application doesn't have single points of failure.
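For example, a test can drive Toxiproxy's HTTP API (listening on localhost:8474 by default) directly. A rough Python sketch, in which the proxy name, ports, and upstream backend are all arbitrary examples:
import requests

API = "http://localhost:8474"  # Toxiproxy's default HTTP API address

# Put a proxy in front of a backend service
requests.post(API + "/proxies", json={
    "name": "redis",
    "listen": "127.0.0.1:26379",
    "upstream": "127.0.0.1:6379",
}).raise_for_status()

# Inject one second of latency (with jitter) into server-to-client traffic
requests.post(API + "/proxies/redis/toxics", json={
    "type": "latency",
    "stream": "downstream",
    "attributes": {"latency": 1000, "jitter": 100},
}).raise_for_status()
The application under test then connects to 127.0.0.1:26379 instead of the real backend, and the test can add or remove "toxics" while it runs.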

Take a look at dummynet.

You can do it with iptables, or you can do it without actually sending the packets anywhere with ns-3, possibly combined with your favourite virtualisation solution, or you can do all sorts of strange things with scapy.
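As a taste of the Scapy route, here is a rough sketch (untested; the port is arbitrary) that answers every incoming SYN on a chosen port with RST, simulating the "remote end sends only RESET packets" failure from the question:
from scapy.all import IP, TCP, send, sniff

def rst_reply(pkt):
    # Craft a RST/ACK that refuses the connection attempt
    ip, tcp = pkt[IP], pkt[TCP]
    rst = IP(src=ip.dst, dst=ip.src) / TCP(sport=tcp.dport, dport=tcp.sport,
                                           flags="RA", seq=0, ack=tcp.seq + 1)
    send(rst, verbose=False)

# Watch for connection attempts to port 9999 and reset each one
sniff(filter="tcp dst port 9999 and tcp[tcpflags] & tcp-syn != 0", prn=rst_reply)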

Related

How to access the data from a website 50-100 times a second using Raspberry Pi?

I want to fetch the data of a stock. Since the data changes very fast, is there any way to pull the data 50-100 times a second from trading websites?
And can we implement that using a Raspberry Pi 4 8 GB model?
A RasPi4 should be more than adequate for this task. Both the Ethernet and WiFi hardware are capable of connections at these speeds (unless you're running a bunch of other stuff on it). Consider where your bottlenecks may be, likely your ISP or other network traffic. Consider avoiding WiFi in favor of cat5e or cat6. Consider hanging this device off your router (edge) to keep LAN traffic lower, and consider QoS settings if you think this traffic may compete with other LAN traffic.
This appears to be a general question with no specific platform in mind. For stocks, there are lots of platforms to choose from.
APIs for trading platforms often include a method to open a stream. Instead of a full TCP conversation for each price check, a stream tells the server to just keep on sending data. There are timeout mechanisms of course, but it is good to close that stream gracefully. It's polite, since you're consuming server resources at a different scale; I've seen some financial APIs monitor and throttle stream subscribers who leave sessions open.
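Purely as an illustration (the URL and subscribe message below are invented; every platform defines its own), a streaming client in Python might look like this:
import asyncio, json
import websockets  # third-party package: pip install websockets

async def watch(symbol):
    # wss://example-broker.com/stream is a placeholder endpoint
    async with websockets.connect("wss://example-broker.com/stream") as ws:
        await ws.send(json.dumps({"action": "subscribe", "symbol": symbol}))
        async for message in ws:  # the server pushes ticks as they arrive
            print(json.loads(message))

asyncio.run(watch("AAPL"))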
For some APIs/languages you can find solid classes already built on GitHub. Although, if simply pulling and reading a stream then the platform API doc code snippets should be enough to get you going.
Be sure to find out what other overhead may be involved. For example, if an account or API key is needed to open a stream, then either a session must be opened first or the credentials must be passed when the stream is opened. The API docs will say. If you're new to this sort of thing, just be a detective and try to infer what is needed. API docs usually try to be precise and technically correct with the absolute minimum word count.
Simply checking the stream should be easy. Depending on how that stream can be handled by your code/script, it may be harder to perform logic on the stream while it is being updated. That's usually a threading issue or a variable-scope issue, depending on the script/code. For what you're doing I would consider Python or PowerShell, depending on your skill-set and other design parameters.
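One common pattern (sketched here with a stand-in stream, since the real one depends on your platform's API) is to let a background thread read the stream and hand messages to your main logic through a queue:
import queue, threading, time

ticks = queue.Queue()

def reader(stream):
    # 'stream' is whatever iterable of messages your API client exposes
    for message in stream:
        ticks.put(message)

def fake_stream():  # stand-in for a real price stream
    while True:
        time.sleep(0.02)  # roughly 50 messages per second
        yield {"symbol": "AAPL", "price": 123.45}

threading.Thread(target=reader, args=(fake_stream(),), daemon=True).start()
while True:
    print(ticks.get())  # the main thread applies logic at its own pace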

Port seems to be shared between two separate gRPC channels in the same process

My question concerns gRPC clients using the C core, specifically C++.
I've been debugging one of our servers, and I noticed that a certain two-client flow worked when the clients were launched from separate processes (two separate console windows) but not from within an automated test case (which runs within a single process). The flow in question involves two "clients" (channels, basically) which are alive at the same time and concurrently issuing requests to the same server.
Digging further, I discovered that in the working flow the server received requests from two different ip:port combinations: 127.0.0.1:xxxx and 127.0.0.1:yyyy. In the failing scenario, however, both requests come from the same ip:port.
I create a separate channel for every client, so this behavior confused me. I have a couple of questions:
1. Does gRPC share ports between channels in the same process like this? If not, then I have to imagine there's a bug in my code.
2. If yes to (1), is there any way to avoid this port reuse?
I do see the "grpc.so_reuseport" option among the channel's arguments, and note that it is enabled by default. This seems more relevant to servers than to clients (though perhaps I'm drawing an arbitrary distinction), but I'll disable it and try things out.
EDIT: The so_reuseport option doesn't do anything, but I am on Windows so I should have expected that anyway :/ I also found a related question without any answers here.
EDIT 2: The discussion on this question seems promising. Will try it out and report back
This is mentioned/explained in the question I link in edit 2, but the answer is to set the GRPC_ARG_USE_LOCAL_SUBCHANNEL_POOL channel argument and pass it to grpc::CreateCustomChannel:
auto args = grpc::ChannelArguments();
args.SetInt(GRPC_ARG_USE_LOCAL_SUBCHANNEL_POOL, 1);
// target and credentials as in your existing channel-creation code
auto channel = grpc::CreateCustomChannel(target, credentials, args);
With a local (per-channel) subchannel pool, each channel gets its own connection instead of sharing subchannels with other channels in the process.

Sending broadcast with Chrome Extensions

I'm coding an extension for a customer. One of the requirements is that the extension also works offline, because internet services are not that reliable; my customer's business can't stop, but it can deal with "stale" data. That's a nice tradeoff, I guess.
Therefore, I want to code some kind of distributed cache as an extension, to synchronize local data among the N connected nodes running the same application and then synchronize with the real database hosted on the internet.
In order to achieve that, I imagined that I would need to send a network broadcast and listen for incoming broadcasts; then every node that starts running my application will broadcast its IP address and become available as a new node for the distributed cache. Failover is very important here.
I googled the possibilities I had initially thought of, but none of them will work, I guess. The first was to do it just with HTTP; the second was to use Google Native Client to write C++ code that could run network code and thus do the broadcast, but it has limitations. Right now I'm thinking of using Java applets, but I don't really know if they have limitations related to networking, or if Chrome extensions have any limitations with Java applets.
Any ideas on how to do it? Using some of the stuff I suggested or another approach?
You could create an NPAPI extension, which would not be restricted by Chrome at all. (Note that Chrome has since removed NPAPI support entirely, so this answer is of historical interest.)

I want to build a decentralized, reddit-like system using P2P. What existing p2p library should I base it on?

I want to build a decentralized, reddit-like system using P2P. Basically, I want to retain the basic capabilities of reddit, but make it decentralized, to make it more robust and immune to censorship. This will also allow people to develop different clients to match the way they want to browse it.
Could you recommend good p2p libraries to base my work on? They should be open-source, cross-platform, robust and easy to use. I don't care much about the language, I can adapt.
Disclaimer: warning, self-promotion here!!!
Have you considered JXTA's latest release? It is probably sufficient for what you want to do. Otherwise, we are working on a new P2P framework called Chaupal, but it is not operational yet.
EDIT
There is also what I call the quick-and-dirty UDP solution (which is not so dirty after all; I should call it minimal).
Just implement one server with a public address and start listening for UDP.
Peers located behind NATs contact the server, which can read from the received datagrams how their private IP addresses have been translated into public ones.
The server sends that information back to each peer, who can forward it to other peers; the server can also help exchange this information between peers.
Then peers can communicate directly (one-to-one) by sending datagrams to these translated addresses.
Simple and easy to implement, but it does not deal with lost datagrams, replays, out-of-order delivery, etc. (i.e., the typical stuff that TCP solves for you).
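A minimal sketch of that rendezvous server in Python (the port is arbitrary):
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 7000))  # the server's public port

while True:
    data, addr = sock.recvfrom(1024)
    # addr is the peer's NAT-translated (public_ip, public_port) pair;
    # echo it back so the peer can share it with other peers
    sock.sendto("{}:{}".format(*addr).encode(), addr)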
I haven't had a chance to use it, but Telehash seems to have been made for this kind of application. Peer2Peer apps have a particular challenge dealing with the restrictions of firewalls... since Telehash is based on UDP, it's well suited for hole-punching through firewalls.
EDIT for static_rtti's comment:
If code velocity is a requirement, libjingle has a lot of effort going into it, but it is primarily geared towards XMPP. You can port over parts of the ICE code and at least get hole-punching. See the libjingle architecture overview for details about their implementation.
Check out CouchDB. It's a decentralized web app platform that uses an HTTP API. People have used it to create "CouchApps", decentralized CouchDB-based applications that can spread in a viral fashion to other CouchDB servers. All you need in order to write CouchApps is JavaScript and the CouchDB API. You can read this free online book to learn more: http://guide.couchdb.org
The secret sauce of CouchDB is a master-to-master replication protocol that lets information spread like a virus. When I attended the first CouchConf, they demonstrated how efficient this is by throwing a "Couch Party" (where you have a room full of people replicating to the person next to them, simulating an ad hoc network).
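For a flavor of how simple that is to drive (the host and database names below are invented), replication is just an HTTP POST:
import requests

# Ask the local CouchDB to continuously pull changes from a peer node
requests.post("http://localhost:5984/_replicate", json={
    "source": "http://peer-node:5984/reddit_clone",
    "target": "reddit_clone",
    "continuous": True,
}).raise_for_status()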
Also, all the code that makes a CouchApp work is public by default in special entities known as Design Documents.
P.S. I've been thinking of doing a similar project, but I don't have a lot of time to devote to it at the moment. GOD SPEED MY BOY!

Network or Transport Layer Fuzzing

How do I go about executing a fuzzing strategy to stress a network stack, specifically at the third and fourth layers (network and transport)? I've looked at frameworks for generating fuzzers, like SPIKE, but it seems to me that they are mostly focused on the application layer and above. Are there any well-known techniques out there for fuzzing the protocols in these layers, say, TCP?
Thanks.
Look at Scapy. It allows you to fuzz at the network and transport layers. The fuzz function will fuzz anything you didn't explicitly specify in the IP or TCP layers (you can apply it separately to each). This gives you a range of abilities from just randomly generating ip addresses and port pairs to making and sending nonsense packets.
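For example (the destination address is a placeholder), leaving fields unset and wrapping the layer in fuzz() randomizes them on every send:
from scapy.all import IP, TCP, fuzz, send

# Randomize every TCP field except the destination port on each iteration
pkt = IP(dst="192.0.2.10") / fuzz(TCP(dport=80))
send(pkt, loop=1)  # send an endless stream of randomized packets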
You may also want to look at Fragroute. This will twist TCP/IP into using all sorts of evasion techniques, and could potentially unveil otherwise-hidden bugs/vulnerabilities in your network stack.
Furthermore, if your organization doesn't object, you could set up a Tor exit node and capture traffic from it. I've found it useful for testing correct TCP connection state tracking. Though your end of the connections is well-known and unchanging, there's a huge variety of servers as well as fun network congestion issues. It's basically an endless source of traffic. Be sure to check with your higher ups as your org may object to being a potential source of malicious traffic (even though there is a strong precedent of non-liability). I've gotten around that issue by running it/capturing at home, then bringing in the pcaps.
If you want to fuzz IP, UDP, or TCP itself, route your packets from your high-level services via loopback to a process that reads them, fuzzes them, and forwards them on. You need a driver that lets you talk to raw sockets, and you need to read/learn what the applicable RFCs say for those protocols.
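A much-simplified sketch of that forwarder idea, fuzzing UDP payloads between two loopback ports (both arbitrary; fuzzing the IP/TCP headers themselves requires raw sockets, as noted above):
import random, socket

LISTEN, UPSTREAM = ("127.0.0.1", 9000), ("127.0.0.1", 9001)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN)

client = None
while True:
    data, addr = sock.recvfrom(65535)
    payload = bytearray(data)
    if payload:  # flip a few random bits before forwarding
        for _ in range(random.randint(1, 4)):
            payload[random.randrange(len(payload))] ^= 1 << random.randrange(8)
    if addr == UPSTREAM and client:  # reply from the real service
        sock.sendto(bytes(payload), client)
    else:  # datagram from the client under test
        client = addr
        sock.sendto(bytes(payload), UPSTREAM)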
There is an easy way to do this. Just as Justdelegard recommends, Scapy is probably the best thing to use, in general.
Take a look at the Releasing ICMPv4/IP fuzzer prototype by Laurent Gaffié. His Python code, which incidentally he has reposted in a more readable fashion at pastebin.com, imports from Scapy and uses some methods he defines to do a couple of types of fuzzing. IP and ICMP packets are handled in his sample code, so this sounds exactly like what you are seeking.
Right now, there seem to be a lot of companies using Tcl/Expect to do custom automated testing of networks: SIP, H.323, layer 2 and 3 protocols, etc.
So if Scapy does not meet your needs, you might be able to make or find something written in Tcl, using Expect, to do the job. Or you may wish to do some things in Python using Scapy, and other things in Tcl using Expect.
Tcl has long been used for network test and management applications. There was a book on how to use Tcl to do SNMP-based network management back in the 1990s.
The syntax of Tcl is decidedly odd, but the libraries are very powerful. It comes with a framework-like ability to define custom network behavior on top of sockets, similar to what you can do with the standard libraries for the Python programming language.
Unlike Python and other scripting languages, Tcl has an extremely powerful companion tool named Expect (see the expect man page).
Expect has a handy capability: it can auto-generate a Tcl test script. The generated script makes calls to Expect functions. When doing this recording, it functions as a passive man-in-the-middle, recording both sides of the conversation, kind of the way you record macros while editing in MS Word or in Emacs.
Then afterward, you can edit the automatically generated Expect script to fine-tune it, make it behave differently, or create multiple variations of it. It is very handy for creating regression tests. You should be able to use this to kickstart writing higher-layer protocol tests, should you need some. Beats starting from scratch.
I think you can use Tcl/Expect to test standard TCP applications (FTP, HTTP, SMTP, etc.) that use string-based commands. It works well for testing character-based applications like TELNET that read input from stdin and generate output to stdout, too.
