Network or Transport Layer Fuzzing

How do I go about executing a fuzzing strategy to stress a network stack, specifically at the third and fourth layers (network and transport)? I've looked at fuzzer-generation frameworks like SPIKE, but they seem to be focused mostly on the application layer and above. Are there any well-known techniques out there for fuzzing the well-known protocols in these layers, say, TCP?
Thanks.

Look at Scapy. It allows you to fuzz at the network and transport layers. The fuzz function will fuzz anything you didn't explicitly specify in the IP or TCP layers (you can apply it separately to each). This gives you a range of capabilities, from simply generating random IP address and port pairs to building and sending completely nonsensical packets.
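A minimal sketch of that idea with Scapy might look like this (the destination address and port are placeholders, not anything from the question):

    # Fuzz every IP/TCP field except the ones pinned explicitly (dst, dport).
    # fuzz() fills the unset fields with volatile random values that are
    # re-drawn each time a packet is built and sent.
    from scapy.all import IP, TCP, fuzz, send

    template = fuzz(IP(dst="192.0.2.10") / TCP(dport=80))
    send(template, loop=1, inter=0.1)   # a freshly fuzzed packet every 100 ms

Run it as root and watch the target stack (and your own logs) for anything that misbehaves.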
You may also want to look at Fragroute. It will twist TCP/IP into using all sorts of evasion techniques, which could potentially uncover otherwise hidden bugs/vulnerabilities in your network stack.
Furthermore, if your organization doesn't object, you could set up a Tor exit node and capture traffic from it. I've found it useful for testing correct TCP connection state tracking. Though your end of the connections is well-known and unchanging, there's a huge variety of servers as well as fun network congestion issues. It's basically an endless source of traffic. Be sure to check with your higher-ups, as your org may object to being a potential source of malicious traffic (even though there is a strong precedent of non-liability). I've gotten around that issue by running the node and capturing at home, then bringing in the pcaps.

If you want to fuzz IP, UDP, or TCP, route your packets from your high-level services via loopback to a process that reads them, fuzzes them, and forwards them on. You need a driver that lets you talk to raw sockets, and you need to read and learn what the applicable RFCs say for those protocols.
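A rough sketch of that read-mutate-forward loop with Linux raw sockets (root required) is below; the single-byte-flip mutation is an assumption to keep the example short, a real fuzzer would parse the headers per the RFCs and mutate specific fields, and in practice you would also filter out your own re-injected traffic so you don't capture it again.

    import random
    import socket

    # SOCK_RAW with IPPROTO_TCP hands us incoming TCP packets, IP header included.
    rx = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)

    # IPPROTO_RAW plus IP_HDRINCL lets us re-inject packets with our own IP header.
    tx = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)

    while True:
        packet, _ = rx.recvfrom(65535)
        mutated = bytearray(packet)
        if len(mutated) > 21:
            # Crude mutation: flip one byte somewhere past the 20-byte IP header,
            # deliberately leaving the checksums broken.
            mutated[random.randrange(20, len(mutated))] ^= 0xFF
        dst = socket.inet_ntoa(bytes(mutated[16:20]))   # destination from the IP header
        tx.sendto(bytes(mutated), (dst, 0))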
There is an easy way to do this. As Justdelegard recommends, Scapy is probably the best tool to use in general.
Take a look at "Releasing ICMPv4/IP fuzzer prototype" by Laurent Gaffié. His Python code, which he has incidentally reposted in more readable form on pastebin.com, imports from Scapy and uses a few methods he defines to do a couple of types of fuzzing. IP and ICMP packets are handled in his sample code, so this sounds like exactly what you are seeking.
Right now, there seem to be a lot of companies using Tcl/Expect to do custom automated testing of networks: SIP, H.323, layer 2 and 3 protocols, etc.
So if Scapy does not meet your needs, you might be able to make or find something written in Tcl using Expect to do the job. Or, you may wish to do some things in Python, using Scapy - and other things in Tcl, using Expect.
Tcl has long been used for network test and management applications. There was a book on how to use Tcl to do SNMP-based network management way back in the 1990's.
Tcl's syntax is decidedly odd, but its libraries are very powerful. It comes with framework-like facilities for defining custom network behavior on top of sockets, similar to what you can do with the standard libraries of the Python programming language.
Unlike Python and other scripting languages, Tcl has an extremely powerful companion tool named Expect (see the expect man page).
Expect has a handy capability: it can auto-generate a Tcl test script. The generated script makes calls to Expect functions. While recording, it acts as a passive man-in-the-middle, capturing both sides of the conversation, rather like recording macros while you edit in MS Word or Emacs.
Afterward, you can edit the automatically generated Expect script to fine-tune it, make it behave differently, or create multiple variations of it. It is very handy for creating regression tests. You should be able to use this to kick-start writing higher-layer protocol tests, should you need some. It beats starting from scratch.
I think you can use Tcl/Expect to test standard TCP applications (FTP, HTTP, SMTP, etc.) that use string-based commands. It also works well for testing character-based applications like TELNET that read input from stdin and write output to stdout.


read raw packets over network with C#?

I've got a proprietary BMS language that is sending its info over a specific UDP port on the network. The existing interface is not very well made or maintained, and it functions poorly.
I have access to the stack for the code, and I don't mind creating some interpretation functionality.
My question is: what is the best way to receive these raw packets in my program so they can be interpreted? I'm not finding any good documentation on how to do this, and I want to do it in a reasonably appropriate way.
Do I basically need to make my program constantly sniff a specific port? And will doing this be burdensome for the network or for the program?
You tagged this BACnet. Why don't you try Wireshark with a capture filter of "udp port 47808" and see whether Wireshark exposes the packets in a way that makes sense to you (or have you already done this?). If it is BACnet, then a normal UDP socket bound to port 47808 is the way to go. Note that 47808-47823 are the most common BACnet "default" ports. Use cports or a similar tool to see exactly which port(s) your application is bound to.
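A minimal listener for that approach, sketched in Python for illustration (a .NET UdpClient bound the same way behaves equivalently), might look like this:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # coexist with other BACnet apps
    sock.bind(("", 47808))                                      # 0xBAC0, the usual BACnet/IP port

    while True:
        datagram, sender = sock.recvfrom(1500)
        # Decoding the BVLC/NPDU/APDU layers inside the datagram is the hard part.
        print(len(datagram), "bytes from", sender, datagram[:8].hex())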
You could use a packet-capture library, but that has security implications, so you can probably (for the most part) get away with using a .NET UdpClient instead.
But the real challenge is breaking down and interpreting the BACnet packets; that is the hard part.
There is now (finally!) a NuGet package for BACnet. Not that I've used it, but it might be one of the best choices for your case.
I also suggest you experiment with the (advanced and free) VTS (Visual Test Shell) tool.
You could also try the BACnet stack that YABE uses.

bind zmq dealer to multiple repliers

I want to create a proxy server that routes incoming packets from REQ-type sockets to one of the REP sockets on one of the computers in a cluster. I have been reading the guide, and I think the proper structure is a combination of ROUTER and DEALER on the proxy server, where the ROUTER passes messages to the DEALER to be distributed. However, I cannot figure out how to create this connection scheme. Is this the correct architecture? If so, how do I bind a DEALER to multiple addresses? The flow I envision is like this: REQ->ROUTER|DEALER->[REP, REP, ...], where only one REP socket would handle a single request.
NB: forget about packets -- think in terms of "Behaviour"; that's the key
ZeroMQ is rather an abstraction layer for certain communication-behavioral patterns, so while terms like socket sound similar to what one has read/used previously, the ZeroMQ world is by far different from many points of view.
This very formalism allows ZeroMQ Formal-Communication-Patterns to grow in scale and to get assembled into higher-order patterns (for load-balancing, for fault-tolerance, for performance-scaling). Mastering this style of thinking, you forget about packets, thread-sync issues, and I/O polling, and focus on your higher-abstraction-based design -- on Behaviour -- rather than on the underlying details. This makes your design both free from re-inventing the wheel and very powerful, as you re-use highly professional tools built right for your problem-domain tasks.
DEALER->[REP,REP,...] Segment
That said, your DEALER-node (in fact a ZMQ socket-access-node, having the Behaviour called a "DEALER" to describe its queue/buffering style, its round-robin dispatcher, its send-out-and-expect-answer-in model) may .bind() to multiple localhost address:port-s, and these "service-points" may also operate over different TransportClass-es -- one working over tcp://, another over inproc://, if that makes sense for your Design Architecture. ZeroMQ empowers you to use all of this transparently, abstracted away from the "awful & dangerous" lower-level nitty-gritties.
ZeroMQ also allows you to reverse .connect() / .bind()
In principle, where helpful, one may reverse the .bind() and .connect() from the DEALER to a known target address of the respective REP entity.
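A short pyzmq sketch of a DEALER .bind()-ing several endpoints at once (the addresses are placeholders):

    import zmq

    ctx = zmq.Context()
    dealer = ctx.socket(zmq.DEALER)
    dealer.bind("tcp://*:5557")          # remote REP peers connect over TCP
    dealer.bind("ipc:///tmp/backend")    # local processes over IPC
    dealer.bind("inproc://backend")      # threads inside the same process

    # Each REP worker simply connects to whichever endpoint suits it:
    rep = ctx.socket(zmq.REP)
    rep.connect("tcp://localhost:5557")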
You leave out a couple of details that are important for determining the correct architecture.
When you say "from REQ type sockets to one of the REP sockets on one of the computers in a cluster", how do you determine which computer gets the message? Is it addressed to a specific computer? Does a computer announce its availability before it can receive a message? Does each message just get passed to the next one in line in round-robin fashion? (If it's anything but the last of these, you probably don't want a DEALER socket.)
When you say "how do I bind a dealer to multiple addresses", it's not clear what you mean by "addresses"... Do you mean to say that the proxy has a unique IP address that it uses to communicate with each computer in the cluster? Or are you just wondering how to manage the connection to multiple different peers with the same socket? The former is a special case, the latter is simple.
I'm going to work with the following assumptions:
You want a worker computer from the cluster to announce its availability for work before it receives any work, and any computer in the cluster can handle any job. A faster worker, or a worker with a smaller job, will not have to wait behind some slow worker for it to finish its job and grab a new one first.
The proxy/broker uses a single IP interface to communicate with all workers.
If those are true, then what you want will be closer to this:
REQ->ROUTER|ROUTER->[REQ, REQ, ...]
A worker will create a request to the backend router socket to announce its availability, and await a reply with work. Once it is finished, it will create a new request with the finished work, which again announces its availability. The other half of the pattern you've already worked out.
This is the Simple Pirate Pattern from the ZMQ guide. It's a good place to start, but it's not very robust. This is in the Reliable Request-Reply Patterns section of the guide, and I suggest you read or reread that section carefully as it will guide you well. In particular, they keep refining this pattern into more and more reliable implementations and wind up with the Majordomo pattern, which is very robust and fault tolerant. You should see if you need all the features that provides or if you can scale it back a little. Either way, you should learn and understand what these patterns are doing and why before you make the choice to do something different.
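A condensed pyzmq sketch of that broker (the backbone of the Simple Pirate pattern) is below; the endpoint addresses and frame handling follow the guide's queue convention and are assumptions rather than a drop-in implementation:

    import zmq

    ctx = zmq.Context()
    frontend = ctx.socket(zmq.ROUTER)    # clients' REQ sockets connect here
    backend = ctx.socket(zmq.ROUTER)     # workers' REQ sockets connect here
    frontend.bind("tcp://*:5555")
    backend.bind("tcp://*:5556")

    idle_workers = []                    # identities of workers that announced READY

    while True:
        poller = zmq.Poller()
        poller.register(backend, zmq.POLLIN)
        if idle_workers:                 # only take client work when a worker is free
            poller.register(frontend, zmq.POLLIN)
        events = dict(poller.poll())

        if backend in events:
            # Either [worker, "", "READY"] or [worker, "", client, "", reply]
            frames = backend.recv_multipart()
            idle_workers.append(frames[0])
            if frames[2] != b"READY":
                frontend.send_multipart(frames[2:])   # route the reply back to its client

        if frontend in events:
            # [client, "", request] -> hand it to the next idle worker
            frames = frontend.recv_multipart()
            backend.send_multipart([idle_workers.pop(0), b""] + frames)

Each worker is a REQ socket that connects to the backend endpoint, sends b"READY" once, and then loops: receive [client, "", request], do the work, and send back [client, "", reply].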

How to send emails with an Arduino without using a computer?

I'm experimenting with my Arduino Mega. I also have an Arduino Ethernet Shield.
I need to send emails using them, without the help of a computer (or any other device, like a smartphone, etc.). Though I found several articles, none of them offered an acceptable solution...
How can I do it? As I'm not asking this to be used for any special application, you can make any assumption about missing details.
From the discussion above in the comments, it sounds like you either need code from someone who has already done it for you, or you need to take the time to learn about the components and find or build them yourself.
They wouldn't make an Ethernet shield for this platform if it were only useful for non-standard packets, so someone somewhere has created some level of an IP stack.
Backing up, though: in order to send mail you need to learn the Simple Mail Transfer Protocol (SMTP). Almost all Internet protocols are defined in documents called RFCs (Requests for Comments). So if you google "SMTP RFC" you will find RFC 2821.
The IETF is the Internet Engineering Task Force. There will be many copies of these documents on many websites. Due to the age of the Internet and these protocols, in many cases you will find that one RFC has been created to replace a prior one. Version numbers are not used, but it is kind of like HTML 1.0, then HTML 2.0, and so on. Even when an RFC says it completely replaces RFC xyz, I recommend you go find RFC xyz and read it too. I go back as far as I can find, learn that one, then work my way forward.
Many, if not most, protocols that ride on top of TCP (TCP being yet another protocol defined in an RFC; more on that later) are ASCII based, which makes it very easy to use, for example, Telnet to learn and experiment with the protocol. You can probably use Telnet to learn SMTP.
Most protocols are some sort of half-duplex affair: you make a connection, and often the server sends you a string; you see that string and then send some sort of hello string; the server responds with some sort of OKAY or fail status. For SMTP, you then say, in effect, "I am mailing from this email address"; the server says OKAY; you say you want to mail this person or this list of people, and for each email address you get an okay or a fail. Eventually, you tell the server you are ready to send the body of the message, you do that, and you end the message with the defined termination. Then the server says okay or fail, or maybe there is some more handshaking.
The protocols in general though have this back and forth. Usually you are sending strings with commands and usually the server side sends back a short okay or error. Sometimes, if they want, they send back more detail on the error, but always start with the few bytes that indicate okay or error. The protocols generally have a flow, you must do this first then this then that.
You should learn sockets programming, sometimes called Berkeley sockets. You can write programs that are mostly portable across Unixes and also across to Windows, using Winsock if that is your platform of choice. You need to learn the protocol first, and it is better to do that on your desktop/laptop rather than embedded; you can get it done faster there. You do NOT have to learn to fork or thread to use sockets. The examples may show that because it is easy to present it that way, but you can write complete applications using polling only; it is half duplex: send something, wait, send something, wait. For these simple learning programs, it takes a little time up front to learn sockets; from there, it is all about learning the protocols.
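As a concrete example of that back-and-forth, here is a minimal half-duplex SMTP exchange over a plain TCP socket in Python; the server address and mailbox names are placeholders, and a real client would parse the status codes and handle multi-line replies properly.

    import socket

    def chat(sock, line=None):
        # Optionally send one command, then read and print the server's status line.
        if line is not None:
            sock.sendall(line.encode() + b"\r\n")
        reply = sock.recv(4096).decode()
        print(reply.strip())
        return reply

    s = socket.create_connection(("mail.example.com", 25))
    chat(s)                                      # 220 greeting banner
    chat(s, "HELO client.example.com")           # 250 OK
    chat(s, "MAIL FROM:<me@example.com>")        # 250 OK
    chat(s, "RCPT TO:<you@example.com>")         # 250 OK (repeat per recipient)
    chat(s, "DATA")                              # 354 start mail input
    chat(s, "Subject: test\r\n\r\nHello!\r\n.")  # body, terminated by a lone "."
    chat(s, "QUIT")                              # 221 closing
    s.close()

Once the flow makes sense on a desktop, porting the same sequence of sends and receives to the Arduino Ethernet library is a much smaller step.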
Now, that was the very easy part; the hard part is the TCP/IP stack. I do not recommend attempting that without gaining a lot more experience by taking baby steps on your way there. For example, learn to respond to ARP first (yet another RFC-defined protocol, the Address Resolution Protocol), then ping (ICMP echo, one subset of the ICMP protocols), then IP basics (sniffing packets), then receiving and generating UDP packets. TCP is a whole other level above that, with more handshaking. It is not fixed packet sizes; it is streaming. Do not have your code operate on packets; it is a stream of bytes, like working with a serial port.
Doing your own TCP stack is very much a non-trivial thing, and I don't recommend it. You need to find someone who has already done a TCP/IP stack for this platform and this Ethernet shield and just use it, along with whatever RTOS or environment they use. Then take your desktop/laptop experience with the protocol and apply it there.
From the discussion above, if you don't want to learn the protocols and so on, I think you need to google around for Arduino Ethernet shield examples and see whether anyone has already done something that sends emails.

Reliable udp broadcast libraries?

Are there any libraries which put a reliability layer on top of UDP broadcast?
I need to broadcast large amounts of data to a large number of machines as quickly as possible, and generally it seems like such a problem must have already been solved many times over, but I wasn't able to find anything except for the Spread toolkit, which has a somewhat viral license (you have to mention it in all materials advertising the end product, which I'm not sure our customer will be willing to do).
I was already going to write such a thing myself (because it would be extremely fun to do!) but decided to ask first.
I looked also at UDT (http://udt.sourceforge.net) but it does not seem to provide a broadcast operation.
PS I'm looking at something as lightweight as a library - no infrastructure changes.
How about UDP multicast? Have a look at the PGM protocol, for which there are several commercial and open-source implementations.
Disclaimer: I'm the author of OpenPGM, an open source implementation of said protocol.
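For reference, the plain (unreliable) UDP multicast that PGM-style protocols build on looks roughly like this in Python; the group address and port are placeholders:

    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5007

    def send(data: bytes):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)   # keep it near-local
        s.sendto(data, (GROUP, PORT))

    def receive():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        # Join the multicast group on all interfaces
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = s.recvfrom(65535)
            print(addr, len(data), "bytes")

Everything PGM adds -- NAKs, retransmission, forward error correction -- sits on top of datagrams like these.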
Though some research has been done on reliable UDP multicasting, I haven't yet used anything like that. You should take into consideration that this might not be as trivial as it first sounds.
If you don't have a list of nodes in the target network, you have no idea when and to whom to resend, even if active nodes receiving your messages can acknowledge them. Sending to a large number of nodes and expecting ACKs from all of them might also cause congestion problems in the network.
I'd suggest rethinking the network architecture of your application, e.g. using some kind of centralized solution where you submit updates to a server and it sends the message to all connected clients. Or, if the original sender node's address is known a priori, just let clients connect to it and let the sender push updates over those connections.
Have a look around the IETF site for RFCs on Reliable Multicast. There is an entire working group on this. Several protocols have been developed for different purposes. Also have a look around Oracle/Sun for the Java Reliable Multicast Service project (JRMS). It was a research project of Sun, never supported, but it did contain Java bindings for the TRAM and LRMS protocols.

How are network protocols implemented?

I know that a protocol is a set of rules that governs communication between two computers on a network, but how are those rules implemented on the computer? Is a protocol basically a piece of code or, in other words, software?
Protocols are generally built upon each other. At the risk of sounding pedantic, here's an example of a protocol and where/how it's implemented:
Application Protocol - the way a particular application talks to another instance of itself or a corresponding server; this is implemented in the application code or a shared library
TCP (or UDP, or another layer) - the way that information is sent at the binary level and split up into usable chunks, then reassembled at the destination; this is usually implemented as part of the operating system, but it is still software code
IP - the way that information (having already been split or truncated by something like TCP or UDP) makes its way from one place to another by routing over one or more "hops"; this is always software code, but is sometimes implemented in the OS and sometimes implemented in the network device (your LAN card, for example)
base-T (Ethernet), token ring, etc. - Here we are getting into how pieces of hardware physically talk to one another; i.e., which wire corresponds to a particular type of signal; this is always implemented in hardware
electricity/photons - the laws that govern (or at least define) how electrons (or photons) flow over a conductive material or through the air; this is usually implemented in hardware ;)
In a sense, these are all "protocols" (a set of rules or expected behaviors that allow communication to take place), and they're built on one another.
Bear in mind that (aside from electricity) this is not an exhaustive list of the sort of protocols that exist at any of these layers!
Edit Thanks to dmckee for pointing out that electricity isn't the only physical process used in networking ;)
Networking protocols are not pieces of code or software; they are only a set of rules. When software uses a specific networking protocol, the software is known as an implementation. There can be many different software implementations of the same protocol (e.g. Windows and UNIX have different TCP/IP implementations). It is possible to understand networking protocols without any knowledge of programming.
EDIT: How are they implemented? Here's a paper on taking an abstract specification of a protocol and implementing it into C. You'll see that less-strict protocols leave out certain details that programmers have to guess on, which makes some implementations incompatible with others.
A network protocol is basically like a spoken language. It is implemented by code that sends and receives specially prepared messages over the network/Internet, much like the vocal cords you need to speak (the network and hardware) and a brain to actually understand what someone said (the protocol stack/software).
Sometimes protocols are implemented directly in hardware [for speed reasons] (like the Ethernet protocol for LANs), but software/code is always required to do something useful with a protocol.
This might be interesting for you:
The OSI Model
Protocol (Computing)
Software implements the rules defined in the protocol; some protocols are formally defined and some informally.
A protocol is a set of rules governing the communication between two entities.
In the computer/programming context, a protocol is a set of rules governing the communication between two programs.
In the computer networking context, a protocol is a set of rules governing the communication between two programs, well, over a network.
In computers, in the end, everything is embodied in code...
Protocols are basically sets of rules. The way to implement one is first of all to draw a state machine diagram, as it completely describes what the current state is, how the state changes on the basis of input, and what output actions are performed.
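As a toy illustration of "protocol as state machine", here is a hypothetical greeting protocol where a client must say HELLO before it may send DATA and then BYE (the commands and replies are made up for the example):

    from enum import Enum, auto

    class State(Enum):
        WAIT_HELLO = auto()
        READY = auto()
        CLOSED = auto()

    def handle(state, command):
        # Return the next state and the reply for one incoming command.
        if state is State.WAIT_HELLO and command == "HELLO":
            return State.READY, "200 hi"
        if state is State.READY and command.startswith("DATA "):
            return State.READY, "200 stored"
        if state is State.READY and command == "BYE":
            return State.CLOSED, "221 bye"
        return state, "500 bad command or wrong state"

    # Walk the machine through a scripted conversation
    state = State.WAIT_HELLO
    for cmd in ["DATA x", "HELLO", "DATA x", "BYE"]:
        state, reply = handle(state, cmd)
        print(cmd, "->", reply)

Real protocol implementations are the same idea scaled up: the RFC's rules become states, transitions, and the messages allowed in each state.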
Your answer is a very short one:
BY READING THE RFC.
The main networking problem is how to share data between computers. Each networking protocol tries to solve a small part of that larger problem. Some protocols are implemented as software, others as hardware. In short, protocols, like algorithms, can be implemented in many programming languages.
Back to TCP: it is implemented by the operating system.
