Malformed Action frames in ns-3 mesh network simulation

I have built a simulation model of an IEEE 802.11s-based mesh network in ns-3 (version 3.28). The network consists of one node moving randomly and three static nodes. The pcap file generated from the model, when viewed in Wireshark, shows malformed Action frames.
I have attached the picture below. The same is the case for beacon frames. Can anyone explain why these packets are malformed?

I don't know how to fix it in the code, but I think I know what's going on. The header is supposed to be an IE_MESH_PEERING_MANAGEMENT (117) information element with sub-type PEER_CLOSE (3) and tag length 7 (see here). The reason for the close is set to REASON11S_MESH_CONFIRM_TIMEOUT (57).
Somehow it was serialized in the wrong order. You should open a bug with ns-3 for them to fix this.
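For reference, here is a minimal sketch (plain Python, constants taken from the answer above) of how the element should serialize, assuming the layout ns-3 intends: tag number, tag length, then a sub-type byte, two 16-bit link IDs and a 16-bit reason code, all little-endian. The link ID values are made up for illustration:

import struct

# Constants named in the answer above.
IE_MESH_PEERING_MANAGEMENT = 117
PEER_CLOSE = 3
REASON11S_MESH_CONFIRM_TIMEOUT = 57

def build_peering_close(local_link_id, peer_link_id):
    # Payload: sub-type (1 byte), local link ID (2), peer link ID (2),
    # reason code (2), little-endian -- 7 bytes, matching the tag length.
    payload = struct.pack("<BHHH", PEER_CLOSE, local_link_id, peer_link_id,
                          REASON11S_MESH_CONFIRM_TIMEOUT)
    # Tag number and tag length precede the payload.
    return struct.pack("<BB", IE_MESH_PEERING_MANAGEMENT, len(payload)) + payload

ie = build_peering_close(0x0001, 0x0002)
print(ie.hex())  # 750703010002003900 -- a malformed capture would show
                 # these same fields serialized in a different order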

How does a computer know what data to reassemble?

When computer X sends data through a network to computer Y, the data goes down through the OSI layers. This is OK, I understand. But once the data is put on the medium as electric signals, how does computer Y know what to reassemble, given that the headers and trailers generated by the OSI layers no longer exist once the data is on the electric medium at layer 1?
The physical layer is just 1's and 0's, as you say - the trick is that there is a pattern that tells the receiver that this is the start of a packet. This is usually referred to as 'framing'.
Once the receiver knows that, it simply reads in as many bits as it needs for the layer 2 header, and it then has that, and so on.
The headers are visible in typical OSI or networking diagrams, e.g. (https://www.ciscopress.com/articles/article.asp?p=2738463):
So the way the first two layers work on the receiver is:
layer 1 just recognises whether the signal is a one or a zero and creates the stream of ones and zeros.
layer 2 reads this stream, and when it recognises the start pattern it knows the following bits are the header and so on, and hence it can identify the frames.
You can see examples of start and stop patterns online, e.g. (http://sinauonline.50webs.com/Cisco/Cisco%20Exploration%20Sem1Chap7.html):
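To make the idea concrete, here is a toy sketch (illustrative only - real NICs do this in hardware) that scans a bit stream for Ethernet's start pattern - seven preamble bytes of 10101010 followed by the 10101011 start-frame delimiter - and then reads the fixed-size layer 2 header that follows:

# Toy framing demo: find the start pattern, then read the next
# 14 bytes (112 bits) as the Ethernet header: destination MAC,
# source MAC, EtherType.
PREAMBLE_SFD = "10101010" * 7 + "10101011"

def find_frames(bits: str):
    start = bits.find(PREAMBLE_SFD)
    while start != -1:
        header_start = start + len(PREAMBLE_SFD)
        header = bits[header_start:header_start + 112]
        if len(header) == 112:
            dst = int(header[:48], 2).to_bytes(6, "big")
            src = int(header[48:96], 2).to_bytes(6, "big")
            ethertype = int(header[96:112], 2)
            yield dst.hex(":"), src.hex(":"), hex(ethertype)
        start = bits.find(PREAMBLE_SFD, header_start)

# Fake stream: noise, then a start pattern, then 112 bits of "header".
bits = "00010" + PREAMBLE_SFD + "1" * 112
for dst, src, etype in find_frames(bits):
    print(dst, src, etype)  # ff:ff:ff:ff:ff:ff ff:ff:ff:ff:ff:ff 0xffff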

Zigbee network sniffing and converting values

I am currently using Ember Desktop to run Zigbee traces on some embedded devices. I have the network keys and device keys, so all the data is fine; I'm just a bit of a noob when it comes to reading the data.
One of the traces I run returns a value for some data that comes back as int24_0: 0x000201; another of the same kind is int24_0: 0x0000D1.
Does anyone know how to read this data, or how I can see or even convert this int24 value to a readable value?
Thanks
This depends on a lot of criteria: the status of your receiver, the Zigbee version (are you on 3.0.0 or lower?), etc.
Here is a link which summarises all the Zigbee basics:
Basics Zigbee
I have also attached the section dedicated to frame reading (link on the first page):
Zigbee frame description
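As for the raw numbers themselves: an int24 is simply a 24-bit integer, so the two values from the question decode as shown below. Whether the field is signed, and what scaling or units apply, depends on the cluster attribute definition, which the linked documents cover. A minimal sketch:

# Interpreting a 24-bit value from a Zigbee trace. Assumes the sniffer
# already shows the bytes in the right order; whether the field is
# signed depends on the cluster/attribute specification.
def int24(raw: int, signed: bool = False) -> int:
    raw &= 0xFFFFFF                      # keep 24 bits
    if signed and raw & 0x800000:        # two's-complement sign bit set
        raw -= 1 << 24
    return raw

print(int24(0x000201))               # 513
print(int24(0x0000D1))               # 209
print(int24(0xFFFFFF, signed=True))  # -1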

Classification of user browsing activity using machine learning

If you record all IP traffic (using Wireshark or a similar program) while browsing the internet, you'll find many packets that were not sent as part of your browsing activity.
My question is:
if you wish to classify the packets (sent from your PC) into two groups:
1) packets sent as part of your browsing activity
2) all other packets
how would you use machine learning to solve this problem?
You can assume the packet payload can't be used for this purpose because it's either encapsulated or encrypted, so only packet headers can be used, e.g. TCP window size, TCP flag bits, packet length and packet direction.
Sounds like a binary classification problem.
There are three basic approaches you might use:
Collect packets you can manually label as "browsing activity" or "other" and train a binary classifier on top (like an SVM, etc.)
Collect just packets which are "browsing activity" and train a one-class classifier on top (like a one-class SVM)
Just collect all the data you can and try to cluster it into two clusters; there is a (unfortunately very small!) chance that the division found will be the one you are looking for
In each of the above cases you will need to prepare a set of features to represent your data. So either use a constant set of features, or you might try to simply treat the packet header as raw text and train some text-based model, like a convolutional neural network, etc. A sketch of the first approach follows.
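Here is a minimal sketch of approach (1) with scikit-learn, assuming you already have hand-labelled captures. All feature rows and labels below are made-up placeholders; the four features mirror the header fields suggested in the question:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [tcp_window, tcp_flags_bitmask, packet_length, direction]
# (direction: 0 = outbound, 1 = inbound). Illustrative values only.
X_train = np.array([
    [65535, 0x18, 1500, 0],   # hand-labelled "browsing"
    [512,   0x10,   60, 1],   # hand-labelled "other"
    # ... many more hand-labelled packets ...
])
y_train = np.array([1, 0])    # 1 = browsing activity, 0 = other

# Scale the features, then train an RBF-kernel SVM on the labels.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# Classify a new packet from its header features alone.
X_new = np.array([[29200, 0x18, 1200, 0]])
print(clf.predict(X_new))     # 1 = browsing, 0 = other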

Is it OK for a DirectShow filter to seek the filters upstream from itself?

Normally, seek commands are executed on a filter graph, get called on the renderers in the graph, and the calls are passed upstream by filters until a filter that can handle the seek does the actual seek operation.
Could an individual filter seek the upstream filters connected to one or more of its input pins in the same way, without affecting the downstream portion of the graph in unexpected ways? I wouldn't expect any graph state changes to be caused by calling IMediaSeeking.SetPositions upstream.
I'm assuming that all upstream filters are connected to the rest of the graph via this filter only.
Obviously the filter would need to be prepared to handle the resulting BeginFlush, EndFlush and NewSegment calls coming from upstream appropriately, and to distinguish samples that arrived before and after the seek operation. It would also need to set new sample times on its output samples so that the output samples have consistent presentation times. Any other issues?
It is perfectly feasible to do what you require. I used this approach to build video and audio mixer filters for a video editor. A full description of the code is available in BBC White Papers 129 and 138, available from http://www.bbc.co.uk/rd
A rather ancient version of the code can be found on www.SourceForge.net if you search for AAFEditPack. The code is written in Delphi, using DSPack to get access to the DirectShow headers. I did this because it makes it easier to handle COM object lifetimes - by implementing smart pointers by default. It should be fairly straightforward to transfer the ideas to a C++ implementation if that is what you use.
The filters keep lists of the sub-graphs (sections of a graph running in the same FilterGraph as the mixers). The filters implement a custom version of TBCPosPassThru which knows about the output pins of the sub-graph for each media clip. It handles passing on the seek commands to get each clip ready for replay when its point in the timeline is reached. The mixers handle the BeginFlush, EndFlush, NewSegment and EndOfStream calls for each sub-graph so they are kept happy. The editor uses only one FilterGraph that houses both video and audio graphs. Seeking commands are made by the graph on both the video and audio renderers, and these commands are passed upstream to the mixers which implement them.
Sub-graphs that are not currently active are blocked by the mixer holding references to the samples they have delivered. This does not cause any problems for the FilterGraph because, as Roman R says, downstream filters only care about getting a consecutive stream of samples and do not know about what happens upstream.
Some key points you need to get right to avoid wasted debugging time are:
Your decoder filters need to be able to cue to the exact media frame or audio time. This is not as easy as you might expect, especially with compressed formats such as MPEG-2, which was designed for transmission and has no frame index in the files. If you do not do this, the filter may wait indefinitely for a NewSegment call with the correct media times.
Your sub-graphs need to present a NewSegment time equal to the value you asked for in your seek command before delivering samples. Some decoders may seek to the nearest key frame, which is a bit unhelpful, and some are a bit arbitrary about the timings of their NewSegment and the following samples.
The start and stop times of each clip need to be within the duration of the file. It's probably not a good idea to police this in the DirectShow filter, because you would probably want to construct a timeline without needing to run the filter first. I did this in the component that manages the FilterGraph.
If you want to add sections from the same source file consecutively in the timeline, and have effects that span the transition, you need two instances of the sub-graph for that file, and if you have more than one transition for the same source file, your list needs to alternate the graphs for successive clips. This is because each sub-graph should only play monotonically: issuing lots of SetPositions calls would waste CPU cycles and would not work well with compressed files.
The filter's output pins define the entire seeking behaviour of the graph. The output sample time stamps (IMediaSample.SetTime) are implemented by the filter, so you need to get them correct without any missing time stamps. You can also set the MediaTime (IMediaSample.SetMediaTime) values if you like, although you have to be careful to get them correct or the graph may drop samples or stall.
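To illustrate the timestamp bookkeeping this implies, here is a language-agnostic sketch in Python (not DirectShow API code; in a real filter this logic lives wherever you call IMediaSample.SetTime on output samples). After each upstream seek, incoming sample times restart relative to the new segment start, so the mixer rebases them onto one continuous downstream timeline to keep presentation times monotonic with no gaps:

class OutputTimeline:
    """Rebases upstream sample times onto one continuous output timeline."""

    def __init__(self):
        self.downstream_offset = 0.0  # how far the output timeline has advanced
        self.segment_start = 0.0      # from the most recent upstream NewSegment

    def on_new_segment(self, segment_start):
        # An upstream seek has completed; subsequent input sample times
        # are relative to segment_start, but downstream must not see a jump.
        self.segment_start = segment_start

    def on_clip_finished(self, clip_duration):
        # Advance the output timeline past the clip that just played.
        self.downstream_offset += clip_duration

    def rebase(self, in_start, in_stop):
        # Map an input sample's times to gap-free downstream times.
        out_start = self.downstream_offset + (in_start - self.segment_start)
        out_stop = self.downstream_offset + (in_stop - self.segment_start)
        return out_start, out_stop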
Good luck with your development. If you need any more information please contact me through StackOverflow or DTSMedia.co.uk

What's the difference between point-to-point links and unicasting?

I read about both of them in Computer Networks, but I cannot tell them apart.
Do they refer to the same thing?
A point-to-point link connects a source and a destination machine (possibly through some intermediate machines);
judging from the words, it means one point and another (just two points).
So it seems to mean the same thing as unicasting.
What's the difference?
No. Unicasting is an addressing concept from a higher layer: a transmission addressed to a single receiver. A point-to-point link is a topology concept: a link with exactly two endpoints. A unicast transmission happens over links, and those links may or may not be point-to-point.
