I've been trying to solve this problem during my lesson and I guess I need some help. Let's say you have been given the sum of a TCP packet -> 1001 0110 0111 1101. How would you calculate the checksum from it, and what happens when it is processed at the receiver?
Thank you for your help!
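For reference, the checksum is just the one's complement (bitwise inversion) of that sum, and the receiver checks that sum plus checksum comes out all ones. A minimal Python sketch using the bit pattern from the question:

```python
# One's-complement step of the TCP checksum, applied to the 16-bit
# sum given in the question: 1001 0110 0111 1101.
ones_sum = 0b1001011001111101

checksum = ~ones_sum & 0xFFFF          # invert all 16 bits
print(f"{checksum:016b}")              # 0110100110000010

# The receiver adds the checksum onto its own sum of the same data;
# a result of all ones means no error was detected.
print(f"{(ones_sum + checksum):016b}") # 1111111111111111
```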
Related
So I was wondering: what value goes into the checksum field as seen on the receiving end?
For example, if I am sniffing HTTP data and I receive a packet, how is the value in the checksum field calculated? I am pretty sure I understand how to calculate the checksum, but I don't understand why the value is what it is.
Basically, you split the data into 16-bit words, add them all up, and then take the one's complement of that sum; that's your checksum. To verify, the receiving end computes the sum itself and adds the checksum to it; if every bit is 1 the packet arrived with no errors, and anything else means the opposite. But if that's the case, shouldn't I see an "ff ff" checksum value? Why does it look like "34 ef" instead?
I apologize if this is a stupid question, but I just couldn't find the answer however much I looked. Thanks!
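The key point is that the field you sniff holds the one's complement of the sum (computed with the checksum field zeroed), not the all-ones total; the all-ones pattern only appears once the receiver adds the field back onto its own sum. A minimal Python sketch, with made-up 16-bit words standing in for a real segment:

```python
def ones_complement_sum(words):
    """16-bit one's-complement sum with end-around carry."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

# Hypothetical 16-bit words of a segment, with the checksum field
# treated as zero while the sender computes it:
words = [0x4500, 0x0034, 0x1C46, 0x4000]

checksum = ~ones_complement_sum(words) & 0xFFFF
print(f"field value on the wire: {checksum:04x}")   # not ffff

# The receiver sums the same words plus the checksum field:
verify = ones_complement_sum(words + [checksum])
print(f"receiver's total:        {verify:04x}")     # ffff -> no error detected
```

So a "34 ef" on the wire just means the one's-complement sum of everything else was "cb 10"; it is the receiver's total, not the stored field, that comes out as "ff ff".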
I have about 128 bytes of data arriving at a serial port. I need to remove the first 3 bits of this data before processing. Could anyone please point me in the right direction?
All suggestions would be appreciated. Many thanks in advance.
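Assuming "bits" really does mean bits rather than the first 3 bytes, one way is to shift the whole buffer left by 3 bits, pulling bits up from each following byte. A hedged Python sketch (the function name and the zero-padding at the tail are my own choices):

```python
def drop_leading_bits(data: bytes, n: int = 3) -> bytes:
    """Discard the first n bits (n < 8) of a buffer by shifting every
    byte left by n bits and pulling in the top n bits of the next byte;
    the tail is padded with zero bits."""
    out = bytearray(len(data))
    for i, b in enumerate(data):
        out[i] = (b << n) & 0xFF
        if i + 1 < len(data):
            out[i] |= data[i + 1] >> (8 - n)
    return bytes(out)

# 10110101 11000011 -> drop 3 bits -> 10101110 00011000
print(drop_leading_bits(bytes([0b10110101, 0b11000011])).hex())  # ae18
```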
I'm trying to read data stored on an exhaust valve actuator device. To activate the readout, a set of PWM pulses at a specified frequency and sequence of duty cycles has to be applied to the device's PWM-IN pin:
Frequency: 300 Hz ±5
1. Duty cycle 80 % ±5 (10 periods)
2. Duty cycle 50 % ±5 (10 periods)
3. Duty cycle 10 % ±5 (10 periods)
After the last duty cycle the device transmits 95 bytes of data serially on the PWM-IN and PWM-OUT lines. For data transmission, a baud rate of 9600 baud ±5% is used. A start and a stop bit are added to every byte, complying with an RS232 8N1 interface.
Using an NI USB-6341 I can generate the PWM pulses, but I have tried to read the serial data using a digital input (change detection) without any success. I don't know if the problem is the timing, the rate, the number of samples, or something else.
How can I decode this digital waveform into the required bytes of data?
The serial data should look like this in hex (95 bytes):
5555 5555 5555 5555 5555 8540 0101 0286
1001 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 B0
LabVIEW code: [screenshot]
Serial data: [waveform screenshot]
As far as I can see from what you've posted, you can successfully activate the output of serial data from your device and you can capture this data as a digital waveform (to confirm this, please post a plot of the digital waveform where we can see each high/low transition). What you are stuck on is how to decode this waveform into the bytes of data that it represents. I can think of three ways you could go about this:
Decode the data by analysing the recorded waveform. Standard serial data transmission is encoded in a straightforward way: you have a start bit, eight data bits and a stop bit. Divide up the serial data portion of the waveform into chunks representing one byte each, find whether the waveform is high or low during each of the data bits, and combine these into a byte. Unless the clock of the transmitting device is very unstable, I think you should be able to do this by finding the leading edge of each start bit and looking at fixed time offsets from that point until you reach the next start bit (a rough sketch of this appears after these three options).
Connect the data line directly to a serial port input. Before spending lots of time trying to decode the data in software, I would definitely try seeing whether an ordinary serial port will recognise it! If the PWM output is at 'TTL level' i.e. 'space' is 0 volts and 'mark' is +5 volts you can use a 'TTL-USB' converter device, otherwise you may need a simple level shifting circuit to adapt the output to the right levels. You'll probably get some garbage characters due to the PWM signal before the data message but hopefully it shouldn't be hard to distinguish those from the data you want.
Replay the recorded waveform into a serial port input. If receiving the serial data directly doesn't work well, i.e. if the serial receiver gets confused by the PWM signals, then you could discard the PWM section from your captured waveform and replay it out of a different digital port - I think this is supported by the USB-6341. Connect that port to a TTL-level serial input configured to receive and you should get a reliably decoded version of the data you want.
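For option 1, here's a rough Python sketch, assuming you've exported the captured waveform as an array of 0/1 samples at a known sample rate and that the line idles high; the function name and the requirement of generous oversampling (10x the baud rate or more) are my own assumptions:

```python
import numpy as np

def decode_uart(samples: np.ndarray, sample_rate: float,
                baud: float = 9600.0) -> bytes:
    """Decode an idle-high 8N1 serial stream from a sampled digital
    waveform (array of 0/1 values), sampling each bit at its midpoint."""
    bit_len = sample_rate / baud          # samples per bit
    out = bytearray()
    i = 0
    last_start = len(samples) - int(10 * bit_len) - 1
    while i < last_start:
        # A start bit begins with a high-to-low transition.
        if samples[i] == 1 and samples[i + 1] == 0:
            edge = i + 1
            byte = 0
            for bit in range(8):          # data bits are sent LSB first
                pos = int(edge + (1.5 + bit) * bit_len)
                byte |= int(samples[pos]) << bit
            stop = int(edge + 9.5 * bit_len)
            if samples[stop] == 1:        # valid stop bit -> keep the byte
                out.append(byte)
            i = int(edge + 10 * bit_len)  # skip past this frame
        else:
            i += 1
    return bytes(out)
```

On your capture you'd first trim off the PWM section, then pass in the remaining samples along with your DAQ sample rate; the run of 0x55 bytes at the start of your expected message is a handy check that the decoding is right.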
If you need more help with the software side of any of these, please ask a more specific question about the issue you're having. If you need help with the hardware interfacing, it's probably best asked on Electrical Engineering.
You can take a look at LabVIEW's software UART examples, but I'm afraid your hardware won't be of much use with them. You can read here for more details. Unless you are doing this as a learning experience and you have a lot of time to kill, I would never even look at the idea of decoding the serial stream with your DIO card. It might sound straightforward enough, but it won't be.
To be honest, I think it's quite twisted that you have to send a PWM command and the valve answers back with serial data over the same link. If you already have a serial port, why not use it for communicating both ways? This looks like a homework assignment from an evil teacher.
If you want to stick to LabVIEW (I don't see why you should, I'm just assuming that's what you want), the easiest thing to do is to get a cheap USB-to-UART adapter (RS232, TTL or whatever your evil valve uses) and use one of the flow control lines for the PWM. I see no reason why you shouldn't be able to recycle your PWM LabVIEW code.
For this particular task, in my opinion, the easiest and most reliable solution would be to use any microcontroller of your liking. Even if you've never used one, you can probably have a working solution within a day or two (what you have to do is nothing out of the ordinary for a microcontroller, and there are tons of example code and tutorials).
This question is on a test review and I'm not really sure of the answer.
TCP packets are being sent from a client to a server. The MSS is equal to 1460 bytes, and each TCP packet is sent at maximum capacity. How many TCP packets can be sent before the sequence number field in the TCP header wraps around?
How much time in seconds will this take on a 1 Mbit/s link?
How much time in seconds will this take on a 1Gbit/s link?
Is there some sort of formula used to figure this out?
Thanks!
Each TCP segment contains 1460 bytes, and the sequence number in the TCP header is 4 bytes = 32 bits, so 2^32 bytes need to be sent for the sequence number field to wrap around (the sequence number counts bytes, not bits or packets). At 1460 bytes per segment, that is 2^32 / 1460 ≈ 2,941,758 full segments.
In order to calculate the delay you need to consider:
Transmission time - time it takes to push the packet's bits onto the link.
Propagation time - time for a signal to reach its destination.
Processing delay - time routers take to process the packet's header.
Queuing delay - time the packet spends in routing queues.
In your question the link rates are 1 Mbit/s and 1 Gbit/s, and I assume the other delays are 0; so the time it will take to send 2^32 bytes = 8 * 2^32 bits is:
1 Mbit/s link:
8 * 2^32 / 10^6 ≈ 34360 seconds
1 Gbit/s link:
8 * 2^32 / 10^9 ≈ 34.4 seconds
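The same arithmetic as a quick Python sketch (ignoring header overhead, as above):

```python
MSS = 1460                  # bytes of payload per full-size segment
SEQ_SPACE = 2 ** 32         # the sequence number space counts bytes

packets = SEQ_SPACE // MSS  # full segments before the field wraps
bits = SEQ_SPACE * 8        # bits that must cross the link

print(packets)              # 2941758
print(bits / 1e6)           # ~34359.7 s on a 1 Mbit/s link
print(bits / 1e9)           # ~34.36 s on a 1 Gbit/s link
```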
Hope this helps!
I was doing some examples in Wireshark and I am wondering where the TCP sequence number comes from.
An initial sequence number is generated randomly during the handshake, and the number is then incremented by the number of payload bytes sent.
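As a toy illustration (the segment sizes are made up; Wireshark's "relative" sequence number is just the offset from the ISN):

```python
import random

# Toy model: a random initial sequence number (ISN) is chosen during
# the handshake, then each segment advances it by its payload length,
# modulo 2^32.
isn = random.getrandbits(32)

seq = isn
for payload_len in (1460, 1460, 512):   # made-up segment sizes
    relative = (seq - isn) % 2 ** 32    # what Wireshark displays
    print(f"seq={seq:#010x}  relative={relative}")
    seq = (seq + payload_len) % 2 ** 32
```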
I'm not sure, but probably from the system's built-in protocol stack implementation?
http://packetlife.net/blog/2010/jun/7/understanding-tcp-sequence-acknowledgment-numbers/