Why is SOME/IP-TP required?

If the message size is larger than the MTU, I know that fragmentation occurs at the IP layer. Why is SOME/IP-TP still needed?
I couldn't find the reason why SOME/IP-TP is needed even in the AUTOSAR specification.

Does the IPsec anti-replay service also take care of reordering of packets?

I am working on an anti-replay window service for my project, and while going through the documentation I can see that a window is maintained, i.e. it has a top and a bottom. Could someone please help me understand whether the sequence number here has anything to do with the ordering of packets? My understanding is that for any packet the receiver gets within the window range, it checks it (section A2.2 of RFC 4303), performs the integrity check and, if the packet is valid, marks the index and forwards it. Is my understanding correct?
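For reference, a minimal sketch of the sliding-window idea described in RFC 4303 Appendix A. The 64-bit bitmap, the field names and the decision logic here are my own illustration, not a verbatim implementation of the RFC; integrity verification is assumed to happen before the window is updated.

package main

import "fmt"

// replayWindow is an illustrative anti-replay window: it tracks the highest
// sequence number accepted so far ("top") and a bitmap covering the
// windowSize sequence numbers at or below it.
type replayWindow struct {
    top    uint64
    bitmap uint64 // bit i set => sequence number (top - i) was already accepted
}

const windowSize = 64 // one bit per sequence number in the window

// check reports whether a packet with sequence number seq should be accepted.
func (w *replayWindow) check(seq uint64) bool {
    switch {
    case seq > w.top: // right of the window: accept and slide the window
        if shift := seq - w.top; shift < windowSize {
            w.bitmap = (w.bitmap << shift) | 1
        } else {
            w.bitmap = 1
        }
        w.top = seq
        return true
    case w.top-seq >= windowSize: // below the bottom of the window: too old
        return false
    default: // inside the window: accept only if not seen before
        bit := uint64(1) << (w.top - seq)
        if w.bitmap&bit != 0 {
            return false // duplicate: a replayed packet
        }
        w.bitmap |= bit
        return true
    }
}

func main() {
    w := &replayWindow{}
    for _, seq := range []uint64{1, 3, 2, 3, 80, 5} {
        fmt.Println(seq, w.check(seq))
    }
}

The window itself does not enforce ordering: any sequence number inside the window that has not been marked yet is accepted, in whatever order it arrives.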

OnOffApplication with TCP retransmission

I was doing some experiments and used OnOffApplication to generate the traffic, but things didn't seem right.
I use MaxBytes to set the amount of traffic that I want to send. The traffic is heavy, so some packets get dropped, and it seems that OnOffApplication doesn't care about the dropped packets (I'm not sure; it's my guess). It just sends packets until it reaches MaxBytes and doesn't care whether they are received or not.
Is my guess right? And if it is, is there an alternative I can use to generate traffic where each flow has a certain size and packets are retransmitted until the whole flow has been received?
My code is below:
OnOffHelper source ("ns3::TcpSocketFactory", Address (InetSocketAddress(r_ipaddr, port)));
source.SetAttribute ("OnTime", RandomVariableValue (ConstantVariable (1)));
source.SetAttribute ("OffTime", RandomVariableValue (ConstantVariable (0)));
source.SetAttribute ("DataRate", DataRateValue (DataRate(linkBw)));
source.SetAttribute("PacketSize",UintegerValue (packetSize));
source.SetAttribute ("MaxBytes", UintegerValue (tempsize*1000));
From the application point of view, OnOff is only a packet generator. It sends packets with specific characteristics (rate, maximum number of bytes, etc.), but it does not track them. That's by design.
If you use TCP, though, then the socket will track the data and make sure that any lost segments are retransmitted.
The application will generate MaxBytes worth of load, but the actual packets transmitted on the wire (or over the air) may differ, because TCP (by design) does not respect message boundaries; it is a byte-stream oriented protocol. So it may bundle data packets together, split them into segments, mix in retransmitted segments, and so on.
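The point about message boundaries is independent of ns-3. As a minimal illustration (in Go rather than ns-3, with the loopback address, sizes and the fixed sleep chosen purely for the demo), two application-level writes can show up in a single read at the receiver:

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Error handling is omitted for brevity; this is only a sketch.
    ln, _ := net.Listen("tcp", "127.0.0.1:0") // any free local port

    go func() {
        c, _ := net.Dial("tcp", ln.Addr().String())
        c.Write([]byte("first"))  // two separate "packets" from the
        c.Write([]byte("second")) // application's point of view
        c.Close()
    }()

    conn, _ := ln.Accept()
    time.Sleep(100 * time.Millisecond) // let both writes arrive
    buf := make([]byte, 1024)
    n, _ := conn.Read(buf)
    // Typically prints "firstsecond": TCP delivered one byte stream,
    // not two distinct messages.
    fmt.Printf("read %d bytes: %q\n", n, buf[:n])
}

The same thing happens in the simulator: the segments on the wire are whatever TCP decided to send, not the application's individual writes.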

How do you read from a net.TCPConn in Go without specifying the length of the byte slice beforehand?

I was trying to read some messages from a TCP connection with a Redis client (a terminal just running redis-cli). However, the Read method in the net package requires me to pass a slice as an argument. Whenever I give it a slice with no length, the connection crashes and the Go program halts. I don't know beforehand how long my byte messages are going to be, so unless I specify a ridiculously large slice, the connection will always close, which seems wasteful. I was wondering: is it possible to keep a connection open without having to know the length of the message beforehand? I would love a solution to my specific problem, but I feel that this question is more general. Why do I need to know the length beforehand? Can't the library just give me a slice of the correct size?
Or what other solutions do people suggest?
Not knowing the message size is precisely the reason you must specify the Read size (this goes for any networking library, not just Go). TCP is a stream protocol. As far as the TCP protocol is concerned, the message continues until the connection is closed.
If you know you're going to read until EOF, use ioutil.ReadAll.
Calling Read isn't guaranteed to get you everything you're expecting. It may return less, it may return more, depending on how much data you've received. Libraries that do IO typically read and write through a "buffer"; you would have your "read buffer", which is a pre-allocated slice of bytes (up to 32k is common), and you re-use that slice each time you want to read from the network. This is why IO functions return the number of bytes read, so you know how much of the buffer was filled by the last operation. If the buffer was filled, or you're still expecting more data, you just call Read again.
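To make the two patterns above concrete, here is a hedged sketch: io.ReadAll is the current home of ioutil.ReadAll, the 32 KiB buffer is just the conventional figure mentioned above, and handleChunk plus the Redis address in main are placeholders of my own, not anything from the question.

package main

import (
    "fmt"
    "io"
    "net"
)

// readUntilEOF works when the peer closes the connection to mark the end of
// the data: io.ReadAll keeps growing its result slice until it sees EOF.
func readUntilEOF(conn net.Conn) ([]byte, error) {
    return io.ReadAll(conn)
}

// readLoop reuses one pre-allocated buffer and hands each chunk to a
// caller-supplied function. Read may fill anywhere from 1 byte up to
// len(buf); n says how much of the buffer is valid this time around.
func readLoop(conn net.Conn, handleChunk func([]byte)) error {
    buf := make([]byte, 32*1024) // reused on every iteration
    for {
        n, err := conn.Read(buf)
        if n > 0 {
            handleChunk(buf[:n])
        }
        if err == io.EOF {
            return nil // peer closed the connection
        }
        if err != nil {
            return err
        }
    }
}

func main() {
    // Assumes a redis-server listening locally, as in the question.
    conn, err := net.Dial("tcp", "localhost:6379")
    if err != nil {
        fmt.Println("dial:", err)
        return
    }
    defer conn.Close()

    conn.Write([]byte("PING\r\n"))
    buf := make([]byte, 512)
    n, _ := conn.Read(buf)
    fmt.Printf("%q\n", buf[:n]) // typically "+PONG\r\n"
}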
A bit late but...
One of the questions was how to determine the message size. The answer given by JimB was that TCP is a streaming protocol, so there is no real end.
I believe this answer is incorrect. TCP divides up a bitstream into sequential packets. Each packet has an IP header and a TCP header (see Wikipedia and here). The IP header of each packet contains a field for the length of that packet. You would have to do some math to subtract out the TCP header length to arrive at the actual data length.
In addition, the maximum length of a message can be specified in the TCP header.
Thus you can provide a buffer of sufficient length for your read operation. However, you have to read the packet header information first. You probably should not accept a TCP connection if the max message size is longer than you are willing to accept.
Normally the sender would terminate the connection with a FIN packet (see 1), not an EOF character.
An EOF in the read operation will most likely indicate that a packet was not fully transmitted within the allotted time.

Is there a good way to frame a protocol so data corruption can be detected in every case?

Background: I've spent a while working with a variety of device interfaces and have seen a lot of protocols, many of them serial or UDP, in which data integrity is handled at the application protocol level. I've been seeking to improve my receive-routine handling of protocols in general, and considering the "ideal" design of a protocol.
My question is: is there any protocol framing scheme out there that can definitively identify corrupt data in all cases? For example, consider the standard framing scheme of many protocols:
Field                          Length in bytes
<SOH>                          1
<other framing information>    arbitrary, but fixed for a given protocol
<length>                       1 or 2
<data payload etc.>            based on length field (above)
<checksum/CRC>                 1 or 2
<ETX>                          1
For the vast majority of cases, this works fine. When you receive some data, you search for the SOH (or whatever your start byte sequence is), move forward a fixed number of bytes to your length field, and then move that number of bytes (plus or minus some fixed offset) to the end of the packet to your CRC, and if that checks out you know you have a valid packet. If you don't have enough bytes in your input buffer to find an SOH or to have a CRC based on the length field, then you wait until you receive enough to check the CRC. Disregarding CRC collisions (not much we can do about that), this guarantees that your packet is well formed and uncorrupted.
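For illustration, a rough sketch of that receive routine in Go. The SOH/ETX values, the 16-bit big-endian length field and the use of CRC-32 are assumptions made up for the example, not any particular protocol:

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "hash/crc32"
)

const (
    soh        byte = 0x01
    etx        byte = 0x03
    headerLen       = 1 + 2 // SOH + 16-bit length
    trailerLen      = 4 + 1 // CRC-32 + ETX
)

// tryParse scans buf for one well-formed frame. It returns the payload, the
// number of bytes that can be discarded from buf, and whether a complete
// frame was found.
func tryParse(buf []byte) (payload []byte, consumed int, ok bool) {
    start := bytes.IndexByte(buf, soh)
    if start < 0 {
        return nil, len(buf), false // no SOH anywhere: discard the garbage
    }
    frame := buf[start:]
    if len(frame) < headerLen {
        return nil, start, false // wait for more bytes
    }
    length := int(binary.BigEndian.Uint16(frame[1:3]))
    total := headerLen + length + trailerLen
    if len(frame) < total {
        return nil, start, false // wait for more bytes; a corrupt length lands here too
    }
    payload = frame[headerLen : headerLen+length]
    gotCRC := binary.BigEndian.Uint32(frame[headerLen+length : headerLen+length+4])
    if crc32.ChecksumIEEE(payload) != gotCRC || frame[total-1] != etx {
        // Bad CRC or missing ETX: resynchronise by skipping this SOH.
        return nil, start + 1, false
    }
    return payload, start + total, true
}

func main() {
    // Build one valid frame and parse it back.
    payload := []byte("hello")
    var frame bytes.Buffer
    frame.WriteByte(soh)
    binary.Write(&frame, binary.BigEndian, uint16(len(payload)))
    frame.Write(payload)
    binary.Write(&frame, binary.BigEndian, crc32.ChecksumIEEE(payload))
    frame.WriteByte(etx)

    p, n, ok := tryParse(frame.Bytes())
    fmt.Printf("ok=%v consumed=%d payload=%q\n", ok, n, p)
}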
However, if the length field itself is corrupt and has a high value (which I'm running into), then you can't check the (corrupt) packet's CRC until you fill up your input buffer with enough bytes to meet the corrupt length field's requirement.
So is there a deterministic way to get around this, either in the receive handler or in the protocol design itself? I can set a maximum packet length or a timeout to flush my receive buffer in the receive handler, which should solve the problem on a practical level, but I'm still wondering if there's a "pure" theoretical solution that works for the general case and doesn't require setting implementation-specific maximum lengths or timeouts.
Thanks!
The reason why all the protocols I know of, including those handling "streaming" data, chop the data stream up into smaller transmission units, each with its own checks on board, is exactly to avoid the problems you describe. Probably the fundamental flaw in your protocol design is that the blocks are too big.
The accepted answer of this SO question contains a good explanation and a link to a very interesting (but rather heavy on math) paper about this subject.
So, in short, you should stick to smaller transmission units, not only because of practical programming-related arguments but also because of the message length's role in determining the security offered by your CRC.
One way would be to encode the length parameter so that corruption of it can easily be detected, saving you from reading in a large buffer just to check the CRC.
For example, the XModem protocol embeds an 8-bit packet number followed by its one's complement.
It could mean doubling the size of your length field, but it's an option.
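A small sketch of that idea, assuming a one-byte length field sent together with its one's complement (the field layout is mine for illustration; XModem applies the trick to its packet number):

package main

import "fmt"

// encodeLen emits the length byte followed by its one's complement,
// XModem-style, so a corrupted length is very likely to be rejected
// before any payload is buffered.
func encodeLen(n byte) [2]byte {
    return [2]byte{n, ^n}
}

// decodeLen returns the length only if the complement check holds.
func decodeLen(b [2]byte) (byte, bool) {
    if b[0] != ^b[1] {
        return 0, false // length field corrupted: resync instead of buffering
    }
    return b[0], true
}

func main() {
    enc := encodeLen(42)
    fmt.Println(decodeLen(enc)) // 42 true
    enc[0] ^= 0x10              // simulate a corrupted bit
    fmt.Println(decodeLen(enc)) // 0 false
}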

Maximum size of data which can be fetched using web service in Flex

I want to know the size limit on the data that can be fetched in the case of HTTP/Webservice/RO.
Any file size limits are not Flex-specific; they relate to the protocols in question, and (AFAIK) there are none.
However, it's worth noting that if you send a particularly large packet to the client, you will notice that the UI freezes while the packet is deserialized into memory on the client.
I haven't tested large responses, but there are limits when sending large requests. At least with RemoteObject, the entire object must be loaded into memory, so loading 2 GB would trigger an OutOfMemoryError.
