CAN Protocol - Message Objects (MObs)
As far as I know, these are buffers that contain the most recent message.
Very little information is available on the Internet.
Can anyone explain in detail what exactly Message Objects are?
How can they be used in programs?
Thanks in anticipation.
Message objects are structured differently depending on the processor type, so the first thing to do is get the datasheet for your processor and see how it stores its CAN messages and message-box configuration.
This means the following: each message object is a structure composed of the message's current data and the message's configuration. The configuration refers to the message-ID filters.
Depending on the type of message you want to store in that message object, you configure the filter for a range of IDs, and the processor will store matching messages for you as they are received from the wire. If the object is used for transmission, the filters are not used.
The structure might also contain flags to confirm that a message was sent, to cancel a pending transmission, or to indicate whether a message object is configured for transmission or reception.
If you have the datasheet we can find out more on what do you have in that Message Object.
Messages sent on a CAN bus, from what I've read, seem to be referred to as "frames".
There are two types of messages:
Remote frames - from what I've seen so far, these are used by ECUs to request data frames from microcontrollers on other ECUs on the bus.
Data frames - replies to a remote frame carrying the current state of that ECU. Sending these can also be used to imitate a "command" from one ECU to another: e.g. the RF receiver for unlocking the doors will (when triggered) send a data frame to the door-lock system (usually on a different bus connected to the CAN bus by a gateway ECU; vehicle specific), and the data will contain the requested state.
This link may assist you as a start point in learning more about CAN protocols/frames/bus
http://hem.bredband.net/stafni/developer/CAN.htm
Depending on the protocol, hardware and OS you're working with, you may find SocketCAN very useful, as you can use it to create raw CAN frames: http://python-can.readthedocs.org/en/latest/socketcan.html
As a TCP connection is a stream, data traveling through the network is fragmented, buffered, etc. during transmission, so there is no guarantee that the data will be delivered to the destination application in the same packets as it was sent by the source. I want to receive the data at the destination in exactly the same chunks as they were sent by the source application. In short: how do I properly implement TCP message framing in INET?
The trick is that you have to create a buffer at the application side and put all received data chunks into that buffer. There is a special queue class in INET (ChunkQueue) that allows you to queue received data chunks and merge those chunks automatically into a bigger chunk that was originally sent by the application layer at the other side.
Practically you feed all the received data into the queue and you can ask the queue whether there is enough data in the queue to build up (one or more) chunk(s) that was sent by the application layer. You can use the queue's has() method to check whether at least one application layer data chunk is present and then you can get that out with the pop() method.
Even better, there is a special callback interface (TcpSocket::ReceiveQueueBasedCallback) that can automatically put all the received chunks into a queue for you.
If you implement that interface in your application, you just have to implement the socketDataArrived(TcpSocket *socket) method and check the queue contents regularly (each time data arrives) to see whether enough data is present to deliver the original chunks to the application. There is an example of this in the Ldp protocol implementation in INET:
void MyApp::socketDataArrived(TcpSocket *socket)
{
    auto queue = socket->getReceiveQueue();
    // Deliver every complete application-layer message currently queued.
    while (queue->has<MyAppMessage>()) {
        auto header = queue->pop<MyAppMessage>();
        processMyAppMessageFromTcp(header);
    }
}
We understand that ports are torn down during transactions and that different ports may be used when sending messages over to the counterparties. When a node goes down, messages are still sent, but they are queued in the MQ. Is there a recommended way to monitor these transactions/messages?
Unfortunately, you can't currently monitor these messages.
This is because Artemis does not store its queued messages in a human-readable/queryable format. Instead, the queued messages are stored in the form of a high-performance journal that contains a lot of information that is required in case the message queue's state needs to be restored from a hard bounce.
I approached this by finding the documentation here: https://docs.corda.net/node-administration.html#monitoring-your-node
which illustrates Corda flow metrics visualized using hawtio.
I just needed to download and start hawtio, connect it to the node's JVM (net.corda.node.Corda, or the specific node PID), and by going to the JMX tab I could see the messages in the queue.
I have a GUI application that sends and receives data over TCP to a server.
Sometimes we get junk data while doing a TCP recv from the server, and while reading these nulls or invalid data the client application sometimes crashes.
Is there a good way to validate this data, other than catching the exception?
I don't want the GUI application to crash because of bad data sent by the server.
TCP has a checksum that it uses to validate the data received; that is done by the operating system (or sometimes the network hardware, if you have nice hardware). If the contents are not correct, then with very high probability the data that was sent was incorrect too. I only mention this because I'm not totally sure you were aware of it.
If you need to validate the data, you will have to validate the data. Write a function that parses your data, and returns a meaningful value only if there's meaningful data. Make your GUI aware of this.
Your question is somewhat self-answering: you can't say "I want to be fault-tolerant, but I don't want to care about faults" ("other than catching this exception"). And based on the lack of any description of the data you expect, I'd say you don't really care about the form of the data yet.
Imagine that I have a server and a client talking via WebSocket. Each of them sends chunks of data to the other, and different chunks may have different lengths.
Am I guaranteed that if the server sends a chunk in one call, the client will receive it in one message callback, and vice versa? I.e., does WebSocket have a built-in 'packing' ability, so I don't have to care whether my data is split among several callbacks during transmission?
Theoretically the WebSocket protocol presents a message based protocol. However, bear in mind that...
WebSocket messages consist of one or more frames.
A frame can be either a complete frame or a fragmented frame.
Messages themselves do not have any length indication built into the protocol, only frames do.
Frames can have a payload length of up to 9,223,372,036,854,775,807 bytes (because the protocol allows for a 63-bit length indicator).
The primary purpose of fragmentation is to allow sending a message that is of unknown size when the message is started without having to buffer that message.
So...
A single WebSocket "message" could consist of an unlimited number of 9,223,372,036,854,775,807 byte fragments.
This may make it difficult for an implementation to always deliver complete messages to you via its API...
So whilst, in the general case, the WebSocket protocol is a message-based protocol and you don't have to manually frame your messages, the API you're using may either have message-size limits in place (to let it guarantee delivery of each message as a single chunk) or may present a streaming interface to allow for unlimited-size messages.
I ranted about this back during the standardisation process here.
WebSocket is a message-based protocol, so if you send a chunk of data as the payload of a WebSocket message, the peer will receive one separate WebSocket message with exactly that chunk of data as payload.
I am writing a 2D multiplayer game consisting of two applications: a console server and a windowed client. So far, the server has an FD_SET filled with connected clients, a list of my game-object pointers, and some other things.
In main(), I initialize listening on a socket and create three threads: one for accepting incoming connections and placing them in the FD_SET, another for processing the objects' location, velocity and acceleration and flagging them (if needed) as ones that have to be updated on the client, and a third that uses send() to send update info for every object (iterating through the list of object pointers). Such a packet consists of an operation code, the packet size and the actual data.
On the client I parse it by reading the first 5 bytes (the opcode and packet size), which are received correctly, but when I try to read the remaining part of the packet (since I now know its size), I get WSAECONNABORTED (error code 10053). I've read about this error but can't see why it occurs in my application. Any help would be appreciated.
The error means the system closed the socket. This could be because it detected that the client disconnected, or because it was sending more data than you were reading.
A parser for network protocols typically needs a lot of work to make it robust, and you can't tell how much data you will get in a single read(): e.g. you may get more than your operation code and packet size in the first chunk you read, or you might even get less (e.g. only the operation code). Double-check this isn't happening in your failure case.