Is it possible to verify received slave data in the I2C protocol? - microcontroller

I recently came across a question related to the I2C protocol.
Usually we read data from I2C slave devices and use it for further calculations on the master side,
but can I be sure that the data I received is the data I wanted, or might it have been corrupted while being transmitted over the bus?
Is there any way to check this in the I2C protocol?

In many cases, no, you can't be certain from software alone. You have to verify the design in hardware (with an oscilloscope) and then make sure nothing changes.
However, some slave devices provide checksums or check bits on certain transactions that can be used to detect some faults.
Reading a known value from the ID register at startup is usually good enough for most applications.
Remember that if your data is getting corrupted then your addresses may be too, and that will cause ACKs not to be returned when you expect them.
Many things you can do from software will not detect an error instantly though, so you need to think about how important or harmful it could be if an incorrect value is used. There are various standards that help you plan for this, such as ISO 14971 "Application of risk management to medical devices".
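A minimal sketch of the startup sanity check mentioned above, assuming a hypothetical HAL read function and example register/address values (substitute whatever your device's datasheet specifies):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical HAL call: read one register from a slave; returns true on an ACK'd transfer. */
extern bool i2c_read_reg(uint8_t slave_addr, uint8_t reg, uint8_t *value);

#define SENSOR_ADDR   0x68  /* example 7-bit slave address          */
#define REG_WHO_AM_I  0x75  /* example ID register                  */
#define EXPECTED_ID   0x68  /* value the datasheet says it contains */

/* Startup sanity check: confirms the bus, address and wiring are basically sound.
 * It does NOT guarantee that later transfers are uncorrupted. */
bool i2c_slave_present(void)
{
    uint8_t id = 0;
    if (!i2c_read_reg(SENSOR_ADDR, REG_WHO_AM_I, &id)) {
        return false;           /* no ACK: wrong address or bus fault        */
    }
    return id == EXPECTED_ID;   /* wrong ID: wrong device or a corrupted read */
}
```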

Related

Network design: one-to-many high-bandwidth transmissions that exclude users

If Stack Overflow is the wrong Stack Exchange site for this question, please help direct me to the correct one.
Short Version
What is the best design for a networking application in which one user transmits a constant, high-bandwidth stream of data to many other addresses? The solution must not require the uploader to duplicate the packets for each recipient, and preferably it will not transmit to users that have not been accepted by the transmitter.
Long Version
A friend and I have written an application that enables someone to transmit data in real time to one or more recipients that he wants to receive the data. I have designed the high-level application protocol to use UDP and to encode the data so that any individual packet can be lost without affecting the use of the rest. This solution requires managing a socket for each user and sending each packet to every user.
The problem here is that the stream can be very high bandwidth. The user can adjust how high the quality of the data he is sending should be, and can end up sending 6 Mbps to each user. It is infeasible to expect a user to pay his ISP enough to be allowed to upload such a stream to the preferred minimum of four other users at a time.
We need a way for the transmitter to send a packet exactly once and have each user receive a copy.
We have looked at multicasting. It may be what we need to use in the end, but we are concerned about the fact that anyone can join any group. It would be preferable that users we do not want to see the data are unable to join at all. There is also the problem that if multiple transmitters happen to use the same group, viewers may find that they are receiving multiple streams' worth of data when they only want one.
My searching has revealed something IBM published over a decade ago called Explicit Multicast (Xcast) that looks perfect, but I have yet to find any information to determine whether this technology is commonly supported. Also, I have not yet seen whether it supports datagrams.
Does anyone know the best way to design an application that meets our needs?
Please keep in mind that we have no funds to support our project. Solutions need to be free.
Edit
In the summary above, I hinted at but failed to explicitly state that this is for a real-time application. The motivating drive behind the application is to keep the clients/recipients as close together in time as is possible. If packets are lost or arrive too late to be used in keeping the server and clients in phase, they need to be disregarded. That is why I designed the application protocol on top of UDP with independent data in each packet. Even if a client receives only one packet out of 300 for a given time step, it will use what it did get.
I think that I_am_Helpful's recommendation may be a good step in the right direction (or possibly the destination). I need to do some experimentation to determine whether using a system like Spread will work. However, I do not think I can budget more than an additional 17 ms in transmission time.
If you can think of a system that enables sending unreliable datagrams to a specific group of users (like Spread) for a real-time application (unlike Spread, see p. 3), please let me know about it.
We need a way for the transmitter to send a packet exactly once and have each user receive a copy.
In my limited knowledge, I would say that Reliable Multicasting appears to be one of the viable options for broadcasting to the group. I would like to mention that there are some Java APIs* which could help you achieve this:
JGroups Java API
The Spread Toolkit: Spread consists of a library that user applications are linked with, a binary daemon which runs on each computer that is part of the processor group, and various utility and demonstration programs.
Appia
*NOTE: I have never worked with these APIs.
It would be preferable that users we do not want to see the data are unable to join at all.
They do provide this feature. For example, Spread supports thousands of groups with different sets of members, and it provides a range of reliability, ordering and stability guarantees for messages. JGroups can be used to create groups of processes whose members can send messages to each other. It also has facilities such as group creation and deletion (group members can be spread across LANs or WANs).
There is also the problem that if multiple transmitters happen to use the same group, viewers may find that they are receiving multiple streams' worth of data when they only want one.
Since you can easily create multiple groups in the same network (using Spread, etc.), I believe that would no longer be an issue. It is your responsibility to sort users into the appropriate groups.
I hope the given information helps. Good luck.
Via multicast you achieve exactly what you want: sending each packet once. But authentication seems to be a concern for you.
One possible solution could be symmetric cryptography, where the same key is used to encrypt and decrypt. Via TCP, your clients connect to a server and fetch the multicast IP address of the transmission and its associated key; then they join the multicast group and decrypt the incoming transmission.
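A minimal sketch of the client-side join step this describes, using POSIX sockets; the group address and port are assumed to come from the TCP handshake, and key exchange and payload decryption are omitted:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Join an IPv4 multicast group and return a socket ready for recvfrom().
 * group/port would come from the TCP "fetch the address and key" step;
 * decrypting the payload with the shared key is left out here. */
int join_multicast(const char *group, uint16_t port)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(sock);
        return -1;
    }

    struct ip_mreq mreq = {0};
    mreq.imr_multiaddr.s_addr = inet_addr(group);   /* e.g. "239.1.2.3" */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq) < 0) {
        close(sock);
        return -1;
    }
    return sock;
}
```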
If you accept a more flexible solution, you could have a server which sends a transmission in real time to a set of distributed servers. Your clients connect to one of these distributed servers via unicast, and after authentication is done, they are included in a list of receivers. Each distributed server sends each new transmission packet to each registered client via UDP. In ordinary situations your clients would have the same experience as if it were delivered in a multicast group, but the servers will spend far more bandwidth. Multiple simultaneous transmissions will be allowed, which could be good for you, and you have more control, as clients can send signals to the servers, like PAUSE, etc.

Is it more effective to obtain real-time sensor information using TCP or UDP

I am working on a project which requires sensor information to be obtained from multiple embedded devices so that it may be used by a master machine. The master currently has classes which contain backing fields for each sensor. Data is continuously read on each sensor and a packet is then written and sent to the master to update that sensor's backing field. I have little experience with TCP/UDP so I am not sure which protocol would work better with this setup.
I am currently using TCP to transfer the data because I am worried about data from our rotary encoders being received out of order. Since my experience with this topic is limited, I am not sure if this is a valid concern.
Does anyone with experience in this area know any reasons that I should prefer one approach over the other?
How much do you care about knowing whether a packet was delivered?
How much do you care about knowing whether a delivered packet was 100% correct?
How much do you care about the order of packet delivery?
How much do you care about knowing whether the peer is currently connected?
If the answer to all of these is "I care a lot", you'd prefer to keep using TCP, because it ensures all four points.
The trade-off is that UDP can be more lightweight and faster to handle if you are dealing with small packets.
Anyway, it's not so easy to choose one or the other. Just try.
And read this brief explanation: http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/
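If UDP does turn out to be adequate, the ordering worry from the question can be handled with a small application-level sequence number; a rough sketch, where the packet layout and field names are assumptions:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sensor packet: a sequence number plus the reading.
 * With this, UDP reordering can be handled by simply discarding stale packets. */
typedef struct {
    uint32_t seq;       /* sender increments by 1 per packet, starting at 1 */
    int32_t  encoder;   /* example payload: rotary encoder count            */
} sensor_packet_t;

static uint32_t last_seq;   /* highest sequence number accepted so far */

/* Returns true if the packet is newer than anything seen so far
 * (the signed cast keeps the comparison correct across wraparound). */
bool accept_packet(const sensor_packet_t *p)
{
    if ((int32_t)(p->seq - last_seq) <= 0) {
        return false;   /* duplicate or out-of-order: ignore it            */
    }
    last_seq = p->seq;
    return true;        /* safe to update the backing field with p->encoder */
}
```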
I'm no expert but it seems this might be relevant:
Do you care about losing data?
If so, use TCP. Error recovery is automatic.
If not, use UDP. Lost packets are not re-sent. I also believe ordering here is not guaranteed.

Have PLC Controller Listen/Send Custom TCP Packets?

I would like to be able to communicate with PLCs, so that I can send and receive custom commands on the PLC.
My idea of being able to do this was to have a TCP listener on the PLC that could read TCP incoming packets on a specific port, and execute routines based on the commands in the packets. It could also send information back via TCP/IP.
This would allow me to write software in multiple languages such as C#, PHP, JavaScript, etc. so that software can be used on any platform such as Windows, iOS, Android, etc. to issue commands to the PLC. This would also mean you do not need the PLC software (which can be costly) to view or control the PLC.
I am not a PLC programmer, so I do not know whether a PLC has the capability of sending and receiving custom TCP packets. I would like to know: a) whether it is possible, b) how feasible it would be to do this, and c) what exactly I should research in order to accomplish this.
Thanks.
It sounds a bit like reinventing the wheel. You want to make something like KepServerEX?
http://www.kepware.com/kepserverex/
There are also two things to consider - one is the ability to interface with the PLC to share data (i.e., for a custom HMI) and the other is to program the PLC. For the latter you still need the control software from the manufacturer unless you're willing to reverse engineer and re-write it from the ground up.
Keep in mind, also, that PLCs don't work the same way that other software does. There are no functions or procedures or classes or objects or even really any "commands", per se. A PLC is a system which executes a continuous fixed program of mostly raw logic rules and calculations. A typical interface to an HMI involves reading and writing directly to/from logic bits and word data (i.e., hardware memory locations) which represent the current state of the machine. OPC already does this just fine, so I'm not quite sure what you're going for.
If you're looking for a cheap/free alternative to a full commercial package, something here may work for you :
http://www.opcconnect.com/freesrv.php
If I understand correctly, by "Run/Stop" you mean having the PLC 'Start' or 'Stop' scanning the code and updating its I/O. If that is the situation, it would be perfectly suitable to add a Scan_If_On bit (written by a TCP command) in parallel with the "Start" bit controlled by the HMI.
This way, there will be two ways of "starting" the process controlled by the PLC: HMI and TCP.
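For readers who don't write ladder logic, that parallel connection is simply a logical OR of the two start conditions; a C-style restatement with hypothetical names:

```c
#include <stdbool.h>

/* All names are hypothetical. The process runs if either the HMI "Start" bit
 * or the TCP-written Scan_If_On bit is set, which is exactly what a parallel
 * connection in a ladder rung amounts to. */
bool process_should_run(bool start_from_hmi, bool scan_if_on_from_tcp)
{
    return start_from_hmi || scan_if_on_from_tcp;
}
```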

Designing a protocol for commands and data over serial

I need (to design?) a protocol for communication between a microprocessor-driven data logger and a PC (or similar) via a serial connection. There will be no control lines; the only way the device/PC can know they're connected is by the data they're receiving. The connection might be broken and re-established at any time. The serial connection is full-duplex (8N1).
The problem is what sort of packets to use, handshaking codes, or similar. The microprocessor is extremely limited in capability, so the protocol needs to be as simple as possible. But the data logger will have a number of features, such as scheduled logging, downloading logs, setting sample rates, and so on, which may be active simultaneously.
My bloated version would go like this: for both the data logger and the PC, a fixed packet size of 16 bytes with a simple 1-byte checksum, perhaps a 0x00 byte at the beginning/end to simplify recognition of packets, and one byte denoting the kind of data in the packet (command / settings / log data / live feed values, etc.). To synchronize, a unique "hello/reset" packet (of all zeros, for example) could be sent by the PC, which, when detected by the device, is then returned to confirm synchronization.
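A rough C sketch of the frame described above; the exact field order and the helper names are assumptions, not part of the original proposal:

```c
#include <stdint.h>
#include <string.h>

#define FRAME_SIZE 16

/* One possible layout for the fixed 16-byte frame. */
typedef struct {
    uint8_t delimiter;     /* fixed 0x00 to help re-sync on the byte stream */
    uint8_t type;          /* command / settings / log data / live feed     */
    uint8_t payload[13];
    uint8_t checksum;      /* simple 8-bit sum of the preceding 15 bytes    */
} frame_t;

/* Sum the first 15 bytes of a frame. */
uint8_t frame_checksum(const uint8_t *frame)
{
    uint8_t sum = 0;
    for (int i = 0; i < FRAME_SIZE - 1; i++) {
        sum += frame[i];
    }
    return sum;
}

/* Fill a frame with a payload and stamp its checksum. */
void frame_build(frame_t *f, uint8_t type, const uint8_t *data, uint8_t len)
{
    memset(f, 0, sizeof *f);
    f->delimiter = 0x00;
    f->type = type;
    if (len > sizeof f->payload) len = sizeof f->payload;
    memcpy(f->payload, data, len);
    f->checksum = frame_checksum((const uint8_t *)f);
}
```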
I'd appreciate any comments on this approach, and welcome any other suggestions as well as general observations.
Observations: I think I will have to roll my own, since I need it to be as lightweight as possible. I'll be taking bits and pieces from protocols suggested in answers, as well as some others I've found: SLIP, PPP and HDLC.
You can use Google's Protocol Buffers as a data exchange format (also check out the C bindings project if you're using C). It's a very efficient format, well suited to such tasks.
Microcontroller Interconnect Network (MIN) is designed for just this purpose: tiny 8-bit microcontrollers talking to something else.
The code is MIT licensed, and there are embedded C and Python implementations:
https://github.com/min-protocol/min
I wouldn't try to invent something from scratch; perhaps you could reuse something from the past like ZMODEM or one of its cousins? Most of the problems you mention have been solved, and there are probably a number of other cases you haven't even thought of yet.
Details on ZMODEM:
http://www.techfest.com/hardware/modem/zmodem.htm
And the C source code is in the public domain.

Reliable UDP broadcast libraries?

Are there any libraries which put a reliability layer on top of UDP broadcast?
I need to broadcast large amounts of data to a large number of machines as quickly as possible, and generally it seems like such a problem must have already been solved many times over, but I wasn't able to find anything except for the Spread toolkit, which has a somewhat viral license (you have to mention it in all materials advertising the end product, which I'm not sure our customer will be willing to do).
I was already going to write such a thing myself (because it would be extremely fun to do!) but decided to ask first.
I looked also at UDT (http://udt.sourceforge.net) but it does not seem to provide a broadcast operation.
P.S. I'm looking for something as lightweight as a library - no infrastructure changes.
How about UDP multicast? Have a look at the PGM protocol for which there are several commercial and open source implementations.
Disclaimer: I'm the author of OpenPGM, an open source implementation of said protocol.
Though some research has been done on reliable UDP multicasting, I haven't yet used anything like that. You should take into consideration that this might not be as trivial as it first sounds.
If you don't have a list of nodes in the target network, you have no idea when and to whom to resend, even if active nodes receiving your messages can acknowledge them. Sending to a large number of nodes and expecting ACKs from all of them might also cause congestion problems in the network.
I'd suggest rethinking the network architecture of your application, e.g. using some kind of centralized solution where you submit updates to a server and it sends the message to all connected clients. Or, if the original sender node's address is known a priori, just let the clients connect to it and let the sender push updates over these connections.
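A rough sketch of that centralized fan-out idea in C; the structure and names are assumptions, and client registration, authentication and error handling are omitted:

```c
#include <netinet/in.h>
#include <stddef.h>
#include <sys/socket.h>

#define MAX_CLIENTS 64

/* Minimal fan-out: the server keeps the addresses of registered clients
 * and relays every update datagram to each of them over plain UDP. */
struct fanout {
    int                 sock;                  /* bound UDP socket            */
    struct sockaddr_in  clients[MAX_CLIENTS];  /* filled in when clients join */
    int                 n_clients;
};

void fanout_send(struct fanout *f, const void *msg, size_t len)
{
    for (int i = 0; i < f->n_clients; i++) {
        /* Best-effort: a failed send to one client does not block the others. */
        sendto(f->sock, msg, len, 0,
               (const struct sockaddr *)&f->clients[i],
               sizeof f->clients[i]);
    }
}
```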
Have a look around the IETF site for RFCs on Reliable Multicast. There is an entire working group on this. Several protocols have been developed for different purposes. Also have a look around Oracle/Sun for the Java Reliable Multicast Service project (JRMS). It was a research project of Sun, never supported, but it did contain Java bindings for the TRAM and LRMS protocols.
