Arduino Fail-Safe Mechanism

Suppose I am developing a fail-safe mechanism for an Arduino (or any other microcontroller). In other words, a secondary microcontroller or a separate board should take over the responsibility when the primary controller fails.
Two possible mechanisms are as follows.
Method 1 - Client-Server Mechanism
There are 2 identical systems which are powered separately.
The secondary system sends a request periodically and the primary system replies.
If the primary system fails to reply several times in a row, the secondary system takes charge.
Method 2 - Heartbeat Mechanism
There are 2 identical systems which are powered separately.
The primary system sends a periodic heartbeat message.
As long as the heartbeat is present, the secondary node knows that the primary node is up.
When the heartbeat stops, the primary node is assumed to be dead and the secondary node takes control.
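To make Method 2 concrete, here is a rough sketch of the supervision loop I imagine on the secondary node; heartbeat_seen(), now_ms() and take_over() are hypothetical helpers standing in for whatever link (UART, GPIO pulse, etc.) and timekeeping the real boards would use:

    #include <stdbool.h>
    #include <stdint.h>

    #define HEARTBEAT_TIMEOUT_MS 500   /* example: five missed 100 ms beats */

    extern bool heartbeat_seen(void);  /* true if a beat arrived since last call */
    extern uint32_t now_ms(void);      /* monotonic millisecond counter */
    extern void take_over(void);       /* switch the outputs to this node */

    void supervise_primary(void)
    {
        uint32_t last_beat = now_ms();

        for (;;) {
            if (heartbeat_seen())
                last_beat = now_ms();

            if ((uint32_t)(now_ms() - last_beat) > HEARTBEAT_TIMEOUT_MS)
                take_over();           /* primary presumed dead */
        }
    }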
Do you guys know any better mechanism to implement this?

Typically in commercial embedded systems, a watchdog timer is used to reset the processor if the firmware fails to periodically "kick the dog". All AVR microcontrollers (and many if not most other brands as well) have an internal watchdog timer, though a design with an independent, external watchdog timer is typically more robust and reliable.
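For example, a minimal sketch of using the AVR's internal watchdog with avr-libc (the 1-second timeout is just an example value) could look like this:

    #include <avr/wdt.h>

    int main(void)
    {
        wdt_enable(WDTO_1S);      /* reset the MCU if not kicked within ~1 s */

        for (;;) {
            /* ... normal application work ... */
            wdt_reset();          /* "kick the dog" once per loop iteration */
        }
    }

An external watchdog IC is typically kicked the same way, except that the "kick" is toggling a GPIO pin wired to the watchdog's input rather than calling wdt_reset().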
For systems that require an even higher degree of fault tolerance, for instance aerospace applications, triple redundant or triple modular redundant architectures are used.
In a triple redundant system, three identical processing components perform the same task at the same time. The result is then sent to a voting circuit or what John von Neumann called a "majority organ" (Section 4.2.2). The output of the voting circuit is the majority opinion of the three processing components.
This allows for one of the processing components to fail without affecting the operation of the system. However, if the voting circuit fails, then the whole system fails as well. A triple modular redundant system does away with this single point of failure by implementing three voting circuits as well.
Eventually, though, the three outputs will need to be combined into one result, which again leads to a single point of failure, even if that point of failure is the human looking at three gauges that each monitor the same temperature.
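In software, the voting step itself is simple; a bitwise 2-of-3 majority function, a rough analogue of the "majority organ", might look like this:

    #include <stdint.h>

    /* Each output bit follows whichever value at least two of the three
     * redundant channels agree on, so a single faulty channel is outvoted. */
    uint16_t vote_majority(uint16_t a, uint16_t b, uint16_t c)
    {
        return (a & b) | (a & c) | (b & c);
    }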
What you need to determine is just how fault-tolerant you need your system to be and what kind of mean time between failures (MTBF) your system can handle. Then design your redundancy system around that.

Related

Why are Distributed Systems considered complex?

I'm just getting into the concept of a distributed system and its advantages and disadvantages. The book I'm reading discusses the complexity of distributed systems, says that they are inherently complex, and lists the following as potential reasons for that complexity:
Heterogeneity
Asynchronous communication
Partial failures
What I am struggling to understand is what these concepts actually encompass (i.e. what is a partial failure, and what are the causes of a partial failure?) and how they are dealt with in modern systems. Does middleware successfully solve all three of these complexity issues within a system?
This question can be answered in many words, but I'll try to boil it down to essentials:
Heterogeneity is one of the main problems integration tries to solve. It is an inherent characteristic of most distributed systems and refers to the fact that, more often than not, when you have to integrate multiple systems, they will:
Be on different platforms, in different networks;
Differ in their capabilities in terms of integration;
Have disparities in data, even data referring to the same business domain;
Use and support different (sometimes even forgotten or unsupported) technologies and standards;
Have different owners (are controlled by different departments, companies).
All of the above add more and more complexity.
Asynchronous communication solves some problems of stateless communication but introduces a whole other set of complexities that can easily lead to problems when the implementation is not done properly. This is mainly because you only have a guarantee that the message will be successfully received on the other end, but no guarantee whatsoever about when the operation will be processed, if ever. So it is much harder to orchestrate interdependent asynchronous tasks than synchronous ones.
Partial failures - When you have processes that involve multiple interdependent write operations, you need to ensure ACID transactions. Having to do so when multiple systems are involved is even harder, because you cannot achieve a common transactional context as easily in a heterogeneous distributed environment as you can within the boundaries of a single system. Often you will need to implement opposite (compensating) operations in services (or worse, implement two-phase commit), just to be able to undo all prior writes in the process if something goes wrong with one of the tasks.
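As a loose illustration of the compensating-operations idea (a "saga"-style process, with invented step names and no real services behind it), the control flow is essentially:

    #include <stdbool.h>

    typedef struct {
        const char *name;
        bool (*do_step)(void);     /* forward write against one service */
        void (*undo_step)(void);   /* compensating ("opposite") operation */
    } saga_step_t;

    /* Run the steps in order; if one fails, compensate everything that
     * already succeeded, in reverse order, and report failure. */
    bool run_saga(saga_step_t *steps, int count)
    {
        for (int i = 0; i < count; i++) {
            if (!steps[i].do_step()) {
                for (int j = i - 1; j >= 0; j--)
                    steps[j].undo_step();
                return false;
            }
        }
        return true;
    }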
Hope this clears things a bit!
The reason distributed systems are so complex is simple: time!
Perfect synchronization of state becomes impossible in distributed systems for the simple fact that some amount of time must pass between the point a message leaves one server and the point it arrives at its intended destination. On top of this, networks are a far less reliable communication medium, meaning a message may never arrive at all.
The lack of perfect time synchronization means that it's impossible to make absolute assumptions about the order of events. For instance, in a highly available distributed database, if two requests writing to the same resource arrive at two different servers nearly simultaneously, there's no way to determine the absolute order of those events. So distributed systems must use approximations of logical time, plus conflict resolution, to resolve these event-ordering issues.
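One common approximation of logical time is a Lamport clock; a minimal sketch (not tied to any particular database) looks like this:

    #include <stdint.h>

    typedef struct { uint64_t time; } lamport_clock_t;

    /* Tick for a local event or before sending a message. */
    uint64_t lamport_local_event(lamport_clock_t *c)
    {
        return ++c->time;
    }

    /* On receive, adopt the larger of the two timestamps, then tick.
     * This yields a consistent (if partial) ordering of events across servers. */
    uint64_t lamport_on_receive(lamport_clock_t *c, uint64_t msg_time)
    {
        if (msg_time > c->time)
            c->time = msg_time;
        return ++c->time;
    }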
Partial failure - In the case of transactions involving many clients (two or more), the scheduling technique being used may involve conflicting operations (for example, two writes to the same item). In the process of issuing locks, complexities such as deadlock arise. When the lock manager tries to detect, avoid, or prevent a deadlock, the system may partially fail, leading to a rollback of the whole process.

Watchdog monitoring for multiple microcontrollers

I am using 3 microcontrollers on a board:
main micro, gateway micro and safety micro;
the names suggest the associated applications.
Internal watchdogs exist for all three, but I need external supervision so that buggy timer code cannot nullify the effect of the internal watchdogs. I also want to keep the BOM cost low, so I can use just 1 external watchdog.
I propose the following strategy:
Main microcontroller: we plan to use the internal watchdog as well as an external watchdog.
Safety microcontroller: we plan to use the internal watchdog as well as monitoring over SPI by the main microcontroller.
Gateway microcontroller: we plan to use the internal watchdog as well as monitoring over SPI by the main microcontroller.
One issue with this is EMI or noise on the line causing SPI corruption and hence a false RESET from the main micro.
Has anybody faced a similar challenge? Any suggestions?
Many thanks for your help!
Not knowing the specifics of your application, it is not possible to give you a definitive answer. The way you would normally solve this sort of problem is to do a failure mode and effects analysis (FMEA). Essentially, you list out all the parts of your system and then brainstorm all the possible failure modes you think could happen; EMC would be one of them. You then estimate a probability that each failure mode will occur and assign a severity to it in the event that it does occur. Multiplying these out allows you to identify the areas that carry greater impact and need extra protection. When all the failure modes have a probability × severity value below a threshold set by your application, you will have a 'valid' solution.
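As a toy illustration of the ranking step (all scores, failure modes and the threshold below are made-up example values):

    #include <stdio.h>

    typedef struct {
        const char *mode;
        int probability;   /* e.g. 1 (rare) .. 10 (frequent) */
        int severity;      /* e.g. 1 (benign) .. 10 (catastrophic) */
    } failure_mode_t;

    int main(void)
    {
        const failure_mode_t modes[] = {
            { "SPI corruption from EMI",      4, 8 },
            { "Internal watchdog timer bug",  2, 9 },
            { "External watchdog IC failure", 1, 7 },
        };
        const int threshold = 30;   /* set by the application's safety needs */

        for (unsigned i = 0; i < sizeof modes / sizeof modes[0]; i++) {
            int risk = modes[i].probability * modes[i].severity;
            printf("%-30s risk=%2d %s\n", modes[i].mode, risk,
                   risk > threshold ? "-> needs extra protection" : "");
        }
        return 0;
    }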
Not doing a thorough analysis like this means you may very well put all your effort into defending the front door while leaving the back door unlocked.

MPI_Isend/Irecv: Is it possible to access unused memory locations of the send buffer in the meantime?

I would like to speed up my MPI program by using asynchronous communication, but the time used remains the same. The workflow is as follows.
before:
1. MPI_Send / MPI_Recv halo (ca. 10 seconds)
2. process the whole array (ca. 12 seconds)
after:
1. MPI_Isend / MPI_Irecv halo (ca. 0.1 seconds)
2. process the array (without halo) (ca. 10 seconds)
3. MPI_Wait (ca. 10 seconds) (should be ca. 0 seconds)
4. process the halo only (ca. 2 seconds)
Measurements showed that the communication and the processing of the array core take nearly the same time for common workloads, so asynchronous communication should nearly hide the communication time.
But it doesn't.
One fact - and I think this could be the problem - is that the send buffer is also the array the calculations are made on. Is it possible that MPI serializes the memory access, even though the communication ONLY accesses the halo (via a derived datatype) and the computation ONLY accesses the core of the array (read-only)?
Does anybody know if this is for sure the reason?
Is it maybe implementation-dependent (I'm using OpenMPI)?
Thanks in advance.
It isn't the case that MPI serializes the memory accesses in the user code (that's beyond the library's power to do, in general), and it is true that what exactly does happen is implementation specific.
But as a practical matter, MPI libraries don't do as much communication "in the background" as you might hope, and this is particularly true when using transports and networks like tcp + ethernet, where there's no meaningful way to hand off communication to another set of hardware.
You can only be sure that the MPI library is actually doing something when you're running MPI library code, e.g. in an MPI function call. Often, a call to any of a number of MPI functions will nudge an implementation's "progress engine", which keeps track of in-flight messages and ushers them along. So, for instance, one thing you can quickly do is make calls to MPI_Test() on the requests within the compute loop to make sure things start happening well before the MPI_Wait(). There is of course overhead to this, but it's something that's easy to try and measure.
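A sketch of that idea (the chunking of the compute loop is schematic, not taken from the question's code):

    #include <mpi.h>

    /* Interleave MPI_Testall with chunks of the interior computation so the
     * halo exchange can make progress before the final MPI_Waitall. */
    void compute_with_progress(MPI_Request *reqs, int nreqs,
                               void (*compute_chunk)(int), int nchunks)
    {
        int done = 0;

        for (int chunk = 0; chunk < nchunks; chunk++) {
            compute_chunk(chunk);                 /* work on interior cells */

            if (!done)                            /* nudge the progress engine */
                MPI_Testall(nreqs, reqs, &done, MPI_STATUSES_IGNORE);
        }

        MPI_Waitall(nreqs, reqs, MPI_STATUSES_IGNORE);  /* halo now complete */
    }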
Of course you could imagine the MPI library would use some other mechanism to run things behind the scenes. Both MPICH2 and OpenMPI have played with separate "progress threads" which execute separately from the user code and do this ushering along in the background; but getting that to work well, and without tying up a processor while you're trying to run your computation, is a genuinely difficult problem. OpenMPI's progress threads implementation has long been experimental, and in fact is temporarily out of the current (1.6.x) release, although work continues. I'm not sure about MPICH2's support.
If you are using InfiniBand, where the network hardware has a lot of intelligence in it, then prospects brighten a bit. If you are willing to leave memory pinned (for OpenFabrics), and/or you can use a vendor-specific module (mxm for Mellanox, psm for QLogic), then things can progress somewhat more rapidly. If you're using shared memory, then the knem kernel module can also help with intranode transport.
One other implementation-specific approach you can take, if memory isn't a big issue, is to try to use eager protocols for sending the data directly, or to send more data per chunk so fewer nudges of the progress engine are needed. What "eager protocol" means here is that data is automatically sent at send time, rather than just initiating a set of handshakes which will eventually lead to the message being sent. The bad news is that this generally requires extra buffer memory for the library, but if that's not a problem and you know the number of incoming messages is bounded (e.g. by the number of halo neighbours you have), this can help a great deal. How to do this for (e.g.) the shared-memory transport in OpenMPI is described on the OpenMPI page on tuning for shared memory, but similar parameters exist for other transports and often for other implementations. One nice tool that Intel MPI has is "mpitune", which automatically runs through a number of such parameters for best performance.
The MPI specification states:
A nonblocking send call indicates that the system may start copying data out of the send buffer. The sender should not modify any part of the send buffer after a nonblocking send operation is called, until the send completes.
So yes, you should copy your data to a dedicated send buffer first.
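A sketch of that approach (sizes, tag and neighbour rank are placeholders): the halo is packed into a separate buffer before MPI_Isend, so the compute kernel can keep using the main array while the send is in flight.

    #include <mpi.h>
    #include <string.h>

    /* Copy the halo region out of the working array into a dedicated send
     * buffer, then start the nonblocking send from that buffer. */
    void send_halo(const double *array, int halo_offset, int halo_count,
                   double *sendbuf, int neighbour, MPI_Request *req)
    {
        memcpy(sendbuf, array + halo_offset, halo_count * sizeof(double));

        MPI_Isend(sendbuf, halo_count, MPI_DOUBLE, neighbour,
                  0 /* tag */, MPI_COMM_WORLD, req);
    }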

Lots of ports with little data, or one port with lots of data?

I've been looking into using a system called ROS (http://www.ros.org) for some work.
There are lots of different types of data that get sent between network nodes in ROS.
You define a struct of data that you want to send in a message, and ROS will handle opening a specific port between the two nodes that will only send that struct of data.
So if there are 5 different messages, there will be 5 different ports.
As opposed to this scenario, I have seen other platforms that just push all the different messages across one port. This means that there needs to be some multiplexing/demultiplexing (done by some sort of message parsing on the receiver's end).
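To make that second scenario concrete, I imagine each message carrying a small header that the receiver switches on (the field names and type codes are just for illustration):

    #include <stdint.h>

    typedef struct {
        uint16_t type;     /* which message struct follows */
        uint16_t length;   /* payload length in bytes */
    } msg_header_t;

    void dispatch(const msg_header_t *hdr, const uint8_t *payload)
    {
        (void)payload;     /* handlers below are illustrative */

        switch (hdr->type) {
        case 1: /* handle_sensor_reading(payload, hdr->length); */   break;
        case 2: /* handle_actuator_command(payload, hdr->length); */ break;
        default: /* unknown type: drop or log */                     break;
        }
    }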
What I wonder is... which is better from a performance perspective?
Do operating systems switch based on ports quickly, so that a system like ROS doesn't have to do much work to figure out what is in the message and interpret it?
OR
Is opening lots of ports going to mean lots of slower kernel calls, and does the cost of having to work out and translate message types end up being more than the time spent switching between ports?
When this scales to large amounts of data at high rates with lots of different message types, there will be lots of ports. So I imagine that when scaling each of these topologies, performance will be a big factor in selecting the way to work.
I should also point out that these nodes usually exist on one small network, or most of the time on a single machine where networking is used as a form of inter-process communication. So the transmission time is only a very small factor in the overall system timing.
ROS, being an architecture for robots, may have one node for every sensor and actuator, so depending on the complexity of your system we may be talking about 20-30 nodes pushing small-ish data (100 bytes or so) at 10-100 Hz.
It depends. I do not know the specifics of ROS but in networking it comes down to the following constraints:
Distance: speed of light is fast but over a distance it starts making a difference
Protocol Overhead: connection oriented vs. connection-less
On the OS side, maintaining a list of free ports isn't much of an overhead - of course there is a cost to it, but everything is relative: if you are talking about a distributed system with long-distance links, then it is easy to argue that cycling through OS network ports ranks as a lower concern compared to managing communication quality.
Without a more specific question, I'll stop here.
I don't have any data on this, but it seems plausible that multiple ports might be handled more efficiently by multi-core systems, as opposed to demultiplexing within the program.

Microcontroller + Verilog/VHDL simulator?

Over the years I've worked on a number of microcontroller-based projects; mostly with Microchip's PICs. I've used various microcontroller simulators, and while they can be very helpful at times, I often find myself frustrated. In real life microcontrollers never exist alone and the firmware's behavior is dependent on the environment. However, none of the sims I've used provide decent support for anything outside the microcontroller.
My first thought was to model the entire board in Verilog. But, I'd rather not create an entire CPU model, and I haven't had much luck finding existing models for the chips I use. Regardless, I really don't need, or want, to simulate the proc at that level of detail, and I'd like to retain the debugging facilities provided by a regular processor sim.
It seems to me that the ideal solution would be a hybrid simulator that interfaces a traditional processor simulator with a Verilog model.
Does such a thing exist?
I've used the Altera Nios II processor embedded on an FPGA. Altera provides a toolchain for simulating the CPU (with its software) together with your custom logic in a simulator. I suppose that a similar setup can be achieved by downloading a VHDL/Verilog core of your CPU (did you try OpenCores? They have lots of stuff there).
But keep in mind that it is going to be mind-bogglingly slow, so don't expect to simulate whole complex processes this way. The best you can hope for is simulating fine software-hardware interaction points to debug problems. If you need a deeper simulation, consider running it on an FPGA with built-in monitoring code.
For the "simulate the whole board" approach,
The Free Model Foundry has a large number of models, some in VHDL others in Verilog, that are available now.. but you'll need to pay to have new models created. These are very helpful in being sure the board is built correctly.
But I think the more common approach when debugging your PIC is to just build a board, then work on the firmware. In the chip world (where the firmware is running on a microprocessor in a chip that hasn't gone to fab yet), people often resort to very expensive systems (or renting time on them) that allow compiling part of the design into an emulator while the rest of the design runs in the normal simulator environment. Without the barrier of an expensive mask set for the chip, the cost is just not justifiable for a circuit board. I've heard of some creative applications of Simulink (MathWorks) with FPGAs, but my recollection is that one either ran the system on the computer, or programmed the device and ran the same thing in real time.
I believe both Cadence (ask about Palladium) and Mentor Graphics have that integrated solution if you have the money to spend on it.
What I have done recently is create an interface between the simulation environment and the host system. Different HDL simulators have different interfaces, and getting the simulator NOT to think in batch mode (the traditional simulation model) but instead run forever like a real design is half of the problem.
Then, from the host, using C (or whatever), you can create abstractions that may or may not allow you to write your application software for whatever target (depending on what language and compiler capabilities you have). For example, you can make generic poke and peek functions; on the final target those actually poke and peek memory or I/O, but for simulation the abstraction talks to a testbench that simulates the same memory cycle.
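A rough sketch of what that abstraction can look like (sim_peek/sim_poke stand in for whatever shim talks to the testbench; the names are invented):

    #include <stdint.h>

    typedef struct {
        uint32_t (*peek)(uint32_t addr);             /* read a register/word */
        void     (*poke)(uint32_t addr, uint32_t v); /* write a register/word */
    } bus_ops_t;

    /* Target build: touch real memory-mapped registers. */
    static uint32_t hw_peek(uint32_t addr)             { return *(volatile uint32_t *)(uintptr_t)addr; }
    static void     hw_poke(uint32_t addr, uint32_t v) { *(volatile uint32_t *)(uintptr_t)addr = v; }

    /* Simulation build: forward the same memory cycle to the testbench. */
    extern uint32_t sim_peek(uint32_t addr);
    extern void     sim_poke(uint32_t addr, uint32_t v);

    const bus_ops_t hw_bus  = { hw_peek, hw_poke };
    const bus_ops_t sim_bus = { sim_peek, sim_poke };

    /* Application code only ever sees bus_ops_t, so it is identical on the
     * real target and against the simulated logic. */
    void toggle_led(const bus_ops_t *bus, uint32_t gpio_addr)
    {
        bus->poke(gpio_addr, bus->peek(gpio_addr) ^ 1u);   /* flip bit 0 */
    }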
I went one step further and used (Berkeley) sockets between the host and the testbench, so that the simulation can keep running while the host applications stop and start. Not unlike having a real processor with an OS on which you start applications, run them to completion, and start another. At least for test applications; for delivery you probably only have one app.
By creating these abstraction layers I can write real applications that will be used on the target when it is built. Along the way you can use software simulation of the logic initially, then, if you like, build an FPGA with an abstraction interface (throw-away logic), say a UART for example. Replace the shim between the application's abstraction layer and the simulator with a UART interface, or whatever. Then, when you marry the processor and logic in the same chip or on the same board, replace the abstraction layer again with direct calls to whatever interfaces they always thought they were talking to. If something breaks and you have retained the abstraction layer, you can take the application back to the simulation model and have access to all of your logic internals.
Specifically, this time around I am using an HDL called Cyclicity CDL, which is on SourceForge. The documentation needs some help, but the examples may get you going, and it produces synthesizable Verilog, so you get an extra win there. I threw out all the scripting batch stuff other than the bare minimum needed to connect and start a C simulation model. So my testbench is in C (well, C++ technically); the sockets layer was done there. The output can be .vcd files, which gtkwave uses. Basically you can do the bulk of your HDL design using open source software with no licenses, etc. By adding one or two lines of code to the CDL simulation part I was able to have it run as an infinite loop, which I can say works quite well; there don't appear to be any memory leaks, etc.
Both ModelSim and Cadence have standardized ways of connecting host C programs to the simulation world, and from there you can use IPC to get host applications talking to an abstraction-layer API.
This is probably way overkill for a PIC; I gave up PICs a while ago for the faster and more C-friendly ARM-based micros anyway. There is (or was) an open-core PIC that you could simply incorporate into your simulation, even though that is not what you are trying to do here.
Not that I've seen. Your best bet is to properly define the interfaces and behavior between the uC and the FPGA, and then define a series of test waveforms that can be applied using an automated tester. You would have to build the automated tester out of an FPGA or uC (apply a waveform, watch interrupts, breakpoints, etc.), or perhaps a logic analyzer has some such functionality. If you really want, OpenCores.org has PIC- and AVR-like 8-bit uC cores defined in VHDL, so you could implement your entire project on the FPGA and then just debug that.
Generally there is no need to model the CPU at the RTL level, since you don't really care what it does bit by bit; you generally care about what it does in terms of register values, memories and bus accesses.
The simplest is called a Bus Functional Model. This just generates the reads and writes that the CPU does, often based on a text file. These are available for some CPUs and many popular buses (e.g. PCI, PCIe). These simulate super fast.
The next step up is a functional cycle-accurate model. Those simulate fast. They are often encrypted.
Last is a full RTL model. Those usually are only available if you are working closely with the CPU vendor, e.g. using their core in your ASIC. Typically these are encrypted, unless you are a huge company.
Memory models are typically cycle-accurate (e.g. Micron).
My workmates from the hardware department use FPGA simulation software quite often to find timing bugs and track down strange behaviours.
Simulating one or two milliseconds can take several hours, so using the simulator for anything but very small things is not feasible.
You may want to have a look at SystemC though. http://en.wikipedia.org/wiki/SystemC

Resources