AUTOSAR watchdog utility

In a safety-related AUTOSAR project, suppose I configure watchdog deadline monitoring or program flow monitoring.
If the watchdog detects a deadline or flow violation, the ECU resets. When the violation is persistent, the ECU reset is persistent as well.
What value does the watchdog add in this case? Instead of the ECU freezing, the ECU now resets indefinitely?

The software should be tested thoroughly during development to make sure there are no resets, deadlocks, or periods of excessively high CPU load.
If a problem does occur in production, the watchdog is your last line of defense: it performs a reset to get out of the bad state and start everything over.
In most safety projects I am aware of, the init state is considered the safe state, because you initialize all I/Os to default values (e.g. engine -> stop).
If such a situation worries you, you can set a flag once WdgM detects a violation, store it in a non-initialized RAM section, and then check it at startup, as sketched below.
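For illustration, here is a minimal sketch of that startup check, assuming a GCC-style toolchain and a linker section named ".noinit" that the startup code neither zeroes nor initializes. The names WDG_RESET_MAGIC, WdgViolation_MarkPending and WdgViolation_WasPending are invented for this example; they are not AUTOSAR APIs.

#include <stdint.h>

#define WDG_RESET_MAGIC 0xDEADBEEFu

/* Placed in RAM that survives a watchdog reset (not cleared at startup). */
static volatile uint32_t wdg_reset_flag __attribute__((section(".noinit")));

/* Call from the WdgM violation hook, just before the reset is triggered. */
void WdgViolation_MarkPending(void)
{
    wdg_reset_flag = WDG_RESET_MAGIC;
}

/* Call early at startup to decide between normal init and a limp/safe mode. */
int WdgViolation_WasPending(void)
{
    if (wdg_reset_flag == WDG_RESET_MAGIC) {
        wdg_reset_flag = 0u;   /* consume the flag */
        return 1;              /* previous run ended in a watchdog violation */
    }
    return 0;
}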

Related

How is a JTAG debugger able to stop the watchdog timer?

I am working on a project where I connect a JTAG debugger to an SoC and debug the image running on it. The image also runs a watchdog timer that starts during core initialization and must be serviced periodically to prevent the board from being reset.
For my own understanding, I was wondering how JTAG attaches to the image and lets us set breakpoints during initialization without worrying about the watchdog timer. I have seen the image sit for a long time under JTAG without the board being reset by the watchdog.
I have asked several people on my team, but none of the explanations were satisfactory. Can somebody please explain what exactly is going on, in terms of both JTAG and the watchdog timer?
The answer depends on the type of SoC you are debugging, since the watchdog is normally an independent function inside or outside the SoC, with no direct relation to the CPU's JTAG interface.
Still, some controllers do implement features to stop the watchdog timer while the CPU is halted at a breakpoint. For example, STM32F1 controllers offer the DBGMCU_CR register, where you can (even through the debugger) configure whether the watchdog timers keep running while the core is halted, as in the sketch below.
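A minimal sketch for an STM32F1, assuming the CMSIS device header and its register/bit names; check the reference manual of your own part for the equivalent debug-freeze feature.

#include "stm32f10x.h"

/* Freeze both watchdogs whenever the core is halted by the debugger.
 * The counters resume automatically when the core runs again. */
void Debug_FreezeWatchdogsWhileHalted(void)
{
    DBGMCU->CR |= DBGMCU_CR_DBG_IWDG_STOP | DBGMCU_CR_DBG_WWDG_STOP;
}

Many debug probes set these bits for you as part of their own configuration, which is why an image can sit at a breakpoint indefinitely without a watchdog reset.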

Watchdog monitoring for multiple microcontrollers - Embedded Systems

I am using three microcontrollers on a board:
a main micro, a gateway micro, and a safety micro;
the names suggest the associated applications.
Internal watchdogs exist for all three, but I need external supervision so that buggy timer code cannot nullify the effect of an internal watchdog. To keep the BOM cost low, I can use only one external watchdog.
I propose the following strategy:
Main microcontroller: internal watchdog as well as the external watchdog.
Safety microcontroller: internal watchdog as well as monitoring over SPI by the main microcontroller.
Gateway microcontroller: internal watchdog as well as monitoring over SPI by the main microcontroller.
One issue with this: EMI or noise on the line could corrupt the SPI traffic and hence cause a false reset from the main micro.
Has anybody faced a similar challenge? Any suggestions?
Many thanks for your help!
Not knowing the specifics of your application, it is not possible to give you a definitive answer. The way you would normally solve this sort of problem is to do a failure mode and effects analysis (FMEA). Essentially you list out all the parts of your system and then brainstorm all the possible failure modes you think could happen; EMC would be one of them. You then estimate a probability that each failure mode will occur and assign a severity to it in the event that it does occur. Multiplying these out (see the toy example below) allows you to identify the areas that carry greater impact and need extra protection. When all the failure modes have a probability x severity value below a threshold set by your application, you will have a 'valid' solution.
Not doing a thorough analysis like this means you may very well put all your effort into defending the front door while leaving the back door unlocked.
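As a toy illustration of the scoring step described above; the failure modes, scales, and threshold here are invented for the example:

#include <stdio.h>

struct failure_mode {
    const char *name;
    int probability;  /* 1 (rare) .. 10 (frequent)   */
    int severity;     /* 1 (minor) .. 10 (hazardous) */
};

int main(void)
{
    const struct failure_mode modes[] = {
        { "SPI corruption from EMI",           6, 7 },
        { "Internal watchdog timer bug",       2, 9 },
        { "External watchdog supply brownout", 1, 8 },
    };
    const int threshold = 30;  /* acceptable risk level, project-specific */

    for (size_t i = 0; i < sizeof modes / sizeof modes[0]; i++) {
        int risk = modes[i].probability * modes[i].severity;
        printf("%-35s risk=%3d %s\n", modes[i].name, risk,
               risk > threshold ? "-> needs extra protection" : "");
    }
    return 0;
}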

Forcing a BizTalk Host to Throttle for Debugging Purposes

We're currently having an issue on our production servers and would like to try to replicate it in dev. I'm currently awaiting access to our performance monitoring tool, and while waiting would like to experiment a little.
Since I suspect host throttling in prod, I'm thinking of forcing hosts to throttle in dev to see if that recreates the issue.
Is there a way to do this?
As others have mentioned, monitoring the throttling counters and other counters like memory and WIP messages is a must to see what is going on in your production server. I would also recommend setting up a SCOM alert on throttling states of 3+ (publishing and delivery states), if you have SCOM.
Message throughput can grind to a halt, especially in the memory (4, 5) and queue size (6) states. States 1 and 2 are generally short-lived (e.g. the arrival of a large batch of messages), and BizTalk recovers within a few seconds.
Simulating the memory state in your dev environment should be straightforward by tweaking the throttling thresholds (obviously not something to be taken lightly in production!).
E.g. to trigger the memory threshold states: AFAIK the lowest memory usage threshold you can set is 101MB. Running a load test in dev should then be able to reproduce the throttle.
There is also apparently a user-based throttling override to set states 10 and 11, although I haven't actually tried this.
Some other experience with avoiding throttling:
(Caveat - I don't have an active BizTalk 2006/R2 setup - this is for 2009/2010.)
If you do a lot of asynchronous processing (e.g. queue receives), ensure that you have split functionality into separate Receive, Processing, and Send hosts. This way you can configure the asynchronous Receive hosts to throttle much earlier than the Processing and Send hosts, which constricts new incoming messages to the MessageBox while allowing existing messages to complete processing.
On 64-bit hosts, the default 25% host memory usage throttling level is usually an unnecessary liability; we increased this, following Yossi Dahan's recommendation, to 50% on a 4GB server.
Note that suspended messages count toward throttling state 6, so ensure that you have a strategy for dealing with suspended messages (and obviously ensure that the SQL Agent jobs are running!).

What exactly are "spin-locks"?

I always wondered what they are: every time I hear about them, images of futuristic flywheel-like devices go dancing (rolling?) through my mind...
What are they?
When you use regular locks (mutexes, critical sections, etc.), the operating system puts your thread in the WAIT state and preempts it by scheduling other threads on the same core. This carries a performance penalty if the wait time is really short, because your thread now has to wait to be scheduled again before it receives CPU time.
Besides, kernel objects are not available in every state of the kernel, such as in an interrupt handler or when paging is not available.
Spinlocks don't cause preemption; they wait in a loop ("spin") till the other core releases the lock. This prevents the thread from losing its quantum and lets it continue as soon as the lock is released. The simple mechanism of spinlocks allows a kernel to use them in almost any state.
That's why on a single-core machine a spinlock is simply "disable interrupts" or "raise IRQL", which prevents thread scheduling completely.
Spinlocks ultimately allow kernels to avoid a "Big Kernel Lock" (a lock acquired when a core enters the kernel and released at the exit) and to have granular locking over kernel primitives, giving better multi-processing on multi-core machines and thus better performance.
EDIT: A question came up: "Does that mean I should use spinlocks wherever possible?" and I'll try to answer it:
As I mentioned, spinlocks are only useful in places where the anticipated waiting time is shorter than a quantum (read: milliseconds) and preemption doesn't make much sense (e.g. kernel objects aren't available).
If the waiting time is unknown, or if you're in user mode, spinlocks aren't efficient. You consume 100% CPU time on the waiting core while checking whether the spinlock is available, and you prevent other threads from running on that core till your quantum expires. This scenario is only feasible for short bursts at kernel level, and is unlikely to be an option for a user-mode application.
Here is a question on SO addressing that: Spinlocks, How Useful Are They?
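To make the "wait in a loop" idea concrete, here is a minimal test-and-set spinlock sketch using C11 atomics. It is illustrative only (no backoff, no fairness), not a production lock:

#include <stdatomic.h>

typedef struct { atomic_flag locked; } spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(spinlock_t *l)
{
    /* Atomically set the flag; if it was already set, somebody else
     * holds the lock, so burn cycles and retry. */
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire)) {
        /* busy wait ("spin") */
    }
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}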
Say a resource is protected by a lock. A thread that wants access to the resource needs to acquire the lock first. If the lock is not available, the thread might repeatedly check whether the lock has been freed. During this time the thread busy-waits, checking for the lock, using CPU but not doing any useful work. Such a lock is termed a spin lock.
It is pretty much a loop that keeps going till a certain condition is met:
while (cantGoOn) {};
while (something != TRUE) {};
// it happened
move_on();
It's a type of lock that does busy waiting.
It's considered an anti-pattern, except for very low-level driver programming (where it can happen that calling a "proper" waiting function has more overhead than simply busy-waiting for a few cycles).
See for example Spinlocks in Linux kernel.
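For a flavor of the kernel-side usage, here is a sketch following the Linux kernel spinlock API; the my_lock and shared_counter names and the increment function are invented for the example:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);
static int shared_counter;

/* irqsave disables local interrupts while the lock is held, so this is
 * safe even if the lock is also taken from an interrupt handler. */
void my_counter_increment(void)
{
    unsigned long flags;

    spin_lock_irqsave(&my_lock, flags);
    shared_counter++;              /* short critical section, no sleeping */
    spin_unlock_irqrestore(&my_lock, flags);
}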
Spinlocks are locks on which a thread waits in a loop till the lock becomes available. They are normally used to avoid the overhead of blocking on a kernel object when the lock is expected to become free within a short time period.
Ex:
while (spinCount-- && kernelObjectIsNotFree)
{ }
// spins exhausted: now try acquiring the kernel object (blocking)
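Here is a sketch of that spin-then-block pattern using POSIX threads; SPIN_LIMIT and lock_with_bounded_spin are invented names, and the limit is an arbitrary tuning value:

#include <pthread.h>

#define SPIN_LIMIT 1000

void lock_with_bounded_spin(pthread_mutex_t *m)
{
    /* Poll the lock a bounded number of times before paying the cost
     * of blocking in the kernel. */
    for (int i = 0; i < SPIN_LIMIT; i++) {
        if (pthread_mutex_trylock(m) == 0)
            return;                 /* got the lock while spinning */
    }
    pthread_mutex_lock(m);          /* give up spinning and block */
}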
You would want to use a spinlock when you think it is cheaper to enter a busy-waiting loop and poll a resource instead of blocking when the resource is locked.
Spinning can be beneficial when locks are fine grained and large in number (for example, a lock per node in a linked list) as well as when lock hold times are always extremely short. In general, while holding a spin lock, one should avoid blocking, calling anything that itself may block, holding more than one spin lock at once, making dynamically dispatched calls (interface and virtuals), making statically dispatched calls into any code one doesn't own, or allocating memory.
It's also important to note that SpinLock is a value type, for performance reasons. As such, one must be very careful not to accidentally copy a SpinLock instance, as the two instances (the original and the copy) would then be completely independent of one another, which would likely lead to erroneous behavior of the application. If a SpinLock instance must be passed around, it should be passed by reference rather than by value.
It's a loop that spins around until a condition is met.
In a nutshell, a spinlock employs atomic compare-and-swap (CAS) or test-and-set instructions to implement a lock without kernel assistance. Such structures can scale well on multi-core machines.
Well, yes - the point of spin locks (vs. traditional critical sections, etc.) is that they offer better performance under some circumstances (multi-core systems, short waits), because they don't immediately yield the rest of the thread's quantum.
A spinlock is a type of lock that neither blocks nor sleeps. Any thread that wants to acquire a spinlock for a shared or critical resource will continuously spin, wasting CPU cycles, till it acquires the lock. Once the spinlock is acquired, the thread tries to complete its work within its quantum and then releases the resource. A spinlock is the highest-priority type of lock; simply put, it is a non-preemptible kind of lock.

ASP.NET Session State Server vs. InProc Session

What is the performance penalty for running Session State Server instead of InProc? Is it significant? I understand that you can restart w3wp with the state server and retain all session state - is that the only advantage over InProc?
It depends on your deployment plans. On a single server the penalty is small, but the benefit is equally limited: your session state survives process recycles (as mentioned), but that's about it. You'll have some cross-process marshalling with StateServer mode, so expect some additional CPU load, though nothing dramatic.
In a web farm/load-balanced setup, InProc won't work unless you can configure sticky sessions/server affinity. Be mindful of the fact that the StateServer node itself can become a single point of failure, so be sure to compensate for that. Having said that, the latency of StateServer mode is in general much lower (= better) than SQLServer mode.
Make sure that your code/site gracefully handles lost state, regardless of where you store the data.
If you have a load-balanced setup (without sticky sessions) you cannot use InProc, since (depending on your load-balancer configuration, of course) requests may switch between nodes.
Recycling the worker process (which is of course the same as restarting w3wp) will also kill your sessions when they are InProc.
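For reference, the mode switch lives in web.config. A minimal sketch, assuming the ASP.NET state service is running on the local machine's default port 42424:

<system.web>
  <!-- InProc (default): fastest, but state dies with the worker process -->
  <!-- <sessionState mode="InProc" timeout="20" /> -->

  <!-- StateServer: state survives w3wp recycles, at the cost of
       cross-process serialization on every request -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=127.0.0.1:42424"
                timeout="20" />
</system.web>

Note that, unlike InProc, everything you put in session must be serializable for StateServer mode.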
