I have some code in an Arduino library that is time-sensitive, and I want to protect it between noInterrupts() and interrupts(). The documentation states:
Some functions will not work while interrupts are disabled, and incoming communication may be ignored.
Is there a list of which (standard) functions won't work? In particular, I need to save off the time with a call to millis(). Is the number behind millis() still being updated, or should I move the call out of the noInterrupts() / interrupts() block?
It would appear from this answer that millis() in particular is affected by disabling interrupts, as that call relies on an interrupt attached to a timer that fires at about 1 kHz. I've pored over the official documentation, though, and can find no exhaustive list of what can be affected; the official documentation is notably lacking here.
Looking further, the timer that controls millis() (Timer/Counter 0 in the ATmega documentation) still counts in the background whether interrupts are enabled or not. The catch is that if your critical section spans the moment the overflow interrupt would have fired, you can miss a tick. See the reference below.
Bottom line: if you need interrupts, keep your noInterrupts() sections brief, and keep the code attached to your interrupts briefer still. ;) Whether you're coding in sketches or bare-metal, it's always important to keep interrupt handlers fast in and out.
This external reference is also interesting; it shows the math and code behind millis().
Related
We need to be able to change a controller's code from the outside, as is done with industrial MCUs.
That is, you have an MCU with a program on it, and someone can send it some "words" that determine how it works.
So, for example, you can program an MCU -- not with a programmer, but with some inputs over serial -- to do simple things such as:
if input A == 1
    b = 1
I wonder if there is a smart way to do that with simple software on the MCU: it would have many #defines for various commands, and it would perform them according to the values it gets from the outside (values that are saved for the rest of the program).
I also wonder whether industrial programmers use that method, or whether every user "programming" actually loads code (a .hex file) onto the chip (with an internal programmer).
I'd prefer the simplest way (I wonder if it's done with predefined software).
It sounds like the simplest version of your question is "How do I change the behavior of the MCU without an actual MCU programmer?" A couple of options come to mind.
1) Depending on the MCU, you can have a bootloader -- essentially a small piece of code programmed into the MCU by a programmer -- that has the ability to reprogram other parts of the MCU. This doesn't require a programmer, but it does require some other way of getting the new code to the bootloader (USB, serial, SD card, etc.). This will only work if the MCU has the ability to self-flash.
2) Again, depending on the MCU and the scenario, you could program a generic set of rules that carry out functionality based on the inputs given to the MCU. These could come from IO pins, EEPROM, or a domain-specific script on an SD card that the MCU reads and interprets at runtime.
Both options depend on the MCU you are using and what hardware capabilities you have at your disposal. But you certainly have options other than reprogramming the end hardware with an actual programmer every time you want to make a change. Hopefully that helps.
How can I have a pause between looped plays? I have a .wav file and I want to play it continuously, but with an X ms pause between each playback. I know I can edit the .wav file and add 'quiet' at its end, but I want to know whether it can be done programmatically, without using timers.
You definitely want to use a single-shot QTimer to implement a pause in Qt. There are other mechanisms in Qt that use a QTimer internally; some of the alternatives may make it a little prettier, but they are often part of a larger framework and intended for a different use case.
Some alternatives that might fit your application include:
You could start a sequential animation that uses a QPauseAnimation, which internally uses some sort of QTimer.
You could poll QTime::currentTime() if you are checking something often inside a loop or a QThread. You could also do the same polling with QTime::elapsed().
You could #include <windows.h> (or the appropriate header for your platform) and call a sleep function, if you don't mind the GUI hanging during the sleep.
You could set up a QStateMachine and have a transition animation/timer happen between when your playback stops and when you want the next action to start, or when you want something else to show up or begin.
Hope that helps. Good luck.
I am looking into various OS designs in the hopes of writing a simple multitasking OS for the DCPU-16. However, everything I read about implementation of preemptive multitasking is centered around interrupts. It sounds like in the era of 16-bit hardware and software, cooperative multitasking was more common, but that requires every program to be written with multitasking in mind.
Is there any way to implement preemptive multitasking on an interruptless architecture? All I can think of is an interpreter that would dynamically switch tasks, but that would carry a huge performance hit (I'd imagine on the order of 10-20x or more if it had to parse every operation and didn't let anything run natively).
Preemptive multitasking is normally implemented by having interrupt routines post status changes/interesting events to a scheduler, which decides which tasks to suspend, and which new tasks to start/continue based on priority. However, other interesting events can occur when a running task makes a call to an OS routine, which may have the same effect.
But all that matters is that some event is noted somewhere and that the scheduler decides which task to run. So you can make all such event signalling/scheduling occur only on OS calls.
You can add gratuitous calls to the scheduler at "convenient" points in the various tasks' application code to make your system switch more often. Whether it just switches, or uses background information such as elapsed time since the last call, is a scheduler detail.
Your system won't be as responsive as one driven by interrupts, but you've already given that up by choosing the CPU you did.
Actually, yes. The most effective method is to simply patch the program at load time. Kernel/daemon code can get custom patches for better responsiveness. Even better, if you have access to all the source, you can patch in the compiler.
The patch can consist of a distributed scheduler of sorts. Each program can be patched to read a very low-latency timer; on load it records the start time, and on each return from the scheduler it resets it. A simplistic method would allow code to simply do an
if (timer() - start_time > quantum) yield_to_scheduler();
which doesn't incur too big a performance hit. The main trouble is finding good points to insert the checks. Between function calls is a start, and detecting loops and inserting checks there is primitive but effective if you really need to preempt responsively.
It's not perfect, but it'll work.
The main issue is making sure that reading the timer is low-latency, so the check is just a comparison and a branch. You also need some way of handling errors in the code that cause, say, infinite loops. You can technically use a fairly simple hardware watchdog timer and assert a reset on the CPU without clearing any of the RAM; the RESET vector would point to an in-RAM routine that inspects and unwinds the stack back to the program call, crashing the offending program but preserving everything else. It's a brute-force, if-all-else-fails way to kill a runaway program. You could potentially even multitask this way, treating RESET as an interrupt, but that is much more difficult.
So... yes. It's possible, but complicated; it borrows techniques from JIT compilers and dynamic translators (emulators use them).
By the way, asserting reset on a CPU mid-program sounds crazy, but it is a time-honored and proven technique. Early PC operating systems even did it on the 286 to get back to real mode, because there was no way to switch back from protected mode otherwise. Other processors and OSes have done it too.
EDIT: I did some research on what the DCPU is; it's not a real CPU. I have no idea whether you can assert reset in Notch's emulator; I would ask him. It's a handy technique, though.
I think your assessment is correct. Preemptive multitasking occurs if the scheduler can interrupt (in the non-inflected, dictionary sense) a running task and switch to another autonomously. So there has to be some sort of actor that prompts the scheduler to action. If there are no interrupting devices (in the inflected, technical sense) then there's little you can do in general.
However, rather than switching to a full interpreter, one idea that occurs to me is dynamically patching the supplied program code. Before entry into a process, the scheduler knows the full process state, including the program counter value it's going to enter at. It can then scan forward from there and substitute, say, the twentieth instruction, or the next jump instruction not immediately at the program counter, with a jump back into the scheduler. When the process returns, the scheduler puts the original instruction back; if it was a jump (conditional or otherwise), the scheduler also effects the jump appropriately.
Of course, this scheme works only if the program code doesn't modify itself dynamically. In that case you can preprocess it so that you know in advance where the jumps are, without a linear search. You could technically allow well-behaved self-modifying code if it were willing to declare every address it may modify, letting the scheduler avoid those addresses when patching.
You'd end up sort of running an interpreter, but only for jumps.
Another way is to keep to small tasks based on an event queue (like current GUI apps).
This is also cooperative, but it has the advantage of not needing OS calls: you just return from the task, and the dispatcher goes on to the next task.
If you then need to continue a task, you pass the next "function" and a pointer to the data you need to the task queue.
I want to run an LED pattern again and again using a for() loop, without interrupting the other code that is running, but I have run into the problem of using delay() too much.
The BlinkWithoutDelay example repeats only one thing: it turns the LED on and off every second. If I were to do it with a pattern (and not just turn the LED on and off), how would I do it?
The problem is with millis().
What other options are there for running a pattern without using delay()?
Agreed, your code would be nice to see, to understand what you're trying to do. Assuming that the changes come less often than a loop without delay()s in it, you can either use interrupts (actually very easy to set up on the Arduino) or a library called Metro that lets you trigger timed events without using delay() or interrupts.
You've two options if you want to display a sequence, and have something else going on in the background.
First, you could sprinkle your sequence within the main loop(). A good few of the "LED chaser" and "KnightRider" effects on the internet are coded like this.
void loop() {
  // do something
  digitalWrite(ledPin, HIGH);
  // do something
  digitalWrite(ledPin, LOW);
}
Or, you can use timer interrupts. This is a bit trickier to set up, but again, a quick internet search should bring up loads of examples. In this case, you run a timer on the Arduino and set up an interrupt to trigger at a fixed interval. The main loop carries on with whatever it's doing, and at each interval there is a quick interruption to update your sequence.
If you want to run a time sensitive pattern, you might want to try using timer interrupts.
The tutorial here has a pretty good explanation and several examples of how to use the Arduino timer interrupts.
If you want to run other code at the very same instant the LED pattern is being updated, I don't think that is possible on the Arduino; that would require parallel processing.
Another quick question: I want to make a simple console-based game, nothing too fancy, just a weekend project to get more familiar with C. Basically I want to make Tetris, but I've run into one problem:
How do I let the game engine run and at the same time wait for input? Obviously cin or scanf is useless for me.
You're looking for a library such as ncurses.
Many Rogue-like games are written using ncurses or similar.
There are two ways to do it:
The first is to run two threads: one waits for input and updates state accordingly, while the other runs the game.
The other (more common in game development) way is to write the game as one big loop that executes many times a second, updating game state, redrawing the screen, and checking for input.
But instead of blocking when you get key input, you check for the presence of pending keypresses, and if nothing has happened, you just continue through your loop. If you have multiple input sources (keyboard, network, etc.) they all get put there in the loop, checking one after another.
Yes, it's called polling. No, it's not efficient. But high-end games are usually all about pulling the maximum performance and framerates out of the computer, not running cool.
For added efficiency, you can optionally block with a timeout -- saying "wait for a keypress, but no longer than 300 milliseconds" so you can continue on with your loop.
select() comes to mind, but there are other ways of waiting or checking for input as well.
You could work out how to change stdin to non-blocking, which would let you write something like Tetris, but the game might be more naturally expressed in an event-driven paradigm. Maybe it's a good excuse to learn Windows programming.
Anyway, if you want to go the console route and you are using the Microsoft compiler, you should have kbhit() available (via conio.h), which can tell you whether a call to fgetc on stdin would block.
I should also mention that the MinGW GCC compiler (3.4.5) supports kbhit() too.