I'm currently setting up Azure RTOS (ThreadX on an STM32) with Ethernet, SPI and ADCs activated.
This STM32 has to pass through configuration information from time to time, coming from my PC over the Ethernet port.
It has to pass this information via SPI to two other STM32s, which makes the first STM32 the system controller / system interface. This will be a low-priority task, since activation of the passed configuration will be started by sync lines running from the system controller to the two other STMs.
While doing so, the system controller has to read in ADC values constantly and pass them via Ethernet / TCP to my computer.
I've used the ThreadX TCP server example, as given by STM, as a starting point.
From there I've managed to set up three servers on three ports, communicating successfully with a Python script on my PC (as a first test).
Now come the two great questions:
1)
Since my input signal may contain frequencies up to 2.5 MHz, I want to digitize this signal at the full 5 MSPS (Nyquist) that ADC3 is capable of.
The smallest internally available data type at full resolution is uint16_t, which makes the worst-case data rate R = 16 bit * 5 MSPS = 80 Mbit/s. There is optimization possible: e.g. 8 bits of resolution, which halves the data rate but might not be enough, or 16 bits plus an FFT afterwards, which would also be sufficient since I'm mostly interested in the energy per frequency band. Initially, though, I wanted to do that on my computer, for best flexibility.
Even if the Ethernet interface is capable of 100 Mbit/s, I bet the TCP layer of NetX Duo is not.
(There is also USB OTG available on this board, but since networked devices are in my opinion more versatile, I prefer using Ethernet ... nevertheless, USB might be a backup solution.)
From my measurements, a data stream transmitted to the uC via TCP from within Python, and mirrored back to my PC within a thread, allows for a relatively consistent 20 Mbit/s.
... How do I push this speed to a better level?
(I think 20 Mbit/s is the back-and-forth data rate, so one-way may be faster.)
However. Second question:
2)
The ADC within the STM32 is capable of storing data to memory via DMA.
There are two callbacks available: one at the half-full and one at the full buffer state.
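For reference, on the STM32 HAL side this double-buffered scheme looks roughly like the sketch below (the handle name hadc3, the buffer name and its size are assumptions for illustration; the two callback names are the standard HAL ones):

    /* #include the HAL header for your part, e.g. "stm32h7xx_hal.h" (assumption) */

    #define ADC_BUF_LEN 8192u                    /* assumed size, pick to fit your RAM budget */
    static uint16_t adc_buf[ADC_BUF_LEN];        /* DMA target, filled circularly by the ADC  */

    extern ADC_HandleTypeDef hadc3;              /* CubeMX-generated handle (assumed name)    */

    void adc_stream_start(void)
    {
        /* the DMA channel must be configured in circular mode so the ADC runs freely */
        HAL_ADC_Start_DMA(&hadc3, (uint32_t *)adc_buf, ADC_BUF_LEN);
    }

    /* HAL calls this when adc_buf[0 .. ADC_BUF_LEN/2 - 1] has been filled */
    void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
    {
        /* first half is stable, second half is currently being written */
    }

    /* HAL calls this when the second half has been filled and the DMA wraps around */
    void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
    {
        /* second half is stable, first half is being overwritten */
    }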
My problem is mostly about the way of reading out the DMA and/or triggering the conversion in the first place.
How do you do this the "right" way on an RTOS (such that you don't break the RT in RTOS)?
I see some options here, what are the pros/cons you can think of?
a) Let the ADC run freely, calling the callbacks at the respective fill levels, and trigger a TCP transmission whenever one of the callbacks fires (a sketch of this pattern follows below the list)
-> may lead to glitches due to insufficient speed of the TCP layer in my opinion.
b) Let the ADC conversion be triggered by a thread, which is preempted and will later TCP-transmit the data, as soon as the memory-buffer is full
-> may lead to inconsistency in the converted values, since you get burst-style conversions, with gaps in between, while the buffer is read
c) Let a thread trigger each conversion individually
-> A no-go I think, since threads are not scheduled often enough to get a decent sample frequency
d) Let a free-running ADC trigger callbacks, let a thread do the FFT, transmit within another thread the data via TCP
-> May work, but is less flexible, since the data gets crunched within the uC.
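To make option a) concrete: the usual RTOS pattern is to keep the callbacks (which run in interrupt context) as short as possible and only signal a worker thread, which then does the TCP work. A minimal ThreadX sketch, with made-up names, no error checking, and assuming the DMA setup shown earlier:

    /* #include "tx_api.h" plus the HAL header for your part (assumption) */

    #define EVT_HALF_READY  0x01u
    #define EVT_FULL_READY  0x02u

    static TX_EVENT_FLAGS_GROUP adc_events;      /* created at init with tx_event_flags_create() */

    void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
    {
        tx_event_flags_set(&adc_events, EVT_HALF_READY, TX_OR);   /* just signal, no work here */
    }

    void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
    {
        tx_event_flags_set(&adc_events, EVT_FULL_READY, TX_OR);
    }

    void adc_stream_thread_entry(ULONG arg)
    {
        ULONG flags;
        while (1)
        {
            tx_event_flags_get(&adc_events, EVT_HALF_READY | EVT_FULL_READY,
                               TX_OR_CLEAR, &flags, TX_WAIT_FOREVER);
            /* send the stable half of adc_buf via NetX Duo here; if this takes longer than
               one half-buffer period, the DMA overruns the data and you get the glitches
               described under a) */
        }
    }

Option d) would be the same skeleton with an FFT step between the get and the send, which shrinks the amount of data the TCP layer has to move.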
--> Are there other ways you can think of / what do you think about the ways I named here?
--> What do you think about question 1)?
Have a nice day!
What I know is as below (correct me if wrong). For an automotive bootloader based on any microcontroller, we will have:
Startup code (Flash)
Primary bootloader (Flash)
Secondary bootloader (RAM)
As far as the power-on sequence is concerned, I know that:
From the startup code (provided by the micro vendor: Freescale, ST Micro, etc.) control is transferred to the PBL (primary bootloader) using a jump or a function pointer.
The PBL will download the SBL (secondary bootloader) into RAM; the SBL contains the flash driver and is capable of downloading the application.
The SBL will download the application into the flash area.
But what happens before the startup code is executed, i.e. just after power-on?
I know that each controller has some sort of code that runs after power-on, the POST (power-on self-test), but I am still not clear on the sequence of operations up to the point where the bootloader starts executing.
It would be a great help if someone could provide the sequence of operations that leads to the startup code.
I find this not-uncommon confusion interesting.
POST is in general software, but your question is vague. Usually when someone talks about POST they are talking about their x86-based computer; that is just software, it happens well after the part you are confused about, and it is in no way whatsoever required for a computer/processor to run. It has a purpose and adds value, so it is there.
Microcontrollers in general do not have primary or secondary bootloaders; they simply start running your application. Of the dozens/hundreds I have used/examined, I am trying to think of any that have a primary or secondary bootloader and can't think of any offhand. Certain brands in particular do have bootloaders, usually programmed by the vendor, some that you can't change and some that you can. How you get into the bootloader varies by brand: often a strap pin, sometimes a non-volatile bit in a register.
First off, processors and the chips around them are dumb, very dumb. They only do what they are told by humans. They are incredibly simple machines. And while at this level of view an MCU and a full-blown system are pretty much identical, MCUs are simpler and more reliable (for various reasons). The root of the answer starts with the processor, or processor core, or core, or whatever term helps you. In an MCU this is just one Lego block in the whole of the chip, not necessarily even the largest block in the chip. When you look at ARM-based chips like the STM32 and others with a Cortex-M (or older ones with an ARM7TDMI), that Lego block is IP purchased from ARM; the rest of the chip is either other IP purchased from one or more vendors or in-house logic. The SRAM certainly, and the flash probably, is IP that the chip vendor buys for the specific process at the specific foundry (just like other cell library items: simple gates like AND, OR, NOT and more complicated ones).
Whatever processor core this is, it has an architecture and an instruction set. While we know some architectures are implemented using microcode, it is unlikely that MCUs are; it makes no sense there. The more CISC-like ones might be, but the ARMs and MIPS and such definitely are not. For this understanding it doesn't matter whether it is microcoded or not: there are bit patterns that drive the processor, machine code. We have all heard that chips are made of transistors, and they are. The transistors are part of the simplicity; the basic AND, OR, NOT gates you can look up on Wikipedia, and you can (inefficiently) build the rest out of those fundamental blocks. A particular instruction tickles the logic, the transistors, in a certain way to cause a chain of events, ones and zeros in a specific sequence, that does the thing you asked for. Logic is not limited to implementing processor instructions; most logic is not part of the decoding and execution of a processor instruction, and most of it is equally dumb. An SRAM is a lot of packed-in bits (a few transistors wired up a certain way per bit) with an address and data bus; the logic of an SRAM lights up rows and columns of these bits when writing or reading. Then there is more logic in front of that SRAM that decodes an address bus, etc.
As mentioned in the other answer, when power comes up, reset is released. The flip-flop based items in the chip, which are the registers we read about in the manual plus countless others behind the scenes, are set to their reset values; this is done by the wiring of the transistors. A number of state machines start. These are similar to programs, but hardwired: wait for reset to go high; once reset goes high, if this input to the state machine is this and that input is that, then move to the next state. The rules to get from one state to the next are implemented in logic. A chip with RAM and flash, for example, might do a BIST (built-in self-test) on the RAM first; likely not in an MCU, where it doesn't make sense, and note that this is logic, not software, doing it, so it is not the POST you know from your laptop/desktop/server. The flash or RAM or ADCs or other logic might require some number of clocks to settle before reset is released (the reset on the edge of the chip is not necessarily hardwired to all items in the chip; usually it is gated, delayed, etc.). So there is a power-on state machine that manages this, and when the chip is ready the processor itself is released; this can be a few or dozens of clock cycles later. The clock itself has to settle, and the logic is designed to wait for that.
When the processor is released from reset it again may need some number of clocks to settle things in its design; it will have a state machine, or many, that start up its various blocks. Then, based on the architectural design of that processor, it does one of two things: it fetches its first instruction from a known address (an address within the processor's address space, which isn't necessarily the address in the chip's view), or it uses a vector table approach and reads a value from a known address, and the value read is the address of the first instruction, which it then fetches. Up to the first fetch there is no software; it is all logic.
What happens next depends on how the chip vendor has designed the chip and how they have defined the address space. Understand that addressing within a chip or board design is not some flat universal thing; to the programmer it is, but in reality it isn't. There are many busses with addresses, and those address spaces are specific to that portion of the design. When you see the STM32 or others with a bootloader and a strap (the BOOT0/BOOT1 pins), the logic on the other end of the processor bus may see a fetch at the well-known address (meaning both the folks that implement the logic and the folks that write software for it know that this is the specific address where things start, and if you don't put stuff there it won't boot/work), but as mentioned the chip vendor can do whatever they want with that, and often does. As a programmer this can be easily understood, as logic isn't any more magical than software:
if strap == 0 return flash_bank_0[address&mask]
else return flash_bank_1[address&mask]
That applies for a certain address range that is decoded in front of this logic, but both banks may also be directly addressable:
if address[24]==0 return flash_bank_0[address&mask]
else return flash_bank_1[address&mask]
And this way you can have what you see in the STM32s: both address 0x00000000 and 0x08000000 (or in other vendors' chips 0x00000000 and 0x01000000, for example) map to the same (flash) memory.
The reason is that the Cortex-Ms are vector based: there is a table of addresses that points you at code, rather than just instructions at known addresses (as on the full-sized ARMs: ARM7, ARM9, ARM11, Cortex-A). The way you use that is to set your reset address in the table to be 0x08000000-based, so when the processor reads at 0x000000xx it is told to fetch instructions from 0x0800xxxx, and it does. When the strap is set the other way it finds a different flash, which may or may not have a fixed address space of its own; it may only be visible through the if-then-else above. (This is pretty easy to see with a Cortex-M, an SWD debugger and some software.)
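To make the vector-table idea concrete, here is a minimal sketch of the first two entries in C (the section name and the _estack symbol are assumptions that depend on your linker script; on a Cortex-M, word 0 is the initial stack pointer and word 1 is the address of the reset handler):

    #include <stdint.h>

    extern uint32_t _estack;             /* top-of-RAM symbol, assumed to come from the linker script */
    void Reset_Handler(void);            /* the first code that runs after reset                      */

    __attribute__((section(".isr_vector")))
    const uint32_t vector_table[] =
    {
        (uint32_t)&_estack,              /* word 0: initial stack pointer value  */
        (uint32_t)Reset_Handler,         /* word 1: address of the reset handler */
        /* ... NMI, HardFault and the other exception vectors follow ... */
    };

With the strap set to boot from user flash, the 0x000000xx reads described above land in this table at 0x080000xx.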
The STM32s will have logic such that, if the strap is set to run the user application, it will fetch (my guess is) a few words, and if the first one or a specific one is all ones, or for some chips all zeros (very often flash/ROM erases to ones, because there is a logic inversion saving a transistor, so the stored bit is a zero but we see it as a one; the bits are all inverted, but this is not a hard and fast rule, just very common), the logic/state machine will, for the STM32, decide there is no user application and will run the bootloader instead. Now it is very possible the design actually always boots the bootloader and there is software there that looks at the application flash, but I think myself and others on this site decided that is not the case; none of us work there nor have visibility into the design. In either case the processor then starts executing what it finds, and it is very dumb: it is told to fetch from this address and it does. The programmer has to make sure that stuff is at that address, and each and every instruction has to be laid out in order properly, like train tracks; any gaps or mistakes and the train goes off the rails. Otherwise the train is stupid and just follows the tracks. As humans we call the software POST or bootloader or application or whatever; it is just software. Once the processor is started, if some software loads and runs other software, the processor doesn't know, it is stupid; it just keeps performing the instructions it is fed as it rolls down the track.
Short answer:
Power ramps up to a chip specified level. At a chip specified time reset should be released. This releases state machines to get the chip ready as needed and release the processor. The processor based on its design either fetches its first instruction from a known place or it reads from a known place and that user planted value is the address where the first instruction lives. After that per the architecture of the chip the execution of that first instruction and fetching of more based on that instruction continue until it crashes or is turned off or put in reset.
There is no magic.
There are a number of good open cores out there that you can simulate with free tools and see (with free tools) the internal signals that make the chip work; you can see the post-reset activity leading up to the first fetch and then all the execution from there.
Without knowing which microcontroller you are using, this should be general enough:
The hardware in the microcontroller resets several registers to their documented values. This includes the PC, the program counter.
If the microcontroller has configurable reset vectors, the value can be chosen from a few alternatives; other controllers always use the same value.
The code at the location the PC points to is the startup code.
Note: It's always a good idea to read the data sheet of the controller!
The topic is rather straightforward, and I admit I wasn't able to find much on Google.
Recently I started to code on STM32, and for a person coming from PC-related applications, setting up all the clocks was rather new.
I was wondering why a developer would want to discard/avoid the maximum clock, and under which conditions?
Say that a microcontroller could work at 168 MHz, why should I choose 84 MHz?
Is it mainly related to power consumption? Are there any other reasons?
Why did the STM32 team (and Microchip as well, I guess) go through the hassle of setting up a really nice UI in STM32CubeMX to select different combinations?
Why should I use an external oscillator directly rather than the PLL path if I can achieve a higher working frequency?
Is it mainly related to power consumption?
Yes, mainly. Lower frequency means lower consumption.
One could also save power by doing the work fast, then putting the CPU to sleep, thus improving average consumption, but the power supply might not like the variable load, and exact timekeeping would be rather difficult.
Are there any other reasons?
Yes. Some peripherals don't work above certain frequencies. An example: the STM32F429 core can run at 180 MHz, but then there is no way to generate 48 MHz for USB. In order to use USB, the core must run at 168 MHz.
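As an illustration only (a sketch assuming an 8 MHz HSE crystal and the STM32F4 HAL; your prescalers will differ), this is the kind of PLL setting that gives 168 MHz for the core while still producing an exact 48 MHz for USB; at 180 MHz there is no PLLQ divider that lands exactly on 48 MHz:

    /* #include "stm32f4xx_hal.h" */

    void clock_config_168mhz(void)
    {
        RCC_OscInitTypeDef osc = {0};

        osc.OscillatorType = RCC_OSCILLATORTYPE_HSE;
        osc.HSEState       = RCC_HSE_ON;
        osc.PLL.PLLState   = RCC_PLL_ON;
        osc.PLL.PLLSource  = RCC_PLLSOURCE_HSE;
        osc.PLL.PLLM       = 8;              /* 8 MHz HSE / 8  = 1 MHz PLL input       */
        osc.PLL.PLLN       = 336;            /* 1 MHz * 336    = 336 MHz VCO output    */
        osc.PLL.PLLP       = RCC_PLLP_DIV2;  /* 336 MHz / 2    = 168 MHz SYSCLK        */
        osc.PLL.PLLQ       = 7;              /* 336 MHz / 7    = 48 MHz for USB OTG FS */
        HAL_RCC_OscConfig(&osc);
    }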
Why should I use an external oscillator directly rather than the PLL path if I can achieve a higher working frequency?
An external oscillator has a much higher accuracy than an internal one, and it may take too long for a PLL to stabilize when waking up from standby. It depends on the application requirements.
Power is probably the main reason. But there may be various other reasons that a specific clock speed is used, such as:
EMC emissions.
Avoiding harmonics which interfere with sensitive analogue electronics.
Driving timers / clocks / ADCs etc. that are required to run at a very specific frequency as part of the design (for example, I worked on a processor that ran at 120 MHz; however, in order to get the exact required ADC sample rate we had to run at something like 119.4 MHz).
You might want to use an external oscillator if you intend to use the clock elsewhere on the board. Also you might want to use a very accurate crystal, or maybe not want to wait for the PLL to lock.
There are lots of reasons. However, if you are doing something straightforward and don't care about power consumption or noise, then running at max speed with the PLL is probably the best place to start, in my opinion.
Power is the obvious one, but there is another that has been touched on in other answers, though not directly: performance. A faster clock does not mean faster code. ST has this magic cache-like thing (the ART accelerator) in front of the flash, in addition to a real-ish cache in front of the flash (and it appears the ARM cache is disabled on the Cortex-M4s I have tried). But in general the flash is your bottleneck: as you see on a number of other vendors' parts, and sometimes ST's, you have to keep adding wait states as you increase the system clock. So say at 16 MHz no wait states, at 32 MHz one, at 48 MHz two, and so on (it depends on the system); you are dancing around the speed limit of the flash, making the processor sit around extra clocks while it waits for instructions to come in. Even on an ST part, but easier to see on others, that directly affects your performance; you perhaps want to be right at the frequency where you go from zero to one wait states, to maximize what you can feed the CPU.
In some designs the flash is already at half the speed of the CPU/system clock, whereas SRAM generally tracks the CPU and can cover the full range. So take the same code at zero wait states, run it from flash and then from RAM: on some MCUs the same machine code runs at half the speed from flash that it does from SRAM. On some it is one to one, and then when you add a wait state the flash is half speed vs RAM, and so on.
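As a concrete (assumed) example with the STM32F4 HAL, the flash wait states are passed in together with the clock switch; the latency value is what grows as the system clock goes up:

    /* #include "stm32f4xx_hal.h" */

    void switch_sysclk_to_pll(void)
    {
        RCC_ClkInitTypeDef clk = {0};

        clk.ClockType      = RCC_CLOCKTYPE_SYSCLK | RCC_CLOCKTYPE_HCLK |
                             RCC_CLOCKTYPE_PCLK1  | RCC_CLOCKTYPE_PCLK2;
        clk.SYSCLKSource   = RCC_SYSCLKSOURCE_PLLCLK;
        clk.AHBCLKDivider  = RCC_SYSCLK_DIV1;
        clk.APB1CLKDivider = RCC_HCLK_DIV4;   /* the peripheral busses have their own limits */
        clk.APB2CLKDivider = RCC_HCLK_DIV2;

        /* around 168 MHz at 3.3 V an STM32F4 needs 5 wait states;
           the same part runs with 0 wait states below roughly 30 MHz */
        HAL_RCC_ClockConfig(&clk, FLASH_LATENCY_5);
    }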
Peripherals have the same problem, as mentioned. You may have to use a prescaler on the peripheral clock, so reading a GPIO pin that might have taken one clock at 24 MHz may take three or four clocks at 80 MHz.
PLLs are analog and jittery. They don't necessarily "lose" any clock cycles, but they are worse as far as jitter goes than the oscillator, which itself has a spec on jitter and accuracy as well as temperature effects. So the internal RC is the worst-quality clock, running directly from the oscillator and bypassing the PLL is the best, and multiplying with the PLL adds jitter but allows you to go faster.
Back to power. The battery in the remote control for your TV might last a year or so (for infrared; the Bluetooth ones last days or weeks); they run at the lowest clock rate they can to barely do the job, and stay off or in a low-power state as much as possible. If they were to hop up to 120 MHz when awake, the battery would now last weeks or half a year instead of a year, for no real benefit other than it being really cool to run at full speed. That doesn't make much sense. We rely heavily on battery-based products now; if the microcontroller in the Bluetooth module in your phone ran at its fully rated PLL speed, and the microcontroller in your WiFi module, and the one that drives the display, etc., all ran at max speed, your phone might not last even a day on one full charge. Nothing would be gained by running faster, but something would be lost.
For hobbies and stuff plugged into the wall, burn whatever power you want, but a noticeable percentage of the MCU market is about price and power. Chips that are screened to a higher speed cost more; lower-speed parts are in some cases just the chips that failed the higher screen, and cost less. Tighter, smaller code uses less flash, so you can buy a smaller/cheaper part, and your clock can run slower because it takes fewer instructions to do the same thing than a possibly sloppy program and/or a bad choice of programming language; a bigger part and lower yield both add to cost. Then lowering the clock as far as you can, so that your tightly written code just barely meets timing, ideally uses the least amount of power (as well as turning off, or not turning on, peripherals you are not using, and prescaling the ones that are on even slower).
For cost and power you want the slowest clock you can tolerate with the smallest binary that is also tight and efficient so that you can just barely make timing. That is your ideal goal. But if you plan for field upgrades then you need to leave some margin for slower/larger code to be part of the upgrade and not have a dramatic effect on power consumption.
I'm new to Arduino and microcontrollers.
I was studying the specs and found that even the same board may have different frequencies with different input voltages (3.3 V vs 5 V). So the question is, what does frequency represent? Does it represent how many lines of assembly code it's able to run per second? Or the maximum PWM frequency it's able to output?
A further question would be, if I'm looking for a board for a specific project, how do I decide which frequency I will need a priori, instead of trying everything out and see which one works?
What makes me more confused is that when it comes to computer CPUs, it seems that lower frequency CPUs can actually run faster than higher frequency ones (e.g. Intel). So how do I actually know how fast a microcontroller can run?
By frequency we mean the frequency of the CPU clock. Say your Arduino Uno runs on 16 MHz, which is 16,000,000 Hertz.
That means there are 16 million clock cycles per second. The CPU executes the machine code of the program. One assembler instruction can actually take any number of CPU cycles to execute, usually between 1 and 4 cycles for simple stuff and a little more for heavy arithmetic and writes to memory. So the clock frequency is a rough estimate of how many "lines of assembler" (that is, machine-code instructions) it can run per second; for example, at 16 MHz an AVR averaging two cycles per instruction executes roughly 8 million instructions per second. A measurement which is a little bit better is the "MIPS" value, "millions of instructions per second". There are other benchmark types for CPUs which are more accurate.
If you take a look at the instruction set manual for the AVR microcontroller architecture, you can see the cycles that each instruction needs: http://www.atmel.com/images/Atmel-0856-AVR-Instruction-Set-Manual.pdf
So for an ADD Rd, Rr instruction, an AVR CPU needs 1 clock cycle.
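Since the cycle counts are documented, you can turn them directly into wall-clock time. A small sketch with avr-libc (its helper _delay_loop_1() is documented as taking 3 CPU cycles per iteration):

    #include <util/delay_basic.h>   /* avr-libc: _delay_loop_1() burns 3 CPU cycles per pass */

    /* On a 16 MHz AVR one cycle is 62.5 ns, so 80 passes * 3 cycles = 240 cycles = 15 us. */
    void wait_about_15us(void)
    {
        _delay_loop_1(80);
    }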
Take a desktop Intel CPU for example. It's common these days that they have a clock frequency of 2 GHz or more, which is 2 billion cycles per second compared to the 16 million cycles per second of the Arduino's AVR CPU. So the Intel CPU beats the Arduino by far. Then again, the Arduino is designed for completely different stuff: it's a small microcontroller with low overhead, runs no OS, etc. The use case for such a CPU (and the architecture) is just different, which makes comparing them unjustified. There are many other factors in play, like multi-core CPUs (4 Intel CPU cores vs. 1 AVR), instruction pipelining, the speed of your memory / RAM, etc. It's really hard to compare one CPU to another in every possible use case, but for "general purpose computing", desktop CPUs (AMD, Intel, x64 architecture) far outrun the processing power of a mere Arduino AVR CPU.
I hope this clears up some confusion.
I think one confusion you have may be chip specific. I am not going to look it up right now, but I do remember seeing this: the chip spec may say that for this input voltage range it can handle this frequency, and for that voltage range it cannot. I think on the SparkFun boards the 3.3 V ones are 8 MHz and the 5.0 V ones are 16 MHz, or something like that. Anyway, that is not generally the case, but it is a chip-by-chip, vendor-by-vendor thing, and that is why you have to read the datasheet. It has nothing to do with Arduinos or AVRs specifically; it is just a general chip-design thing.
How do you know how fast your microcontroller can run? That is a very loaded question; it depends on your definition of fast. If it is simply "what clock frequencies can I use", well, just read the datasheet for that part, and then, depending on your board design, choose from what is available. If you do not have any external clocks then your choices may be more limited, and you may or may not have a PLL that you can use to multiply the clock source.
If your definition of fast is how fast you can perform a task, how many whatevers per second, or how much wall-clock time it takes to complete some specific task, then that is a benchmark problem, and there are so many variables that there is actually no real answer. Yes, it is very true that one x86 can have a lower clock and run faster than another x86; historically the newer ones can do less stuff per clock than older ones for the same binaries, so you have to tune the compile for the newer chip and then you might get back some of your MIPS to MHz. But that is in part because you are using a different chip design that just speaks the same language (machine code). You can have a tall person that can recite a poem faster than a short person, both using English and the same poem; it has nothing to do with them being short or tall, just that they are different humans.
There are different AVR core variations, but not remotely on par with the different x86 architectures. So comparing a tiny vs an xmega, you can probably have the xmega run "faster" at the same clock rate simply because it has more registers or a bigger address space, etc. But the instructions per second are probably not really different; they could be, but my guess is not by much.
Then there is the compiler. The compiler plays a huge role in how "fast" your code runs; change compilers, compiler versions or compiler settings and the machine code produced from the same high-level source code (C for example) can vary greatly, and as a result can have dramatic effects on the "speed" of the code. Take the Dhrystone for example: it is very easy to demonstrate that the same exact source code on the same exact chip/board, at the same clock rates, etc., can execute at vastly different speeds based on using different compilers, versions or command-line settings, kind of proving that the godfather of benchmarks is basically useless at providing any meaningful information.
Microcontrollers make the problem much worse, as you are often running the program out of flash, and many, not all, but many have the ability to divide or multiply the clock or both, yet the flash is not always designed for the full range. You might have a chip that boots on an internal clock at 8 MHz but lets you use the PLL to multiply that up to, say, 80 MHz. It is not uncommon, though, for the flash on a chip like that to be limited to, say, 16 MHz, so at 8 MHz the flash can deliver an item, say an instruction, every CPU clock, but at 20 MHz you have to add a wait state, and although the CPU is running much faster you can only feed it at 16 MHz, so it waits around more and then acts fast when it gets something. Is it really "faster", or is clocking up making you slower? Certainly at just under 16 MHz in this fantasy chip I am describing you can keep it at zero wait states, so it really is faster; not necessarily twice as fast, as there are other factors, but definitely faster than 8 MHz. Just at or above 16 MHz, though, you take a huge performance hit compared to just under 16 MHz. At just under 32 MHz it is pretty fast again compared to just over 16 MHz; then at just over 32 MHz there is another wait-state setting and it is much slower again, even though the clock is basically the same, and so on.
Then there is fetching: how does the CPU actually fetch? Take an ARM, which fetches a bunch of data per fetch transaction, even if it is not going to execute all of it. If you branch to 0x1004 and at that address there is a branch to 0x2008, the core might fetch 0x10 bytes from 0x1000 to 0x100F, THEN extract the word/instruction at 0x1004, decode it to find it is a branch, then read 0x10 bytes from 0x2000; basically reading 0x20 bytes to execute 2 instructions. Take two instructions: if both are in the same 0x10 bytes, then good; if one is at 0x100C and the other at 0x2000, that is a performance hit. Take this internal information and apply it to an application and all of its jumping around: changing one line of code, or adding or removing a single NOP in the bootstrap (causing the alignment of the program in the address space to change), can cause anywhere from a tiny to a large change in performance. Swapping two helper functions in the source code of your program, in the text, causing them to land at different addresses once compiled, can have anywhere from little to major performance effects without actually changing the functions themselves.
So chasing performance is, in one respect, a foolish task; in another respect, all that matters is your program as written, with the compiler you are using, on the hardware you are using. It is as fast as it is, and there are things you can do to make that code faster on that compiler on that target on that day, by changing compiler settings or the code or both. Ideally you build your final firmware, performance-test that, and never build again, because if you build a year or two later it may be on a different host with a different compiler or compiler version, and all bets are off on performance.
How do you pick a board: how much flash, how much RAM, which features, what clock rate? A lot of it is experience gained by trial and error. You fortunately live in a time where you can literally try hundreds of boards, all of which cost anywhere from a few bucks to 10 or 20 each, with different CPU architectures, different chip vendors, etc. There are many compiler choices and even languages available; basically there are too many easy-to-acquire choices, unlike back in the day when the parts were pretty cheap but you may have had to build your own board, write your code in assembly, maybe even create your own assembler, and have a ROM programmer that cost hundreds to thousands of dollars. So go with the AVR you have and play with its features, play with the compiler and the code or both. Do experiments to see whether there are fetch effects or not. If you have clocking choices, mess with those and see what happens.
Of course all of this starts with reading the chip documentation from the vendor.
I have been working on this project and the code has gotten so huge that the microcontroller's flash memory is full, so I want to know if there is any way I can connect an external EEPROM or any memory device that can give me more program memory.
Thanks in advance!
The only 8-bit PICs that can use external program memory are high-end parts in the PIC18F series - all 64-pin or more.
If a substantial portion of your code size consists of text or other data (rather than actual code), you could store the data on an external SPI or I2C EEPROM. This would be much slower than having the data internally, and less convenient to use - you'd have to manually send an address and then read bytes from the external chip, you couldn't just access the data as an array.
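A rough sketch of that "send an address, then read bytes" sequence for a typical SPI EEPROM (the 0x03 READ opcode is the usual 25xx-series convention; spi_transfer(), cs_low() and cs_high() are assumed wrappers around your PIC's SPI peripheral and chip-select pin):

    #include <stdint.h>

    /* assumed board-support helpers, provided elsewhere */
    void    cs_low(void);
    void    cs_high(void);
    uint8_t spi_transfer(uint8_t byte);

    #define EEPROM_CMD_READ 0x03u                 /* READ opcode of typical 25xx SPI EEPROMs */

    void eeprom_read(uint16_t addr, uint8_t *dst, uint16_t len)
    {
        cs_low();                                 /* select the EEPROM                        */
        spi_transfer(EEPROM_CMD_READ);            /* command byte                             */
        spi_transfer((uint8_t)(addr >> 8));       /* address high byte                        */
        spi_transfer((uint8_t)(addr & 0xFF));     /* address low byte                         */
        while (len--)
            *dst++ = spi_transfer(0x00);          /* one dummy write clocks out one data byte */
        cs_high();                                /* deselect                                 */
    }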
The 16F877 is a rather old chip - you can certainly find ones with more capacity these days. A quick search on Microchip's part selector turns up several 16F chips with twice the program memory, such as the 16F1789. If you'd be willing to switch to the more powerful 18F series, you could double the program memory yet again - 18F4620, for example.