I'm working with some simple VGA driver code for use with the Xilinx Spartan 6 FPGA (via a Papilio Pro board). The code expects to have 4-bits of output per color, and so defines logic vectors for each color. However, my setup doesn't happen to provide the full 4 bits per color so I wanted to find a creative way to control this via the UCF.
The original UCF defined 4 pins for each color. In the case of blue, I only have two pins, so I chose to map the two I have to blue's MSBs, thus:
NET Blue(0) IOSTANDARD=LVTTL; # N/C
NET Blue(1) IOSTANDARD=LVTTL; # N/C
NET Blue(2) LOC="P92" | IOSTANDARD=LVTTL; # to a pin
NET Blue(3) LOC="P87" | IOSTANDARD=LVTTL; # to a pin
(At first I omitted the first two constraints entirely. The design still compiled and worked, but the tools complained about inconsistent voltage standards, since the absent ones defaulted to IOSTANDARD = LVCMOS25, throwing "WARNING:Place:838 - An IO Bus with more than one IO standard is found.")
The main warning is the one I'd like to know how to eliminate, preferably within the UCF:
WARNING:Place:837 - Partially locked IO Bus is found.
Following components of the bus are not locked:
Comp: Blue<1>
Comp: Blue<0>
What's the right way to map a net without a programmable pin location to a default value (logic '1' or '0', or perhaps a tri-state value) within the UCF in such a way as to eliminate this "Partially locked IO Bus" sort of warning?
My goal is that, in a setup with more or fewer bits per channel being driven by pins, only the UCF should need to change (not the source code). What I did works, despite the warnings... I'd just like to do it better and properly eliminate these warnings.
You've asked for pins at the top level of your code (on your entity), so the tools have to provide them. Hence you have to map them; otherwise the tools will pick some random pins for you, which you usually don't want.
If those pins really have nowhere to go on the board and never will have, then remove them from the design completely (UCF and HDL).
Otherwise, you have to LOC them. You could also add a PULLDOWN to them in the UCF to ensure they sit at a low value.
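For example, a sketch of that approach (the pin numbers P84 and P85 are hypothetical spare package pins; the PULLDOWN constraint holds them low):
NET Blue(0) LOC="P84" | IOSTANDARD=LVTTL | PULLDOWN; # spare pin, held low
NET Blue(1) LOC="P85" | IOSTANDARD=LVTTL | PULLDOWN; # spare pin, held low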
What I know is as below (correct me if I'm wrong): for an automotive bootloader based on any microcontroller, we will have
Startup code (Flash)
Primary bootloader (Flash)
Secondary bootloader (RAM)
As far as the power-on sequence is concerned, I know that:
From the startup code (provided by the micro vendor: Freescale, ST Micro, etc.), control is transferred to the PBL (primary bootloader) using a jump or function pointer.
The PBL downloads the SBL (secondary bootloader) into RAM; the SBL contains the flash driver and is capable of downloading the application.
The SBL downloads the application into the flash area.
But what happens before the startup code executes, or just after power-on?
I know that each controller runs some sort of code after power-on, such as a POST (power-on self-test), but I am still not clear on the sequence of operations from power-on until the bootloader executes.
It would be a great help if someone could outline the sequence of operations leading up to the startup code.
I find this not-uncommon confusion interesting.
POST is software in general, but your question is vague. Usually when someone talks about POST they are talking about their x86-based computer; that POST is just software, happens well after the part you are confused about, and is in no way required for a computer/processor to run. It has a purpose and adds value, so it is there.
Microcontrollers in general do not have primary or secondary bootloaders; they simply start running your application. Of the dozens/hundreds I have used/examined, I am trying to think of any that have a primary or secondary bootloader, and I can't think of any offhand. Certain brands do have bootloaders that are usually programmed by the vendor, some of which you can't change and some of which you can. How you get into the bootloader varies by brand: often a strap pin, sometimes a non-volatile bit in a register.
First off, processors and the chips around them are dumb, very dumb. They only do what they are told by humans. They are incredibly simple machines. And while an MCU and a full-blown system are, at this level of view, pretty much identical, MCUs are simpler and more reliable (for various reasons). The root of the answer starts with the processor, or processor core, or core, or whatever term helps you. In an MCU this is just one Lego block in the whole of the chip, not necessarily even the largest block in the chip. When you look at ARM-based chips like the STM32 and others with a Cortex-M (or older ones with an ARM7TDMI), that Lego block is IP purchased from ARM; the rest of the chip is either other purchased IP from one or more vendors or in-house logic. The SRAM certainly, and the flash probably, is IP that the chip vendor buys for the specific process at the specific foundry (just like other cell-library items: simple gates like AND, OR, NOT and more complicated ones).
Whatever processor core this is, it has an architecture and an instruction set. While we know some architectures are implemented using microcode, the MCUs are unlikely to be; it makes no sense there. The more CISC-like parts might be, but the ARMs and MIPSes and such definitely are not. For this understanding it doesn't matter whether it is microcoded or not: there are bit patterns that drive the processor, machine code. We have all heard that chips are made of transistors, and they are. The transistors are part of the simplicity; the basic AND, OR, NOT gates you can look up on Wikipedia, and you can (inefficiently) build the rest out of those fundamental blocks. A particular instruction tickles the logic, the transistors, in a certain way, causing a chain of events, ones and zeros in a specific sequence, that does the thing you asked. Logic is not limited to implementing processor instructions; most logic is not part of decoding and executing a processor instruction, and most of it is equally dumb. An SRAM is a lot of packed-in bits (typically six transistors wired up a certain way per bit) with an address and data bus; the logic of an SRAM lights up rows and columns of these bits when writing or reading. Then there is more logic in front of that SRAM that decodes an address bus, and so on.
As mentioned in the other answer, when power comes up, reset is released. The flip-flop based items in the chip, which are the registers we read about in the manual plus countless others behind the scenes, are set to their reset values, which is done by the wiring of the transistors. A number of state machines start; these are similar to programs, but hardwired: wait for reset to go high; once reset goes high, if this input to the state machine is this and that input is that, then move to the next state. The rules to get from one state to the next are implemented in logic. A chip with memory and flash, for example, might run a BIST on the RAM first, though likely not in an MCU, where it doesn't make sense; this is logic, not software, doing this, and it is not the POST you think of in your laptop/desktop/server. The flash or RAM or ADCs or other logic might require some number of clocks to settle before reset is released (the reset on the edge of the chip is not necessarily hard-wired to all items in the chip; usually it is gated, delayed, etc.). So there is a power-on state machine that manages this; when the chip is ready, the processor itself is released, which can be a few or dozens of clock cycles later. The clock itself has to settle, and the logic is designed to wait for that.
When the processor is released from reset, it again may take some number of clocks to settle things in its design; it will have a state machine, or many, that start up the various blocks. Then, based on the architectural design of that processor, it does one of two things: it fetches its first instruction from a known address (an address within the processor's address space, which isn't necessarily the address in the chip's view), or it uses a vector-table approach, reading a value from a known address; the value read is the address of the first instruction, and it fetches that instruction. Up to the first fetch there is no software; it is all logic.
What happens next depends on how the chip vendor has designed the chip and how they have defined the address space. Understand that addressing within a chip or board design is not some flat universal thing; to the programmer it looks flat, but in reality it isn't. There are many busses with addresses, and those address spaces are specific to that portion of the design. When you see an STM32 or similar with a bootloader and a strap (the BOOT0/BOOT1 pins), the logic on the other end of the processor bus may see a fetch at the well-known address (meaning both the folks who implement the logic and the folks who write software for it know that this specific address is where things start, and if you don't put stuff there it won't boot/work), but as mentioned, the chip vendor can do whatever they want with that, and often does. As a programmer this is easy to understand, since logic isn't any more magical than software:
if (strap == 0) return flash_bank_0[address & mask];
else            return flash_bank_1[address & mask];
This holds for a certain address range that is decoded in front of this logic, but both banks may also be directly addressable:
if (((address >> 24) & 1) == 0) return flash_bank_0[address & mask];
else                            return flash_bank_1[address & mask];
And this way you can have what you see in the STM32s: both address 0x00000000 and 0x08000000 (or, in other vendors' chips, 0x00000000 and 0x01000000, for example) map to the same (flash) memory.
The reason is that the Cortex-M is vector based: there is a table of addresses that point you at code, rather than just instructions at known addresses (as in the full-sized ARMs: ARM7, ARM9, ARM11, Cortex-A). The way you use that is to set the reset address in the table to be 0x08000000-based, so when the processor reads at 0x000000xx it is told to fetch instructions from 0x0800xxxx, and it does. When the strap is the other way, it finds a different flash, which may or may not have a fixed address space; it may only be visible through the if-then-else. (This is pretty easy to see with a Cortex-M and an SWD debugger and software.)
The STM32s have logic such that, if the strap is set to run the user application, it fetches (my guess) a few words; if the first one, or a specific one, is all ones (or for some chips all zeros; very often flash/ROM resets to ones because there is a logic inversion saving a transistor, so the stored bit is a zero but we see it as a one, all the bits inverted, though this is not a hard and fast rule, just very common), the logic/state machine decides there is no user application and runs the bootloader instead. Now, it is very possible the design actually always boots the bootloader, and there is software there that looks at the application flash; but I think myself and others on this site decided that is not the case, though none of us work there nor have visibility into the design. In either case the processor then starts executing what it finds, and it is very dumb: it is told to fetch from this address and it does. The programmer has to make sure the right stuff is at that address, and each and every instruction has to be laid out in order properly, like train tracks; any gaps or mistakes and the train goes off the rails. Otherwise the train is stupid; it just follows the tracks. As humans we call the software POST or bootloader or application or whatever, but it is just software. Once the processor is started, if some software loads and runs other software, the processor doesn't know; it is stupid, and it just keeps performing the instructions it is fed as it rolls down the track.
Short answer:
Power ramps up to a chip-specified level. At a chip-specified time, reset should be released. This releases state machines that get the chip ready as needed and release the processor. The processor, based on its design, either fetches its first instruction from a known place, or reads from a known place where a user-planted value gives the address where the first instruction lives. After that, per the architecture of the chip, the execution of that first instruction, and the fetching of more based on it, continue until the chip crashes, is turned off, or is put in reset.
There is no magic.
There are a number of good open cores out there that you can simulate with free tools and observe the internal signals that make the chip work; you can see the post-reset activity leading up to the first fetch and then all the execution from there.
Without knowing which microcontroller you are using, this should be general enough:
The hardware in the microcontroller resets several registers to their documented values. This includes the PC, the program counter.
If the microcontroller has configurable reset vectors, the value can be chosen from a few alternatives; other controllers always use the same value.
The code at the location the PC points to is the startup code.
Note: It's always a good idea to read the data sheet of the controller!
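As an illustration of the vector-table variant, here is a minimal sketch of a Cortex-M style table in C; the names (Reset_Handler, _estack, .isr_vector) are common conventions rather than anything from the question:
extern unsigned int _estack;    /* top-of-stack value, provided by the linker script */

/* Startup code entry point: would normally init .data/.bss and call main(). */
void Reset_Handler(void) { for (;;) { } }

/* Word 0 holds the initial stack pointer; word 1 holds the address of the first
   instruction executed after reset. The section name lets the linker script
   place the table at the address the core fetches from. */
__attribute__((section(".isr_vector")))
const void * const vector_table[] = {
    (void *)&_estack,
    (void *)Reset_Handler,
};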
I'm building firmware for a device based on the Atmel/Microchip AT SAMG55.
A simple function triggers some relays connected to GPIO pins.
Because I want to interlock different I/Os, avoiding two specific outputs being high at the same time, I need to know the pin level I set before.
In another project, based on the SAMD21, there was a function that reads the output pin state:
static inline bool port_pin_get_output_level(const uint8_t gpio_pin)
The SAMG55 port library in ASF is quite different, so I tried ioport_get_pin_level(pin), but I'm not getting the expected result. I think it works only with pins configured as inputs.
Are there any recommended solutions?
Referring to Figure 16-2 in the SAMG55 data sheet, and to sections 16.5.4 and 16.5.8:
16.5.4 Output Control
... The level driven on an I/O line can be determined by writing in the Set Output Data Register (PIO_SODR) and the Clear Output Data Register (PIO_CODR). These write operations, respectively, set and clear the Output Data Status Register (PIO_ODSR), which represents the data driven on the I/O lines. ...
16.5.8 Inputs
The level on each I/O line can be read through PIO_PDSR. This register indicates the level of the I/O lines regardless of their configuration, whether uniquely as an input, or driven by the PIO Controller, or driven by a peripheral. Reading the I/O line levels requires the clock of the PIO Controller to be enabled, otherwise PIO_PDSR reads the levels present on the I/O line at the time the clock was disabled.
So, as long as the pin is configured such that the actual level on the pin always corresponds to the level we're trying to drive - which is not the case with an open collector configuration, for example - then Tarick Welling's answer is correct: you can read the output state from the Output Data Status Register (PIO_ODSR).
However, the true state of the pin, regardless of driver configuration, can be read (subject to a resynchronisation delay that may or may not be relevant in a given application) from the Pin Data Status Register (PIO_PDSR).
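A minimal sketch of both reads, assuming the CMSIS-style definitions the SAMG55 device headers provide (the Pio type and PIOA instance; any specific pin mask such as PIO_PA8 is a placeholder for your relay pin):
#include <stdbool.h>
#include <stdint.h>
#include "sam.h"  /* assumed device header defining the Pio type and PIOA */

/* Level we are trying to drive, from the Output Data Status Register. */
static inline bool pin_driven_level(Pio *port, uint32_t mask)
{
    return (port->PIO_ODSR & mask) != 0;
}

/* Actual level present on the pin, from the Pin Data Status Register. */
static inline bool pin_actual_level(Pio *port, uint32_t mask)
{
    return (port->PIO_PDSR & mask) != 0;
}
For the interlock described in the question, pin_driven_level(PIOA, PIO_PA8) gives the commanded state, while pin_actual_level() is the safer check when the driver might not win (e.g. open drain).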
You can do some low-level programming. You use the high-level HAL functions to configure, set, and reset the pins, but before you do that you read the value of the pin directly from the port's data register. On AVR that is done by reading PORTx; on an STM32 it can be done by reading the value of GPIOx->ODR. You then need to extract the correct pin, but that is easily done.
You can also look inside the definition of port_pin_get_output_level, check how they did it, and translate that into the way this board/vendor/HAL does its addressing.
Update:
Looking inside the datasheet for the SAM G55G/J, page 340 gives us the answer we need:
The level driven on an I/O line can be determined by writing in the Set Output Data Register (PIO_SODR) and the Clear Output Data Register (PIO_CODR). These write operations, respectively, set and clear the Output Data Status Register (PIO_ODSR), which represents the data driven on the I/O lines.
So we can drive the output by writing to PIO_SODR and PIO_CODR to set and reset pins respectively, but we can also read PIO_ODSR, a register that contains the driven state of the pins.
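In register terms, a sketch (PIOA and the PIO_PA8 mask are the usual header-provided names; the pin choice is hypothetical):
PIOA->PIO_SODR = PIO_PA8;                        /* set: drive the line high */
PIOA->PIO_CODR = PIO_PA8;                        /* clear: drive the line low */
bool is_high = (PIOA->PIO_ODSR & PIO_PA8) != 0;  /* read back the driven state */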
A quick Google search turns up two options for Atmel/AVR controllers:
read back from the same location you used to set your output value (PORTx register)
This will give you the value that you have written into the register before.
read the actual value using the PINx registers
This will give you the value that you could actually measure on your device.
The difference between the two can be important: if you set a GPIO that is pulled below the logic voltage threshold (e.g. connected to GND) to HIGH, PORTx will read HIGH (the value you set) while PINx will read LOW (the actual value).
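A small self-contained sketch of that difference for avr-gcc/avr-libc (the pin choice is arbitrary):
#include <avr/io.h>

int main(void)
{
    DDRB  |= (1 << PB0);                        /* configure PB0 as an output */
    PORTB |= (1 << PB0);                        /* drive PB0 high */

    uint8_t set_value    = PORTB & (1 << PB0);  /* what we wrote: nonzero */
    uint8_t actual_value = PINB  & (1 << PB0);  /* what the pin really is; 0 if held at GND */
    (void)set_value;
    (void)actual_value;

    for (;;) { }
}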
https://www.avrfreaks.net/forum/reading-pin-set-output
I am trying to start a project in which I would create my own app on iOS using Swift to communicate with an Arduino 101 to control multiple LEDs. I have used this project as a base point.
After getting this to work with my custom app, I wanted to figure out how to make this work with multiple LEDs instead of just one. Currently, I am just performing writeCharacteristic to send a 1 or a 0 to the Arduino, depending on which button I press (ON/OFF). However, for the new project, I need to be able to select one of the lights (one of four output pins) and write a 1 or a 0 to turn it on and off. I don't know what approach I should take.
I don't need any code, just suggestions on how I can make this work through swift/Arduino code.
Thanks.
It's all explained in the manual.
https://www.arduino.cc/en/Reference/CurieBLE
Service design patterns
A characteristic value can be up to 20 bytes long. This is a key constraint in designing services. Given this limit, you should consider how best to store data about your sensors and actuators most effectively for your application. The simplest design pattern is to store one sensor or actuator value per characteristic, in ASCII encoded values.
So either create a separate BLEBoolCharacteristic instance for each LED, or combine the switch state of all LEDs in the same BLECharacteristic. For example, you could encode 8 LED states in a single byte (1 LED per bit).
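A sketch of that one-byte encoding in plain C (the function names are illustrative; the Arduino call appears only as a comment):
#include <stdint.h>

/* Pack: set or clear one LED's bit in the state byte sent over BLE. */
static uint8_t led_state_set(uint8_t states, uint8_t led, int on)
{
    return on ? (uint8_t)(states |  (1u << led))
              : (uint8_t)(states & ~(1u << led));
}

/* Unpack on the Arduino side: apply each bit to its output pin. */
static void apply_led_states(uint8_t states, const uint8_t pins[], uint8_t count)
{
    for (uint8_t i = 0; i < count; i++) {
        int level = (states >> i) & 1;
        (void)pins[i];
        (void)level;
        /* digitalWrite(pins[i], level);  -- Arduino side */
    }
}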
Do whatever you prefer. But read manuals...
Is there a way to set specific port pins without affecting other pins on the same port?
For example:
I used LATB[13:6] for a 7-segment LCD; the remaining LATB bits are used for other purposes.
Now I need to set LATB = 0x003F to display '0', but if I do this the rest of the bits are changed.
Can someone help me?
You'll have to split the operation, since you can't address bits 6 to 13 specifically in a 16-bit register. For instance, assuming LATB is a 16-bit register on which bits 6 to 13 (a range of 8 bits) map to a 7-segment display with period (making 8 segments), and we want to set those pins to 0x3f = 0b00111111, we can do:
LATB = (LATB & ~(0xff<<6)) | (0x3f<<6);
0xff is a bit mask of the bits we want to affect, representing 8 bits, which we shift into positions 6-13 using <<6.
However, this is not atomic; we are reading, masking out the bits we want to adjust, setting them to new values, and writing back the entire register, including the preserved other bits. Thus we may need, for instance, to disable interrupts around such a line.
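On the PIC24 with XC16, for example, one way is the DISI mechanism via its builtin (a sketch; 0x3FFF is the maximum DISI cycle count and comfortably covers the read-modify-write, though DISI does not mask priority 7 interrupts):
__builtin_disi(0x3FFF);                        /* disable interrupts */
LATB = (LATB & ~(0xff << 6)) | (0x3f << 6);    /* the protected read-modify-write */
__builtin_disi(0x0000);                        /* re-enable interrupts */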
For many MCUs there are particular code paths supporting modification of single bits, or dedicated set/clear logic. Those might mean that you can perform the adjustment without risking trampling another change, if you stick to plainer operations such as:
val = 0x3f;
LATB |= (val<<6); // set bits which should be set
LATB &= (val<<6) | ~(0xff<<6); // clear bits that should be clear
In this example, we're not doing the display update in one step, but each update we make is left in a form the compiler may be able to optimize to a single instruction (IOR and AND, respectively).
Some processors also have instructions to access sections of a word like this, frequently called bit-field operations. I don't think the PIC24 is among them. It does have single-bit access instructions, but they seem to either operate on the working register or require fixed bit positions, which means setting bit by bit would have to be unrolled.
C also has a concept of bit fields, which means it is possible to define a struct interpretation of the latch register that gives a name to the bits you want to affect, but it's a fairly fragile method. You're writing architecture-specific code anyway when relying on particular register names, so it is best to inspect the documentation for your compiler and platform libraries.
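For completeness, a sketch of that bit-field overlay, with the caveat above that the layout is compiler-dependent (the struct and macro names are made up for illustration):
typedef struct {
    unsigned other_low  : 6;   /* bits 0-5: other functions */
    unsigned display    : 8;   /* bits 6-13: 7-segment data */
    unsigned other_high : 2;   /* bits 14-15: other functions */
} latb_bits_t;

/* Overlay the latch register; assumes LATB is a memory-mapped 16-bit SFR
   whose address we may take. Writes are still read-modify-write underneath. */
#define LATB_VIEW (*(volatile latb_bits_t *)&LATB)

/* usage: LATB_VIEW.display = 0x3f;  -- show '0' without touching other bits */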
I am developing an application on ATMEL AT89C51 of 8051 family.
Could anyone suggest how to determine in code whether the reset was caused by a power cycle or by software?
According to the Atmel 8051 Microcontrollers Hardware Manual (PDF link), the power-off flag (POF / bit 4) in the power control register (PCON / 87h) is set by hardware when VCC rises from 0 to its nominal voltage. The power-off flag reset value will be 1 only after a power on (cold reset). A warm reset (e.g. software reset) does not affect the value of this bit.
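A sketch of the check in C, assuming a toolchain header that defines PCON (the POF mask matches bit 4 as described in the manual; the function name is illustrative):
#include <8051.h>      /* SDCC header defining PCON; Keil users would use <reg51.h> */

#define POF 0x10       /* power-off flag, PCON bit 4 */

/* Returns 1 after a power-on (cold) reset, 0 after a warm (e.g. software) reset. */
unsigned char was_power_on_reset(void)
{
    if (PCON & POF) {
        PCON &= (unsigned char)~POF;   /* clear so the next reset can be classified */
        return 1;
    }
    return 0;
}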
I've often found that different vendors implement their own registers in the SFR space that can be taken advantage of for cases such as this. For example, Silicon Labs uses a power-on reset flag (PORSF) in their reset source register (RSTSRC).
It really depends on whether you want to tie yourself to a specific 8051 variant vendor. It is best to use vendor-provided registers, but if you change vendors your code will break, or even worse, misbehave.
If you have external RAM in your system (and it is not battery-backed), you can write a sequence of bytes (like 0xAA, 0x55...) somewhere in a reserved part of the memory and check whether it is still there after start-up. If not, you had a cold start. Of course, you should modify the assembler start-up code to make sure it does not initialize this part of memory (or it would be zeroed at each start), and you should instruct your linker to exclude this memory from linkage so that it does not get used by anything else.
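A sketch of that signature check (the address and the SDCC-style __xdata qualifier are assumptions; the location must be excluded from start-up initialization and linkage as described above):
#define SIG ((__xdata volatile unsigned char *)0x7FFE)   /* hypothetical reserved spot */

/* Returns 1 on a cold start (signature missing), 0 on a warm reset. */
unsigned char was_cold_start(void)
{
    if (SIG[0] != 0xAA || SIG[1] != 0x55) {
        SIG[0] = 0xAA;   /* plant the signature for subsequent resets */
        SIG[1] = 0x55;
        return 1;
    }
    return 0;
}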
Finally, include conditional compilation in your code so that if you have an 8051 variant with such special registers, they are used; if not, fall back to plan B.
I have done that with a few bytes of internal 8051 memory (all my external RAM was battery-backed), and then I learned that not every 8051 variant has a consistent start-up policy: some initialize all their internal memory, while some initialize only the SFRs and certain other areas, leaving me a few bytes to play with for the procedure described.
I don't think there is a built-in method to determine how the reset occurred, because once reset, everything in the 8051 starts from the beginning.
One method I guess would work is:
Take a variable X; before every software-commanded reset in your code, set X=1 (indicating a software reset) and store this variable in any ROM you have interfaced externally.
On every reset, at the beginning, include a check of this variable X to see which reset occurred, and change X back to 0 for the next detection.
If you do not have an external ROM, interface a D-latch at least.
I hope this works. Do tell me if it does.