ATMEGA32 UART Communication - serial-port

I am trying to do serial communication in ATMEGA32 and I have a question:
In asynchronous serial communication both the UBRRH and UCSRC registers share the same I/O location. I don't know under which conditions that location acts as UBRRH and under which it acts as UCSRC. I need different values in each register according to the work assigned to those registers.
In the datasheet, they mention the use of the URSEL bit for selecting between the two registers, but somehow I am not getting it.

The answer is: Yes, the URSEL bit. According to the datasheet:
When doing a write access of this I/O location, the high bit of the
value written, the USART Register Select (URSEL) bit, controls which
one of the two registers that will be written. If URSEL is zero during
a write operation, the UBRRH value will be updated. If URSEL is one,
the UCSRC setting will be updated.
This means, when you write to UCSRC, regardless of what value you want to put there, also set the URSEL bit (make sure that URSEL is 1):
UCSRC = (1<<URSEL)| ... whatever else ...
When you write to UBRRH, make sure that the URSEL bit is zero. Here are some different ways of doing that:
UBRRH = (0<<URSEL)| ... whatever else ... // just showing that URSEL isn't set
UBRRH = ...some value... // simply not setting URSEL
UBRRH = (someValue) & ~(1<<URSEL); // ensuring that URSEL isn't set
The URSEL bit is just the high bit. So whatever value you write to UCSRC, set (turn on, make 1) the high bit (bit 7). And when writing to UBRRH, make sure that bit 7 is cleared. Another way of thinking about it: every value you write to UBRRH must be under 128, and to every value that you want to write to UCSRC, add 128; this turns on bit 7. That is just by way of explanation; the code above is clearer.
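Putting those rules together, here is a minimal initialization sketch for the ATmega32 (assuming avr-gcc/avr-libc register names; my_ubrr is a hypothetical baud-rate divisor computed elsewhere). The high byte of any valid 12-bit UBRR value leaves bit 7 clear, so the UBRRH write naturally has URSEL = 0:

#include <avr/io.h>

void uart_init(unsigned int my_ubrr)   // my_ubrr: baud-rate divisor for UBRR
{
    UBRRH = (unsigned char)(my_ubrr >> 8);  // bit 7 clear -> this write goes to UBRRH
    UBRRL = (unsigned char)my_ubrr;
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);  // URSEL set -> this write goes to UCSRC (8N1)
    UCSRB = (1 << RXEN) | (1 << TXEN);      // enable receiver and transmitter
}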
How is this done? I don't know, I am not a uC designer. What seems likely is that the same I/O location is mapped to two different registers inside the processor. Say you have a register named foo: when you write a value to it, the uC checks whether the high bit is set. If it is, it writes the value to internal register 1, and if it isn't, it writes the value to internal register 2.
If you are using the URSEL bit correctly, then the values are being written correctly. Your testing is not showing the correct values because you are not reading them properly. Page 162 of the datasheet:
Doing a read access to the UBRRH or the UCSRC Register is a more
complex operation. However, in most applications, it is rarely
necessary to read any of these registers.
The read access is controlled by a timed sequence. Reading the I/O
location once returns the UBRRH Register contents. If the register
location was read in previous system clock cycle, reading the register
in the current clock cycle will return the UCSRC contents. Note that
the timed sequence for reading the UCSRC is an atomic operation.
Interrupts must therefore be controlled (for example by disabling
interrupts globally) during the read operation.
So when you read UBRRH/UCSRC for the first time you get UBRRH. If you immediately read again, you get UCSRC. But as the documentation suggests, there is rarely a real reason to read these registers. It seems that you do not trust the datasheet, but that is a mistake: the datasheet is the best source of information about such matters; without datasheets we would be nowhere.
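If you do need to read UCSRC, a C sketch of that timed sequence, modeled on the datasheet example (assuming avr-gcc; usart_read_ucsrc is just an illustrative name), could look like this:

#include <avr/io.h>
#include <avr/interrupt.h>

unsigned char usart_read_ucsrc(void)
{
    unsigned char sreg, ucsrc;
    sreg = SREG;      // remember the global interrupt state
    cli();            // the two reads must not be separated by an interrupt
    ucsrc = UBRRH;    // first read returns UBRRH (value discarded)
    ucsrc = UCSRC;    // immediate second read returns UCSRC
    SREG = sreg;      // restore the interrupt state
    return ucsrc;
}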

Related

Setting initial values for pitch and roll for MPU6050 DMP processing

I want to handle the case when my MPU6050 is disconnected and reset. Unfortunately, after reinitialization the MPU6050 reports pitch and roll values of 0 and only stabilizes at the right values after ~1-2 s. I would like to hint the DMP by writing the last values read before the reset. Is there any interface for that?
Btw, no matter whether I configure the LPF with a value of 5 or 188, the 'issue' still exists.
Solution
Try logging the 16 bytes of memory beginning at D_0_192 (defined in inv_mpu_dmp_motion_driver.c of motion_driver_6.12).
unsigned char buf[16];   // 16 bytes of DMP memory (mpu_read_mem takes an unsigned char buffer)
mpu_read_mem(D_0_192, 16, buf);
// your chosen method of logging this buffer
Mine looked like this shortly after mpu_set_dmp_state(1):
3fffdfeb 003eb3b6 000d2278 00002f3c
and like this after stabilizing for 15 seconds:
1e246556 386e559d 01b407b2 004d6ad9
If my MPU6050-using device starts upside-down, it takes about this long for the readings to stabilize.
After the readings have stabilized, record the value as a constant and write it back to the same location when DMP setup happens:
mpu_write_mem(D_0_192, 16, buf);
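If it helps, the write-back can be wrapped in a small helper (a sketch only; dmp_restore_attitude is a made-up name, and the 16 recorded bytes are whatever you logged above, stored as a constant in your firmware). Call it during DMP setup, around the same point where the logging above was done (after mpu_set_dmp_state(1)):

// recorded: pointer to the 16-byte snapshot captured earlier with mpu_read_mem()
int dmp_restore_attitude(const unsigned char *recorded)
{
    // seed the DMP's attitude state with the previously recorded contents
    return mpu_write_mem(D_0_192, 16, (unsigned char *)recorded);
}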
Method
I logged the contents of some of the DMP registers that are mentioned but unreferenced/undocumented in inv_mpu_dmp_motion_driver.c, figuring that they probably expose this part of the DMP state somewhere. I did this after a call to mpu_set_dmp_state(1).
I found a number of values that change while the DMP is running. D_0_192, which I believe to be the internal state corresponding to attitude, was identifiable by its sluggish relaxation time. I haven't taken the time to interpret the contents; I have only copied the buffer and observed the intended result -- that the contents of the DMP FIFO begin at the recorded attitude.
Disclaimer
This is undocumented, potentially very wrong, and certainly brittle (untested on anything other than the DMP firmware blob included in motion_driver_6.12). Use at your own risk.

Reading output pin level on SAMDG55

I'm building a firmware for a device based on Atmel/Microchip AT SAMG55.
In a simple function, I trigger some relays connected to GPIO pins.
Because I want to interlock different I/Os, avoiding two specific outputs being high at the same time, I need to know the pin level I set before.
In another project, based on the SAMD21, there was a function that reads the output pin state:
static inline bool port_pin_get_output_level(const uint8_t gpio_pin)
The SAMG55 port library in ASF is quite different, so I tried ioport_get_pin_level(pin), but I'm not getting the expected result. I think it works only with pins configured as inputs.
Are there any recommended solutions?
Referring to Figure 16-2 in the SAMG55 data sheet, and to sections 16.5.4 and 16.5.8:
16.5.4 Output Control
... The level driven on an I/O line can be determined by writing in the Set Output Data Register (PIO_SODR) and
the Clear Output Data Register (PIO_CODR). These write operations,
respectively, set and clear the Output Data Status Register
(PIO_ODSR), which represents the data driven on the I/O lines. ...
16.5.8 Inputs
The level on each I/O line can be read through PIO_PDSR. This register indicates the level of the I/O lines regardless of their
configuration, whether uniquely as an input, or driven by the PIO
Controller, or driven by a peripheral. Reading the I/O line levels
requires the clock of the PIO Controller to be enabled, otherwise
PIO_PDSR reads the levels present on the I/O line at the time the
clock was disabled.
So, as long as the pin is configured such that the actual level on the pin always corresponds to the level we're trying to drive - which is not the case with an open collector configuration, for example - then Tarick Welling's answer is correct: you can read the output state from the Output Data Status Register (PIO_ODSR).
However the true state of the pin, regardless of driver configuration, can be read (subject to a resynchronisation delay that may or may not be relevant in any given application) from the Pin Data Status Register (PIO_PDSR).
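As a sketch (assuming the Atmel CMSIS register definitions that ASF pulls in, i.e. the PIOA instance and the PIO_ODSR / PIO_PDSR members; the helper names are made up):

#include <stdbool.h>
#include <stdint.h>
#include "sam.h"   // or the ASF header that brings in the PIO register definitions

static inline bool pioa_get_output_level(uint32_t pin_mask)
{
    return (PIOA->PIO_ODSR & pin_mask) != 0;   // the level we are driving
}

static inline bool pioa_get_pin_level(uint32_t pin_mask)
{
    // the level actually on the pin (requires the PIO clock to be enabled)
    return (PIOA->PIO_PDSR & pin_mask) != 0;
}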
You can do some low-level programming. You use the high-level HAL functions to configure, set and reset the pins, but before you do that you would read the value of the pin by addressing the data register directly. In AVR that would be done by reading PORTx. On an STM32 this can be done by reading the value of GPIOx->ODR. You would of course then need to extract the correct pin, but this can be done.
You can also look inside the definition of port_pin_get_output_level and check how they did it and convert that into the way this board/vendor/HAL does its addressing.
Update:
Looking inside the datasheet for the SAM G55G/J, page 340 gives us the answer we need:
The level driven on an I/O line can be determined by writing in the Set Output Data Register (PIO_SODR) and the
Clear Output Data Register (PIO_CODR). These write operations, respectively, set and clear the Output Data
Status Register (PIO_ODSR), which represents the data driven on the I/O lines.
So we can drive the output by writing to PIO_SODR and PIO_CODR to set and clear the pins respectively, but we can also read from PIO_ODSR, a register which contains the state driven on the pins.
A quick google search turns up two options for Atmel/AVR controllers:
1. Read back from the same location you used to set your output value (the PORTx register). This will give you the value that you have written into the register before.
2. Read the actual value using the PINx register. This will give you the level that you could actually measure on your device.
The difference between the two can be important: if you set a GPIO that is pulled down below the logic voltage threshold (i.e. if connected to GND) to HIGH, PORTx will read HIGH (the value you set) while PINx will read LOW (the actual value).
https://www.avrfreaks.net/forum/reading-pin-set-output
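In AVR C that difference is just two reads (a sketch, assuming avr-gcc and pin PB0 as an arbitrary example):

#include <avr/io.h>

void check_pb0(void)
{
    uint8_t driven = (PORTB >> PB0) & 1;  // value written to the port register (intended level)
    uint8_t actual = (PINB  >> PB0) & 1;  // level actually present on the physical pin
    // if driven == 1 and actual == 0, something external is holding the pin low
    (void)driven; (void)actual;
}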

PIC24F - Set LATx specific pins without affecting the other pins

Is there a way to set specific port pins without affecting other pins on the same port?
For example:
I used LATB[13:6] for a 7-segment LCD; the rest of the LATB bits are used for other purposes.
Now I need to set LATB = 0x003F to display '0', but if I do this the rest of the bits are changed as well.
Can someone help me?
You'll have to split the operation, since you can't address bits 6 to 13 specifically in a 16-bit register. For instance, assuming LATB is a 16-bit register on which bits 6 to 13 (a range of 8 bits) map to a 7-segment display with period (making 8 segments), and we want to set those pins in particular to 0x3f = 0b00111111, we can do:
LATB = (LATB & ~(0xff<<6)) | (0x3f<<6);
0xff is a bit mask of which bits we want to affect, representing 8 bits, which we shift into position 6-13 using <<6.
However, this is not atomic; we are reading, masking out the bits we want to adjust, setting them to new values, and writing back the entire register including the preserved other bits. Thus we may need for instance to disable interrupts around such a line.
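On PIC24 with XC16, one common way to guard such a read-modify-write is the DISI mechanism (a sketch; __builtin_disi and the cycle count shown are the usual XC16 idiom, but check your compiler documentation):

__builtin_disi(0x3FFF);                       // hold off priority 1-6 interrupts
LATB = (LATB & ~(0xff << 6)) | (0x3f << 6);   // update only the display field
__builtin_disi(0x0000);                       // allow interrupts again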
For many MCUs there are particular code paths supporting modification of single bits, or dedicated logic for clear/set. Those might mean that you could perform the adjustment without risking trampling another change if you stick to plainer operations, such as:
val = 0x3f;
LATB |= (val<<6); // set bits which should be set
LATB &= (val<<6) | ~(0xff<<6); // clear bits that should be clear
In this example, we're not doing the display update in one step, but each update we are making is left in a form the compiler might be able to optimize to a single instruction (IOR and AND, respectively).
Some processors also have instructions to access sections of a word like this, frequently named bit-field operations. I don't think PIC24 is among those. It does have single-bit access instructions, but they seem to either operate on the working register or require fixed bit positions, which means setting bit by bit would have to be unrolled.
C also has a concept of bit fields, which means it is possible to define a struct interpretation of the latch register that gives a name to the bits you want to affect, but it's a fairly fragile method. You're writing architecture-specific code anyway when relying on the particular register names, so it is likely best to inspect the documentation for your compiler and platform libraries.
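For illustration, such a struct overlay might look like the sketch below (assuming XC16 and that the compiler allocates bit-fields from the least significant bit up, which is exactly the fragility being warned about; the type and macro names are made up):

#include <xc.h>

typedef union {
    struct {
        unsigned low      : 6;   /* LATB bits 0..5: other functions    */
        unsigned segments : 8;   /* LATB bits 6..13: 7-segment display */
        unsigned high     : 2;   /* LATB bits 14..15: other functions  */
    } f;
    unsigned whole;
} latb_bits_t;

#define LATB_BITS (*(volatile latb_bits_t *)&LATB)

/* display '0' without explicitly touching the other pins
   (still a read-modify-write of LATB underneath):          */
/* LATB_BITS.f.segments = 0x3F; */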

How do interrupts work and what is the function of vectors in the MSP430?

Can someone explain to me how to write ISRs and how to set their priority when there are many in one program?
What is the function of vectors, and is it necessary to consider them while handling interrupts?
If possible, please provide some examples as well (C code).
Just like when a doorbell or phone rings at your home: you stop what you are doing, deal with the interruption, then, ideally, return to what you were doing.
Same with a processor (MSP430 or otherwise). There are various ways to interrupt the processor for various reasons: I have a new byte in the UART for you, a timer has timed out, a GPIO pin has changed state, etc. Things that you have configured to be something that interrupts the processor when they happen.
Just like with the doorbell, the hardware has to have a way to stop, save something to remember what it was doing, find out what the interrupt is and handle it, then go back to what it was doing. Processors often quite literally interrupt between instructions: they finish the current instruction (with pipelines, "current" is a bit fuzzy). Then, based on the interrupt and the design of the processor, there is some place that the hardware and software agree upon (the hardware dictates it and the programmers use it) through which the software can tell the processor where the code is that handles all interrupts, or that particular flavor of interrupt, depending on how the processor is designed. A common solution is an interrupt vector table: a list of addresses, usually set by the programmer, that point to the code handling each of those events or interrupts. Both the programmer and the hardware know that a particular interrupt will cause a particular address in the memory space to be read, and the hardware assumes the value at that address is the entry point of the code for that interrupt.
So the processor gets an interrupt. It saves the state of the machine, which at a minimum is the program counter; depending on the design it may also save the status register and GPRs, but often the programmer is responsible for saving GPRs and such as needed. The hardware then, based on the interrupt/event, reads from an address; usually that address contains the address of a handler. For example, 0xFFF8 might hold the address of the interrupt handler (I don't know, I didn't look it up for the MSP430), so 0xFFF8 is not where the code is, but the number stored at that address is where the code is, maybe 0xD008 for example. It depends on the processor architecture, but when you finish handling the interrupt you need to tell the processor so it can return to what was interrupted; often that is a special return-from-interrupt instruction, but different processors have different solutions.
Priority, if any, is dictated by the hardware design. Something as simple as an MSP430 might not have a priority scheme (not sure offhand) other than whoever gets here first, and the scheme might be that before you exit the handler you check whether any other interrupts have come in while you were handling the one that interrupted you. If there is a priority scheme in the design, then it simply repeats the process: save the state (of the interrupt or foreground code that was interrupted) and find the entry point for the handler, usually via a vector table. When the highest-priority handler finishes, it returns and control goes back to the next-higher-priority thing, and eventually back to the foreground task (assuming nothing else comes along).
In general an ISR needs to not destroy anything the foreground task was using: preserve the state of the GPRs if needed, preserve the state of the status register, don't mess up the stack or memory used by the foreground task, etc. And ideally keep the ISR lean and mean; don't waste a lot of time there. The vector table is just where you fill in the addresses of the entry points into the code: reset handler, interrupt handlers, and so on.
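As a concrete illustration of "filling in the vector table", here is a sketch of a Port 1 handler with msp430-gcc, where the interrupt attribute is what places the handler's address into the right vector slot (the handler name is arbitrary):

#include <msp430.h>

void __attribute__((interrupt(PORT1_VECTOR))) port1_isr(void)
{
    P1IFG &= ~BIT4;   // acknowledge the source so the interrupt doesn't immediately refire
    P1OUT ^= BIT0;    // keep the handler short: just toggle a pin
}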
An interrupt handler (also known as an interrupt service routine or ISR) is a piece of code that runs when an event (I/O) occurs that requires CPU attention. An interrupt event is typically asynchronous, hence the reason a handler must be registered for the event.
For example, in the case of Serial communication, data is received by the USCI peripheral (configured for UART) that needs to be processed. In this case, an interrupt will be issued by the USCI peripheral and the CPU will begin executing from the interrupt handler (addressed by the interrupt vector). Vectors are at fixed locations and are outlined in the datasheet of your device. When the end of the interrupt handler is reached, the CPU will go back to where it left off (or service another interrupt). A datasheet/user's guide will explain the default priorities of interrupts.
A typical interrupt handler using the IAR Embedded Workbench IDE will look like the following:
// Port 1 interrupt service routine
#pragma vector=PORT1_VECTOR
__interrupt void Port_1(void)
{
    P1OUT ^= 0x01;    // toggle P1.0
    P1IFG &= ~0x10;   // clear the P1.4 interrupt flag
}

How to know whether a power-on reset or a software reset has occurred in an 8051 microcontroller

I am developing an application on ATMEL AT89C51 of 8051 family.
Could anyone suggest how to determine in code whether the reset was caused by a power cycle or by software?
According to the Atmel 8051 Microcontrollers Hardware Manual (PDF link), the power-off flag (POF / bit 4) in the power control register (PCON / 87h) is set by hardware when VCC rises from 0 to its nominal voltage. The power-off flag reset value will be 1 only after a power on (cold reset). A warm reset (e.g. software reset) does not affect the value of this bit.
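A sketch of how that check might look in Keil C51 style code (reset_was_power_on is a made-up helper; PCON comes from the standard 8051 header and POF is bit 4 as in the manual quoted above):

#include <reg51.h>

#define POF_MASK 0x10                      /* PCON.4, the power-off flag */

unsigned char reset_was_power_on(void)
{
    if (PCON & POF_MASK) {                 /* set by hardware when VCC rises from 0 */
        PCON &= (unsigned char)~POF_MASK;  /* clear it so the next warm reset reads 0 */
        return 1;                          /* cold (power-on) reset */
    }
    return 0;                              /* warm reset, e.g. a software reset */
}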
I've often found that different vendors implement their own registers in the SFR space that can be taken advantage of for cases such as this. For example, Silicon Labs uses a power-on reset flag (PORSF) in their reset source register (RSTSRC).
It really depends on whether you want to tie yourself to a specific 8051 variant vendor. It is best to use vendor-provided registers, but if you change vendor your code will break, or even worse, misbehave.
If you have external RAM in your system (and it is not battery powered), then you can write a sequence of bytes (like 0xAA, 0x55...) somewhere in a reserved part of the memory and check whether it is still there after start-up. If not, you have had a cold start. Of course, you should modify the assembler start-up code to make sure it does not initialize this part of memory (or it would be zero at each start), and you should instruct your linker to exclude this memory from linkage so that it does not get used by anything else.
Finally, include conditional compilation in your code so that if you have an 8051 variant with special registers, they are used; if not, fall back to plan B.
I have done this with a few bytes of internal 8051 memory (all my external RAM was battery powered), and then I learned that not every 8051 variant has a consistent policy at start-up: some initialize all of their internal memory, some initialize only the SFRs and some other specific areas, leaving me a few bytes to play with for the procedure described.
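A sketch of the RAM-signature approach in Keil C51 syntax (the xdata address, pattern, and helper name are all made up for illustration; the reserved area must be excluded from start-up initialization and from the linker as described above):

#define SIG_ADDR 0x7FFC                          /* hypothetical reserved XDATA address */
static unsigned char xdata * const sig = (unsigned char xdata *)SIG_ADDR;
static const unsigned char pattern[4] = { 0xAA, 0x55, 0xAA, 0x55 };

unsigned char cold_start_detected(void)
{
    unsigned char i, cold = 0;
    for (i = 0; i < 4; i++)
        if (sig[i] != pattern[i])
            cold = 1;                            /* pattern gone: power was lost */
    for (i = 0; i < 4; i++)
        sig[i] = pattern[i];                     /* re-arm for the next reset */
    return cold;
}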
I don't think there is a method to determine how the reset occurred, because once reset, everything in the 8051 starts from the beginning.
One method I guess would work is:
Take a variable X. Before every software-triggered reset in your code, set X = 1 (indicating a software reset) and store this variable in external ROM/EEPROM if you have interfaced one.
On every reset, check this variable X at the beginning to see which kind of reset occurred, and then change X back to 0 for the next detection.
If you do not have external ROM, interface at least a D-latch.
I hope this works. Do tell me if it does.
