Reading output pin level on SAMG55 microcontroller

I'm building firmware for a device based on the Atmel/Microchip SAMG55.
In a simple function, I trigger some relays connected to GPIO pins.
Because I want to interlock different I/Os so that two specific outputs are never driven high at the same time, I need to know the pin level I set before.
In another project, based on the SAMD21, there was a function that reads the output pin state:
static inline bool port_pin_get_output_level(const uint8_t gpio_pin)
The SAMG55 port library in ASF is quite different, so I tried ioport_get_pin_level(pin), but I'm not getting the expected result. I think it only works with pins configured as inputs.
Are there any recommended solutions?

Referring to Figure 16-2 in the SAMG55 data sheet, and to sections 16.5.4 and 16.5.8:
16.5.4 Output Control
... The level driven on an I/O line can be determined by writing in the Set Output Data Register (PIO_SODR) and
the Clear Output Data Register (PIO_CODR). These write operations,
respectively, set and clear the Output Data Status Register
(PIO_ODSR), which represents the data driven on the I/O lines. ...
16.5.8 Inputs
The level on each I/O line can be read through PIO_PDSR. This register indicates the level of the I/O lines regardless of their
configuration, whether uniquely as an input, or driven by the PIO
Controller, or driven by a peripheral. Reading the I/O line levels
requires the clock of the PIO Controller to be enabled, otherwise
PIO_PDSR reads the levels present on the I/O line at the time the
clock was disabled.
So, as long as the pin is configured such that the actual level on the pin always corresponds to the level we're trying to drive - which is not the case with an open collector configuration, for example - then Tarick Welling's answer is correct: you can read the output state from the Output Data Status Register (PIO_ODSR).
However the true state of the pin, regardless of driver configuration, can be read (subject to a resynchronisation delay that may or may not be relevant in any given application) from the Pin Data Status Register (PIO_PDSR).
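To make that concrete, here is a minimal sketch of reading both registers directly, assuming the Atmel CMSIS device header for the SAMG55 (which provides PIOA and the PIO_PAx masks) and a purely illustrative relay output on PA8:

#include "samg55.h"    /* CMSIS device header (assumption) */
#include <stdbool.h>

#define RELAY_A_MASK PIO_PA8                           /* hypothetical relay pin */

static bool relay_a_driven_high(void)
{
    return (PIOA->PIO_ODSR & RELAY_A_MASK) != 0;       /* level we are driving */
}

static bool relay_a_pin_high(void)
{
    return (PIOA->PIO_PDSR & RELAY_A_MASK) != 0;       /* level actually on the pin */
}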

You can do some low-level programming. You use the high-level HAL functions to configure, set and reset the pins, but before you do that you can read the value of the pin by addressing the data register directly. On an AVR that would be done by reading PORTx. On an STM32 this can be done by reading the value of GPIOx->ODR. You would of course then need to extract the correct pin, but this can be done.
You can also look inside the definition of port_pin_get_output_level, check how they did it, and convert that into the way this board/vendor/HAL does its addressing.
Update:
Looking inside the datasheet for the SAM G55G/J, page 340 gives us the answer we need.
The level driven on an I/O line can be determined by writing in the Set Output Data Register (PIO_SODR) and the
Clear Output Data Register (PIO_CODR). These write operations, respectively, set and clear the Output Data
Status Register (PIO_ODSR), which represents the data driven on the I/O lines.
So we can drive the output by writing to PIO_SODR and PIO_CODR to set and reset the pins respectively. We can also read PIO_ODSR, a register which holds the state being driven on the pins.
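As a hedged sketch of that register-level approach (again assuming the CMSIS device header and hypothetical pins PA8/PA9 for two interlocked relays), the write/read-back pattern for the interlock mentioned in the question could look like this:

#include "samg55.h"    /* CMSIS device header (assumption) */

#define RELAY_A PIO_PA8     /* hypothetical pin assignments */
#define RELAY_B PIO_PA9

/* Only energize relay B if relay A is not currently being driven high. */
static int relay_b_on(void)
{
    if (PIOA->PIO_ODSR & RELAY_A)   /* read back what we are driving on A */
        return -1;                  /* refuse: interlock would be violated */
    PIOA->PIO_SODR = RELAY_B;       /* drive B high */
    return 0;
}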

A quick Google search turns up two options for Atmel/AVR controllers:
1. Read back from the same location you used to set your output value (the PORTx register). This will give you the value that you wrote into the register before.
2. Read the actual value using the PINx register. This will give you the value that you could actually measure on your device.
The difference between the two can be important: if you set a GPIO that is pulled down below the logic voltage threshold (i.e. if connected to GND) to HIGH, PORTx will read HIGH (the value you set) while PINx will read LOW (the actual value).
https://www.avrfreaks.net/forum/reading-pin-set-output
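For concreteness, a small sketch of the two read-back options on an AVR (PB0 is just an example pin):

#include <avr/io.h>

int main(void)
{
    DDRB  |= (1 << PB0);                   /* PB0 as output */
    PORTB |= (1 << PB0);                   /* drive it high */

    uint8_t driven = (PORTB >> PB0) & 1;   /* value written to the output latch */
    uint8_t actual = (PINB  >> PB0) & 1;   /* level actually on the pin */

    /* If PB0 is externally forced to GND, driven reads 1 while actual reads 0. */
    (void)driven;
    (void)actual;

    for (;;) { }
}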

Related

Setting initial values for pitch and roll for MPU6050 DMP processing

I want to handle the case when my MPU6050 is disconnected and reset. Unfortunately, after reinit the MPU6050 shows pitch and roll values of 0 and stabilizes with the right values after ~1-2 s. I would like to hint the DMP by writing the last read values before the reset. Is there any interface for it?
Btw, no matter whether I configure the LPF with value 5 or 188, the 'issue' still exists.
Solution
Try logging the 16 bytes of memory beginning at D_0_192 (defined in inv_mpu_dmp_motion_driver.c of motion_driver_6.12).
unsigned char buf[16];
mpu_read_mem(D_0_192, 16, buf);
// your chosen method of logging this buffer
Mine looked like this shortly after mpu_set_dmp_state(1):
3fffdfeb 003eb3b6 000d2278 00002f3c
and like this after stabilizing for 15 seconds:
1e246556 386e559d 01b407b2 004d6ad9
If my MPU6050-using device starts upside-down, it takes about this long for the readings to stabilize.
After the readings have stabilized, record the value as a constant and write it back to the same location when DMP setup happens:
mpu_write_mem(D_0_192, 16, buf);
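Pulled together, a minimal sketch of that save/restore flow, assuming the motion_driver_6.12 API (note that D_0_192 is defined inside inv_mpu_dmp_motion_driver.c, so these helpers would need to live there or have the define duplicated):

static unsigned char saved_state[16];
static int have_saved_state;

void dmp_attitude_save(void)                  /* call once readings have stabilized */
{
    if (mpu_read_mem(D_0_192, 16, saved_state) == 0)
        have_saved_state = 1;
}

void dmp_attitude_restore(void)               /* call during DMP setup after a reinit */
{
    if (have_saved_state)
        mpu_write_mem(D_0_192, 16, saved_state);
}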
Method
I logged the contents of some of the DMP memory locations that are mentioned but unreferenced/undocumented in inv_mpu_dmp_motion_driver.c, figuring that they probably expose this part of the DMP state somewhere. I did this after a call to mpu_set_dmp_state(1).
I found a number of values that change while the DMP is running. D_0_192, which I believe to be the internal state corresponding to attitude, was identifiable by its sluggish relaxation time. I haven't taken the time to interpret the contents. I have only copied the buffer and observed the intended result -- that the contents of the DMP FIFO begin at the recorded attitude.
Disclaimer
This is undocumented, potentially very wrong, and certainly brittle (untested on anything other than the DMP firmware blob included in motion_driver_6.12). Use at your own risk.

Passing data on to packet for transmission using readstream

I am using the ReadStream interface to sample at 100 Hz, and I have been able to integrate the interface into the Oscilloscope application. I just have a doubt about the way I pass the buffer values on to the packet to be transmitted. Currently this is how I am doing it:
uint8_t i = 0;
event void ReadStream.bufferDone(error_t result, uint16_t* buffer, uint16_t count)
{
    if (reading < count)
        i++;
    local.readings[reading++] = buffer[i];
}
I have defined a buffer size of 50. I am not sure this is the way to do it, as I am noticing just one sample per packet even though I have set Nreadings=2.
Also, the sampling rate does not seem to be 100 samples/second when I check. I am not doing something right in the way I pass data to the packet to be transmitted.
I think I need to clarify a few things according to your questions and comments.
Reading a single sample from an accelerometer on micaZ motes works as follows:
Turn on the accelerometer.
Wait 17 milliseconds. According to the ADXL202E (the accelerometer) datasheet, startup time is 16.3 ms. This is because this particular hardware is capable of providing first reading not immediately after being powered on, but with some delay. If you decrease this delay, you will likely get a wrong reading, however, the behavior is undefined, so you may sometimes get a correct reading or the result may depend on environment conditions, such as ambient temperature. Changing this 17-ms delay to a lower value is certainly a bad idea.
Read values (in two axes) from the Analog to Digital Converter (ADC), which is an MCU component that converts the analog output voltage of the accelerometer to a digital value (an integer). The speed at which the ADC can sample is independent of the parameters of the accelerometer: it is another piece of hardware.
Turn off the accelerometer.
This is what happens when you call Read.read() in your code. You see that the maximum frequency at which you can sample is once every 17 ms, that is, about 58 samples per second. It may be even a bit lower because of some overhead from the MCU or inaccuracy of the timers. This is true when you sample by calling Read.read() in a loop or at every fixed interval, because this call itself lasts no less than 17 ms (I mean the delay between the command and the event).
What you may want to do is:
Turn on the accelerometer.
Wait 17 ms.
Perform series of reads.
Turn off the accelerometer.
If you do so, you have one 17-ms delay for a set of samples instead of such delay for each sample. What is important, these steps have nothing to do with the interface you use for performing readings. You may call Read.read() multiple times in your application, however, it cannot be the same implementation of the read command that is already implemented for this accelerometer, because the existing implementation is responsible for turning on and off the accelerometer, and it waits 17 ms before reading each sample. For convenience, you may implement the ReadStream interface instead and call it once in your application.
Moreover, you wrote that ReadStream uses a microsecond timer and is independent of the 17-ms settling time of the ADC. That sentence is completely wrong. First of all, you cannot say that an interface uses or does not use a timer. The interface is just a set of commands and events without their definitions. A particular implementation of the interface may use timers. The Read and ReadStream interfaces may be implemented multiple times on different platforms by various hardware components, such as accelerometers, thermometers, hygrometers, magnetometers, and so on. Secondly, the 17-ms settling time refers to the accelerometer, not the ADC. And no matter which interface you use, Read or ReadStream, and which timers a driver uses, milli- or microsecond, the 17-ms delay is always required after powering on the accelerometer. As I mentioned, you probably want to incur this delay once per multiple reads instead of once per single read.
It seems that the TinyOS source code already contains an implementation of the accelerometer driver providing the ReadStream interface which allows you to sample continuously. Look at the AccelXStreamC and AccelYStreamC components (in tos/sensorboards/mts300/).
The ReadStream interface consists of two commands. postBuffer(val_t *buf, uint16_t count) is called to provide a buffer for samples. In the accelerometer driver, val_t is defined as uint16_t. You may post multiple buffers, one by one. This command does not yet start sampling and filling buffers. For that purpose, there is a read(uint32_t usPeriod) command, which directs the device to start filling buffers by sampling with the specified period (in microseconds). When a buffer is full, you get an event bufferDone(error_t result, val_t *buf, uint16_t count) and a component starts filling a next buffer, if any. If there are no buffers left, you get additionally an event readDone(error_t result, uint32_t usActualPeriod), which passes to your application a parameter usActualPeriod, which indicates an actual sampling period and may be different (especially, higher) from a period you requested when calling read due to some hardware constraints.
So the solution is to use the ReadStream interface provided by AccelXStreamC and AccelYStreamC (or maybe some higher-level components that use them) and pass an expected period in microseconds to the read command. If the actual period is lower than one you expect, this means that sampling at higher rate is impossible either due to hardware constraints or because it was not implemented in the ADC driver. In the second case, you may try to fix the driver, although it requires good knowledge of low-level programming. The ADC driver source code for this platform is located in tos/chips/atm128/adc.

ATMEGA32 UART Communication

I am trying to do serial communication on an ATMEGA32 and I have a question:
In asynchronous serial communication, both the UBRRH and UCSRC registers share the same I/O location. I don't know under which conditions that location will act as UBRRH and under which it will act as UCSRC. I need different values for each register according to the work assigned to those registers.
In the datasheet, they have mentioned the use of the URSEL bit for selecting between the two registers, but somehow I am not getting it.
The answer is: Yes, the URSEL bit. According to the datasheet:
When doing a write access of this I/O location, the high bit of the
value written, the USART Register Select (URSEL) bit, controls which
one of the two registers that will be written. If URSEL is zero during
a write operation, the UBRRH value will be updated. If URSEL is one,
the UCSRC setting will be updated.
This means, when you write to UCSRC, regardless of what value you want to put there, also set the URSEL bit (make sure that URSEL is 1):
UCSRC = (1<<URSEL)| ... whatever else ...
When you write to UBRRH, make sure that the URSEL bit is zero. Here are some different ways of doing that:
UBRRH = (0<<URSEL)| ... whatever else ... // just showing that URSEL isn't set
UBRRH = ...some value... // simply not setting URSEL
UBRRH = (someValue) & ~(1<<URSEL); // ensuring that URSEL isn't set
The URSEL bit is just the high bit. So whatever value you write to UCSRC, set (turn on, make 1) the high bit (bit 7). And when writing to UBRRH, make sure that bit 7 is cleared. Another way of thinking about it: every value you write to UBRRH must be under 128, and to every value that you want to write to UCSRC, add 128, which turns on bit 7. This is just by way of explanation; the code above is clearer.
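As a hedged illustration (standard ATmega32 register and bit names from avr/io.h; the baud value and the 8N1 frame format are placeholders), a typical init then looks like this:

#include <avr/io.h>

void usart_init(uint16_t ubrr)          /* e.g. 51 for 9600 baud at 8 MHz */
{
    UBRRH = (uint8_t)(ubrr >> 8);       /* bit 7 (URSEL) is 0: this writes UBRRH */
    UBRRL = (uint8_t)ubrr;
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);   /* URSEL=1: writes UCSRC, 8N1 */
    UCSRB = (1 << RXEN) | (1 << TXEN);  /* enable receiver and transmitter */
}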
How is this done? I don't know, I am not a uC designer. What seems likely is that the same I/O location is mapped to two different registers in the processor. Say you have a register named foo; when you write a value to it, the uC checks whether the high bit is set. If it is, it writes the value to internal memory location 1, and if it isn't, it writes the value to internal memory location 2.
If you are using the URSEL bit correctly, then the values are being written correctly. Your testing is not showing the correct values because you are not reading them properly. Page 162 of the datasheet:
Doing a read access to the UBRRH or the UCSRC Register is a more
complex operation. However, in most applications, it is rarely
necessary to read any of these registers.
The read access is controlled by a timed sequence. Reading the I/O
location once returns the UBRRH Register contents. If the register
location was read in previous system clock cycle, reading the register
in the current clock cycle will return the UCSRC contents. Note that
the timed sequence for reading the UCSRC is an atomic operation.
Interrupts must therefore be controlled (for example by disabling
interrupts globally) during the read operation.
So when you read UBRRH/UCSRC for the first time you get UBRRH. If you immediately read again you get UCSRC. But as the documentation suggests, there is rarely a real reason to read these registers. It seems that you do not trust the datasheet, but this is a mistake: the datasheet is the best source of information about such matters; without datasheets we would be nowhere.
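If you really do need to read UCSRC, the datasheet's timed-sequence approach translates to something like this sketch (interrupts are held off so the two reads stay back to back):

#include <avr/io.h>
#include <avr/interrupt.h>

uint8_t usart_read_ucsrc(void)
{
    uint8_t sreg = SREG;    /* save interrupt state */
    uint8_t ucsrc;

    cli();                  /* the two reads must not be separated */
    ucsrc = UBRRH;          /* first read returns UBRRH */
    ucsrc = UCSRC;          /* immediate second read returns UCSRC */
    SREG = sreg;            /* restore interrupt state */

    return ucsrc;
}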

VHDL UCF - how to define a constraint that has no pin?

I'm working with some simple VGA driver code for use with the Xilinx Spartan 6 FPGA (via a Papilio Pro board). The code expects to have 4-bits of output per color, and so defines logic vectors for each color. However, my setup doesn't happen to provide the full 4 bits per color so I wanted to find a creative way to control this via the UCF.
The original UCF defined 4 pins for each color. In the case of blue, I only have two pins, so I chose to map the two I have to the blue MSBs, thus:
NET Blue(0) IOSTANDARD=LVTTL; # N/C
NET Blue(1) IOSTANDARD=LVTTL; # N/C
NET Blue(2) LOC="P92" | IOSTANDARD=LVTTL; # to a pin
NET Blue(3) LOC="P87" | IOSTANDARD=LVTTL; # to a pin
(I totally omitted the first two constraints at first, and it still compiled and worked but complained about the inconsistent voltage standards (the absent ones defaulted to IOSTANDARD = LVCMOS25), thus throwing "WARNING:Place:838 - An IO Bus with more than one IO standard is found.")
The main warning is the one I'd like to know how to eliminate, preferably within the UCF:
WARNING:Place:837 - Partially locked IO Bus is found.
Following components of the bus are not locked:
Comp: Blue<1>
Comp: Blue<0>
What's the right way to map a net without a programmable pin location to a default value (logic '1' or '0', or perhaps a tri-state value) within the UCF in such a way as to eliminate this "Partially locked IO Bus" sort of warning?
My goal is that, in a setup with more or fewer bits per channel being driven by pins, only the UCF should need to change (not the source code). What I did works, despite the warnings... I'd just like to do it better and properly eliminate these warnings.
You've asked for pins within the top level of your code (on your entity). The tools therefore have to provide them. Hence you have to map them (otherwise the tools will pick some random pins for you, which you usually don't want).
If those pins really have nowhere to go on the board and never will have, then remove them from the design completely (UCF and HDL).
Otherwise, you have to LOC them. You could add a PULLDOWN in the UCF to them to ensure they go to a low value.
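If you do keep the nets, a sketch of what those UCF entries might look like (P1 and P2 here are placeholders for whichever spare package pins you pick):

NET Blue(0) LOC="P1" | IOSTANDARD=LVTTL | PULLDOWN; # spare pin, pulled low
NET Blue(1) LOC="P2" | IOSTANDARD=LVTTL | PULLDOWN; # spare pin, pulled low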

How to know whether a power-on reset or a software reset has occurred in an 8051 microcontroller

I am developing an application on ATMEL AT89C51 of 8051 family.
Could anyone suggest how to determine in code whether the reset was caused by a power cycle or by software?
According to the Atmel 8051 Microcontrollers Hardware Manual (PDF link), the power-off flag (POF / bit 4) in the power control register (PCON / 87h) is set by hardware when VCC rises from 0 to its nominal voltage. The power-off flag reset value will be 1 only after a power on (cold reset). A warm reset (e.g. software reset) does not affect the value of this bit.
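A minimal sketch of using that flag, assuming a Keil C51-style toolchain where PCON is declared in reg51.h (the POF mask below is derived from bit 4 as described above):

#include <reg51.h>

#define POF 0x10                      /* PCON bit 4: power-off flag */

void identify_reset_source(void)
{
    if (PCON & POF) {
        /* Cold start: VCC rose from 0 and hardware set POF. */
        PCON &= ~POF;                 /* clear it so a later warm reset can be told apart */
        /* ... cold-boot initialization ... */
    } else {
        /* Warm reset (e.g. a software reset): POF is still clear. */
    }
}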
I've often found that different vendors implement their own registers in the SFR space that can be taken advantage of for cases such as this. For example, Silicon Labs uses a power-on reset flag (PORSF) in their reset source register (RSTSRC).
It really depends on whether you want to tie yourself to a specific 8051 variant vendor. It is best to use vendor-provided registers, but if you change vendor your code will break, or even worse, it will misbehave.
If you have external RAM in your system (and it is not battery powered), then you could write a sequence of bytes (like 0xAA, 0x55...) somewhere in a reserved part of the memory, and check whether it is still there after start-up. If not, you have had a cold start. Of course, you should modify the assembler start-up code to make sure it does not initialize this part of memory (or it would be zero at each start), and you should instruct your linker to exclude this memory from linkage so that it does not get used by anything else.
Finally, include conditional compilation in your code so that if you have some 8051 variant with special registers, they are used; if not, fall back to plan B.
I have done that with a few bytes of internal 8051 memory (all my external RAM was battery powered) and then I learned that not every 8051 variant has a consistent policy at start-up - some have all their internal memory initialized, some initialize only the SFRs and some other specific areas, leaving me a few bytes to play with for the procedure described.
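As an illustrative sketch of the external-RAM signature idea (Keil C51 syntax; the XDATA address 0x7FFC is a placeholder and must lie in a region excluded from startup clearing and from the linker's allocation):

/* Four signature bytes in external RAM at a placeholder address (Keil C51 extensions). */
unsigned char xdata boot_marker[4] _at_ 0x7FFC;

/* Returns 1 on a cold start (signature missing), 0 on a warm reset. */
unsigned char was_cold_start(void)
{
    if (boot_marker[0] == 0xAA && boot_marker[1] == 0x55 &&
        boot_marker[2] == 0xAA && boot_marker[3] == 0x55)
        return 0;                     /* signature survived: warm reset */

    boot_marker[0] = 0xAA;            /* cold start: lay down the signature */
    boot_marker[1] = 0x55;
    boot_marker[2] = 0xAA;
    boot_marker[3] = 0x55;
    return 1;
}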
I don't think there is a built-in method to determine how the reset occurred, because once reset, everything in the 8051 starts from the beginning.
One method I guess would work is:
Take a variable X; before every software reset in your code, set X = 1 (indicating a software reset) and store this variable in external non-volatile memory if you have interfaced any.
On every reset, at the beginning, include a check of this variable X to see which reset occurred, and change X back to 0 for the next detection.
If you do not have external non-volatile memory, interface a D-latch at least.
I hope this works. Do tell me if it does.
