10-bit ADC value to voltage measurement - ATmega

I am trying to do ADC using an ATmega8, reading the ADC value from a potentiometer. As it is a 10-bit ADC, the highest value I can receive is 1023.
Now I want to convert this value to an actual voltage and view it on a terminal over serial. My reference voltage is 5 V.
This is what I am doing:
#define REF_ADC_Volt 5000
#define ADC_Div_Factor 1023

// Init ADC
void Init_ADC()
{
    ADMUX |= (1 << REFS0);                                // AVcc as reference voltage
    ADCSRA |= (1 << ADEN);                                // Enable ADC
    ADCSRA |= (1 << ADPS0) | (1 << ADPS1) | (1 << ADPS2); // Prescaler = 128
}

// Read ADC
uint16_t Read_ADC(uint8_t ch)
{
    ch = ch & 0x07;
    ADMUX |= ch;                      // Setting ADC channel
    ADCSRA |= (1 << ADSC);            // ADC start conversion
    while (!(ADCSRA & (1 << ADIF)));  // Wait till conversion is over
    ADCSRA |= (1 << ADIF);            // Clear ADC flag
    return ADCW;                      // Return 10-bit ADC value
}
int main(void)
{
    _delay_ms(2000);
    Init_ADC();
    USART_Init(103);
    double ADC_Val, Res_ADC_Val;
    char number_string[] = "00000";   // writable buffer (a string literal must not be modified)
    USART_Transmit_String("ACS712 Current Sensor ADC Value: \r\n");
    while (1)
    {
        ADC_Val = Read_ADC(0);
        Res_ADC_Val = ((REF_ADC_Volt / ADC_Div_Factor) * ADC_Val) / 1000;
        dtostrf(Res_ADC_Val, 1, 2, number_string);
        USART_Transmit_String(number_string);
        itoa(ADC_Val, number_string, 10);
        USART_Transmit(' ');
        USART_Transmit_String(number_string);
        USART_Transmit_String("\r\n");
        ClearBuffer(number_string);
        _delay_ms(1000);
    }
}
Now the problem is that after conversion the highest voltage I am getting is 4.09 V, at an ADC value of 1023. But it should be 5 V, right?
According to this calculation:
Res_ADC_Val = ((REF_ADC_Volt / ADC_Div_Factor) * ADC_Val)/1000;
where
REF_ADC_Volt = 5000mV
ADC_Div_Factor = 1023
ADC_Val = 1023
I am totally confused: when I use my calculator it is exactly 5 V, but I am getting 4.09. Why, and how do I solve this?
Thanks in advance.

Both REF_ADC_Volt and ADC_Div_Factor are integer literals.
Hence, the first division is performed in integer arithmetic and truncates: 5000 / 1023 produces 4.
Then you multiply the result of this division (4) by ADC_Val and divide by 1000.
This means 4 * 1023 / 1000 = 4.092.
You should promote your literals to floating point:
#define REF_ADC_Volt 5000.0
#define ADC_Div_Factor 1023.0
or re-arrange the expression so that implicit conversion to floating point happens before the division, e.g.:
Res_ADC_Val = REF_ADC_Volt * ADC_Val / ADC_Div_Factor / 1000.0;
EDIT #1:
Optimization tip
As pointed out in other answers, the implementation above is suboptimal. Optimization was not the topic of the answer, but it is always interesting to discuss such things.
Please note that the solutions proposed in the other answers are not the most efficient either.
In fact, there is no need to perform all those divisions, since they all involve constant values.
You can define one constant as your scalar and perform just one multiplication each time:
#define ADC_TO_VOLT 0.00488758553275 // (5000.0 / 1023.0) / 1000.0
Res_ADC_Val = ADC_Val * ADC_TO_VOLT;
Moreover, there may be no need to use double values. I believe single-precision values (float) would suffice, but that depends on your application and is hard to judge from your minimal example.

The previous answer is completely correct, but if code execution time and optimisation matter to you, you can do much of the calculation in integer arithmetic instead of double.
#define REF_ADC_mVolt 5000
#define ADC_Div_Factor 1023

double Res_ADC_Val;
uint16_t ADC_Val;   // note: uint16_t instead of double

ADC_Val = Read_ADC(0);
Res_ADC_Val = (((uint32_t)REF_ADC_mVolt * ADC_Val) / ADC_Div_Factor) / 1000.0;
Now there is only one slow double division. Notice the typecast to uint32_t to avoid overflow: 5000 * 1023 does not fit in a 16-bit int. The literal 1000.0 is already a double in C (a float literal would be 1000.0f), so it forces the final division to be done in double.


Why is this code not executing? (ADC battery voltage measurement)

void setup() {
    Serial.begin(9600);
    Serial.println("Setup completed.");
}

void loop() {
    // Read external battery VCC voltage
    Serial.print("Bat: ");
    uint16_t batVolts = getBatteryVolts();
    Serial.print(batVolts);
    Serial.print(" - ");
    Serial.println(getBatteryVolts2());
    delay(500);
}
// One way of getting the battery voltage without any double or float calculations
unsigned int getBatteryVolts() {
    // http://www.gammon.com.au/adc
    // Adjust this value to your board's specific internal BG voltage x1000
    const long InternalReferenceVoltage = 1100L; // <-- change this for your ATmega328P pin 21 AREF value

    // REFS1 REFS0         --> 0 1, AVcc internal ref. - Selects AVcc external reference
    // MUX3 MUX2 MUX1 MUX0 --> 1110 1.1V (VBG)         - Selects channel 14, bandgap voltage, to measure
    ADMUX = (0 << REFS1) | (1 << REFS0) | (0 << ADLAR) | (1 << MUX3) | (1 << MUX2) | (1 << MUX1) | (0 << MUX0);

    // Let mux settle a little to get a more stable A/D conversion
    delay(50);
    // Start a conversion
    ADCSRA |= _BV(ADSC);
    // Wait for conversion to complete
    while (((ADCSRA & (1 << ADSC)) != 0));
    // Scale the value - calculates for straight line value
    unsigned int results = (((InternalReferenceVoltage * 1024L) / ADC) + 5L) / 10L;
    return results;
}
// A different way using float to determine the VCC voltage
float getBatteryVolts2() {
    // You MUST measure the voltage at pin 21 (AREF) using just a simple one-line sketch consisting
    // of:  analogReference(INTERNAL);
    //      analogRead(A0);
    // Then use the measured value here.
    const float InternalReferenceVoltage = 1.1; // <- as measured (or 1v1 by default)

    // Turn ADC on
    ADCSRA = bit(ADEN);
    // Prescaler of 128
    ADCSRA |= bit(ADPS0) | bit(ADPS1) | bit(ADPS2);
    // MUX3 MUX2 MUX1 MUX0 --> 1110 1.1V (VBG) - Selects channel 14, bandgap voltage, to measure
    ADMUX = bit(REFS0);
    ADMUX |= B00000000; // I made it A0   //ADMUX = bit(REFS0) | bit(MUX3) | bit(MUX2) | bit(MUX1);

    // Let it stabilize
    delay(10);
    // Start a conversion
    bitSet(ADCSRA, ADSC);
    // Wait for the conversion to finish
    while (bit_is_set(ADCSRA, ADSC))
        ;
    // Float normally reduces precision but works OK here. Add 0.5 for rounding, not truncating.
    float results = InternalReferenceVoltage / float(ADC + 0.5) * 1024.0;
    return results;
}
I tried executing this program but it did not work. I believe there is some issue with my pin declaration or circuit; please check. I want the code to read my voltage, but it constantly reads a wrong value, and it is not even reading from A0. (Screenshots of the circuit and of the part of the code I changed were attached as images.)
Unfortunately you did not follow my advice to study the linked information at https://github.com/RalphBacon/Arduino-Battery-Monitor, especially that provided at http://www.gammon.com.au/adc
Instead you obviously messed with the first snippet you found, without understanding what it does.
Otherwise I cannot explain why you would change
ADMUX = (0 << REFS1) | (1 << REFS0) | (0 << ADLAR) | (1 << MUX3) | (1 << MUX2) | (1 << MUX1) | (0 << MUX0);
to
ADMUX = bit (REFS0) ;
ADMUX |= B00000000;
You don't want to read analog channel 0. You want to read the bandgap voltage (which is used as internal reference voltage).
There's the reference voltage, the ADC value and the measured voltage.
Usually you would use a known reference voltage and the ADC value to calculate the measured voltage.
V = ADC * Aref / 1023
But in this case you use the ADC value and the known measured voltage (the bandgap) to calculate the reference voltage, which is the voltage of your battery connected to Aref.
Aref = V_bandgap * 1023 / ADC
But in order to do that you must set the ADMUX register to measure the internal voltage reference (1.1V) using an external reference voltage.

12-bit ADC in MSP430FR2476 seems to only work in 10-bit mode

Here is the problem: I am trying to initialize the 12-bit built-in ADC on the MSP430FR2476, but no matter what I do it seems to work at 10 bits. I change the resolution bits in the control register, alas, to no avail. Nothing I have tried helps; it is always 10 bits, and the most significant byte never gets higher than 3. Please help, here is a snippet of the code:
// Configuring ADC
PMMCTL2 |= (INTREFEN);               // Internal reference, default 1.5V
ADCCTL0 = ADCCTL1 = 0;               // Ensuring that the ADC is off
ADCCTL0 |= (ADCSHT_7 + ADCMSC);      // Sample-and-hold = 64 clk, multiple conversion
ADCCTL1 |= (ADCSHP + ADCSSEL_2 + ADCCONSEQ_1 + ADCDIV_7); // Conversion is triggered
                                     // manually, ADC clock source SMCLK/8 ~ 2 MHz,
                                     // sequence of channels, single conversion
ADCCTL2 |= (ADCRES_2);               // 12-bit resolution; no matter what setting I have, no change
ADCMCTL0 |= (ADCSREF_1 + ADCINCH_1); // Employing the internal reference and starting
                                     // conversion from A1 (P1.1)
ADCIE |= ADCIE0;                     // Activate interrupt
ADCCTL0 |= (ADCON);                  // Switching ADC on
SYSCFG2 |= (BIT1);                   // Activate ADC module on the pins (this line
                                     // doesn't do anything for some reason)

void adc_convert_begin() {
    ADCCTL0 |= ADCENC;
    ADCCTL0 |= ADCSC;                // Start conversion
}

// The interrupt simply sends the most significant byte over UART
__attribute__((interrupt(ADC_VECTOR)))
void ADC_ISR(void) {
    switch (__even_in_range(ADCIV, 0x0C)) {
    case 0x0C:
        adc_data[adc_index] = ADCMEM0;
        UCA1TXBUF = (unsigned char)(ADCMEM0 >> 8);
        break;
    }
}
The error happens to be here:
ADCCTL2 |= (ADCRES_2);
The ADCRES bit field's reset value is 1 (10-bit mode), so when I perform an |= operation on the register, the final value turns out to be 3 instead of 2. I need to zero that bit field first!

How to setup a function for multiple ADC input channels on PIC18F26K22?

I'm using a PIC18F26K22 to simply read two potentiometers (connected to analog pins AN0 and AN1). Working with a single pot is easy, but more than one pot requires a bit-shifting technique which I haven't understood clearly. I looked around the internet and found an ADC_Read() function. I made some changes to the code so that I could use it for the PIC18F26K22.
The problem is that even though I use that function in main, only ADC channel AN0 works; the AN1 channel doesn't respond (i.e. it won't toggle the LEDs).
The function is unsigned int ADC_Read(unsigned char channel). In the main function, the ints 'num' and 'den' are used to read analog inputs AN0 and AN1, respectively. The only response I get is from num (AN0).
unsigned int ADC_Read(unsigned char channel)
{
    if (channel > 7)               // Channel range is 0 ~ 7
        return 0;
    ADCON0 &= 0b11000000;          // Clearing channel selection bits
    ADCON0 |= channel << 2;        // Setting channel selection bits
    ADCON2bits.ACQT = 0b001;       // 2 Tad acquisition time
    GO_nDONE = 1;                  // Initialize A/D conversion
    while (GO_nDONE);              // Waiting for conversion to complete
    return (ADRESH << 8) + ADRESL; // Return result
}
The ADON bit of the ADC is bit 0 of the ADCON0 register, so you switch off your ADC here:
ADCON0 &= 0b11000000; //Clearing channel selection bits AND ADON
Change it to:
ADCON0 &= 0b10000011; //Clearing channel selection bits
This resets only the channel bits. Now you are able to select a new channel:
ADCON0 |= channel<<2; //Setting channel selection bits

How to make the reading of analog pins more precise on Arduino?

I'm new here, so if I make any mistake, sorry. I'm working with an Arduino (Mega2560) to build an ammeter and found a little problem. The Arduino Mega measures voltages from 0 to 5 V, and the analog pins return a 10-bit value for the reading (that is, 1 LSB represents 5/2^10 ≈ 4.9 mV). In the case of an ammeter, I need to use a resistor with a small resistance so that my circuit isn't disturbed. So my plan is to read the voltage drop and, from V = R*I, calculate the current. But as the voltage drop is so small, the pin can't read any value.
E.g.: a current of 2 mA flows in the region I would like to measure. With a resistance of 0.3 ohms (the lowest value I found here), that would be: V = 2 mA * 0.3 = 0.6 mV.
As I said, the lowest possible reading step on the analog pins is about 4.9 mV.
Thus, how can I improve my reading precision? For example, instead of 1023 representing 5 V, the same value would represent around 30 or 40 mV:
0    - 0 V
1023 - 30/40 mV
You can use the 1.1 V internal voltage reference, or some more precise external one (this can be achieved with analogReference()). BTW, with such small currents it would be more convenient to use a bigger resistor.
Or forget about the limited functionality of analogRead and drive the ADC directly. For example: 2.56 V reference, differential input with 10x or 200x gain (but you'll get a range of -512 to 511 -> 2.56/512).
In the example below, voltage_meter() reads 500 samples (one per millisecond) and returns the average. I set the reference to 1.1 V for better precision.
int battery_pin = A3;

float voltage_meter()
{
    long sum = 0;        // sum of samples taken
    float voltage = 0.0; // calculated voltage

    for (int i = 0; i < 500; i++)
    {
        sum += analogRead(battery_pin);
        delayMicroseconds(1000);
    }
    // Calculate the voltage
    voltage = sum / (float)500;
    // voltage = (voltage * 5.0) / 1023.0; // for default reference voltage
    voltage = (voltage * 1.1) / 1023.0;    // for internal 1.1 V reference
    // Round value to two decimals
    voltage = roundf(voltage * 100) / 100;
    return voltage;
}
void setup()
{
    analogReference(INTERNAL); // set reference voltage to internal
    Serial.begin(9600);
}

void loop()
{
    Serial.print("Voltage Level: ");
    Serial.print(voltage_meter(), 4);
    Serial.println(" V");
    delay(1000);
}
On ATmega based boards (UNO, Nano, Mini, Mega), it takes about 100 microseconds (0.0001 s) to read an analog input, so the maximum reading rate is about 10,000 times a second.
Note: that is 100 microseconds per reading, not the 1000 microseconds used in the example above.

Measuring the period of a square wave using a microcontroller

I am new to microcontrollers. The following code measures the period of a square wave. I have marked some lines which I haven't understood. The code is as follows:
#include <avr/io.h>
#include <avr/interrupt.h>

ISR(TIMER1_CAPT_vect)
{
    int counter_value = ICR1;     // 16-bit value
    PORTB = (counter_value >> 7); // What has been done here?
    TCNT1 = 0;                    // Why this line?
}

int main(void)
{
    DDRB = 0xFF;
    TCCR1A = 0x00;
    TCCR1B = 0b11000010;
    TIMSK = 0b00100000;
    sei();
    while (1);
    cli();
}
What has actually been done in those lines?
ISR(TIMER1_CAPT_vect)
{
    int counter_value = ICR1;     // 16-bit value
    PORTB = (counter_value >> 7); // What has been done here?
PORTB is a set of 8 output lines. Presumably they are connected by a bus to some device you haven't mentioned, maybe even a set of LEDs displaying a binary number.
The result from the counter is 16 bits. To get the most significant bits, shift the result to the right to discard the less significant bits. (This operation loses precision, but you only have 8 bits of output, not 16.) As to why the shift is only 7 instead of 8, or why the unsigned value of the counter is saved as a signed int first, I don't know. I suspect it is a mistake; I would have done PORTB = (ICR1 >> 8); instead.
TCNT1 = 0; // why this line?
Since we have recorded the time of the capture and sent it out on PORTB, we now want to reset the timer for the next capture.
}
