For my recent project I need to measure two voltages with an ATmega8 and display them on a character LCD.

What’s the big deal?

When you use a microcontroller's built-in ADC (like the one in the ATmega8) to measure a voltage, you only get a digital representation of what fraction of the reference voltage is present on the analog pin, not an absolute voltage reading in volts (like the one you get from your multimeter).

There is nothing wrong with that if you only have to deal with these numbers in software where you can set thresholds or compare the magnitude of two voltages.

In this case I need to communicate with humans though, so it is desirable to display the actual voltage in volts, with digits after the decimal point. This is where we run into problems, especially on small microcontrollers: there is usually no fast way to deal with floating point numbers on an 8-bit micro. All the floating point algorithms are software implementations which take up a lot of program memory and are generally slow compared to integer math because they need many more cycles.

In my case I have found a very elegant solution to this problem: with simple arithmetic we can reduce it to displaying the decimal point after the correct digit, rather than actually calculating floating point values.

Let’s start with an overview of what’s given (i.e. what we have to work with):

- ADC resolution: 10 bits (1024 steps)
- ADC reference voltage: 2.56V

The resolution of the ADC (i.e. how many volts one ADC count corresponds to) can be calculated like so:

V_step = V_ref / 1024 = 2.56V / 1024 = 2.5mV
The actual voltage at the input of the ADC for a given ADC reading k is:

V_in = k × V_step = k × 2.56V / 1024
Now by looking at this a bit closer we can see that 2.56V becomes a power of two (256) once we multiply it by 100, thus we can rewrite the input voltage as follows:

V_in = k × 256 / 1024 × 0.01V = (k / 4) × 0.01V
Now why is this great?

Dividing by four can be done in a single instruction by right shifting the ADC reading by two bits.

Because we know that the reading divided by four is our input voltage times 100, we simply have to display the decimal point in the correct place on the display. We never had to use floating point math to arrive at an exact decimal representation of the input voltage (accuracy is still limited by the absolute accuracy and temperature stability of the internal voltage reference, of course).

Now this is only the beginning of the story.

Because I’m actually measuring voltages that are outside of the safe input voltage range of the ADC I have used voltage dividers to bring those voltages to appropriate levels.

I have to measure two voltages as follows:

- V1: 0-20V fed into an 8:1 voltage divider, ADC input is 0-2.5V
- V2: 0-3V fed into a 3:2.5 voltage divider, ADC input is 0-2.5V

### Calculating V1

We can use the very general solution to the problem above; together with our voltage divider this gives a formula for calculating V1 from the ADC reading:

V1 = 8 × V_in = 8 × (k / 4) × 0.01V

Multiplying out simply yields:

V1 = (2 × k) × 0.01V

Again we have reduced our problem to simply multiplying the ADC reading by two and making sure to display the decimal point at the correct digit.

```c
uint16_t ADC_readPSUOutV(void)
{
    uint16_t temp;
    temp = ADC_readRaw(6); // read from ADC channel 6
    temp = temp << 1;      // multiply output by 2
    return temp;
}
```

### Calculating V2

The same works for V2:

V2 = (3 / 2.5) × V_in = (3 / 2.5) × (k / 4) × 0.01V

Using simple arithmetic we find:

V2 = (3 × k) × 0.001V

This result is even nicer because we can get a reading directly in millivolts just by multiplying the ADC reading by 3.

```c
uint16_t ADC_readPSUOutI(void)
{
    uint16_t temp;
    temp = ADC_readRaw(7); // read from ADC channel 7
    temp = temp * 3;       // multiply output by 3
    return temp;
}
```

Elia