Hooking up many inputs to Arduino [closed]

I need help setting up a system with my Arduino, hooking up ~90 inputs to it. Here is the system I am envisioning:
The Arduino is hooked up to a change/money insertion machine (like paying for a coke at a vending machine).
When the customer inserts the appropriate amount of money, they can choose which machine they want to activate (like choosing different candy bars in a vending machine). There will be ~90 choices.
I want the Arduino to take this input and be able to signal to any individual machine out of the 90 machines to activate some process in that individual machine.
How the system currently works is that each individual machine has its own money insertion mechanism that activates the process individually at each machine. I want to create a centralized payment system that knows about each individual machine.
My questions are the following:
Is it possible to hook up 90 inputs to the Arduino and then send individual signals to each of the 90 machines?
(My research has led me to shift registers, but it seems unlikely that 90 individual inputs can be connected to the Arduino this way.)
Is there a part that connects to the Arduino that can accept money as change?

There are multiplexers that you can connect your Arduino to, which increase its capacity for inputs:
Arduino Playground Multiplexer Tutorial
Sparkfun has a prebuilt shield which encapsulates the multiplexer IC, but it only gives you 48 inputs.
As for taking coins, you want a device called a coin acceptor. There are a number of suppliers, including Sparkfun.
There may be ways you can reduce the necessary inputs and/or outputs. For example, do you need 90 buttons, or could you use a 10-key pad and give each item a two-digit code?
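To make the two-digit-code idea concrete, here is a minimal sketch, assuming the common Arduino Keypad library and a 3x4 pad; the pin assignments and the code-to-machine mapping are illustrative, not taken from the question:

    #include <Keypad.h>

    const byte ROWS = 4, COLS = 3;
    char keys[ROWS][COLS] = {
      {'1','2','3'},
      {'4','5','6'},
      {'7','8','9'},
      {'*','0','#'}
    };
    byte rowPins[ROWS] = {9, 8, 7, 6};   // wiring is an assumption
    byte colPins[COLS] = {5, 4, 3};
    Keypad keypad = Keypad(makeKeymap(keys), rowPins, colPins, ROWS, COLS);

    int firstDigit = -1;                 // -1 means "waiting for the first digit"

    void setup() { Serial.begin(9600); }

    void loop() {
      char key = keypad.getKey();        // returns NO_KEY (0) when nothing is pressed
      if (key >= '0' && key <= '9') {
        if (firstDigit < 0) {
          firstDigit = key - '0';        // remember the first digit
        } else {
          int machine = firstDigit * 10 + (key - '0');  // codes 00..99
          Serial.print("Selected machine ");
          Serial.println(machine);
          firstDigit = -1;
        }
      }
    }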

Yes, it's entirely doable. A keyboard has 101 keys and, at least historically, was decoded by a chip much, much less powerful than the Arduino. Shift registers can be chained to allow large numbers of inputs and outputs, at the cost of read/write speed. There are also chips you can buy (such as the LM8330) that decode a matrix keypad for you and are accessible over I2C, which only requires two pins. And a coin slot is electrically the same as a push-button, except it only toggles when a valid coin is inserted.
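As a rough sketch of the chained-shift-register approach for the output side (assuming 74HC595-style registers with data, clock, and latch lines; the pin numbers and register count are placeholders):

    // Twelve daisy-chained 8-bit shift registers give 96 latched outputs,
    // enough for ~90 machines, using only three Arduino pins.
    const int dataPin = 2, clockPin = 3, latchPin = 4;
    const int NUM_REGS = 12;
    byte outputs[NUM_REGS];              // one bit per machine

    void writeOutputs() {
      digitalWrite(latchPin, LOW);
      for (int i = NUM_REGS - 1; i >= 0; i--) {   // furthest register first
        shiftOut(dataPin, clockPin, MSBFIRST, outputs[i]);
      }
      digitalWrite(latchPin, HIGH);      // latch all outputs at once
    }

    void activateMachine(int machine) {  // machine: 0..95
      outputs[machine / 8] |= 1 << (machine % 8);
      writeOutputs();
    }

    void setup() {
      pinMode(dataPin, OUTPUT);
      pinMode(clockPin, OUTPUT);
      pinMode(latchPin, OUTPUT);
      writeOutputs();                    // start with everything off
    }

    void loop() {}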

Related

Multiple LED control with few GPIO pins in Renesas microcontroller [closed]

I need to control 12 LEDs through 4 GPIO pins. I did some reading online and saw that the Charlieplexing method can control LEDs with fewer GPIO pins. But is it possible to make two or more LEDs glow at the same time with this method? The 12 LEDs are grouped into 4 groups (3 LEDs in a group), and 1 LED would be lit in each group at a time.
If anybody has used this method, or any other method, please provide some guidance on how to go about this implementation. I'll try it. Thanks.
You can turn on more than one LED using 4 GPIOs (leaving aside the current-limiting problems, which are a hardware matter), but not in every combination.
BUT: if you manage to turn on any single LED at a time, you can fool the user by turning on multiple LEDs, one after the other, in a quick loop. At any given moment only one LED is on, but the human eye continues to see it lit for about 1/20th of a second.
Have an interrupt run every ms (or so), and keep a 12-bit mask indicating which LEDs have to be on. The interrupt handler continuously "rotates" the GPIOs in order to touch every combination of rows and columns, and at the same time rotates a bit in an internal register (1000b, 0100b, 0010b, 0001b, then reload 1000b). When the AND of the internal register and the LED mask is not zero, the GPIO configuration is left active; otherwise the GPIOs are set to off.
Update: I forgot to mention, keeping the LEDs on for such a short time will make them less bright than normal. You can partly correct this by allowing more current (up to a safe limit) and/or by choosing brighter LEDs...
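A minimal Arduino-style sketch of that scanning idea, run from the main loop rather than a timer interrupt for simplicity; the pin numbers and the charlieplexing LED map are assumptions:

    const uint8_t pins[4] = {2, 3, 4, 5};          // the 4 GPIOs
    // anode/cathode pin index for each of the 12 charlieplexed LEDs
    const uint8_t ledMap[12][2] = {
      {0,1},{1,0},{0,2},{2,0},{0,3},{3,0},
      {1,2},{2,1},{1,3},{3,1},{2,3},{3,2}
    };
    uint16_t ledMask = 0x249;                      // one LED lit per group of 3

    void allOff() {
      for (uint8_t i = 0; i < 4; i++) pinMode(pins[i], INPUT);  // high-impedance
    }

    void lightLed(uint8_t led) {
      allOff();
      pinMode(pins[ledMap[led][0]], OUTPUT);
      digitalWrite(pins[ledMap[led][0]], HIGH);    // anode side
      pinMode(pins[ledMap[led][1]], OUTPUT);
      digitalWrite(pins[ledMap[led][1]], LOW);     // cathode side
    }

    void setup() { allOff(); }

    void loop() {                                  // only one LED is physically on at a time
      for (uint8_t led = 0; led < 12; led++) {
        if (ledMask & (1u << led)) {
          lightLed(led);
          delayMicroseconds(500);
        }
      }
      allOff();
    }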
You can accomplish this by using external shift registers with latched outputs (e.g. MC74HCT595A) and resistors. The GPIO pins can provide the data, shift clock, and latch clock. Also, the shift registers can be daisy chained so that you can go beyond 12 LEDs if you wish. Be sure that the logic family of the shift registers chosen matches that of your MCU.

What would make sense to use for measuring a 0-10V analog signal? (Arduino, ESP32, ESP8266 with ADS1115) [closed]

I'm trying to read a 0-10V analog signal from a pressure gauge (Balzers SingleGauge TPG 251) with the help of an Arduino Nano. With a voltage divider I'm able to get to a 0-5V range so as not to damage my Arduino. The sad thing is that the resolution of the integrated 10-bit ADC is pretty low (5000 mV/1023 ≈ 5 mV steps) for my purposes. The pressure gauge can measure from 10e-11 mbar to 1000 mbar and will output a value from 0-10V accordingly. Through observation I've seen the display change values in this way:
1000 to 100 in steps of 10 = 91 numbers
100 to 1 in steps of 0.1 = 991 numbers
9.9e-1 to 1e-2 = 99 numbers per decade (there are 11 decades --> 99*11 = 1089 numbers)
Total of 2171 separate analog steps (if the steps are linear).
This means that each analog step will be 4.6 mV in a 0-10V range. Since I'm using a voltage divider, the mV/step difference will be halved, reaching 2.3 mV/step. This is obviously too low to resolve, so I need to use a 12- or 16-bit ADC. I decided on a 16-bit ADC because why not have the better precision, right? I have bought the ADS1115 and I hope to be able to read the values a lot better and more precisely (max 0.15 mV). From the analog value I can then work backwards to the pressure.
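As a quick sanity check of the arithmetic (plain C++, numbers taken from above):

    #include <cstdio>

    int main() {
        int steps = 91 + 991 + 99 * 11;   // 2171 distinct readings
        double mv = 10000.0 / steps;      // mV per step over 0-10 V
        std::printf("%d steps, %.1f mV/step, %.1f mV after the divider\n",
                    steps, mv, mv / 2.0);
    }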
The real problems come in now. I want to have the setup connected to the local WiFi, where people can look at the pressure. I would also like to let them enter a threshold value and be alerted by email when it is reached.
I've seen this tutorial with an ESP32 (https://randomnerdtutorials.com/esp32-email-alert-temperature-threshold/) where it's possible. So I'll try to follow through that with an ESP8266 nodeMCU.
I know this is a lot to take in and I hope somebody has an idea how I can advance in this project. Here are some questions from my side:
Do I need both an Arduino and an ESP8266 for this project? Since the Arduino's analog range is 0-5V and the ESP's analog range is 0-1V, the Arduino can handle more. The ESP would then just have the purpose of sending the email and serving the web pages.
Is there a way to use just the ESP8266 and the ADS1115 and still read the analog values with high precision? Or is it just not possible?
Any feedback on what would make sense here? Maybe there is a way to do this that I've not yet googled.
Thank you in advance!
The ADS1115 has an I2C interface. Just hook it up to the ESP; there is no need for the Arduino Nano.
Please read the datasheet of your ADC.
http://www.esp8266learning.com/ads1115-analog-to-digital-converter-and-esp8266.php
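A minimal sketch of that setup, assuming the Adafruit ADS1X15 library; the gain setting, I2C pins, and channel are assumptions for the divided 0-5 V signal:

    // ESP8266 + ADS1115 over I2C (sketch; library and pins are assumptions)
    #include <Wire.h>
    #include <Adafruit_ADS1X15.h>

    Adafruit_ADS1115 ads;

    void setup() {
      Serial.begin(115200);
      Wire.begin();                     // default ESP8266 I2C pins (GPIO4/GPIO5)
      if (!ads.begin()) {
        Serial.println("ADS1115 not found");
        while (true) delay(10);
      }
      ads.setGain(GAIN_TWOTHIRDS);      // +/-6.144 V full scale, fits a 0-5 V input
    }

    void loop() {
      int16_t raw = ads.readADC_SingleEnded(0);  // channel A0
      Serial.println(ads.computeVolts(raw), 4);  // divided gauge voltage
      delay(500);
    }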

What happens before microcontroller startup code is executed? (Power-on/reset sequence)

What I know is as below, and correct me if I'm wrong. For an automotive bootloader based on any microcontroller, we will have:
Startup code (Flash)
Primary bootloader (Flash)
Secondary bootloader (RAM)
As far as the power-on sequence is concerned, I know that:
From the startup code (provided by the micro vendor: Freescale, ST Micro, etc.), control is transferred to the PBL (primary bootloader) using a jump or function pointer.
The PBL downloads the SBL (secondary bootloader) into RAM; the SBL contains the flash driver, capable of downloading the application.
The SBL downloads the application into the flash area.
But what happens before the startup code is executed, i.e. just after power-on?
I know that each controller has some sort of code that executes after power-on, a POST (power-on self-test), but I am still not clear on the sequence of operations from power-on until the bootloader starts executing.
It would be a great help if someone could describe the sequence of operations leading up to the startup code.
I find this not-uncommon confusion interesting.
POST is software in general, though your question is vague. Usually when someone talks about POST they are talking about their x86-based computer; that is just software, it happens well after the part you are confused about, and it is in no way required for a computer/processor to run. It has a purpose and adds value, so it is there.
Microcontrollers in general do not have primary or secondary bootloaders; they simply start running your application. Of the dozens/hundreds I have used/examined, I am trying to think of any that have a primary or secondary bootloader, and I can't think of any offhand. Certain brands in particular do have bootloaders that are usually programmed by the vendor, some that you can't change and some that you can. How you get into the bootloader varies by brand: often a strap pin, sometimes a non-volatile bit in a register.
First off, processors and the chips around them are dumb, very dumb; they only do what the humans tell them to. They are incredibly simple machines. And while an MCU and a full-blown system are, at this level, pretty much identical, MCUs are simpler and more reliable (for various reasons). The root of the answer starts with the processor, or processor core, or core, or whatever term helps you. In an MCU this is just one Lego block in the whole of the chip, not necessarily even the largest block in the chip. When you look at ARM-based chips like the STM32 and others with a Cortex-M (or older ones with an ARM7TDMI), that Lego block is IP purchased from ARM; the rest of the chip is either other purchased IP from one or more vendors or in-house logic. The SRAM certainly, and the flash probably, is IP that the chip vendor buys for the specific process at the specific foundry (just like other cell-library items: simple gates like AND, OR, and NOT, and more complicated ones).
Whatever processor core this is, it has an architecture and an instruction set. While we know some architectures are implemented using microcode, it is unlikely that MCUs are; it makes no sense there. The more CISC-like parts might be, but the ARMs and MIPS and such are definitely not. For this understanding it doesn't matter whether it is microcoded or not: there are bit patterns that drive the processor, machine code. We have all heard that chips are made of transistors, and they are. The transistors are part of the simplicity; the basic AND, OR, and NOT gates you can look up on Wikipedia, and you can (inefficiently) build the rest out of those fundamental blocks. A particular instruction tickles the logic, the transistors, in a certain way to cause a chain of events, ones and zeros in a specific sequence that does the thing you asked. Logic is not limited to implementing processor instructions; most logic is not part of the decoding and execution of a processor instruction, and most of it is made of equally dumb items. An SRAM is a lot of packed-in bits (four transistors wired up a certain way per bit) with an address and data bus; the logic of an SRAM lights up rows and columns of these bits when writing or reading. Then there is more logic in front of that SRAM that decodes the address bus, etc.
As mentioned in the other answer, when power comes up, reset is released. The flip-flop based items in the chip, which are the registers we read about in the manual plus countless others behind the scenes, are set to their reset values, which is done by the wiring of the transistors. A number of state machines start; these are similar to programs, but hardwired: wait for reset to go high; once reset goes high, if this input to the state machine is this and that input is that, move to the next state. The rules to get from one state to the next are implemented in logic. A chip with RAM and flash, for example, might do a BIST (built-in self-test) on the RAM first; likely not in an MCU, where it doesn't make sense, and this is logic, not software, doing it; this is not the POST you think of in your laptop/desktop/server. The flash or RAM or ADCs or other logic might require some number of clocks to settle before reset is released (the reset on the edge of the chip is not necessarily hardwired to all items in the chip; usually it is gated, delayed, etc.). So there is a power-on state machine that manages this; when the chip is ready, the processor itself is released, which can be a few or dozens of clock cycles later. The clock itself has to settle, and the logic is designed to wait for that.
When the processor is released from reset, it again may take some number of clocks to settle things in its design; it will have a state machine, or many, that start up its various blocks. Then, based on the architectural design of that processor, it does one of two things: it fetches its first instruction from a known address (an address within the processor's address space, which isn't necessarily the address in the chip's view), or it uses a vector-table approach: it reads a value from a known address, that value is the address of the first instruction, and it fetches that instruction. Up to the first fetch there is no software; it is all logic.
Understand that addressing within a chip or board design is not some flat universal thing; to the programmer it is, but in reality it isn't. There are many busses with addresses, and those address spaces are specific to that portion of the design; it all depends on how the chip vendor has designed the chip and defined the address space. When you see the STM32 or others with a bootloader and a strap (BOOT0/BOOT1 pins), the logic on the other end of the processor bus may see a fetch at the well-known address (meaning both the folks that implement the logic and the folks that write software for it know that this is the specific address where things start, and if you don't put stuff there it won't boot/work), but as mentioned the chip vendor can do whatever they want with that, and often does. As a programmer this can be easily understood, as logic isn't any more magical than software:
if strap == 0 return flash_bank_0[address&mask]
else return flash_bank_1[address&mask]
This applies for a certain address range that is decoded in front of this logic; both banks may also be directly addressable:
if address[24]==0 return flash_bank_0[address&mask]
else return flash_bank_1[address&mask]
And this way you can have what you see in the STM32s: both address 0x00000000 and 0x08000000 (or in other vendors' chips, 0x00000000 and 0x01000000, for example) map to the same (flash) memory.
The reason is that the Cortex-Ms are vector based: there is a table of addresses that points you at code, rather than just instructions at known addresses (as on the full-sized ARMs: ARM7, ARM9, ARM11, Cortex-A). The way you use that is to set the reset address in the table to be 0x08000000-based, so when the processor reads at 0x000000xx it is told to fetch instructions at 0x0800xxxx, and it does. When the strap is set the other way it finds a different flash, which may or may not have a fixed address space of its own; it may only be visible through the if-then-else. (This is pretty easy to see with a Cortex-M, an SWD debugger, and software.)
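To make the vector-table approach concrete, here is a minimal sketch of a Cortex-M style vector table in C/C++ (GCC syntax; the section name and the _estack symbol are assumed to come from the linker script, so this is an illustration, not a drop-in file):

    extern unsigned long _estack;            // top of stack, defined by the linker script
    void Reset_Handler(void);

    __attribute__((section(".isr_vector")))  // placed where the core reads at reset
    const void *vector_table[] = {
        &_estack,                            // word 0: initial stack pointer
        (void *)Reset_Handler,               // word 1: address of the first code to run
    };

    void Reset_Handler(void) {
        // real startup code would copy .data from flash, zero .bss, then call main()
        for (;;) { }
    }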
The STM32s will have logic such that, if the strap is set to run the user application, it will fetch (my guess) four words. If the first one, or a specific one, is all ones, or for some chips all zeros (very often flash/ROM erases to ones, because inverting the bit saves a transistor in the logic, so the stored bit is a zero but we see it as a one; this is not a hard and fast rule, just very common), the logic/state machine will, for the STM32, decide there is no user application and will load the bootloader instead. Now it is very possible the design actually always boots the bootloader and there is software there that looks at the application flash, but I think myself and others on this site decided that is not the case; none of us work there nor have visibility into the design. In either case the processor then starts executing what it finds, and it is very dumb: it is told to fetch from this address, and it does. The programmer had to make sure the right stuff is at that address, and each and every instruction has to be laid out in order, properly, like train tracks; any gaps or mistakes and the train goes off the rails. The train is stupid, it just follows the tracks. As humans we call the software POST or bootloader or application or whatever; it is just software. Once the processor is started, if some software loads and runs other software, the processor doesn't know; it is stupid, it just keeps performing the instructions it is fed as it rolls down the track.
Short answer:
Power ramps up to a chip-specified level, and after a chip-specified time reset should be released. This releases state machines that get the chip ready as needed and then release the processor. The processor, based on its design, either fetches its first instruction from a known place, or reads from a known place where a user-planted value is the address where the first instruction lives. After that, per the architecture of the chip, the execution of that first instruction, and the fetching of more based on it, continues until the part crashes or is turned off or put in reset.
There is no magic.
There are a number of good open cores out there that you can simulate with free tools and watch the internal signals that make the chip work; you can see the post-reset activity leading up to the first fetch, and then all the execution from there.
Without knowing which microcontroller you are using, this should be general enough:
The hardware in the microcontroller resets several registers to their documented values. This includes the PC, the program counter.
If the microcontroller has configurable reset vectors the value can be chosen from a few alternatives, other controllers always use the same value.
The code at the location the PC points to is the startup code.
Note: It's always a good idea to read the data sheet of the controller!

Why people wouldn't just use the maximum available clock in microcontrollers [closed]

The topic is rather straightforward, and I admit I wasn't able to find much on Google.
Recently I started to code on the STM32, and for a person coming from PC-related applications, setting up all the clocks was rather new.
I was wondering why a developer would want to discard/avoid the maximum clock, and under which conditions?
Say that a microcontroller can work at 168 MHz; why should I choose 84 MHz?
Is it mainly related to power consumption? Are there any other reasons?
Why did the STM32 team (and Microchip as well, I guess) take the hassle of building a really nice UI in STM32CubeMX to select different combinations?
Why should I use an external oscillator directly rather than the PLL path if I can achieve a higher working frequency?
Is it mainly related to power consumption?
Yes, mainly. Lower frequency means lower consumption.
One could also save power by doing the work fast and then putting the CPU to sleep, thus improving average consumption, but the power supply might not like the variable load, and exact timekeeping would be rather difficult.
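A toy illustration of that race-to-sleep idea on an AVR-based Arduino, using the avr-libc sleep API (the duty cycle and sleep mode are arbitrary choices for the sketch):

    #include <avr/sleep.h>

    void setup() { pinMode(LED_BUILTIN, OUTPUT); }

    void loop() {
      digitalWrite(LED_BUILTIN, HIGH);   // do the "work" quickly...
      delay(10);
      digitalWrite(LED_BUILTIN, LOW);
      set_sleep_mode(SLEEP_MODE_IDLE);   // ...then drop into a low-power state
      sleep_mode();                      // wakes on the next interrupt (e.g. the millis tick)
    }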
Are there any other reasons?
Yes. Some peripherals don't work above certain frequencies. An example: the STM32F429 core can run at 180 MHz, but then there is no way to generate 48 MHz for USB. In order to use USB, the core must run at 168 MHz.
Why should I use an external oscillator directly rather than the PLL path if I can achieve a higher working frequency?
An external oscillator has a much higher accuracy than an internal one, and it may take too long for a PLL to stabilize when waking up from standby. It depends on the application requirements.
Power is probably the main reason. But there may be various other reasons that a specific clock speed is used such as:
EMC emissions.
Avoiding harmonics which interfere with sensitive analogue electronics.
Driving timers / clocks / ADCs etc. that are required to run at a very specific frequency as part of the design (for example, I worked on a processor that ran at 120 MHz; however, in order to get the exact required ADC sampling rate, we had to run at something like 119.4 MHz).
You might want to use an external oscillator if you intend to use the clock elsewhere on the board. Also you might want to use a very accurate crystal, or maybe not want to wait for the PLL to lock.
There are lots of reasons. However, if you are doing something straightforward and don't care about power consumption or noise, then running at max speed with the PLL is probably the best place to start, in my opinion.
Power is the obvious one, and it has been touched on in other answers, but there is one that hasn't been addressed directly: performance. A faster clock does not mean faster code. ST has this magic cache-like thing in front of the flash, in addition to a real-ish cache (and, it appears, the ARM cache is disabled on the Cortex-M4s I have tried). But in general the flash is your bottleneck: as you see on a number of other vendors' parts, and sometimes on ST's, you have to keep adding wait states as you increase the system clock. Say at 16 MHz zero wait states, at 32 MHz one, at 48 MHz two, and so on (it depends on the system); you are dancing around the speed limit of the flash, making the processor sit around for extra clocks while it waits for instructions to come in. Even on an ST, though it is easier to see on other parts, that directly affects your performance: you may want to sit right at the frequency where you go from zero to one wait states, to maximize what you can feed the CPU.
In some designs the flash already runs at half the speed of the CPU/system clock, whereas SRAM generally tracks the CPU clock and can cover the full range. So take the same code at zero wait states and run it from flash, then run it from RAM: on some number of MCUs the same machine code runs at half speed from flash compared to SRAM. On some it is one-to-one, and then when you add a wait state flash drops to half the speed of RAM, and so on.
Peripherals have the same problem, as mentioned. You may have to use a prescaler on the peripheral clock, so reading a GPIO pin that might have taken one clock at 24 MHz may take three or four clocks at 80 MHz.
PLLs are analog and jittery; they don't necessarily "lose" any clock cycles, but they are worse as far as jitter goes than the oscillator, which itself has specs for jitter and accuracy as well as temperature effects. So the internal RC is the worst-quality clock, direct from the oscillator bypassing the PLL is the best, and multiplying with the PLL adds jitter but allows you to go faster.
Back to power. The battery in your TV remote control might last a year or so (for infrared; the Bluetooth ones last days or weeks); they run at the lowest clock rate that barely does the job and stay off, or in a low-power state, as much as possible. If they hopped up to 120 MHz when awake and the battery then lasted weeks or half a year instead of a year, for no real benefit other than that it is really cool to run at full speed, that wouldn't make much sense. We rely heavily on battery-based products now: if the microcontroller in the Bluetooth module of your phone ran at its full rated PLL speed, and the microcontroller in the Wi-Fi module, and the one that drives the display, etc. all ran at max speed, your phone might not last even a day on a full charge. Nothing would be gained by running faster, but something would be lost.
For hobbies and stuff plugged into the wall, burn whatever power you want. But a noticeable percentage of the MCU market is about price and power. Chips that are screened to a higher speed cost more; lower-speed parts, in some cases, are just the chips that failed the higher screen, and cost less. Tighter, smaller code uses less flash, so you can buy a smaller/cheaper part, and your clock can run slower because it takes fewer instructions to do the same thing than a possibly sloppy program and/or a bad choice of programming language would; a bigger part and lower yield both add to cost. Then lowering the clock as far as you can, so that your tightly written code just barely meets timing, ideally uses the least power (along with turning off, or not turning on, peripherals you are not using, and prescaling the ones that are on even slower).
For cost and power you want the slowest clock you can tolerate, with the smallest binary that is also tight and efficient, so that you just barely make timing. That is the ideal goal. But if you plan for field upgrades, you need to leave some margin so that slower/larger upgraded code does not have a dramatic effect on power consumption.

Maximum potential speed for serial port rs232 [closed]

What is the potential maximum speed of the RS-232 serial port on a modern PC? I know that the specification says it is 115200 bps, but I believe it can be faster. What influences the speed of the RS-232 port? I believe it is the quartz resonator, but I am not sure.
This goes back to the original IBM PC. The engineers that designed it needed a cheap way to generate a stable frequency, and turned to a crystal that was widely in use at the time, found in every color TV set in the USA: a crystal made to run an oscillator circuit at the color-burst frequency of the NTSC television standard, which is 315/88 = 3.579545 MHz. From there, the clock first went through a programmable divider, the one you change to set the baud rate. The UART itself then divides it by 16 to generate the sub-sampling clock for the data line.
So you get the highest baud rate by setting the divider to the smallest value, 2, which produces 3579545 / 2 / 16 = 111861 baud. That is about a 2.9% error from the ideal 115200 baud rate, but close enough; the clock rate doesn't have to be exact. That is the point of asynchronous signalling, the A in UART: the start bit re-synchronizes the receiver on every character.
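The divisor arithmetic is easy to play with in a few lines (plain C++; the clock value follows the answer above, the divisor range is illustrative):

    #include <cstdio>

    int main() {
        const double clock = 3579545.0;            // NTSC color-burst crystal, Hz
        for (int divisor = 2; divisor <= 16; divisor *= 2) {
            double baud = clock / divisor / 16.0;  // programmable divider, then /16 in the UART
            std::printf("divisor %2d -> %6.0f baud\n", divisor, baud);
        }
        double best = clock / 2 / 16.0;            // 111861 baud
        std::printf("error vs 115200: %.1f%%\n", (115200.0 - best) / 115200.0 * 100.0);
    }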
Getting real RS-232 hardware running at 115200 baud reliably is a significant challenge. The electrical standard is very sensitive to noise, there is no attempt at canceling induced noise and no attempt at creating an impedance-matched transmission line. The maximum recommended cable length at 9600 baud is only 50 feet. At 115200 only very short cables will do in practice. To go further you need a different approach, like RS-422's differential signals.
This is all ancient history and doesn't exactly apply to modern hardware anymore. True serial hardware based on a UART chip like the 16550 has been disappearing rapidly, replaced by USB emulators, which use a custom driver to emulate a serial port. They do accept a baud rate selection but just ignore it for the USB bus itself; it only applies to the last half-inch, inside the dongle you plug into the device. Whether or not the driver treats 115200 as the maximum value is a driver implementation detail; they usually accept higher values.
The maximum speed is limited by the specs of the UART hardware.
I believe the "classical" PC UART (the 16550) in modern implementations can handle at least 1.5 Mbps. If you use a USB-based serial adapter, there's no 16550 involved and the limit is instead set by the specific chip(s) used in the adapter, of course.
I regularly use a RS232 link running at 460,800 bps, with a USB-based adapter.
In response to the comment about clocking (with a caveat: I'm a software guy): asynchronous serial communication doesn't transmit the clock along with the data (that's the asynchronous part right there). Instead, transmitter and receiver are supposed to agree beforehand on which bit rate to use.
A start bit on the data line signals the start of each "character" (typically a byte, framed by start/stop/parity bits). The receiver then samples the data line to determine whether each bit is a 0 or a 1. This sampling is typically done at least 16 times faster than the actual bit rate, to make sure the reading is stable. So for a UART communicating at 460,800 bps as mentioned above, the receiver will be sampling the RX signal at around 7.4 MHz. This means that even if you clock the UART with a raw frequency f, you can't expect it to reliably receive data at that rate; there is overhead.
Yes, it is possible to run at higher speeds, but the major limitation is the environment: in a noisy environment there will be more corrupted data, limiting the usable speed. Another limitation is the length of the cable between the devices; you may need to add a repeater or some other device to strengthen the signal.
