I'm not sure whether each hardware type (display screen, USB, printer, etc.) has to follow a unified standard in order to communicate with the CPU. For example, the bits transmitted back and forth between a display screen's interface and the CPU are interpreted by the CPU as a specific command, and that interpretation remains correct (for the same bits) even if a display screen from another manufacturer is used.
If this is not true, how is the BIOS supposed to communicate with hundreds of different hardware devices, each with its own way of interpreting the bits going back and forth between the device interface and the CPU?
I find the standardization notion to be much more practical.
The BIOS itself actually only needs to understand the limited set of hardware required to boot the system. It does not need to understand "hundreds" of devices. For example, the BIOS has no idea what a USB printer is.
In general, the BIOS only understands the following devices:
The CPU/Chipset "core" hardware - e.g. the DDR3 memory controller
Basic PCI/PCI Express initialization - nothing device-specific
The video controller - just enough code for basic initialization, typically provided by an Option ROM
The SATA controller - as long as it is IDE/AHCI compatible.
The USB controller - possibly just USB 2.0
Standard USB storage devices
Standard USB keyboard/mouse devices
Ethernet controller - typically provided by an Option ROM
Any other device is ignored by the BIOS, unless the vendor included an Option ROM on the board. (You typically see this on SAS/SCSI controllers or Ethernet cards.)
Note that most of the devices listed above conform to standard specifications, so they are software-compatible regardless of who made them. For example, any USB 2.0 controller that complies with the EHCI spec will be compatible across all BIOSes, and SATA controllers should likewise follow the AHCI spec.
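To make this concrete: standards compliance is visible right in PCI config space, so firmware can recognize, say, an AHCI controller with no vendor-specific knowledge at all. A minimal sketch, assuming hypothetical `outl`/`inl` port-I/O helpers (on x86 these would be inline assembly):

```c
#include <stdint.h>

/* Hypothetical port-I/O helpers; real firmware would use
 * inline assembly (outl/inl instructions on x86). */
extern void outl(uint16_t port, uint32_t value);
extern uint32_t inl(uint16_t port);

#define PCI_CONFIG_ADDRESS 0xCF8
#define PCI_CONFIG_DATA    0xCFC

/* Read a 32-bit register from PCI config space using the
 * legacy x86 configuration mechanism. */
static uint32_t pci_config_read(uint8_t bus, uint8_t dev,
                                uint8_t fn, uint8_t offset)
{
    uint32_t addr = (1u << 31) | ((uint32_t)bus << 16) |
                    ((uint32_t)dev << 11) | ((uint32_t)fn << 8) |
                    (offset & 0xFC);
    outl(PCI_CONFIG_ADDRESS, addr);
    return inl(PCI_CONFIG_DATA);
}

/* An AHCI controller advertises class 0x01 (mass storage),
 * subclass 0x06 (SATA), prog-if 0x01 (AHCI) at offset 0x08. */
static int is_ahci_controller(uint8_t bus, uint8_t dev, uint8_t fn)
{
    uint32_t class_reg = pci_config_read(bus, dev, fn, 0x08);
    return (class_reg >> 8) == 0x010601;
}
```

Any controller that reports that class code can then be driven through the registers the AHCI spec defines, no matter who manufactured it.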
Once the Operating System loads, it takes over from the BIOS and loads its own drivers to interface with the hardware.
There is a specific way (i.e. a protocol) for each piece of hardware to communicate with the CPU. We can regard it as a "device specification". To communicate with hundreds of different hardware devices, the BIOS must implement the corresponding protocols, so we can say the BIOS is actually a "collection" of specifications.
Whenever a new spec is announced, the BIOS must be modified to support it; otherwise the BIOS will not even identify the corresponding device, let alone configure it!
I am involved in a project with a kind of IoT device: an NXP processor with an LTE modem on a PCB. The software running on it connects to the modem over a single UART interface. It initializes the modem through AT commands and finally makes a data call to the provider (PPP).
Then it uses lwIP (lightweight IP) to open some MQTT subscriptions and allow user code to make HTTP GET/POST requests to our servers.
Every 15 minutes we want to retrieve the signal strength from the modem and report it back to the server. What I do now is put the modem back in command mode, retrieve the signal strength info, go back to data mode, and resume normal operation.
The round trip from data mode to command mode and back to data mode takes several seconds (4-5 ish). This is annoying, because during that time we are not responsive to commands.
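In rough C, what I do now looks like the sketch below (`uart_write`/`uart_read_line`/`delay_ms` are stand-ins for my actual UART layer); the mandatory silence guard times around the `+++` escape account for a big chunk of the delay:

```c
#include <stdio.h>
#include <stddef.h>

/* Stand-ins for the application's UART layer. */
extern void uart_write(const char *data, size_t len);
extern size_t uart_read_line(char *buf, size_t max, int timeout_ms);
extern void delay_ms(int ms);

/* Escape from PPP data mode, query signal quality, resume data mode.
 * The "+++" escape needs a guard time of silence before and after it
 * (typically ~1 s each), which is where much of the 4-5 s goes. */
static int query_rssi(int *rssi)
{
    char line[64];

    delay_ms(1100);                /* guard time before the escape */
    uart_write("+++", 3);
    delay_ms(1100);                /* guard time after the escape  */

    uart_write("AT+CSQ\r", 7);     /* standard signal-quality query */
    while (uart_read_line(line, sizeof line, 1000) > 0) {
        if (sscanf(line, "+CSQ: %d", rssi) == 1)
            break;
    }

    uart_write("ATO\r", 4);        /* resume the online data state */
    return uart_read_line(line, sizeof line, 1000) > 0;
}
```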
I've read about GSM mux 07.10. By following a defined protocol, it allows creating virtual serial ports over one physical UART. That sounds nice, although I realize this will come at the cost of some performance (bytes will be added to each frame we send in either command mode or data mode).
The GSM mux 07.10 spec dates from 1999, and I am far from an expert in mobile solutions, so I was wondering: is muxing still the way to go? How does a typical smartphone deal with this, for example? Do they include modems with more than one UART to have parallel access to AT commands and a live internet connection? Or do they in fact still rely on GSM mux?
If somebody would be so kind as to give some insights. Also, what about C libraries that implement GSM mux 07.10? TinyGSM seems to implement it (although I can't seem to find where), and I can also find the Linux kernel driver that implements GSM mux 07.10. But that driver is written on top of the tty interfaces in Linux, so I would have to reverse engineer the kernel driver, strip out the tty parts, and replace them with my own UART implementation.
First of all, the spec numbering is the old GSM specification numbering, so those old specs will never be updated; only the new specifications with the new numbering scheme will. I do not remember when the switch was made, but I do remember someone at work giving a presentation on 07.10 around 1998/1999, so the switch was probably a few years after that or around that time (and definitely before 2009).
The newer spec numbering scheme uses three digits for the first part.
So for instance the old AT command spec 07.07 is now 27.007, and the current 07.10 multiplex specification is 27.010.
The following is what I remember of 07.10.
The motivation for developing 07.10 was exactly to support the kind of scenario that you describe. Remember, back in the mid 90's, if a mobile phone had a serial interface, it was RS-232 through the manufacturer's proprietary connector at the bottom of the phone. One single serial interface.
However, in order to use the 07.10 mux over a serial link you needed to install specific serial drivers in Windows with support for 07.10 (and I think there may have been some reliability issues with them?), and for that reason 07.10 never took off and never became anything more than a rarely used solution.
Also, by the end of the 90's, additional serial interfaces like Bluetooth and IrDA became available on many phones, and later USB as well; these added additional physical interfaces and also multiplex natively within each protocol.
So the need for multiplexing over physical RS-232 became less of an issue, and whatever little popularity 07.10 ever had dwindled down to virtually nothing.
Fast forward a couple of decades, and suddenly someone asks about it on Stack Overflow. Good on you :) As far as I can tell, there are no fundamental problems with using it for the purpose you present.
Modern smartphones that support AT commands will most likely have a code base for AT command parsing with roots in the 90's, which most likely includes the AT+CMUX command. Of course, manufacturers today have zero explicit wish to support it, but since it is already present it just comes along with the collection of all the other legacy AT commands that they support.
So if the modem supports AT+CMUX, you should be good to go. I have no experience with, or recommendations for, client protocol libraries.
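If you do go this route: the mux is started with `AT+CMUX` and after that every byte on the wire is framed. As a rough sketch of what building a 27.010 basic-option UIH frame looks like (the `crc8_fcs` helper is assumed here; the FCS polynomial is defined in the spec):

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed helper implementing the CRC-8 FCS from the 27.010 spec. */
extern uint8_t crc8_fcs(const uint8_t *data, size_t len);

#define MUX_FLAG 0xF9   /* opening/closing flag of a basic-option frame */
#define MUX_UIH  0xEF   /* UIH control field, poll bit clear            */

/* Frame layout: flag | address | control | length | payload | FCS | flag.
 * For UIH frames the FCS covers only address, control and length. */
static size_t mux_build_uih(uint8_t dlci, const uint8_t *payload,
                            size_t plen, uint8_t *out)
{
    size_t i = 0;
    out[i++] = MUX_FLAG;
    out[i++] = (uint8_t)((dlci << 2) | 0x03); /* EA + C/R bits + DLCI    */
    out[i++] = MUX_UIH;
    out[i++] = (uint8_t)((plen << 1) | 0x01); /* EA bit + length (< 128) */
    for (size_t j = 0; j < plen; j++)
        out[i++] = payload[j];
    out[i++] = crc8_fcs(&out[1], 3);          /* FCS over the header     */
    out[i++] = MUX_FLAG;
    return i;
}
```

DLCI 0 is the control channel; AT commands and PPP data would each get their own DLCI, which is exactly how you keep AT+CSQ polling alive without tearing down the data call.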
I have tested my code in the Contiki OS Cooja simulator, and I now want to transfer it to an ESP8266 module, but I could not find any proper guide on how to transfer code from Contiki OS.

It's not that simple - "transferring" the code to a microcontroller is the easy part! What you call "transfer" is more widely known as "programming", "flashing" or "uploading" in this area. Instructions for how to do it with Contiki and Contiki-NG are in the tutorial here: https://github.com/contiki-ng/contiki-ng/wiki/Tutorial:-Hello,-World!#running-the-example-on-a-real-device
However, for an embedded OS such as Contiki to work correctly on a specific microcontroller, it is not enough that the microcontroller is specified in the compilation settings (so that the compiler knows what code to generate); the OS itself must also be adapted to that specific microcontroller and the specific board. Each microcontroller has its own way of providing the functionality that the OS needs, for example hardware timers and interrupts. The OS needs to support the microcontroller, that is, provide an adaptation layer between the OS core code and the API exposed by the hardware. Different boards may use the same microcontroller but differ in the pins used for I/O, the LEDs, the peripherals available, and so on. Each supported board must have a small adaptation layer in the OS as well; a sketch of what such a layer looks like follows below.
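To give a feel for what a port must supply, here is a sketch of the clock part of such an adaptation layer. Contiki's core calls `clock_init()`/`clock_time()`, and the port maps them onto a hardware timer; `hw_timer_start_periodic` is a hypothetical vendor API standing in for the real chip's timer registers:

```c
#include <stdint.h>
#include "contiki.h"
#include "sys/clock.h"
#include "sys/etimer.h"

/* Hypothetical vendor timer API standing in for real hardware. */
extern void hw_timer_start_periodic(uint32_t hz, void (*isr)(void));

static volatile clock_time_t ticks;

/* Called CLOCK_SECOND times per second from the timer interrupt. */
static void tick_isr(void)
{
  ticks++;
  if(etimer_pending()) {
    etimer_request_poll();  /* let Contiki re-evaluate its event timers */
  }
}

/* Contiki's core expects the port to provide these. */
void clock_init(void)
{
  hw_timer_start_periodic(CLOCK_SECOND, tick_isr);
}

clock_time_t clock_time(void)
{
  return ticks;
}
```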
Unfortunately, ESP microcontrollers have never been officially supported by the Contiki OS, so you will need to get different hardware to try it out!
I am just learning about embedded systems and reading about Wi-Fi modules. I see that the datasheet mentions a core processor integrated with the RF SoC. I also see another processor on the MCU, called the application processor, and I am confused about its purpose. What is it used for? Can someone please clarify? For reference, I was reading about the ATSAMW25 module.
Typically, devices that include wireless technologies (whether it's Bluetooth/BLE, Wi-Fi, LoRa, etc.) include both the hardware required to manage the wireless connectivity and separate hardware for running the higher-level application of the system. Managing the wireless protocol is frequently intensive enough that it is best done with its own small processor, running its own firmware to deal with connectivity and sending data over the link; this may include a fair amount of proprietary firmware from the vendor (i.e. Microchip in your example).
To enable programmers to write their own code for the system, these protocol processors are paired with application processors, for which the development tools and documentation are more openly available, so developers can implement whatever they want to do with the module. By separating the two roles (wireless/protocol and application), the code developers write has less chance of causing fundamental problems for the wireless connectivity (say, application code hanging causing an entire Wi-Fi network to fail), and the proprietary aspects of the system can be better protected (put another way: more documentation can be provided to developers without an NDA, since the application processor is more "open" while the details of the wireless implementation usually are not).
In the case of the module you're looking at, the wireless hardware is all inside the ATWINC1500 and is accessed via SPI and some other GPIO by the SAMD21G (the application processor). All the code you write for the module ends up running on the SAMD21G, with some library/driver support to implement the wireless functions (which, under the hood, are implemented by talking to the ATWINC1500). The ATWINC1500 simply runs the code the vendor (Microchip) wrote to do all the wireless protocol work, and it provides an interface for another processor (in this module, the SAMD21G) to control it.
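Schematically, application code on the SAMD21G drives the ATWINC1500 through Microchip's host driver along these lines (a sketch only; exact function signatures vary between driver versions, and the SSID/passphrase are placeholders):

```c
#include "driver/include/m2m_wifi.h"

/* Runs on the SAMD21G application processor. The host driver turns
 * these calls into SPI transactions; the 802.11/TCP-IP work itself
 * happens in firmware on the ATWINC1500. */
static void wifi_event_cb(uint8_t msg_type, void *msg)
{
    /* connection state changes, DHCP results, etc. arrive here */
}

int main(void)
{
    tstrWifiInitParam param = { 0 };
    param.pfAppWifiCb = wifi_event_cb;

    m2m_wifi_init(&param);                       /* boots the WINC over SPI */
    m2m_wifi_connect("MySSID", 6, M2M_WIFI_SEC_WPA_PSK,
                     "passphrase", M2M_WIFI_CH_ALL);

    for (;;) {
        m2m_wifi_handle_events(NULL);            /* pump events from the WINC */
    }
}
```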
I am currently learning about microcontrollers and processors, and I have a couple of questions about some distinctions between the two. As I understand it, an MCU contains a processor that implements a processor architecture. For example, I am using a SAML22 microcontroller that has an ARM Cortex M0 for its processor. So it would have the following:
Architecture - ARM
Processor - ARM Cortex M0
MCU - SAML22
Are the registers that I gather from the SAML22 data sheet related to the ARM Cortex M0? If so, how?
No, the microcontroller datasheet describes peripherals which are not part of the ARM core.
The SAML22 has a Cortex-M0+ core, which is described in ARM documents "Technical Reference Manual" (TRM, DDI0484) and the less detailed "Device Generic User Guide" (DGUG, DUI0662).
You are trying to overcomplicate this. An MCU has a processor; an SoC has a processor. There have been processors that you can find both on an MCU and on an SoC that is Linux-capable (not just an RTOS or uClinux). It's like having a few-horsepower motor on your lawnmower and also having the same or a similar one on your golf cart. Or like having a school made of bricks and a house made of the same style/brand of bricks. Don't get hung up on that, particularly since the rest of your question has nothing to do with the processor used in the chip at all.
Atmel wants to make an MCU, so they either create, reuse, or buy a processor. They have at least one, if not more, processors that are their own IP, but they chose to buy someone else's IP. Now they want to wrap some logic around it; they can use some of their own IP or buy some. Each major block is a new decision: do they make their own UART from scratch, reuse a UART they created years ago, or buy a UART? Do they make an ADC from scratch, reuse an ADC they made years ago, or purchase an ADC design from someone? Repeat for every major or minor block in the design. It is just like Honda making a car: which parts are they going to make themselves and which parts are they going to buy? Does that have any relevance to a design they made years ago, or to a truck-sized vehicle vs a compact car? Both have four wheels, an engine, and some seats; in some cases they may share some components, in others completely incompatible ones. But it is the same story: do we make a seat, use one we already have, or buy one? Do we make a rear-view mirror, use one from a prior design, or buy them from someone else? The rear-view mirror decision likely has nothing to do with the seat decision.
A "register" is just a term for a thing you write/store some information in. A UART has registers to make it work. A processor usually has registers to make it work. An ADC usually has registers to make it work. Consider each of these blocks as separable.
A processor core is a logic blob that is programmable in the sense that it has a set of rules, and its primary interface is a memory bus where it is the master. It expects that what it fetches are instructions, per its design, that tell it what to do; it is up to the chip vendor to wire that up to something that will feed it instructions. It may have some interrupt lines and a few other things, but its primary interface is that memory bus. The "registers" inside are part of the design, accessed by the processor internals, and not generally memory-mapped.
A UART is a logic blob that is programmable; it has some sort of memory/interface bus where it is typically a slave. It also has some other signals that go off-chip: RX, TX, RTS, CTS, DTR... The registers inside the UART are addressable through the interface bus and are used to make the UART operate. It is up to the chip vendor to connect this bus so that it fits into the address space of a bus master that can, directly or indirectly, write/read the registers in the UART to make it operate. It is programmable in the sense that programming the registers per its spec makes it operate.
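In code, "programming the registers" of such a slave block is nothing more than memory-mapped loads and stores once the chip vendor has placed it in the address space. A generic illustration (the base address and register layout here are made up, not from any real part):

```c
#include <stdint.h>

/* Hypothetical address where the vendor placed the UART block. */
#define UART_BASE 0x40001000u

typedef struct {
    volatile uint32_t DATA;    /* TX/RX data register        */
    volatile uint32_t STATUS;  /* bit 0: TX ready (assumed)  */
    volatile uint32_t BAUD;    /* baud rate divisor          */
    volatile uint32_t CTRL;    /* bit 0: enable (assumed)    */
} uart_regs_t;

#define UART ((uart_regs_t *)UART_BASE)

/* Transmit one character by polling the (assumed) TX-ready bit. */
void uart_putc(char c)
{
    while ((UART->STATUS & 1u) == 0) { /* wait until TX ready */ }
    UART->DATA = (uint32_t)c;
}
```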
An ADC is a logic blob that is sometimes programmable, sometimes not; the converter itself usually isn't. But when it is used in a chip that does more than just A/D conversions, there will be an additional logic blob wrapped around the ADC to make it programmable, and that logic blob will have some sort of interface bus where it is a slave. It is up to the chip vendor to connect this bus to a bus master that is in some way capable of programming the ADC to do something.
This isn't limited to ARM-based microcontrollers. Look inside an Intel x86 processor and there is third-party IP in there neither invented nor created by Intel; a lot of it may be, but not all. The same goes for pretty much everyone else.
Processor-based chips are just cars, with seats and an engine and wheels that were, for that design, sourced from somewhere and then interfaced to each other using more IP from someone, be it in-house or not.
For any of these chips, each IP blob has documentation: UART documentation, ADC documentation, processor core documentation. Sometimes the license agreement prevents the chip vendor from publishing the documentation and you have to get a driver from them in some sort of board support package or SDK; there are countless examples of this with chip vendors you have heard of, from Atmel to Intel to Zilog. Likewise, there are license agreements or common practices that guide which parts the chip vendor is going to document, and how, and which parts not.
So, generally (but not always) when you have specifically an ARM or MIPS core as part of a design, the processor documentation comes, as it always should, from the processor vendor: ARM, MIPS, etc. The UART, the ADC, and some others, be they in-house or purchased IP, are generally in the chip vendor's documentation. The chip vendor ideally created the address space within the rules of the various IP, so the chip vendor often documents where in the processor's address space each logic blob lives; then you go to the documentation for that logic blob to see what the individual control interfaces do, registers or memory-mapped memory. This is not always true though, especially with UARTs: you sometimes find "this is 16550 compatible" and have to go find a 16550 document from someone else and connect the dots on your own. The Raspberry Pi, for example, includes peripherals where they basically say "this is a purchased ARM blob, go to ARM for this", or "there is a blob here and we won't show you how it works" (but they publish the Linux driver for it, and if eager you can reverse engineer from that).
With an Atmel (now Microchip) ARM-based product, you (generally) go to ARM for info on the processor core, its general-purpose registers, and the very few peripherals internal to the core, like the SysTick timer if present. The UART, the GPIO, the address space, SPI, I2C, etc. are going to be in one or more Atmel documents for that part, which cover the register specs for those peripherals.
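SysTick is a good example of that split: it lives inside the Cortex-M core itself, so its registers sit at ARM-defined fixed addresses and are documented in ARM's manuals, not Atmel's. A minimal setup sketch:

```c
#include <stdint.h>

/* ARM-defined fixed addresses for the Cortex-M SysTick timer;
 * documented in ARM's Technical Reference Manuals, not by the
 * chip vendor. */
#define SYST_CSR (*(volatile uint32_t *)0xE000E010) /* control/status */
#define SYST_RVR (*(volatile uint32_t *)0xE000E014) /* reload value   */
#define SYST_CVR (*(volatile uint32_t *)0xE000E018) /* current value  */

/* Configure a periodic tick_hz interrupt from a core_hz clock. */
void systick_init(uint32_t core_hz, uint32_t tick_hz)
{
    SYST_RVR = core_hz / tick_hz - 1;  /* counts down from reload to 0 */
    SYST_CVR = 0;                      /* clear the current count      */
    SYST_CSR = 0x7;                    /* CLKSOURCE | TICKINT | ENABLE */
}
```

The UART on the same chip, by contrast, sits wherever Atmel put it in the address map, and its register spec is in Atmel's documents.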
As far as how many documents it takes from the chip vendor: that is very much chip vendor (and, over time, family or product line) specific. Some chip vendors, as with some customers, like the board-design-specific stuff in one document usually called a datasheet: pinouts, electrical characteristics, etc. The other documents then cover the UART register specs and such. Some designs reuse the same core components: if you have a UART, then here is the register spec, and all of our UARTs are the same; so there will be one manual for all of the chip's peripherals, and maybe the processor too, or maybe the processor core is in a separate manual. In some cases with that design approach the peripherals, if present, are always at the same address; in others they are not. In the one I am thinking of, you find the board design stuff in the datasheet along with the address map, but the rest of the information for programmers is in the family reference manual, so you need at least those two documents.
And there are of course vendors that either make bad documents with holes as a sad habit, or that intentionally do not provide documentation without an NDA, perhaps for fear that a competitor will make a clone, or just as a company habit that goes way back. Sometimes those closed-book companies do very well; sometimes it causes them pain. Broadcom and Allwinner seem to do okay; Allwinner docs tend to get left about by board vendors and I guess they don't get punished, but other companies will call you out for that, possibly with a financial or other punishment. It's all in the legal agreement.
There are a few peripherals where there are only one or two designs and everyone just buys one, and despite them being undocumented, looking at Linux/Unix drivers you can see that everyone uses the same IP.
This is way more than you asked for, but I could tell from your question that you were off on the wrong path.
Generally the ARM stuff is not in the chip vendor's documentation, so no, you wouldn't find it there. Sometimes (rarely) the chip vendor will re-publish an ARM document in whole or in part; it is better to get it from ARM directly. In the case of a Cortex-M, the ARM peripherals on the core are at fixed/well-known addresses. For the Cortex-A, and the older ARM11, ARM10, ARM9..., the chip vendor straps a peripheral base address in the address space they have designed for that product, and the internal ARM peripherals, if any, are based off of that. So you can find two products, likely from different vendors, with the same core but with the core's memory-mapped peripherals at different addresses for this reason. (See the technical reference manuals for the various ARM cores.)
Can we broadcast music using wifi broadcast and have devices supporting monitor mode listen to it?
I would like to listen in monitor mode because I expect the number of connected devices to be too high for Wi-Fi to work properly using the IP protocol.
I want the Wi-Fi device to act like an FM broadcast, where every device receives every packet and streams the music.
Are you talking about this Wifibroadcast, here?
If so: well yes, monitor mode is the underlying technology, as can be seen here.
Now, if this is about making a commercial product: sadly, you cannot expect any kind of interoperability from this.
Streaming audio/video over Wi-Fi is a business, and the power in charge (the Wi-Fi Alliance, aka WFA) has some views on it, including certification programs. Have a look at Miracast, which uses Wi-Fi Direct.
As for multicast/broadcast, it is even more of a business and, for now, the realm of proprietary technologies (example here - and no, this is not limited to automobiles). It is quite complicated, starting with the synchronization problem across receivers: you don't want two radio receivers in the same room playing with a 1-second delay between them; that would be cacophony.
EDIT:
Meaning: be it with the Wifibroadcast OSS project or with the proprietary industry around it, since there is not yet an open protocol for this (in the sense of a "publicly available standard specification"; I won't even go into implementations, FLOSS or not), you will have to provide a specific application for every receiver to match your broadcaster protocol, and vice versa. That is the state of the industry today. That is what the company I mentioned above, or this other, better-known one, or these, are doing. And so they do not interoperate. This will be your problem: providing a receiver app for Windows, Mac OS, Android and iOS (where you may not even have access to sub-layer-3 APIs) that matches your radio broadcaster protocol. And Linux too, please.
Though, this is the direction of history, because this is what the user wants: stream A/V to/from device/application X from brand A to device/application Y from brand B.
And so people have been working on this at layer 2 (because layer 3 and above have unsolvable challenges with it) at the IEEE since 2004, with Ethernet AVB, which is a set of protocols. You can download some of its standards for free, others for a moderate fee, depending on how old they are. There is a SIG taking care of certification (http://avnu.org/certified-products/) to guarantee interoperability.
It is for 802.3 (aka wired Ethernet), but work is being done to bring this to 802.11 Wi-Fi. Because again, that's what the user wants; the market is there, no question about that. It will take a long time, and even more to get consumer-electronics-grade devices or applications off the shelves. But they will interoperate out of the box; that's the goal.
There has even been work on moving this to layer 3/IP as well, BTW, with some performance sacrifice.
So come back in a few years, and all should be set up. Or, if you have lots of time and money and no urge to deliver, implement a solution based on these standards?
PS:
Link to the AVnu (Ethernet AVB SIG) page about use cases for consumer electronics audio streaming, wired or wireless:
http://avnu.org/consumer/
...and its 10-page white paper at the bottom of the page.