As we all know, the hex file contains our application code, which is programmed into the microcontroller's flash memory for execution. My question is: before this code is executed, will it be verified by the microcontroller in some way, or will it simply run once all start-up processes have finished?
Disclaimer: Because I don't know all microcontrollers, this is not a complete answer.
The flashed binary executable will just be executed.
Some microcontrollers check for a certain value at a fixed address to decide whether to start the built-in bootloader or a flashed user program.
If you need the user program to be checked, you will need to implement this yourself. I have worked with such systems; it is quite common, especially in safety-related environments.
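For illustration, here is a minimal sketch of such a self-check, assuming a hypothetical layout in which the build system appends a CRC-32 right after the application image; the addresses, names, and choice of CRC-32 are my assumptions, not any vendor's scheme:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical layout: the linker places the application image at
     * APP_START with length APP_LENGTH, and the build system appends a
     * CRC-32 of that range immediately after it. All names and
     * addresses are illustrative assumptions. */
    #define APP_START   ((const uint8_t *)0x08004000u)
    #define APP_LENGTH  0x0001C000u
    #define APP_CRC     (*(const uint32_t *)(APP_START + APP_LENGTH))

    /* Bitwise CRC-32 (reflected, polynomial 0xEDB88320); no lookup
     * table, to keep flash usage small on a constrained part. */
    static uint32_t crc32(const uint8_t *data, uint32_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        while (len--) {
            crc ^= *data++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    /* Called from the startup code / bootloader before jumping to the
     * application; refuse to start if the image is corrupt. */
    bool application_image_is_valid(void)
    {
        return crc32(APP_START, APP_LENGTH) == APP_CRC;
    }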
Concerning the format of hex files:
Intel HEX, as well as other formats like SREC, is a human-readable text representation of binary data. The common reason for the checksums in these formats is to ensure data consistency during transmission, which was done via unreliable channels back when these formats were invented.
Another advantage is that they are limited to 7-bit ASCII characters, which could be transferred losslessly via old internet protocols.
However, the "real" contents, the binary data, are stored directly in the flash memory of the microcontroller. Checksums might be used by the receiving software (for example, the bootloader) in the microcontroller when the user program is flashed. But after flashing, they are gone.
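To make the checksum idea concrete, here is a small hedged example in C that validates a single Intel HEX record; the record shown is a well-known textbook example, and the helper names are mine:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    /* Convert two hex characters to a byte; returns false on bad input. */
    static bool hex_byte(const char *s, uint8_t *out)
    {
        unsigned v;
        if (sscanf(s, "%2x", &v) != 1) return false;
        *out = (uint8_t)v;
        return true;
    }

    /* An Intel HEX record is valid when the sum of all its bytes
     * (length, address, type, data, checksum) is 0 modulo 256. */
    static bool record_is_valid(const char *rec)
    {
        if (rec[0] != ':') return false;
        size_t n = strlen(rec + 1) / 2;    /* number of byte pairs */
        uint8_t sum = 0, b;
        for (size_t i = 0; i < n; i++) {
            if (!hex_byte(rec + 1 + 2 * i, &b)) return false;
            sum += b;                      /* uint8_t wraps mod 256 */
        }
        return sum == 0;
    }

    int main(void)
    {
        /* Example record: 16 data bytes destined for address 0x0100. */
        const char *rec = ":10010000214601360121470136007EFE09D2190140";
        printf("record %s\n", record_is_valid(rec) ? "OK" : "corrupt");
        return 0;
    }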
Related
I have studied many subjects, from physics to logic gates to processors.
I also studied computer architecture, compilers, x86 assembly, operating systems, GPUs, and so on.
For some reason, none of the subjects mentioned above covered what happens after an executable file is produced by the compiler, on the way down to the processor.
Could you please point me to resources that explain these things? It drives me crazy when I don't understand why things work the way they do.
For example, I want to understand why UNIX executables start with 'ELF'. If you tell me it is a convention, then how does the computer, as a machine, understand that a file passed to it starts with 'ELF'?
You may say that is the job of the operating system. Then how does the computer understand the code of the operating system? I know the processor reads it represented in binary.
But how does the computer really understand binary? Don't just tell me 'via transistors and logic gates'. What I need to understand is how the binary is signaled to the computer hardware, and how the hardware is actually implemented to understand it.
Please share any resources about this stage with me.
Binary is really just electrical signals and small charges stored in memory circuits. The binary data (e.g. 0101) is just a number represented with 2 symbols instead of 10 (as in base 10, or decimal). What matters is the way you interpret those numbers to give them meaning.
For example, in processor architectures like x64, the developers of the architecture create an instruction set that has a certain number of instructions with a very specific format. The binary data is then interpreted as those instructions. Since the CPU's logic circuits are manufactured to understand those instructions, it can execute them by doing what the architecture says they should do.
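Here is a tiny C illustration of this "interpretation gives meaning" idea, tying back to the ELF question above: the same four bytes can be read as a plain integer, or recognized as the magic number a loader checks for. This is only a sketch of the concept, not how any particular loader is written:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        /* Four bytes with no inherent meaning; the interpretation gives
         * them meaning. 0x7F 'E' 'L' 'F' happens to be the magic number
         * at the start of every ELF executable. */
        const uint8_t bytes[4] = { 0x7F, 'E', 'L', 'F' };

        /* Interpretation 1: a 32-bit little-endian integer. */
        uint32_t as_int;
        memcpy(&as_int, bytes, sizeof as_int);
        printf("as integer: 0x%08lX\n", (unsigned long)as_int);

        /* Interpretation 2: the loader's view - compare against the
         * conventional ELF magic to decide how to treat the file. */
        const uint8_t elf_magic[4] = { 0x7F, 'E', 'L', 'F' };
        printf("looks like ELF: %s\n",
               memcmp(bytes, elf_magic, 4) == 0 ? "yes" : "no");
        return 0;
    }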
The CPU is also configured to start executing at a specific address. The motherboard manufacturer puts machine code at that address to kickstart the CPU; this is called the firmware. The OS developers put their machine code on the hard drive at a conventional position and in a conventional format so that the firmware can find and execute it.
Even in electrical terms, there needs to be a conventional interpretation of signals. A certain chip on your motherboard might interpret 3V as a high signal, while another might use 5V. What matters is that interconnected chips understand each other's interfaces. For example, an x64 processor might be connected to modern DDR4 RAM. The CPU knows that if it applies the right signals (say, a clock waveform at the module's rated frequency) to certain pins of that module, then each oscillation will write or read data in the memory cells that are selected. It knows that because it expects the RAM module to follow the DDR4 standard. If the RAM module doesn't follow the standard, the CPU will not be able to interact with it.
The MPI standard states that when parallel programs run in a heterogeneous environment, they may have different representations of the same datatype (like big-endian and little-endian machines for integers), so datatype representation conversion might be needed when doing point-to-point communication. I don't know how Open MPI implements this.
For instance, current Open MPI uses the UCX library by default. I have studied some code of the UCX library and of Open MPI's ucx module. However, for a contiguous datatype like MPI_INT, I did not find any representation conversion happening. I wonder: did I miss that part, or does the implementation not satisfy the standard?
If you want to run an Open MPI app on a heterogeneous cluster, you have to configure with --enable-heterogeneous (this is disabled by default). Keep in mind this is supposed to work, but it is lightly tested, mainly because of a lack of interest/real use cases. FWIW, IBM Power is now little-endian, and Fujitsu is moving from SPARC to ARM for HPC, so virtually all HPC processors are (or will soon be) little-endian.
Open MPI uses convertors (see opal/datatype/opal_convertor.h) to pack the data before sending it, and unpack it once received.
The data is packed in its current endianness. Data conversion (e.g. swap bytes) is performed by the receiver if the sender has a different endianness.
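As a hedged illustration of that receiver-side conversion (this is not Open MPI's actual code; the real machinery lives in the convertors in opal/datatype/opal_convertor.h), a byte swap over a contiguous buffer of 32-bit integers might look like:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative receiver-side fix-up for an array of 32-bit integers
     * received from a peer with the opposite endianness. Open MPI's real
     * convertors are far more general (derived datatypes, partial packs). */
    static void swap_int32_array(uint32_t *buf, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            uint32_t v = buf[i];
            buf[i] = (v >> 24) | ((v >> 8) & 0x0000FF00u)
                   | ((v << 8) & 0x00FF0000u) | (v << 24);
        }
    }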
There are two ways of using UCX: pml/ucx and pml/ob1 + btl/ucx; I have tested neither of them in a heterogeneous environment. If you are facing issues with pml/ucx, try mpirun --mca pml ob1 ....
I need a protocol (possibly one I design myself) for communication between a microprocessor-driven data logger and a PC (or similar) via a serial connection. There will be no control lines; the only way the device/PC can know whether they are connected is by the data they are receiving. The connection might be broken and re-established at any time. The serial connection is full-duplex (8N1).
The problem is what sort of packets to use, handshaking codes, or similar. The microprocessor is extremely limited in capability, so the protocol needs to be as simple as possible. But the data logger will have a number of features such as scheduling logging, downloading logs, setting sample rates, and so on, which may be active simultaneously.
My bloated version would go like this: for both the data logger and the PC, a fixed packet size of 16 bytes with a simple 1-byte checksum, perhaps a 0x00 byte at the beginning/end to simplify recognition of packets, and one byte denoting the kind of data in the packet (command / settings / log data / live feed values, etc.). To synchronize, a unique "hello/reset" packet (of all zeros, for example) could be sent by the PC, which, when detected by the device, is then returned to confirm synchronization.
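As a concrete sketch of the layout just described (the field names, the placement of the framing byte, and the checksum rule are illustrative assumptions, not a finished spec):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative 16-byte frame for the proposal above. */
    #define FRAME_SIZE    16u
    #define FRAME_START   0x00u   /* framing byte at the beginning */
    #define PAYLOAD_SIZE  (FRAME_SIZE - 3u)

    typedef struct {
        uint8_t start;                  /* always FRAME_START            */
        uint8_t type;                   /* command/settings/log/live ... */
        uint8_t payload[PAYLOAD_SIZE];  /* 13 data bytes                 */
        uint8_t checksum;               /* two's complement of the sum   */
    } frame_t;

    /* Fill in the checksum so that all 16 bytes sum to 0 mod 256. */
    static uint8_t frame_checksum(const frame_t *f)
    {
        const uint8_t *p = (const uint8_t *)f;
        uint8_t sum = 0;
        for (size_t i = 0; i < FRAME_SIZE - 1; i++)
            sum += p[i];
        return (uint8_t)(0u - sum);
    }

    static bool frame_is_valid(const frame_t *f)
    {
        const uint8_t *p = (const uint8_t *)f;
        uint8_t sum = 0;
        for (size_t i = 0; i < FRAME_SIZE; i++)
            sum += p[i];
        return f->start == FRAME_START && sum == 0;
    }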
I'd appreciate any comments on this approach, and welcome any other suggestions as well as general observations.
Observations: I think I will have to roll my own, since I need it to be as lightweight as possible. I'll be taking bits and pieces from protocols suggested in the answers, as well as some others I've found: SLIP, PPP, and HDLC.
You can use Google's Protocol Buffers as a data exchange format (also check out the C bindings project if you're using C). It's a very efficient format, well suited to such tasks.
Microcontroller Interconnect Network (MIN) is designed for just this purpose: tiny 8-bit microcontrollers talking to something else.
The code is MIT-licensed, and there are embedded C and Python implementations:
https://github.com/min-protocol/min
I wouldn't try to invent something from scratch; perhaps you could reuse something from the past, like ZMODEM or one of its cousins? Most of the problems you mention have been solved, and there are probably a number of other cases you haven't even thought of yet.
Details on ZMODEM:
http://www.techfest.com/hardware/modem/zmodem.htm
And the C source code is in the public domain.
I have an executable program which writes data to the hard disk, e.g. to C:\documents.
I need some means of intercepting the data in Windows 7 before it gets to the hard drive. Then I will encrypt the data and send it on to the hard disk. Unfortunately, the .exe file does not support the redirection operator (>) in the command prompt. Do you know how I can achieve such a thing in any programming language (C, C++, Java, PHP)?
The encryption can only be done before the plain data is sent to the disk, not after.
Any ideas most welcome. Thanks
This is virtually impossible in general. Many programs write to disk using memory-mapped files. In such a scheme, a memory range is mapped to (part of) a file, and writes to the file can't be distinguished from writes to memory. A statement like p[OFFSET_OF_FIELD_X] = 17; is logically a write to the file. Furthermore, the OS keeps track of the synchronization of memory and disk. Not all logical writes to memory are directly translated into physical writes to disk. From time to time, at the whim of the OS, dirty memory pages are copied back to disk.
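To see why this defeats interception, here is a hedged sketch of the memory-mapped pattern described above (the file name and size are illustrative, and error handling is trimmed for brevity); once the view is mapped, the "write to file" is an ordinary store instruction with no API call left to hook:

    #include <windows.h>

    int main(void)
    {
        HANDLE file = CreateFileA("example.dat",
                                  GENERIC_READ | GENERIC_WRITE,
                                  0, NULL, CREATE_ALWAYS,
                                  FILE_ATTRIBUTE_NORMAL, NULL);
        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE,
                                            0, 4096, NULL);
        unsigned char *p = MapViewOfFile(mapping, FILE_MAP_WRITE,
                                         0, 0, 4096);

        p[0] = 17;   /* logically a write to example.dat; the OS flushes
                        the dirty page to disk whenever it sees fit */

        UnmapViewOfFile(p);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }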
Even in the simpler case of CreateFile/WriteFile, there's little room to intercept the data on the fly. The closest you could get is to use Microsoft Detours. I know of at least one snake-oil encryption program (WxVault, crapware shipped on Dells) that does that. It repeatedly crashed my application in the field, which is why my program unpatches any attempt to intercept data on the fly. So not even such hacks are robust against programs that dislike interference.
I'm asked to read from and write to a half-duplex serial connection using POSIX calls (more specifically, writing in C on Linux 2.6.x). I'm having slight trouble finding detailed information on that particular model (most pages concentrate on full-duplex), and as I am getting slight anomalies when reading, I wanted to check whether I might be doing something wrong here.
With a half-duplex serial connection, I can only read or write at any one time. This is not a problem, as there is no unsolicited incoming data on the line; the only time any packets are sent to me (for reading) is when I have asked for them beforehand.
So what my code does is write() to the port whenever something needs to be sent. Should this data result in a response (something I know beforehand), I simply read(). I am not calling any special functions, but maybe I should be? And is this approach correct, i.e. write when the line is free?
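For reference, a minimal sketch of that write-then-read pattern, assuming the port has already been opened and configured for raw 8N1 operation (the function name and error handling are mine):

    #include <unistd.h>
    #include <termios.h>

    /* Minimal request/response exchange on an already-opened, raw-mode
     * 8N1 port (fd). tcdrain() blocks until the request has actually
     * left the UART, which matters on a half-duplex line where we must
     * not listen (or let the peer talk) before our transmission ends. */
    static ssize_t request_response(int fd,
                                    const void *req, size_t req_len,
                                    void *resp, size_t resp_len)
    {
        if (write(fd, req, req_len) != (ssize_t)req_len)
            return -1;
        if (tcdrain(fd) != 0)         /* wait for output to be sent */
            return -1;
        return read(fd, resp, resp_len);  /* blocks per VMIN/VTIME */
    }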
I would read the Linux kernel source documentation; there may be a text file about the serial driver. If not, you could read through the actual driver code to see what it does (it's not as scary as it sounds, I promise!)