Linux serial ports -- multithreaded program - serial-port

I am working on a smartcard reader project where I have to read/write data from the smartcard reader.
I also have to read/write data from a PC application.
There are two serial ports on my microcontroller: one connected to the smartcard reader, the other to the PC.
Smartcard reader <------> Microcontroller <-----> PC
I have ported Linux and am using the /dev/ttyS0 and /dev/ttyS1 drivers for this.
1> If the application has to find out that some data is available to be read from a port, do I always have to check for it with the read() system call?
2> Does the ttyS0 driver have an internal buffer to store received data, or is data lost if the application does not read it immediately?
3> I am using separate threads for rx/tx on each port. Is this the right approach?
Please guide me, I am new to embedded Linux.
//John

Yes, there's a fair amount of buffering on Linux TTYs.
You have a few choices for how to interact with them:
You can make them non-blocking and poll frequently to see whether data can be read (but this may waste CPU cycles spinning, slowing other tasks).
You can use select() to yield to the scheduler until one of your devices has data for you to act on (a minimal sketch follows below).
You can use blocking I/O; however, since you have multiple ports, that may also require multiple threads.
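For illustration only, a minimal select()-based sketch in C (not part of the original answer). It assumes the two ports are /dev/ttyS0 (smartcard reader) and /dev/ttyS1 (PC) at 9600 baud, and simply forwards bytes between them from a single thread:

    /* Sketch: wait on two serial ports with select(); device names and baud
     * rate are assumptions, adjust for your board. */
    #define _DEFAULT_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>
    #include <sys/select.h>

    static int open_port(const char *dev)
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0) { perror(dev); return -1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);              /* raw mode: 8 bits, no echo, no line buffering */
        cfsetispeed(&tio, B9600);     /* assumed baud rate */
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);
        return fd;
    }

    int main(void)
    {
        int card = open_port("/dev/ttyS0");   /* smartcard reader (assumed) */
        int pc   = open_port("/dev/ttyS1");   /* PC link (assumed) */
        if (card < 0 || pc < 0) return 1;

        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(card, &rfds);
            FD_SET(pc, &rfds);

            /* Block until either port has data; no CPU is wasted polling. */
            if (select((card > pc ? card : pc) + 1, &rfds, NULL, NULL, NULL) < 0)
                break;

            char buf[256];
            if (FD_ISSET(card, &rfds)) {
                ssize_t n = read(card, buf, sizeof buf);
                if (n > 0) write(pc, buf, n);     /* forward reader -> PC */
            }
            if (FD_ISSET(pc, &rfds)) {
                ssize_t n = read(pc, buf, sizeof buf);
                if (n > 0) write(card, buf, n);   /* forward PC -> reader */
            }
        }
        return 0;
    }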

TTY programming is similar to socket programming in Linux, so you can set the file descriptor to be asynchronous and receive a signal once data is available. Regarding buffering: yes, received data is buffered, using two flip buffers. See chapter 18 of Linux Device Drivers, 3rd edition, for the TTY implementation in the kernel.
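A minimal sketch of that signal-driven approach (my addition, not the answerer's code), assuming /dev/ttyS0: the descriptor is put into O_ASYNC mode so the kernel raises SIGIO when data arrives, and the handler just sets a flag that the main loop checks.

    /* Sketch: SIGIO-driven input on a tty; the device name is an assumption. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t data_ready = 0;

    static void on_sigio(int sig)
    {
        (void)sig;
        data_ready = 1;               /* only set a flag; read in the main loop */
    }

    int main(void)
    {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigio;
        sigaction(SIGIO, &sa, NULL);

        fcntl(fd, F_SETOWN, getpid());                      /* deliver SIGIO to us */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC);   /* enable async notification */

        for (;;) {
            pause();                                        /* sleep until a signal */
            if (data_ready) {
                char buf[256];
                ssize_t n;
                while ((n = read(fd, buf, sizeof buf)) > 0) /* drain what arrived */
                    write(STDOUT_FILENO, buf, n);
                data_ready = 0;
            }
        }
    }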

Related

How does a kernel driver talk with another device?

I have an FPGA board with Unix-based firmware. I need to write a program to run on this firmware that will send commands to some devices via the I2C bus and receive responses. For this I use a special character file in Unix that I map into my program; I write special commands to it and read responses from it. Each area in this mapped memory corresponds to a specific register of the FPGA, as specified in the Unix-based firmware (as I understand it).
So the question is this. As I understand it, when I write some command to that mapped memory region of the special character file, the kernel calls a certain driver to handle the bytes I've written and sends them through the I2C bus (for example). Am I right? If so, is there some guarantee that the response from that device will be buffered and that I will be able to read it from the mapped region at any time? Or does that depend on the specific driver implementation?
I'm sorry if the question is unclear in some way; I am a newbie in this stuff.
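(A rough sketch of the pattern the question describes, purely for illustration: the device node /dev/fpga_regs and the register offsets are hypothetical, and whether a response remains readable later depends entirely on the specific driver.)

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define MAP_SIZE  0x1000u   /* hypothetical size of the register window */
    #define CMD_REG   0x00u     /* hypothetical command register offset */
    #define RESP_REG  0x04u     /* hypothetical response register offset */

    int main(void)
    {
        int fd = open("/dev/fpga_regs", O_RDWR | O_SYNC);   /* hypothetical node */
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *regs = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        regs[CMD_REG / 4] = 0x1;             /* write a command; the driver/FPGA
                                                forwards it, e.g. over I2C */
        uint32_t resp = regs[RESP_REG / 4];  /* read back whatever the firmware
                                                currently exposes here */
        printf("response register: 0x%08x\n", resp);

        munmap((void *)regs, MAP_SIZE);
        close(fd);
        return 0;
    }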

How does the CPU distribute data from the network?

I'm learning network communications and am already familiar with the TCP/IP networking layers (physical, data link, ... and application) and how data moves through them. But I have some questions about what happens inside a machine when data is received by a Network Interface Card (NIC).
Questions:
How does the CPU know that data from another machine has arrived?
How does the CPU inform the OS that data from another machine has arrived?
How does the OS know which application the data is for?
Please give me a deep explanation of this topic, or recommend some useful materials to make it clear.
To give you a general view from a Linux standpoint (it should be similar for other OSes):
The packets arrive at the NIC. These packets are copied into circular queues in RAM via DMA. The arrival of packets generates an interrupt to let the system know that there are packets in RAM. Corresponding to that interrupt there is an interrupt handler routine registered with the operating system via the network driver. (To keep things simple, I won't go into softirqs.) Each CPU has a poll function whose job is to harvest packets from these queues and pass them on to the upper network layers. So, answering your queries:
How does the CPU know that data from another machine has arrived?
When the interrupt occurs and the poll loop is not already running on that CPU, the OS (via the network driver) asks the CPU to start the poll loop to harvest the packets.
How does the CPU inform the OS that data from another machine has arrived?
The CPU doesn't need to inform the OS. The OS knows when the interrupt occurs, because the interrupt handler is part of the network driver, which is part of the OS. In fact, in a way the OS tells the CPU to start harvesting packets.
How does the OS know which application the data is for?
The communication is done via sockets, which have port numbers. The arriving packets carry a port number, which tells the OS which application to deliver the packet to.
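As a small illustration of that last point (my addition): binding a socket to a port is what registers the application as the destination for packets carrying that port number. The port 5000 used here is arbitrary.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);      /* "this process owns UDP port 5000" */

        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }

        /* Any UDP datagram arriving with destination port 5000 is queued on
         * this socket by the kernel and handed to us by recvfrom(). */
        char buf[1500];
        ssize_t n = recvfrom(s, buf, sizeof buf, 0, NULL, NULL);
        if (n > 0)
            printf("received %zd bytes\n", n);

        close(s);
        return 0;
    }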

Serial communication crashes in LabVIEW

I am controlling a device over serial connection using LabVIEW (version 7.0). It is connected using USB, and is installed as a virtual serial port on the computer (running Windows XP). Every now and then my device crashes when my program sends a command, and it is unable to accept any more input (the device itself also stops working) until it has timed out.
I've looked at the serial port traffic using Portmon. Whenever the device crashes the serial driver sends the command I send using my program four times instead of just once, with an IOCTL_SERIAL_GET_COMMSTATUS command in between. I cannot see what this last command returns, but I assume something happens in the communication earlier on. I'm thinking my configuration of the port is not entirely right, but I have no idea how or why. I open and close the connection to my device every time I want to write something to it.
For completeness' sake: it has a baud rate of 9600, 8 bits, no parity, 1 stop bit, and no flow control. I'm aware that the correct settings of these parameters depend on the device, but the manufacturer has not supplied any recommended settings.
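(For reference only, a sketch of those same settings expressed with the raw Win32 serial API, outside LabVIEW; "COM3" is a placeholder for whatever virtual COM port the device installs as.)

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        HANDLE h = CreateFileA("COM3", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) { printf("open failed\n"); return 1; }

        DCB dcb;
        memset(&dcb, 0, sizeof dcb);
        dcb.DCBlength = sizeof dcb;
        GetCommState(h, &dcb);

        dcb.BaudRate = CBR_9600;          /* 9600 baud                */
        dcb.ByteSize = 8;                 /* 8 data bits              */
        dcb.Parity   = NOPARITY;          /* no parity                */
        dcb.StopBits = ONESTOPBIT;        /* 1 stop bit               */
        dcb.fOutxCtsFlow = FALSE;         /* no hardware flow control */
        dcb.fRtsControl  = RTS_CONTROL_DISABLE;
        dcb.fOutX = dcb.fInX = FALSE;     /* no XON/XOFF flow control */

        if (!SetCommState(h, &dcb)) printf("SetCommState failed\n");

        CloseHandle(h);
        return 0;
    }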
Is the driver a DLL of some sort? If so, this is the most likely source of your problem, and you will likely need to contact the author of the driver. LabVIEW does have crashing bugs, but by far the most common source of crashes in simple communications apps is a buggy third-party DLL.
In other words, I doubt this is a LabVIEW problem at all and that you would have the same difficulty if you wrote a C program to talk to this driver. I only know what you've posted here about your system, but after many years of chasing down such issues, I would start with the device manufacturer/driver author.
If you have evidence to the contrary, please share.

MSDOS API serial port only reading last sent character

I've been playing with the MSDOS API in assembler for some time, and I'm trying to build an application to read/write from the serial port. I'm currently using VMware Workstation 11 + VSPE (http://www.eterlogic.com/Products.VSPE.html) to emulate the serial port communication.
One thing I noticed is that if I send, say, "asdfgh" to the serial port and then read it in MSDOS (using interrupt 21h function 03h, but I also tried interrupt 14h function 02h), it only returns the last character: "h".
According to some documentation I read, if an application sends data faster than I can process it, characters will be lost. That means either there is another way to make MSDOS save bytes to a buffer (flow control), or I have to write a driver that does this (or maybe a TSR program that manages it, I don't know).
So the question is: do I have to write a driver, or is there another way to do this?
I managed to write a program that turns on the 16550 FIFO buffer, and now it works using the DOS API.
So it was just the FIFO buffer.
There is no need to slow down the baud rate or anything. Nonetheless, if you happen to have an 8250, it's probably either that or implementing a communications driver on top of it.
Also, this helped a lot:
https://en.wikibooks.org/wiki/Serial_Programming/8250_UART_Programming#UART_Registers
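For illustration, a rough C equivalent of that fix (my sketch, not the poster's code), assuming a DOS compiler with outportb() in <dos.h> and COM1 at the usual 0x3F8 base:

    #include <dos.h>

    #define COM1_BASE 0x3F8
    #define UART_FCR  (COM1_BASE + 2)   /* FIFO Control Register */

    void enable_fifo(void)
    {
        /* 0xC7 = enable FIFOs, clear RX and TX FIFOs, 14-byte RX trigger level.
         * On an 8250 (which has no FIFO) this write is simply ignored. */
        outportb(UART_FCR, 0xC7);
    }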

Hyper-V: Connecting VMs through named pipe loses data

We are trying to connect two Hyper-V VMs through a serial port. Hyper-V exposes the serial port as a named pipe to the host system, and implements the server end of the named pipe. Consequently, to connect them, we need to write a named-pipe client which connects to both VMs and copies the data back and forth.
We have written such an application. Unfortunately, this application loses data.
If we connect two hyperterms, and have them exchange data, the transmission sometimes succeeds, but in many cases, the receiving end reports errors, or the transmission just deadlocks. Likewise, if we use the link to run a kernel debugger, it also seems to hang often.
What could be the cause of the data loss? What precautions must be taken when connecting named pipes in such a manner?
Edit: We have worked around the problem using kdsrv.exe. The COM port of the debuggee continues to be exposed through a named pipe; however, the debugger end talks to kdsrv.exe via TCP.
The data loss is not due to the named pipes. It is in fact the COM ports (emulated and physical) that may lose data, since they operate with only a small buffer in the UART.
The named pipe receives all the data that is written to the COM port. Your program reads data from the named pipe and writes it to another named pipe. This is where data loss can originate: if you write too fast, the receiving COM port's UART can overflow, leading to data loss.
You may need to add some delay to avoid exceeding the baud rate expected by the receiving side, as sketched below.
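A rough sketch of that pacing idea (my addition, not the answerer's code): copy in small chunks and sleep long enough that the effective rate stays at or below the emulated baud rate. The pipe names and the 9600-baud figure are assumptions.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical pipe names; Hyper-V is the server end, we connect as client. */
        HANDLE src = CreateFileA("\\\\.\\pipe\\vm1-com1", GENERIC_READ | GENERIC_WRITE,
                                 0, NULL, OPEN_EXISTING, 0, NULL);
        HANDLE dst = CreateFileA("\\\\.\\pipe\\vm2-com1", GENERIC_READ | GENERIC_WRITE,
                                 0, NULL, OPEN_EXISTING, 0, NULL);
        if (src == INVALID_HANDLE_VALUE || dst == INVALID_HANDLE_VALUE) {
            printf("could not connect to both pipes\n");
            return 1;
        }

        /* At 9600 baud, 8N1 is roughly 960 bytes/s, so 16 bytes take about 17 ms. */
        const DWORD CHUNK = 16;
        const DWORD DELAY_MS = 17;

        char buf[16];
        DWORD n, written;
        /* One direction only; a real relay needs a second thread for dst -> src. */
        while (ReadFile(src, buf, CHUNK, &n, NULL) && n > 0) {
            WriteFile(dst, buf, n, &written, NULL);
            Sleep(DELAY_MS);                      /* throttle to the baud rate */
        }

        CloseHandle(src);
        CloseHandle(dst);
        return 0;
    }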
In addition, you are missing ResetEvent() calls in your program.
For your KD issues, you may need to add resets=0 to the connection string.
I did not try to connect VMs via serial, but I connected a VM and the host via USB (through the network), and it works.
If your software requires a serial connection, try testing with serial port emulators that work over TCP/IP.
I think John's suggestion is correct: if you are using a slow CPU to emulate TWO VMs, then the guest OS's serial port driver drifts far away from its high-speed behaviour. So John's suggestion is to set the input/output side of the serial link to the slowest possible speed, i.e. you cannot use a high baud rate for the inter-VM serial communication. Instead you have to use the slowest possible speed, so that the VM guest driver takes that cue and uses the slower version of the driver. But your physical machine must have sufficient CPU speed to run two VMs concurrently, to avoid this "emulation drift" of the serial driver.
Well, this is just my guess, but there is a VirtualBox version of your setup that seemingly runs without issues:
http://bodocsi.net/2011/02/how-setup-serial-port-link-in-virtualbox-between-two-guest-virtual-machine-in-linux/
But the following bug ticket for VirtualBox does describe many similarities to your problem:
https://www.virtualbox.org/ticket/1548
Reading to the end seemingly indicates that the solution has to do with VirtualBox's internal source code. Perhaps it is a Hyper-V problem?