I need to implement asynchronous read/write support in my Linux device driver.
The user space program should get an asynchronous signal from the device driver, indicating that the driver has data and the user space program can read it.
Below are the options I found by searching and in the LDD book.
[1] Implement poll-read. The driver returns the status of the read/write queues. The user space program can then decide whether to perform a read/write on the device.
[2] Implement async notifications. The device driver is able to send a signal to user space when data is ready on the driver side. The user space program can then read the data.
However, I have seen developers using a select_read call with a tty driver. I am not sure what support should be added to my existing device driver so that select_read can be used from user space.
I need your advice on the most efficient of the above methods.
Asynchronous notifications (signals) are harder to use, so it is usually recommended to use poll() instead.
You do not need to implement select() separately: both poll() and select() are user-space interfaces that map to your driver's .poll callback in the kernel.
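For reference, here is a minimal sketch of the driver side, along the lines of LDD3; all names (my_dev, my_poll, readq, data_ready) are illustrative, not from any real driver:

```c
#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>

struct my_dev {
    wait_queue_head_t readq;    /* readers wait here for data */
    bool data_ready;            /* set when the device has produced data */
};

static unsigned int my_poll(struct file *filp, poll_table *wait)
{
    struct my_dev *dev = filp->private_data;
    unsigned int mask = 0;

    /* Hook our wait queue into the poll table; poll(), select() and
     * epoll from user space all end up here. This does not sleep. */
    poll_wait(filp, &dev->readq, wait);

    if (dev->data_ready)
        mask |= POLLIN | POLLRDNORM;    /* device is readable */

    return mask;
}

/* Whoever produces data (IRQ handler, timer, ...) then does:
 *     dev->data_ready = true;
 *     wake_up_interruptible(&dev->readq);
 * which wakes any sleeping poll()/select() callers. */
```

With this in place, user space can use poll(), select() or epoll on the device's file descriptor interchangeably; they all reach this same .poll callback. If you also want signals (SIGIO), that is the separate .fasync path from option [2].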
The state of affairs: a GUI program, implemented in Java or C#, communicates with a small (low single-digit) number of devices via TCP/IP. Connections are GUI<-->Device only, never Device<-->Device.
Network traffic tends to be low, but the devices tend to send their messages in a somewhat synchronized manner.
The GUI takes a message, reads it and updates a simple graphic representation of the data. Its CPU usage hovers around 5%.
Although we've yet to see any networking problems, we're thinking of adding send and receive queues to the GUI program.
Is this ...
entirely nonsensical?
well-advised?
urgently needed?
NET experts,
I have a scenario where a 4-port PSTN card is installed in a server, on which I have installed FreePBX as suggested by someone. When a call comes in on any of the PSTN lines, it is forwarded to one of the operators on his hard phone.
Each operator also has a computer at his desk, powered by an individual CPU, which runs our CRM software. When a call arrives at an operator's hard phone, say operator 2, we want the caller number to also be displayed in the CRM software. Based on this caller number, the operator can enter some information related to the caller and save it in the database via our CRM software. Also, when the operator disconnects the call, we should receive the call stop time for statistics later on.
Thus, we need the caller number and call start time when a call is picked up by an operator on his hard phone, and then we need the call end time when the call is finished.
Can someone help us with how we can achieve this? Do we have to capture the SIP packets and parse them, or is there some other way to do so? Our CRM database is totally separate from FreePBX and resides on another server.
If you want to get these events in real time, you should look at AMI (Asterisk Manager Interface, port 5038 TCP by default) and its configuration file, manager.conf (note: with FreePBX, put custom entries in manager_custom.conf).
If you want the archived version, you should set up a database server and point the CDR (Call Detail Records) module to it. PostgreSQL or MySQL/MariaDB works just fine. Asterisk will simply ignore additional fields in the CDR table as long as they can be NULL or have a DEFAULT value; this can be used to store custom data.
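For the real-time route, AMI is a plain line-oriented key/value protocol over TCP, so a minimal client is small. Below is a hedged C sketch that logs in and dumps everything the server sends; the IP address, username and secret are placeholders for whatever you configure in manager_custom.conf:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    char buf[4096];
    ssize_t n;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(5038);                      /* default AMI port */
    inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr); /* placeholder host */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    /* AMI messages are blocks of "Key: Value" lines ended by a blank line. */
    static const char login[] =
        "Action: Login\r\n"
        "Username: crm\r\n"     /* placeholder, from manager_custom.conf */
        "Secret: secret\r\n"    /* placeholder */
        "Events: on\r\n"
        "\r\n";
    write(fd, login, sizeof(login) - 1);

    /* Dump the login response plus the event stream, which carries the
     * caller ID, channel state changes and hangup timestamps. */
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    close(fd);
    return 0;
}
```

From that event stream you would match on the caller ID and channel fields and push the relevant values to the CRM on the other server.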
I would like to copy data from host to device and run some kernels in parallel. There seems to be conflicting information on whether a cublasSetMatrixAsync call issued to the default stream is blocking.
I am seeing it block execution and am wondering what the correct way to use it is. Should cublasSetMatrixAsync be issued on a non-default stream? If so, is there an easy way for the default stream to block if it needs the matrix on the device for some kernel in the future?
Yes, it has blocking behavior.
From the programming guide:
Two commands from different streams cannot run concurrently if any one of the following operations is issued in-between them by the host thread:
...
• any CUDA command to the default stream,
cublasSetMatrixAsync is not exempt from this.
A general rule for CUDA concurrency is, if you want it, don't use the default stream.
is there an easy way for default stream to block if it needs the matrix on device for some kernel in the future?
issue a cudaDeviceSynchronize()
That will force all CUDA device activity, in any stream associated with that device, to finish before any subsequent commands, issued to any stream associated with that device, can begin.
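Putting both points together, a sketch of the pattern might look like the following; assume h_A is pinned host memory and d_A a cudaMalloc'd buffer, each holding an n x n float matrix in column-major order:

```c
#include <cublas_v2.h>
#include <cuda_runtime.h>

void upload_and_sync(const float *h_A, float *d_A, int n)
{
    cudaStream_t copy_stream;
    cudaStreamCreate(&copy_stream);

    /* Issue the copy on a non-default stream so it does not serialize
     * against kernels running in other non-default streams. */
    cublasSetMatrixAsync(n, n, sizeof(float), h_A, n, d_A, n, copy_stream);

    /* ... launch independent kernels on other non-default streams ... */

    /* The "easy way": before default-stream work that consumes d_A,
     * force all device activity to finish. */
    cudaDeviceSynchronize();

    /* Finer-grained alternatives: cudaStreamSynchronize(copy_stream),
     * or cudaEventRecord() on copy_stream followed by
     * cudaStreamWaitEvent() on the consuming stream. */
    cudaStreamDestroy(copy_stream);
}
```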
Could anyone explain whether additional synchronization, e.g. locking, is needed in the following two situations in a Linux network driver? I am interested in kernels 2.6.32 and newer.
1. .probe VS .ndo_open
In a driver for a PCI network card, the net_device instance is usually registered in the .probe() callback. Suppose a driver specifies a .ndo_open callback in its net_device_ops, performs the other necessary operations and then calls register_netdev().
Is it possible for that .ndo_open callback to be called by the kernel after register_netdev() but before the end of the .probe callback? I suppose it is, but maybe there is a stronger guarantee, something that ensures the device can be opened no earlier than when .probe ends?
In other words, if the .probe callback accesses, say, the private part of the net_device struct after register_netdev(), and the .ndo_open callback accesses that part too, do I need to use locks or other means to synchronize these accesses?
2. .ndo_start_xmit VS NAPI poll
Is there any guarantee that, for a given network device, .ndo_start_xmit callback and NAPI poll callback provided by a driver never execute concurrently?
I know that .ndo_start_xmit is executed with BH disabled at least, and poll runs in a softirq and hence in BH context. But this serializes execution of these callbacks on the local CPU only. Is it possible for .ndo_start_xmit and poll for the same network device to execute simultaneously on different CPUs?
As above, if these callbacks access the same data, does that data need to be protected with a lock or something?
References to the kernel code and/or the docs are appreciated.
EDIT:
To check the first situation, I conducted an experiment: in the e1000 driver (kernel: 3.11-rc1), I added a 1-minute delay between the call to register_netdev() and the end of .probe, along with debug prints in the .probe and .ndo_open callbacks. Then I loaded e1000.ko and tried to access the network device it services before the delay ended (in fact, NetworkManager did that before me), then checked the system log.
Result: yes, it is possible for .ndo_open to be called even before the end of .probe although the "race window" is usually rather small.
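For reference, the experiment looked roughly like this (an illustrative reconstruction, not the actual e1000 diff; my_probe and my_open are placeholder names):

```c
#include <linux/delay.h>
#include <linux/netdevice.h>
#include <linux/pci.h>

static int my_open(struct net_device *netdev)
{
    pr_info("my_open: called (possibly before .probe has returned!)\n");
    /* ... bring the interface up ... */
    return 0;
}

static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    struct net_device *netdev;
    int err;

    /* ... allocate netdev, set netdev->netdev_ops (including .ndo_open),
     * map BARs, and so on ... */

    err = register_netdev(netdev);
    if (err)
        return err;

    pr_info("my_probe: registered, sleeping 60 s before .probe returns\n");
    ssleep(60);     /* widen the "race window" to make it observable */
    pr_info("my_probe: returning\n");
    return 0;
}
```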
The second situation (.ndo_start_xmit VS NAPI poll) is still unclear to me and any help is appreciated.
Regarding the ".ndo_start_xmit VS NAPI poll" question, here is how I am thinking about it:
The start-xmit method of a network driver is invoked in NET_TX_SOFTIRQ context - it is in a softirq context itself. So is the NAPI receive poll method, but of course in the NET_RX_SOFTIRQ context.
Now, the two softirqs will lock each other out - not race - on any local core. But by design intent, softirqs can certainly run in parallel on SMP; thus, who is to say that these two methods - .ndo_start_xmit and NAPI poll - running in two separate softirq contexts, will never race?
In other words, I guess it could happen. To be safe, use spinlocks to protect global data.
Also, with modern TCP offload techniques becoming more prevalent, GSO is, or could be, invoked at any point.
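To make the spinlock advice concrete, here is a minimal hedged sketch of the pattern; all names (my_priv, tx_ring, ...) are hypothetical. Since both callbacks run in BH context per the reasoning above, a plain spin_lock() is enough for data shared only between the two of them:

```c
#include <linux/netdevice.h>
#include <linux/spinlock.h>

struct my_priv {
    spinlock_t lock;            /* protects tx_ring and friends */
    struct my_desc *tx_ring;    /* hypothetical descriptor ring */
    struct napi_struct napi;
};

static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
    struct my_priv *priv = netdev_priv(dev);

    /* BH is already disabled here; if process or hard-IRQ context also
     * touched this data, spin_lock_bh()/spin_lock_irqsave() would be
     * needed instead. */
    spin_lock(&priv->lock);
    /* ... place skb on priv->tx_ring ... */
    spin_unlock(&priv->lock);

    return NETDEV_TX_OK;
}

static int my_napi_poll(struct napi_struct *napi, int budget)
{
    struct my_priv *priv = container_of(napi, struct my_priv, napi);
    int done = 0;

    spin_lock(&priv->lock);
    /* ... reclaim completed descriptors from priv->tx_ring ... */
    spin_unlock(&priv->lock);

    if (done < budget)
        napi_complete(napi);
    return done;
}
```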
HTH!
I had the idea of running a small service next to the OS, but I'm not sure if it is possible. I tried to figure it out by reading some docs but didn't get far, so here comes my question.
I read about the UEFI runtime services.
Would it be possible to have a small module in the firmware which runs alongside whatever operating system is used and sends information concerning the location of the device to an address on the internet?
As far as my knowledge goes, I would say that it should not be possible to run something in the background once UEFI has handed control over to the OS kernel.
To clarify my intentions: I would like to have something like that on my laptop. There is the Prey project, but it is installed inside the OS. I'm using a Linux distribution without autologin, so if somebody stole the laptop, they would probably just install Windows.
What you want to do is prohibited, because it would be a gateway for viruses, loggers and other malware.
That said, if you want to get some code running aside of the OS, you should look at System Management Mode (SMM).
SMM is an execution mode of x86 processors, orthogonal to the standard protected mode. SMM allows the BIOS to completely suspend the OS on all CPUs at once and enter SMM mode to execute some BIOS services. Switches into SMM are happening right now on your x86 machine, as you're reading this Stack Overflow answer. They are triggered either by:
hardware: a dedicated System Management Interrupt line (SMI#), very similar to how IRQs work,
software: via an I/O access to a location considered special by the motherboard logic (port 0xb2 is common), as sketched below.
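As an illustration of the software trigger, here is a hedged user-space sketch for Linux. Port 0xb2 matches the answer above, but the command byte is entirely chipset-specific (0x00 is just a placeholder), and firing an SMI invokes whatever handler the platform has installed, so treat this as a demonstration of the mechanism only:

```c
/* Hedged sketch: fire a software SMI by writing to the APM command port.
 * Requires root; the command value is platform-specific (0x00 is a
 * placeholder, not a known-safe command on any given board). */
#include <stdio.h>
#include <sys/io.h>

#define SMI_CMD_PORT 0xb2   /* common, but chipset-dependent */

int main(void)
{
    if (ioperm(SMI_CMD_PORT, 1, 1) != 0) {  /* gain access to the port */
        perror("ioperm");
        return 1;
    }
    outb(0x00, SMI_CMD_PORT);   /* the I/O write itself traps into SMM */
    printf("SMI command byte written to port 0x%x\n", SMI_CMD_PORT);
    return 0;
}
```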
SMM services are called SMM handlers; for instance, sensor values are very often retrieved by means of an SMM call to an SMM handler.
SMM handlers are set up during the DXE phase of UEFI firmware initialization, in SMRAM, an area dedicated to SMM handlers. See the following description:
SMM Drivers are dispatched by the SMM Core during the DXE Phase. So additional SMI handlers can be registered in the DXE Phase. Late in the DXE Phase, when no more SMM drivers can be dispatched, SMRAM will be locked down (as recommended practice). Once SMRAM is locked down, no additional SMM Drivers may be dispatched, so no additional SMI handlers can be registered. For example, an SMM Driver that registers an SMI handler cannot be loaded from the EFI Shell or be added as a DriverOption in the UEFI Boot Manager.
source: tianocore
This means that the code of your SMM handler must be present in the BIOS image, which implies rebuilding the BIOS with your handler added. It's tricky, but tools exist out there both to provide a DXE environment and build your SMM handler code into a PE executable, and to add a DXE driver to an existing BIOS image. Not all BIOS manufacturers are supported, though. It's risky unless your flash chip is in a socket and you can reprogram it externally.
But the first thing to do is to check whether the SMRAM is locked on your system. If you're lucky, you can add your very own SMM handler directly in SMRAM. It's fiddly but doable.
Note: SMM handlers inside the BIOS are independent from the OS, so your handler would keep running even if a robber installs a new operating system, which is what you want. However, being outside of an OS has huge disadvantages: you'd need to embed in your SMM handler a driver for the network interface (a polling-only, interrupt-less driver!) plus 802.11 WLAN, DHCP and IP support to connect to the Wi-Fi and get your data routed to an external host on the Internet. How would you determine the Wi-Fi SSID and password? Well, you could wait for the OS to initialize the network adapter for you, but then you'd need to save and restore the full state of the network host controller between calls. Not a small or easy project.
As far as my knowledge goes, I would say that it should not be possible to run something in the background once UEFI has handed control over to the OS kernel.
I agree. Certainly, the boot environment (prior to ExitBootServices()) only uses a single-threaded model.
There is no concept of threads in the UEFI spec as far as I can see. Furthermore, each runtime service is something the OS deliberately invokes, much like the OS provides system calls for applications. Once you enter a runtime service function, note the following restriction from section 7.1:
Callers are prohibited from using certain other services from another processor or on the same processor following an interrupt as specified in Table 30.
Which firmware functions would be non-reentrant while your call was busy would depend on which parts of the UEFI firmware your runtime service needed access to.
Which is to say that, even if you were prepared to sacrifice a thread to sit eternally inside an EFI runtime service, you could well block the entire rest of the kernel from using other runtime services.
I do not think it is going to be possible, unfortunately, but it is an interesting question all the same!