Driver Portability - Unix

This might be a lame question for this site, but here goes.
Can drivers be portable? For instance, could one write a driver for the keyboard backlight of a Mac and port it to another Unix system, say Solaris? Or is driver portability a contradiction in terms?
Any articles covering this topic would be much appreciated.

Device drivers have to serve two layers of abstraction, which makes portability hard at best:
1) Of course the driver needs to be written for a specific device. The understandable assumption is: once I have implemented a driver according to a proper device specification (data sheet, ...), why shouldn't it run on every computer that needs to access that device? In comes point
2) Drivers are also written to fit into a certain operating system, and each OS has its own means of
a) accessing devices: for example, the functions for reading/writing I/O ports might be named differently or have different signatures. Furthermore,
b) exposing the device to users: the ultimate goal of a driver is to make the device accessible, be it through a file-system interface, network sockets, or the X input protocol. For each of these purposes, every OS again comes with its own set of abstractions that the driver needs to fit into.
These are the reasons why porting drivers is hard. Nevertheless, there are approaches that try to achieve it, most of the time by wrapping the original driver in glue code that translates between the interface the driver expects from its OS and the interfaces of the target system:
NDISWrapper is a compatibility layer that allows running Windows WiFi (NDIS) drivers on Linux.
Some researchers from Karlsruhe proposed running drivers inside device-driver virtual machines.
Several OS frameworks I'm aware of use device-driver wrapper libraries to run Linux/BSD device drivers in their environments; see for instance Genode, L4Re, or MINIX 3.
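To make point 2a) concrete, here is a minimal sketch of the glue-code idea: the device-specific core talks only to a tiny OS-neutral wrapper, so porting means rewriting just the wrapper. All names (kbl_*) and register/command values are illustrative, not taken from any real driver.

```c
#include <stdint.h>

/* OS-neutral interface the (portable) driver core is written against. */
uint8_t kbl_read_reg(uint16_t port);
void    kbl_write_reg(uint16_t port, uint8_t value);

#if defined(__linux__)
#include <sys/io.h>   /* inb()/outb(); needs ioperm() privileges at run time */

uint8_t kbl_read_reg(uint16_t port)             { return inb(port); }
void    kbl_write_reg(uint16_t port, uint8_t v) { outb(v, port); }

#elif defined(__sun)
/* A Solaris port would implement the same two functions with the DDI
 * routines ddi_get8()/ddi_put8() against a mapped register handle.
 * The driver core below stays unchanged; only this glue is rewritten. */
#endif

/* Device-specific logic: portable because it uses only the wrapper. */
void kbl_set_backlight(uint8_t level)
{
    kbl_write_reg(0x64, 0xD4);   /* illustrative command port/value */
    kbl_write_reg(0x60, level);  /* illustrative data port */
}
```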

Yes, they can be.
Assuming a driver is written against the device's specification, the only thing that prevents driver portability is the underlying operating system: different operating systems have different architectures and different mechanisms for loading device drivers and calling into them.
But there have been implementations in which the underlying OS is abstracted away and a uniform platform is provided; that can make drivers portable.

Related

Where OS Kernel and Network protocol stack overlaps?

I'm trying to learn how a network protocol stack (i.e., the transport, IP, and data-link layer code) is implemented, alongside Linux, and I'm confused about where to start.
First question is whether this code comes as a built-in feature of the Linux kernel or of the library layers above it.
If so, why do I see third-party protocol stacks in some applications (e.g., by Blunk Microsystems, a developer of protocol stacks)?
If Linux doesn't have it as a core feature, does Linux only provide placeholders for the network part (like macros to enable a third-party stack)? But an article says it has the NET4 networking codebase.
If Linux has built-in network features, which kernel modules should I go through, and where do I start? Not only from the network perspective: if I could be guided to explore Linux in all aspects (processes, memory, drivers) at the code level, that would be helpful.
Note: I'm eager to write my own OS and protocol stack, hence trying to understand an existing system.
Thanks in advance!
First question is whether this code comes as a built-in feature of the Linux kernel or of the library layers above it.
The Linux kernel has a network stack up to and including layer 4, i.e., TCP and UDP (well, the kernel plus a set of utilities needed to configure it). DNS, by contrast, is not in the kernel; name resolution is done in user space by the libc resolver. TLS used to be implemented purely as a library (OpenSSL and GnuTLS are, I think, the most common ones), but there is now a kernel part too (kernel TLS, or kTLS).
Note that some of the TCP functionality is offloaded to the network card (hardware). At high speeds (1 Gbit/s and above) you won't get full performance without these offload features.
I am not familiar with all the VoIP-related protocols, but I think they are implemented as libraries, not in the kernel.
If so, why do I see third-party protocol stacks in some applications (e.g., by Blunk Microsystems, a developer of protocol stacks)?
I believe the reason is performance. If you implement a custom stack with only the subset of features you need, it might work better for your application. There are also advanced features and protocols that might not be available in the kernel itself.
If Linux doesn't have it as a core feature, does Linux only provide placeholders for the network part (like macros to enable a third-party stack)? But an article says it has the NET4 networking codebase.
Linux does have it as a core feature: there is a very large networking codebase in the kernel (NET4 was the name of one generation of that code).
If Linux has built-in network features, which kernel modules should I go through, and where do I start? Not only from the network perspective: if I could be guided to explore Linux in all aspects (processes, memory, drivers) at the code level, that would be helpful.
Hmm, this is a very good question, and I don't think there is an easy answer. In my experience, reading the code is the only way to figure it out, although some people have had luck mining LWN.net for information.
You could probably start somewhere around include/net/ in the kernel tree.
First question is whether this code comes as a built-in feature of the Linux kernel or of the library layers above it.
If Linux has built-in network features, which kernel modules should I go through, and where do I start?
You can think of a protocol stack as a library. The Linux kernel has one that runs inside the kernel address space and uses kernel APIs unavailable to user space: https://github.com/torvalds/linux/tree/master/net/ipv4
There are multiple in-depth books about Linux kernel networking; reading one is required for a good understanding.
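That "in-kernel library" boundary is easy to see from user space: a TCP client contains no TCP logic at all. The handshake, retransmission, and congestion control all happen inside net/ipv4 on the other side of these system calls (the IP address below is just an example):

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* enters net/ipv4/af_inet.c */

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(80);
    inet_pton(AF_INET, "93.184.216.34", &addr.sin_addr);

    /* Three-way handshake, done entirely by the kernel. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
        const char req[] = "HEAD / HTTP/1.0\r\n\r\n";
        write(fd, req, sizeof req - 1);  /* kernel segments, checksums, sends */
    } else {
        perror("connect");
    }
    close(fd);
    return 0;
}
```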
If so, why do I see third-party protocol stacks in some applications (e.g., by Blunk Microsystems, a developer of protocol stacks)?
Zero-copy, low-latency, and streaming networking (processing an Ethernet packet in CPU-L1-cache-line-sized chunks while it hasn't yet been read off the wire in full) have been problematic with the Linux kernel network stack. For these reasons, makers of networking hardware offer their own user-space network stacks, a.k.a. kernel bypass.
The Linux kernel network stack is getting better these days with MSG_ZEROCOPY and io_uring.
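A sketch of how MSG_ZEROCOPY is used on a connected TCP socket (Linux 4.14 or newer; completion handling is abbreviated here, since real code must keep the buffer untouched until the kernel confirms transmission on the socket error queue):

```c
#include <errno.h>
#include <sys/socket.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY  60          /* value from asm-generic/socket.h */
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000   /* value from linux/socket.h */
#endif

int send_zerocopy(int fd, const void *buf, size_t len)
{
    int one = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof one) < 0)
        return -1;               /* kernel too old, or not a TCP socket */

    /* The kernel pins the user pages and transmits them in place. */
    if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
        return -1;

    /* Completion notification arrives on the error queue; polling in a
     * tight loop here is only for brevity. */
    struct msghdr msg = { 0 };
    char control[128];
    msg.msg_control    = control;
    msg.msg_controllen = sizeof control;
    while (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0 && errno == EAGAIN)
        ;
    return 0;
}
```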

cudaMemcpy device to distant host

I am working on a simulation that runs on a host and uses the GPU for computation. Once the computation is done, the host copies the memory from the device to itself and then sends the computed data to a distant host.
Basically the data path is: GPU -> HOST -> NETWORK CARD
Since the simulation runs in real time, latency matters a lot, and I would like something more like GPU -> NETWORK CARD, in order to reduce the data-transfer delay.
Is it possible?
If no, is it something that we might see someday?
Edit : Distant host => CPU
Yes, this is possible in CUDA 4.0 and later using the GPUDirect facility on platforms which support unified virtual addressing (which I think basically means Linux with Fermi or Kepler Tesla cards at this stage). You haven't said much about what you mean by "distant host", but if you have a network where MPI is feasible, there is probably a ready solution for you to use.
At least MVAPICH2 already has support for GPU-to-GPU transfers using either InfiniBand or TCP/IP, including RDMA directly to the InfiniBand adapter over the PCI Express bus. Other MPI implementations probably have support by now as well, although I haven't looked too closely at them recently to know for sure.
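For illustration, with a CUDA-aware MPI build the device pointer goes straight into the send call, and the library uses GPUDirect/RDMA where the hardware allows, so the explicit device-to-host copy disappears from your code. A sketch (MPI_Init assumed to have been called already; error checking omitted):

```c
#include <cuda_runtime.h>
#include <mpi.h>

void send_result(const float *host_seed, size_t n, int peer)
{
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof *d_buf);
    cudaMemcpy(d_buf, host_seed, n * sizeof *d_buf, cudaMemcpyHostToDevice);

    /* ... kernel launches that fill d_buf ... */

    /* With a CUDA-aware MPI, no cudaMemcpy back to the host is needed:
     * the library recognizes the device pointer and stages the transfer
     * itself (via GPUDirect/RDMA when available). */
    MPI_Send(d_buf, (int)n, MPI_FLOAT, peer, /*tag=*/0, MPI_COMM_WORLD);

    cudaFree(d_buf);
}
```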

Networking from a Kernel Mode Driver

The question is pretty self-explanatory: I need the ability to open and control a socket from a kernel-mode driver on Windows XP. I know that Vista and later provide a kernel-mode Winsock equivalent (Winsock Kernel, WSK), but there is no such thing on XP.
Cheers
Edit
I've had one recommendation to have a user-mode service do the socket work, and another to use TDI. Which is best?
TDI is not an easy interface to use. It is designed to abstract network transport drivers (TCP, NetBEUI, AppleTalk, etc.) away from applications, and you will have to understand the API fully before you can use it for socket work; that is certainly a lot more work than writing a user-mode service and communicating with it. You can issue a reverse IRP from the service to the driver, so the driver can trigger communication when it needs to (see the sketch below).
Also, the more complexity you move out of your driver (here, into user mode), the better.
However, using a user-mode service does require a context switch per data transfer to the driver, which for you might be on a per-packet basis; that is an overhead best avoided.
I'm also curious why a driver needs to perform network I/O at all; superficially at least, that seems to indicate a design issue.
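For reference, the user-mode half of that inverted-call (reverse IRP) pattern might look roughly like the sketch below; the device name and IOCTL code are hypothetical:

```c
#include <windows.h>
#include <winioctl.h>

/* Hypothetical IOCTL that the driver pends until it needs network I/O. */
#define IOCTL_MYDRV_GET_REQUEST \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

int main(void)
{
    HANDLE dev = CreateFileW(L"\\\\.\\MyDriver", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);
    if (dev == INVALID_HANDLE_VALUE)
        return 1;

    unsigned char request[512];
    DWORD got;
    for (;;) {
        /* Blocks until the driver completes the pended IRP with a
         * description of the network work it wants done. */
        if (!DeviceIoControl(dev, IOCTL_MYDRV_GET_REQUEST, NULL, 0,
                             request, sizeof request, &got, NULL))
            break;
        /* ... do the socket work with ordinary Winsock here, hand the
         * result back to the driver via another IOCTL, then loop ... */
    }
    CloseHandle(dev);
    return 0;
}
```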
Use the TDI interface; it's available on both XP and Vista.
http://msdn.microsoft.com/en-us/library/ff565112.aspx

Is the Linux system clipboard represented in the file system somewhere as a device?

If not, why not? It seems as though reading, writing, and appending to it would be far more flexible, provided multi-instance and multi-user issues are accounted for.
AFAIK, no.
But you can use xclip if you want command-line access to the X11 clipboard.
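A program can get at the clipboard the same way, by shelling out to xclip. A minimal example of reading the CLIPBOARD selection from C (assumes xclip is installed and an X session is running):

```c
#include <stdio.h>

int main(void)
{
    /* Read the CLIPBOARD selection via xclip and copy it to stdout. */
    FILE *p = popen("xclip -selection clipboard -o", "r");
    if (!p)
        return 1;

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, p)) > 0)
        fwrite(buf, 1, n, stdout);

    return pclose(p) == 0 ? 0 : 1;
}
```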
No.
The operating system is not for GUI/application-layer semantics; it only provides the raw abstractions needed to present a consistent system to user-space applications. If you want something like this, I would advise writing a system daemon that applications can use as a copy store and access through system IPC such as D-Bus.
The freedesktop.org specifications define standards for GUI interoperability and advise that applications communicate through something like D-Bus.
Rather than a kernel-space mechanism, you may want to manage copy-and-paste semantics on top of OS services such as IPC, keeping the policy in user land while still building on operating-system mechanisms.
While a device-driver presentation kind of makes sense, IMHO this belongs in user space as some kind of mini-database holding source/target data plus metadata about encoding and so on, none of which are strictly kernel concerns.
Please don't write a copy/paste device driver :)
There is no kernel-level "clipboard"; it's a concept belonging to higher layers, such as X11. Of course, nothing stops you from writing a device driver, a user-space filesystem, or whatever else to make it visible in those terms!

Do most modern kernels use DMA for network IO with generic Ethernet controllers?

In most modern operating systems, like Linux and Windows, is network I/O typically accomplished using DMA? This concerns generic Ethernet controllers; I'm not asking about hardware that requires special drivers (such as many wireless cards, at least on Linux). I imagine the answer is "yes", but I'm interested in any sources (especially for the Linux kernel), as well as resources providing more general information. Thanks.
I don't know that there really is such a thing as a generic network interface controller, but the nearest thing I know of, the NE2000 interface specification (implemented by a large number of cheap controllers), appears to have at least some limited DMA support, and more capable controllers are likely to include more extensive DMA features.
The question should be a bit different: does a typical network adapter have a DMA controller on board?
After answering that question (I guess in 99.9% of cases it will be yes), you should ask about the specific driver for each card. I assume any decent driver will fully utilize the hardware's capabilities (i.e., DMA support in our case), but the question about the OS is not quite the right one, since no OS can force a driver to implement DMA support. High-level OSes like Windows and Linux provide primitives that make implementing DMA easier, but implementing it is the driver's responsibility.
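To illustrate that division of labour, here is roughly what the transmit path of a Linux NIC driver does with the kernel's DMA API. The mapping calls are the real kernel interface; the function name and the descriptor-ring step are illustrative, since that part is entirely device-specific:

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/skbuff.h>

/* Illustrative transmit routine; real drivers implement ndo_start_xmit()
 * and obtain the struct device from their PCI/platform device. */
static int mynic_xmit(struct sk_buff *skb, struct device *dev)
{
    dma_addr_t bus_addr;

    /* Hand the packet buffer to the device for reading. */
    bus_addr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, bus_addr))
        return -ENOMEM;

    /* Device-specific: write bus_addr/skb->len into the NIC's TX
     * descriptor ring and ring the doorbell; the card then fetches the
     * frame by DMA with no further CPU copying.
     *
     * mynic_post_tx_descriptor(bus_addr, skb->len);
     */

    return 0;
}
```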
