I am writing an Ethernet driver. I would like to do it in 2 steps:
1. write it without DMA (simple memcpy);
2. rewrite it using DMA.
I would like to ask if it is possible to do it first without using DMA, or does the kernel's Ethernet framework insist that drivers use DMA?
The kernel's not stopping you from doing anything. Specifically, I can't see it stopping you from writing skbuffs or mapping the device memory.
Honestly, your biggest difficulty might be finding examples of network driver code that doesn't use DMA. If I understand correctly, even Linux netpoll (for crash logging over the network) doesn't avoid DMA in the drivers.
I wasn't sure memcpy() would work though...
You need to read the documentation for your device and for kernel I/O memory access. It looks like you need to use memcpy_fromio() and memcpy_toio() on I/O memory; a plain memcpy() on ioremap()ed memory is not guaranteed to work.
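To make that concrete, here is a minimal sketch of a PIO (non-DMA) receive path, assuming the NIC exposes its packet buffer as memory-mapped I/O already mapped with ioremap(); the register offsets RX_LEN_REG and RX_BUF_OFFSET are hypothetical placeholders for your device's layout:

static void example_rx(struct net_device *ndev, void __iomem *regs)
{
    struct sk_buff *skb;
    unsigned int len = ioread16(regs + RX_LEN_REG);   /* hypothetical length register */

    skb = netdev_alloc_skb_ip_align(ndev, len);
    if (!skb) {
        ndev->stats.rx_dropped++;
        return;
    }

    /* Copy out of device memory; a plain memcpy() is not allowed here. */
    memcpy_fromio(skb_put(skb, len), regs + RX_BUF_OFFSET, len);

    skb->protocol = eth_type_trans(skb, ndev);
    netif_rx(skb);

    ndev->stats.rx_packets++;
    ndev->stats.rx_bytes += len;
}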
I am asking a very subjective question, but it is important, as I am trying to recover from my failure to use BlueZ programmatically.
Basically, I envision an IoT edge device that runs on a miniature computer (e.g. a Raspberry Pi or Intel Compute Stick). The device would run Alpine Linux and interact with the cloud.
Since this is an IoT environment, the importance of Bluetooth LE over the ISM band goes without saying; hence the central importance of being able to customize and work with BlueZ.
I am looking to do several things with BlueZ BLE, including but not limited to:
Advertising
Pairing
Characteristic
Broadcast
Secure transport of data etc...
Since I will need full control over the data, for data processing and interacting with the cloud (edge AI or data science in the cloud), I am looking at three ways of using BlueZ:
1. Make D-Bus API calls to BlueZ methods (a minimal sketch follows this list).
2. Modify the BlueZ codebase and install a custom binary (so that callback handlers can be registered and a wealth of other BlueZ methods can be invoked).
3. Invoke BlueZ command-line utilities such as hcitool/bluetoothctl from a program via system() calls.
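For reference, a minimal sketch of what option 1 involves, in C with GLib/GDBus: it merely powers on adapter hci0 through the standard org.freedesktop.DBus.Properties interface (the adapter path is an assumption), and registering agents, advertisements and GATT services on top of this is where the effort grows.

#include <gio/gio.h>

int main(void)
{
    GError *err = NULL;
    GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &err);
    if (!bus) {
        g_printerr("bus: %s\n", err->message);
        return 1;
    }

    /* Set org.bluez.Adapter1.Powered = true on /org/bluez/hci0 */
    GVariant *ret = g_dbus_connection_call_sync(
        bus, "org.bluez", "/org/bluez/hci0",
        "org.freedesktop.DBus.Properties", "Set",
        g_variant_new("(ssv)", "org.bluez.Adapter1", "Powered",
                      g_variant_new_boolean(TRUE)),
        NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &err);
    if (!ret) {
        g_printerr("Set Powered failed: %s\n", err->message);
        return 1;
    }
    g_variant_unref(ret);
    g_object_unref(bus);
    return 0;
}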
Option 1 is where I have failed. It takes an exorbitant amount of effort to construct and export D-Bus objects and then invoke BlueZ methods, and even then there is no guarantee that you will be able to take care of all BLE issues.
Option 2 looks very promising, and I want to fully explore how feasible it is to modify the BlueZ code to my needs.
Option 3 is the least desirable, but I want to have it as a fallback nevertheless.
Given my problem statement, what is the most viable strategy forward? I am asking this out loud so that I do not make more missteps and cost myself time and effort.
Your best strategy is to start with the second option (which you already found promising), as it is a viable solution and many developers take this route to create their BlueZ programs. Here is what I would do:
1. Write all the functionality of the system down as a flowchart or state machine. This helps you visualise your whole system and what needs to be done to reach your end goal.
2. Try to perform all of the above functionality manually using bluetoothctl and btmgmt (a sample session follows this list). This includes advertising, pairing, etc. I recommend steering away from legacy commands such as hcitool and hciconfig, as these have been deprecated and have a very different code structure.
3. When you stumble upon something that is not the default in bluetoothctl/btmgmt, or you want to tweak the functionality, update the source to do so.
4. Once you manually get the system to perform the functionality you need (it doesn't have to be everything; it can be just a subset), you can move on to automating the whole process. This involves modifying the source for the bluetoothctl/btmgmt commands so that, instead of manual intervention, everything is event-driven.
5. As a bonus, if you can create automated tests using Python or some other scripting language, this will ensure that your system is robust and that previous functionality doesn't break when adding new features.
By the end of this process, you'll have a much better understanding of the internals of bluetoothctl/btmgmt and the D-Bus APIs, to the point where you may be able to completely detach your code from the original bluetoothctl/btmgmt or create your program from scratch.
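As a concrete example of step 2, a typical manual session looks something like this (exact commands vary a little between BlueZ versions, and AA:BB:CC:DD:EE:FF is a placeholder address):

bluetoothctl
[bluetooth]# power on
[bluetooth]# agent on
[bluetooth]# default-agent
[bluetooth]# scan on
[bluetooth]# pair AA:BB:CC:DD:EE:FF
[bluetooth]# advertise on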
You probably already know this, but when modifying the tools, these are the starting points in the source code:
bluetoothctl - client/main.c
btmgmt - tools/btmgmt.c
For more references on using bluetoothctl commands and btmgmt, please see the links below:
BlueZ D-Bus C or C++ Sample
Bluetoothctl set passkey
https://stackoverflow.com/a/51876272/2215147
Bluez Programming
Linux command line howto accept pairing for bluetooth device without pin
https://stackoverflow.com/a/52982329/2215147
Bluetooth Low Energy in C - using Bluez to create a GATT server
I hope this helps.
I want to implement a fast DAQ system (10 Mbit/s) for capturing data from an array of sensors. So far I have implemented my system with a Wiznet W5300 and an FPGA, and I am able to communicate with my computer over TCP/IP: the FPGA acts as the server and the PC as the client, and I am using only one socket on port 5000.
I have tested various applications for capturing and saving the data on Windows with no success (some of them crash, and some lack speed). As I am not an expert in network programming, what would be the best way to capture and save the data on the PC side as fast and reliably as possible? I am aiming for something simple. Any guidance would be welcome.
I am using tcpdump on Linux and am very happy with it, and I know there is a Windows equivalent of it called WinDump. Here is the URL for it:
http://www.winpcap.org/windump/
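If a full packet-capture tool is more than you need, a plain blocking socket client that streams everything to disk is usually enough at 10 Mbit/s. Here is a minimal sketch with POSIX sockets (the FPGA's address is an assumption; on Windows you would add WSAStartup()/closesocket(), but the read loop is the same):

#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(5000) };
    inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);   /* FPGA address: assumption */

    if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    FILE *out = fopen("capture.bin", "wb");
    char buf[1 << 16];
    ssize_t n;
    while ((n = recv(s, buf, sizeof buf, 0)) > 0)   /* blocking read loop */
        fwrite(buf, 1, (size_t)n, out);

    fclose(out);
    close(s);
    return 0;
}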
I am working on a simulation which runs on a host and uses the GPU for computation. Once the computation is done, the host copies the memory from the device to itself and then sends the computed data to a distant host.
Basically the data path is: GPU -> HOST -> NETWORK CARD.
Since the simulation runs in real time, timing is very important, and I would like to have something like GPU -> NETWORK CARD, in order to reduce the delay of the data transfer.
Is it possible?
If no, is it something that we might see someday?
Edit : Distant host => CPU
Yes, this is possible in CUDA 4.0 and later using the GPUDirect facility on platforms which support unified virtual addressing (which I think is basically Linux with Fermi or Kepler Tesla cards at this stage). You haven't said much about what you mean by "distant host", but if you have a network where MPI is feasible, there is probably a ready-made solution for you to use.
At least MVAPICH2 already has support for GPU-to-GPU transfers using either InfiniBand or TCP/IP, including RDMA directly to the InfiniBand adapter over the PCI Express bus. Other MPI implementations probably have support by now as well, although I haven't looked too closely at them recently to know for sure.
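To give a feel for it, with a CUDA-aware MPI build (e.g. MVAPICH2 compiled with CUDA support) the device pointer can be handed straight to MPI and the library does the staging or RDMA for you. A sketch, where the buffer size and ranks are assumptions:

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    size_t n = 1 << 20;
    if (rank == 0) {                /* simulation side: send device memory directly */
        float *dbuf;
        cudaMalloc((void **)&dbuf, n * sizeof(float));
        /* ... kernels fill dbuf ... */
        MPI_Send(dbuf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);  /* no cudaMemcpy needed */
        cudaFree(dbuf);
    } else if (rank == 1) {         /* distant host (CPU) side */
        float *hbuf = malloc(n * sizeof(float));
        MPI_Recv(hbuf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        free(hbuf);
    }

    MPI_Finalize();
    return 0;
}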
How is the Global Arrays (GA) library (an implementation of ARMCI) used for communication between two processes located on different remote machines?
Is it something similar to TCP socket programming, where one process waits for data and the other transfers it?
From the documentation I see that ga_put() and ga_get() are two operations used for inter-process communication. So far I have only been able to come up with a program running on a single machine using a shared-memory architecture (I have used ga_put() and ga_get() to put data into a Global Array and to get it back, respectively).
Now I want to use this program to communicate data (basically performing one-sided communication) between two remote processes. Obviously, simply running my single-machine program on the remote side will not be enough: there needs to be some way to tell which machine we should access to get the right data. And here is where I need your help: how can I do this? (What is the GA equivalent of TCP/IP's listen, accept and connect?)
Or is it the case that GA also uses TCP/IP sockets underneath?
Can someone please explain this to me? Sample code of two remote processes communicating would also be appreciated.
thanks,
I am answering my own question after all. Maybe it will help someone looking into the same issue.
The GA library is implemented to work on top of MPI, so we have something like:
MPI_Init(..)
GA_Initialize()
MA_Init(..)
// ... do something here
GA_Terminate()
MPI_Finalize()
The answer to my question is:
MPI has the following primitives to support client-server communication:
// server side
char port_name[MPI_MAX_PORT_NAME];
MPI_Open_port(MPI_INFO_NULL, port_name);   // ask MPI for a connection endpoint
MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
// ... MPI_Send() / MPI_Recv() on the 'client' intercommunicator ...
MPI_Close_port(port_name);

// client side (port_name must be passed out of band, e.g. via a file)
MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
// ... MPI_Recv() / MPI_Send() on the 'server' intercommunicator ...
Depending on the hardware support and the MPI implementation used, MPI might use sockets or other mechanisms (e.g. a SAN, a system area network).
In general, most MPI implementations use sockets for TCP-based communication.
So, yes, GA also uses sockets underneath (depending, of course, on the MPI implementation used).
cheers,
A bit of history: we have an application which was originally written many years ago (1998 is the first date in PVCS, but the app is about 5 years older than that, as it was originally a DOS program). This application communicates with a piece of hardware via a serial port. When we got to Windows XP we started receiving reports of the app dying after a short time of running. It seemed that the serial comms just 'died' and the app was left in a stuck state; the only way to recover was to restart the application.
The only information I can find regarding this problem is that apparently the Windows message system would miss that data had been received, the buffer would fill, and the system would get stuck. This snippet of information was left in an old Word document, but there's no evidence to back it up. It also mentions that the problem is only prevalent at high baud rates (115200+).
The solution was to provide customers with USB-to-serial converters along with the hardware.
Today: we are working on a new version of the hardware that will run over a network as well as serial ports. To let me work on the network code without the actual hardware, we are using a VSCOM NetCom113 device, which also installs a virtual COM port on the user's (i.e. my) machine.
Now that I have the network code integrated with the app, it appears that the NetCom device exhibits the same behaviour as a physical COM port. This is undesirable, as I need the app to run for longer than ~30 seconds.
Google turns up nothing on the problem we are experiencing.
I was wondering:
Has anyone experienced this before? If so what did you do to fix/workaround the problem?
Does anyone have any suggestions as to whether the original author of the document is correct and what I can do to test the theory?
Unfortunately I can't post code, as the serial code is tightly coupled with the rest of the system, though I can answer questions about it.
Updates:
The code is written using the Win32 comm routines, so I am using CreateFile and ReadFile. There are also judicious calls to GetOverlappedResult.
It's not hanging per se; it's just that the comms stop. You can access the menus and click the buttons, but nothing can interact with the connected hardware. Using RealTerm you can see that no data is coming in or going out.
I think the reference to the Windows message system means the problem is internal to Windows: data has arrived, but the kernel has missed it and thus not told the rest of the system about it.
Flow control is not used.
Writing a 'simple' test is difficult, because the code is tightly coupled and the underlying protocol is quite complex, so it would require a lot of work.
Are you using DOS-style serial code, or the Win32 CreateFile approach?
If the former, be very suspicious: if at all possible I'd convert to the latter.
If the latter, do you know what kind of system call it's hanging on? Are you in a blocking read call, an overlapped I/O call, or waiting on an event? (I'm not sure I have enough experience to help, but those are the kinds of questions that come to mind.)
You might also check into the queue size, which you can set with the SetupComm function.
I don't buy the "Windows Message system" stuff; it sounds fishy. You can write good Win32 serial I/O code that never uses Windows messages.
edit: does your overlapped I/O use events? I seem to remember something about auto-reset events occasionally missing their trigger... check your overlapped I/O calls very carefully to see whether you're handling all the possible outcomes properly. Perhaps there's a way to make your code more robust by automatically cancelling the overlapped I/O and restarting another read, along the lines of the sketch below. (I assume the problem is in the read half, not the write half?)
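A sketch only (the port name and timeout are placeholders), using a manual-reset event and a watchdog timeout so a stuck read gets cancelled and reissued instead of wedging the app:

#include <windows.h>

HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                       0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
OVERLAPPED ov = {0};
ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   /* manual-reset, not auto-reset */
BYTE buf[256];
DWORD got;

for (;;) {
    ResetEvent(ov.hEvent);
    if (!ReadFile(h, buf, sizeof buf, &got, &ov)) {
        if (GetLastError() != ERROR_IO_PENDING)
            break;                                     /* genuine error */
        if (WaitForSingleObject(ov.hEvent, 5000) == WAIT_TIMEOUT) {
            CancelIo(h);                               /* read stuck: cancel it... */
            GetOverlappedResult(h, &ov, &got, TRUE);   /* ...wait for the cancel to finish */
            continue;                                  /* ...and reissue the read */
        }
        if (!GetOverlappedResult(h, &ov, &got, FALSE))
            break;
    }
    /* ... process 'got' bytes from buf ... */
}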
edit 2: a suggestion: assuming the Win32 side has missed a byte or packet and your devices are deadlocked because each is waiting for the other to respond, can you tweak the other side of the serial link to regularly send some type of "ping" packet with an incrementing counter? (And log the ping packets on the PC side; that way you can see whether you've missed any.)
Are you sure you have your flow control set up correctly? DTR, RTS, etc...
-Adam
I have written apps that use USB/Bluetooth serial ports and have never had an issue. With Bluetooth I have seen sustained bit rates of 800,000 bps for long periods of time. Most people don't implement the port properly.
My serial port
Not sure if this is a possibility for you, but if you could rewrite the code using C#/.NET you'd have access to the SerialPort class there. It might remedy your problem. I know a lot of legacy code based around the Win32 API for hardware I/O tended to fail on XP due to timing (I had a small bit of experience with that with MIDI).
In addition, I don't know whether the Win32 method of serial port access still works on Vista, so staying with it might shut future MS OSes out of being able to run your code.