Qt Multi-Touch with qTUIO and mtdev2tuio on embedded Linux

I have an embedded linux project where I want to use multi-touch with Qt.
I've been looking at qTUIO ( https://github.com/x29a/qTUIO ) and it looks great.
I cross-compiled the qTUIO library and deployed it to the board.
I also cross-compiled and deployed the requirements for the TUIO 'server':
http://bitmath.org/code/mtdev/
http://liblo.sourceforge.net/
http://repo.or.cz/w/mtdev2tuio.git
On the board I fired up the 'server':
./mtdev2tuio /dev/input/touchscreen osc.udp://127.0.0.1:3333/
Sending OSC/TUIO packets to osc.udp://127.0.0.1:3333/
Just to make sure it was reading the input device, I also ran the following and saw the 'failure in name resolution' errors whenever I moved my finger on the touchscreen:
./mtdev2tuio /dev/input/touchscreen osc.udp://localhost:3333/
Sending OSC/TUIO packets to osc.udp://localhost:3333/
...
OSC error -3: Temporary failure in name resolution
OSC error -3: Temporary failure in name resolution
OSC error -3: Temporary failure in name resolution
OSC error -3: Temporary failure in name resolution
OSC error -3: Temporary failure in name resolution
...
I then ran the qTUIO version of the 'pinchzoom' example on the board; it runs with the following output:
# ./pinchzoom -qws
graphicsview initialized
listening to TUIO messages on UDP port 3333
So I have a server claiming to interpret my touches and send them over UDP to port 3333, and a Qt application claiming to read those TUIO events and pass them on to Qt. When I touch the screen, nothing happens. Does anybody have ideas on this?

Can you actually fire up a network logger (like tcpdump or Wireshark) and see whether OSC packets actually get sent from your server?
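For example, since your tracker sends to 127.0.0.1, something like this on the board should show the packets (the loopback interface and filter are assumptions; adjust them to your setup):
tcpdump -i lo -n -X udp port 3333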
The error
OSC error -3: Temporary failure in name resolution
looks like an issue on your server side, so to eliminate the client as an error source, choose a server (tracker) different from yours. http://tuio.org/?software features a good overview; if you happen to have an Android phone around, try http://code.google.com/p/tuiodroid/ to simulate OSC packets.
Now to the client. qTUIO is actually far from done, so there is a good chance that it is the culprit. A good way to test whether packets are received and forwarded correctly is to look at the overloaded event() method in your code and see if it triggers, and if so, with which type. I can only tell you that it worked okay with CCV 1.4 as the tracker. Also, use the paint example if possible, as it practically translates the touch events to paint events; less magic that could go wrong.
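For instance, a minimal sketch of that check (the MyView subclass and its QGraphicsView base are assumptions modelled on the pinchzoom example):

#include <QDebug>
#include <QEvent>
#include <QGraphicsView>

// Log every event that reaches the view, so you can tell whether
// qTUIO's touch events arrive at all, and with which type.
bool MyView::event(QEvent *event)
{
    qDebug() << "event type:" << event->type();
    switch (event->type()) {
    case QEvent::TouchBegin:
    case QEvent::TouchUpdate:
    case QEvent::TouchEnd:
        qDebug() << "touch event received";
        break;
    default:
        break;
    }
    return QGraphicsView::event(event);
}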
Working in an embedded field just adds another special flavour of error sources. Do you maybe have endianness problems? Timing issues?
Can you provide more info on which versions of libs, OS, hardware, etc. you are using?
I will gladly update this post to provide a real solution once it's clear which component causes the error. Good luck!

Related

Stream instrumentation data losslessly through unreliable 4G

I have some data acquisition devices in industrial machinery that have 4G connectivity. Right now I have them stream the instrumentation data in real time to my server over raw TCP/IP. But this has some problems:
The machinery sometimes works in places where there is little or no mobile connectivity. If there is no connectivity for too long, two things can happen: a) the machine gets shut down and the TCP/IP buffer is lost along with the instrumentation data, or b) the TCP/IP buffer overflows, which has the same result.
The same as point 1, but on the server side: due to maintenance, or if something on the server fails over the weekend when nobody will notice while the machinery is still on and working, we can lose data in the same way as point 1.
I have to manage authentication and the connections of all the clients on a single server TCP port. I have a temporary hack that works for the moment but isn't great. This is a separate problem and not the reason for this question, so take it only as context.
So, I should code an application-layer acknowledgment where the server tells the client when a high-level message (not the individual TCP packets) has been received and processed, and on the client side keep a buffer written to disk from which data is deleted as the server confirms it. This would solve points 1 and 2 (a rough sketch follows below).
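In code, the idea would be something like this sketch (the framing, the names, and the in-memory stand-in for the disk buffer are illustrative assumptions, not an existing library):

#include <cstdint>
#include <deque>
#include <string>

// Hypothetical framing: each message carries a monotonically increasing
// sequence number, and the server replies "ACK <seq>" once it has durably
// processed everything up to that number.
struct PendingMessage {
    uint64_t seq;
    std::string payload;            // persisted to disk before sending
};

void sendToServer(const PendingMessage &m);   // hypothetical transport call

std::deque<PendingMessage> diskQueue;         // stand-in for the on-disk buffer

// Server confirmed everything up to ackedSeq: drop it, oldest first.
void onAckReceived(uint64_t ackedSeq)
{
    while (!diskQueue.empty() && diskQueue.front().seq <= ackedSeq)
        diskQueue.pop_front();      // real code would also delete the disk copy
}

// After an outage, resend everything still unconfirmed.
void onConnectionRestored()
{
    for (const PendingMessage &m : diskQueue)
        sendToServer(m);
}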
But I'm afraid that I'm reinventing the wheel or that I don't know the correct tools, because I think this problem should be fairly common, but I've failed to google it and can't find a library or tool that does this job.
What I was thinking of is something that, on the remote client, listens on a local TCP port for incoming data from the DAQ software; once it receives a message, it streams it to the server and writes it to the local disk. On the server, the tool receives the message and re-streams it over the local network to the final server, then notifies the client that it can delete the message from its disk buffer.
So, the question is: is there something already built for this? I would prefer a precompiled, language-agnostic solution because I code in LabVIEW and I know there's nothing like that in its ecosystem, but I'm open to anything. If nothing like that exists, any advice on what to do or avoid when developing it myself?
Thanks for your time.

TcpClient/NetworkStream not detecting disconnection

I created a simple persistent socket connection for our game using TcpClient and NetworkStream. There's no problem connecting, sending messages, and disconnecting normally (quitting app/server shuts down/etc).
However, I'm having some problems where, in certain cases, the client isn't detecting a disconnection from the server. The easiest way I have of testing this is to pull out the network cable on the wifi box, or set the phone to airplane mode, but it's happened in the middle of a game on what should otherwise be a stable wifi.
Going through the docs for NetworkStream etc, it says that the only way to detect a disconnection is to try to write to the socket. Fair enough, except, when I try, the write passes as if nothing is wrong. I can write multiple messages like this, and everything seems fine. It's only when I plug the cable back in that it sees that it's disconnected (all messages are buffered?).
The TcpClient is set to NoDelay, and there's a Flush() called after every write anyway.
What I've tried:
Writing a message to the NetworkStream - no joy
CanWrite, Connected, etc all return true
TcpClient.Client.Poll( 1000, SelectMode.SelectWrite ); - returns true
TcpClient.Client.Poll( 1000, SelectMode.SelectRead ) && TcpClient.Client.Available == 0 - returns true
TcpClient.Client.Receive(buffer, SocketFlags.Peek) == 0 - when connected, blocks for about 10-20s, then returns true. When no server, blocks forever(?)
NetworkStream.Write() - doesn't throw an error
NetworkStream.BeginWrite() - doesn't throw an error (not even when calling EndWrite())
Setting a WriteTimeout - had no effect
Having a specific time where we haven't received a message from the server (normally there's a keep-alive) - I had this, but removed it, as we were getting a lot of false-positives due to lag etc (some clients would see between 10-20s of lag)
So am I doing something wrong here? Is there any way to get the NetworkStream to throw an error (like it should) when writing to a socket that should be disconnected?
I've no problem with a keep-alive (the default case is the server will notify the client that it hasn't received anything in a while, and the client will send a heartbeat), but at the minute, according to the NetworkStream everything's hunky-dory.
It's for a game, so ideally the detection should be quick enough (as the user can still move through the game until they need to make a server call, some of which will block the UI, so the game seems broken).
I'm using Unity, so it's .Net 2.0
is to pull out the network cable on the wifi box
That's a good test. If you do that, the remote party is not notified. How could it possibly find out? It can't.
when I try, the write passes as if nothing is wrong
Writes can be (and are) buffered. They eventually enter a black hole... No reply comes back. The only way to detect this is a timeout.
So am I doing something wrong here?
You have tried a lot of things, but fundamentally you cannot find out about disconnects if no reply comes back telling you about them. Use a timeout.
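Sketched in C++ for illustration (your code is .NET, but the pattern is language-agnostic; the 10-second threshold is an assumption to tune for your game):

#include <chrono>

using Clock = std::chrono::steady_clock;

Clock::time_point lastHeard = Clock::now();

// Call this whenever any bytes (including keep-alive heartbeats) arrive.
void onBytesReceived() { lastHeard = Clock::now(); }

// Poll this from the game loop; treat prolonged silence as a disconnect,
// regardless of what the socket API claims about its own state.
bool connectionLooksDead()
{
    return Clock::now() - lastHeard > std::chrono::seconds(10);
}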

Is there a reply to AT+GCAP & co. to tell "I'm not a modem, go away"?

I'm working on the firmware of a device that is going to be connected to PCs using Bluetooth in serial port emulation mode.
During testing, I found out that modem-manager on Linux "helpfully" tries to detect it as a modem, sending the AT+GCAP command; to this, currently my device replies with something like INVALIDCMD AT+GCAP. That is the correct response for my protocol, but obviously isn't an AT reply, so modem-manager isn't satisfied and tries again with AT+GCAP and other modem-related stuff.
Now, I found some workarounds for modem-manager (see here and here, in particular the udev rule method), but:
they are not extremely robust (I have to make a custom udev rule that may break if we change the Bluetooth module);
I fear that not only modem-manager but also similar software/OS features (e.g. on Windows or OS X) may give me similar annoyances.
Also, having full control over the firmware, I can add a special case for AT+GCAP and similar stuff; so, coming to my question:
Is there a standard/safe reply to AT+GCAP and other similar modem-probing queries to tell "I'm not a modem, go away and leave me alone?"
(making an answer out of the comments)
In order to indicate "I do not understand any AT commands at all" (i.e. "I am not a modem"), the correct response to any received AT command is silence.
In order to indicate "I do not understand this particular AT command", the correct response is ERROR.
Anything in between will trigger implementation-defined behaviour in the entity sending the AT commands. Some will possibly give up right away, while modem-manager is apparently set on retrying the command until it gets a "proper" response.
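In firmware terms, the dispatch could look something like this sketch (the line-based framing and handleProtocolCommand() are assumptions about your protocol):

#include <string>

void handleProtocolCommand(const std::string &line);  // your own protocol

// Dispatch one received line. Anything starting with "AT" is treated as a
// modem probe: reply "ERROR" (or send nothing at all) instead of a
// protocol-specific string like INVALIDCMD, so probers give up cleanly.
std::string dispatchLine(const std::string &line)
{
    if (line.rfind("AT", 0) == 0)      // line begins with "AT"
        return "ERROR\r\n";            // or return "" for silence
    handleProtocolCommand(line);
    return "";
}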

How to detect if a client has crashed (or exited) for a server using Qt

The client uses SSH to log in and start a server on a remote machine; then the client creates a TCP connection to the server.
The server needs to exit when the client has exited normally, has crashed, or the network has dropped.
So the question is how to detect whether the client the server is connected to has crashed.
My first try was using the error() signal, catching QAbstractSocket::NetworkError to determine that the network has dropped. But I don't receive the error() signal at all, even if I pull out the network cable.
My second try was using the SocketState: I think whenever the state is UnconnectedState, the client has probably exited normally and the server should exit too. This works fine for a "normal exit", but I don't know how to deal with a "crash" or a "dead network".
Help me, thanks!
I'd recommend using TCP keep-alive. It is not exposed through the public QTcpSocket interface, but you can use setsockopt with QAbstractSocket::socketDescriptor to activate the SO_KEEPALIVE feature.
EDIT: It appears that keep-alive was added to QAbstractSocket at some point. So, simply call QAbstractSocket::setSocketOption with QAbstractSocket::KeepAliveOption.
You can find information about adjusting the timeout of keep alive request here: http://www.gnugk.org/keepalive.html
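As a sketch (Qt 5 API plus Linux-specific socket options; the timing values are assumptions to tune for your network):

#include <QTcpSocket>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Enable keep-alive via the Qt API, then tighten the probe timing with
// raw setsockopt calls on the underlying descriptor (Linux-specific).
void enableKeepAlive(QTcpSocket *socket)
{
    socket->setSocketOption(QAbstractSocket::KeepAliveOption, 1);

    int fd = static_cast<int>(socket->socketDescriptor());
    int idle = 5;      // seconds of idle before the first probe
    int interval = 2;  // seconds between probes
    int count = 3;     // lost probes before the connection is declared dead
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}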
Most of the time, the only way you will know there is a problem with a socket connection is when you try to read or write with it. There are some exceptions: Windows will change the state of sockets if the network cable is unplugged; Linux (in my experience) will not.
The most reliable way to detect connection problems is to have the client regularly send a small message at an agreed upon interval with the server. If the server does not see this message within a reasonable time, it should consider the client dead and drop the connection. This will also give both sides regular opportunities to detect a problem via reads and writes.
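A minimal sketch of that server-side watchdog in Qt (Qt 5 connect syntax; the 15-second threshold and the message handling are assumptions):

#include <QCoreApplication>
#include <QTcpSocket>
#include <QTimer>

// 'socket' is the accepted client connection on the server side.
void watchClient(QTcpSocket *socket)
{
    auto *deadTimer = new QTimer(socket);
    deadTimer->setInterval(15000);      // assumed 15 s silence threshold
    deadTimer->setSingleShot(true);
    deadTimer->start();

    QObject::connect(socket, &QTcpSocket::readyRead, [=]() {
        deadTimer->start();             // any traffic resets the deadline
        socket->readAll();              // real code would parse this
    });
    QObject::connect(deadTimer, &QTimer::timeout, [=]() {
        socket->abort();                // silence too long: client is gone
        QCoreApplication::quit();
    });
}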

Delay before sending message over socket - how does that help?

I have a TCP/IP socket interface to a third-party software app. I've implemented this interface for several customer sites with no problem. The latest customer, though... problems. We've turned on logging in the apps on either end, and also installed Wireshark on the PC to log raw TCP/IP traffic. With that, we've proved that my server app successfully sends the message out and the PC receives it, but the client app doesn't see it. (This is a totally intermittent problem, which is why it's such a pain to troubleshoot.)
The socket details are as simple as they come: one socket handling two-way communications between the server and the PC. The messages are plain ASCII text and fairly short (not XML). The server initiates communications by sending the first message, and then the client responds with several messages. The socket is kept open at all times while the apps are running. The client app is designed so that the end user can only process one case at a time, which prevents message collisions. They have some sort of polling set up; their app "hibernates" until it sees the initiating message from the server.
The third-party vendor has advised me to add a few seconds' delay before I send them the initiating message. I can't see how that helps. If the client is "sleeping", just polling the socket waiting for a message, how does adding a delay before the first message help? It's not like we send two messages and the second one gets lost. It's losing the first message. So I don't see how it matters whether we send that message now or two seconds from now.
I've asked them and they haven't given me details. It could be some proprietary details in their coding that they don't want to disclose to me, and that's fair. So I'm asking here because I'm always learning new things about socket programming. Maybe you guys can shed some light on how polling a TCP/IP socket can be affected by message timing?
Since it's someone else's client and they won't tell you what it's doing (other than saying 'insert a delay'), the answer is probably that their client is reading and discarding the message because it's not yet in a state to deal with it. The delay will allow the client time to get into a state where it can respond to the message properly.
In other words, the client has a race condition. One easy way this can happen is if they have one thread for reading messages and another for dealing with them.
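A hypothetical illustration of that race (every helper function here is a made-up name, not the vendor's actual code):

#include <atomic>
#include <string>

std::string readLineFromSocket();                  // hypothetical blocking read
void enqueueForProcessing(const std::string &msg); // hypothetical handler queue
void doSlowInitialisation();                       // hypothetical startup work

std::atomic<bool> handlerReady{false};

// Reader thread: runs from program start and drains the socket continuously.
void readerThread()
{
    for (;;) {
        std::string msg = readLineFromSocket();
        if (handlerReady)
            enqueueForProcessing(msg);
        // else: the message is silently discarded - this is the race window
        // that an initial delay from the server happens to paper over
    }
}

// Startup thread: the handler only comes online after slow initialisation.
void startupThread()
{
    doSlowInitialisation();
    handlerReady = true;
}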
Short of running strace(1) on the client to see what system calls it is making, it's tough to tell what the client is actually doing.