How to use another event loop in win32 gui application - win32gui

I am new to Win32 API programming, and I am trying to write an XMPP client for the Windows platform using the Win32 API and the gloox XMPP library. gloox has its own event loop, while a Windows GUI has a message loop as well. I am not clear on how to use these two loops together.
From the gloox document:
Blocking vs. Non-blocking Connections
For some kind of bots a blocking connection (the default behaviour) is ideal. All the bot does is react to events coming from the server. However, for end user clients or anything with a GUI this is far from perfect.
In these cases non-blocking connections can be used. If ClientBase::connect( false ) is called, the function returns immediately after the connection has been established. It is then the responsibility of the programmer to initiate receiving of data from the socket.
The easiest way is to call ClientBase::recv() periodically with the desired timeout (in microseconds) as parameter. The default value of -1 means the call blocks until any data was received, which is then parsed automatically.
Window message loop:
while (GetMessage(&msg, NULL, 0, 0))
{
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}
return msg.wParam;
Window proc:
LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    TCHAR str[100];
    StringCchPrintf(str, _countof(str), TEXT("Message ID:%-6x:%s"), msg, GetStringMessage(msg));
    OutputDebugString(str);

    HDC hdc;
    PAINTSTRUCT ps;
    RECT rect;

    switch (msg)
    {
    case WM_CREATE:
        return 0;
    case WM_PAINT:
        hdc = BeginPaint(hWnd, &ps);
        GetClientRect(hWnd, &rect);
        DrawText(hdc, TEXT("DRAW TEXT ON CLIENT AREA"), -1, &rect, DT_CENTER | DT_SINGLELINE | DT_VCENTER);
        EndPaint(hWnd, &ps);
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    default:
        break;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}
gloox blocking connection
JID jid( "jid@server/resource" );
Client* client = new Client( jid, "password" );
client->registerConnectionListener( this );
client->registerPresenceHandler( this );
client->connect(); // blocks here and runs gloox's own event loop
gloox non-blocking connection
Client* client = new Client( ... );
ConnectionTCPClient* conn = new ConnectionTCPClient( client, client->logInstance(), server, port );
client->setConnectionImpl( conn );
client->connect( false );
int sock = conn->socket();
[...]
I am not very clear on how I can
call ClientBase::recv() periodically with the desired timeout (in microseconds) as parameter.
With a timer? With multithreaded programming? Or is there a better solution?
Any suggestions appreciated
Thank you

The best I/O strategy for this is overlapped I/O. Unfortunately, that method is Windows-only and is not supported by the cross-platform library you've picked.
You can use the SetTimer() API and periodically call the library's recv() method with a zero timeout in your WM_TIMER handler. This introduces extra latency (your PC receives a message but has to wait for the next timer event to handle it), or, if you use small intervals like 20 ms, it will drain the battery on laptops and tablets.
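A minimal sketch of the timer approach, assuming the non-blocking gloox Client is reachable from the window procedure (the g_client pointer, timer ID and 50 ms interval below are illustrative, not from the post):

// Hypothetical globals for illustration only.
static gloox::Client* g_client = nullptr;
static const UINT_PTR IDT_XMPP_POLL = 1;

// After client->connect( false ) has succeeded:
SetTimer(hWnd, IDT_XMPP_POLL, 50, NULL);   // fire WM_TIMER roughly every 50 ms

// In WndProc:
case WM_TIMER:
    if (wParam == IDT_XMPP_POLL && g_client)
        g_client->recv(0);                 // zero timeout: return immediately if no data is pending
    return 0;

// On WM_DESTROY (or when disconnecting):
KillTimer(hWnd, IDT_XMPP_POLL);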
You can use the blocking API on a separate thread. This is more efficient performance-wise but harder to implement: you'll have to marshal messages and other events to the GUI thread. Custom window messages (WM_USER+n) are usually the best way to do that, BTW.
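A sketch of the threaded alternative, assuming the worker thread posts a custom message to the GUI window whenever it has something to report (the message ID, thread function and parameter passing are illustrative):

static const UINT WM_APP_XMPP_EVENT = WM_USER + 1;   // hypothetical custom message

// Worker thread: run gloox's blocking loop and notify the GUI thread.
DWORD WINAPI XmppThread(LPVOID param)
{
    HWND hWnd = static_cast<HWND>(param);
    gloox::JID jid("jid@server/resource");
    gloox::Client client(jid, "password");
    // ... register connection/presence handlers here; inside them, hand results
    //     to the GUI with e.g. PostMessage(hWnd, WM_APP_XMPP_EVENT, 0, 0);
    client.connect();   // blocking: gloox runs its own event loop on this thread
    return 0;
}

// GUI thread: CreateThread(NULL, 0, XmppThread, hWnd, 0, NULL);
// then handle WM_APP_XMPP_EVENT in WndProc to update the UI.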

Related

How to Perform Concurrent Request-Reply for Asynchronous Tasks with ZeroMQ?

Intention
I want to allow a client to send a task to some server at a fixed address.
The server may take that task and perform it at some arbitrary point in the future, but may still take requests from other clients before then.
After performing the task, the server will reply to the client, which may have been running a blocking wait on the reply.
The work and clients come dynamically, so there can't be a fixed initial number.
The work is done in a non-thread-safe context, so workers can't exist on different threads, so all work should take place in a single thread.
Implementation
The following example1 is not a complete implementation of the server, only a compilable section of the sequence that should be able to take place (but is in reality hanging).
Two clients send an integer each, and the server takes one request, then the next request, echo replies to the first request, then echo replies to the second request.
The intention isn't to get the responses ordered, only to allow for the holding of multiple requests simultaneously by the server.
What actually happens here is that the second worker hangs waiting on the request - this is what confuses me, as DEALER sockets should route outgoing messages in a round-robin strategy.
#include <unistd.h>
#include <stdio.h>
#include <zmq.h>
#include <sys/wait.h>

int client(int num)
{
    void *context, *client;
    int buf[1];

    context = zmq_ctx_new();
    client = zmq_socket(context, ZMQ_REQ);
    zmq_connect(client, "tcp://localhost:5559");

    *buf = num;
    zmq_send(client, buf, 1, 0);
    *buf = 0;
    zmq_recv(client, buf, 1, 0);
    printf("client %d receiving: %d\n", num, *buf);

    zmq_close(client);
    zmq_ctx_destroy(context);
    return 0;
}

void multipart_proxy(void *from, void *to)
{
    zmq_msg_t message;
    while (1) {
        zmq_msg_init(&message);
        zmq_msg_recv(&message, from, 0);
        int more = zmq_msg_more(&message);
        zmq_msg_send(&message, to, more ? ZMQ_SNDMORE : 0);
        zmq_msg_close(&message);
        if (!more) break;
    }
}

int main(void)
{
    int status;
    if (fork() == 0) {
        client(1);
        return 0;
    }
    if (fork() == 0) {
        client(2);
        return 0;
    }

    /* SERVER */
    void *context, *frontend, *backend, *worker1, *worker2;
    int wbuf1[1], wbuf2[1];

    context = zmq_ctx_new();
    frontend = zmq_socket(context, ZMQ_ROUTER);
    backend = zmq_socket(context, ZMQ_DEALER);
    zmq_bind(frontend, "tcp://*:5559");
    zmq_bind(backend, "inproc://workers");

    worker1 = zmq_socket(context, ZMQ_REP);
    zmq_connect(worker1, "inproc://workers");
    multipart_proxy(frontend, backend);
    *wbuf1 = 0;
    zmq_recv(worker1, wbuf1, 1, 0);
    printf("worker1 receiving: %d\n", *wbuf1);

    worker2 = zmq_socket(context, ZMQ_REP);
    zmq_connect(worker2, "inproc://workers");
    multipart_proxy(frontend, backend);
    *wbuf2 = 0;
    zmq_recv(worker2, wbuf2, 1, 0);
    printf("worker2 receiving: %d\n", *wbuf2);

    zmq_send(worker1, wbuf1, 1, 0);
    multipart_proxy(backend, frontend);
    zmq_send(worker2, wbuf2, 1, 0);
    multipart_proxy(backend, frontend);

    wait(&status);

    zmq_close(frontend);
    zmq_close(backend);
    zmq_close(worker1);
    zmq_close(worker2);
    zmq_ctx_destroy(context);
    return 0;
}
Other Options
I have looked at CLIENT and SERVER sockets and they appear capable on paper; however, in practice they are new enough that the system version of ZeroMQ I have doesn't yet support them.
If it is not possible to perform this in ZeroMQ, any alternative suggestions are very welcome.
1 Based on the Shared Queue section of the ZeroMQ guide.
Let me share a view on how ZeroMQ could meet the Intention defined above.
Let's use the ZeroMQ Scalable Formal Communication Pattern archetypes as they are available and working right now, not as we might wish them to behave at some uncertain point in a possible future evolution of the library.
We need not hesitate to use many more ZeroMQ-based connections between the server and the herd of coming-and-going client instances.
For example (a minimal socket-setup sketch follows this list):
The Client .connect()-s a REQ socket to Server-address:port-A and asks over this connection for a "job"-ticket to be processed.
The Client .connect()-s a SUB socket to Server-address:port-B to listen (while present) for published announcements about already completed "job"-tickets whose results the Server is ready to deliver.
The Client uses another REQ socket to request delivery of the results for a "job"-ticket whose completion it has just heard announced over the SUB socket. It proves its right to receive the announced results by providing the matching job-ticket-AUTH-key, and it uses the same socket to deliver a POSACK message to the Server once it has correctly received the "job"-ticket results "in hands".
The Server exposes a REP socket to respond to each client's ad-hoc "job"-ticket request, notifying the client that the job ticket has been "accepted" and delivering a job-ticket-AUTH-key for the later pickup of results.
The Server exposes a PUB socket to announce any and all not-yet-picked-up "finished" job tickets.
The Server exposes another REP socket to receive any attempt to retrieve "job"-ticket results. Upon verifying the delivered job-ticket-AUTH-key, the Server decides whether the respective REQ message carried a matching key, in which case it delivers a proper message with the results, or whether it did not, in which case the reply carries some other payload (the exact logic is left for further thought, so as to prevent brute-forcing, eavesdropping and similar, less primitive, attacks aimed at stealing the results).
Clients need not stay online waiting for results and can survive a certain amount of loss of signal, L2/L3 errors or network-storm stress.
Clients just need to keep some kind of job-ticket-ID plus the job-ticket-AUTH-key for later retrieval of the results the Server has processed, maintained and authorised.
The Server will keep listening for new jobs.
The Server will accept new job tickets, providing a privately generated job-ticket-AUTH-key for each.
The Server will process job tickets whenever it decides to do so.
The Server will maintain a circular buffer of completed job tickets to be announced.
The Server will announce in public, in due time and repeatedly as it decides, the job tickets that are ready for client-initiated retrieval.
The Server will accept new retrieval requests.
The Server will check client requests against the announced job-ticket-IDs and test whether the job-ticket-AUTH-key matches as well.
The Server will respond to both matching and non-matching job-ticket-ID retrieval requests.
The Server will remove a job-ticket-ID from the circular buffer only after both a POSACK-ed AUTH match before retrieval and a POSACK message re-confirming delivery to the client.
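A minimal sketch of the server-side socket setup this implies, keeping all work on a single thread as the question requires (the port numbers, endpoint strings and polling loop below are illustrative assumptions, not part of the design above):

void *ctx       = zmq_ctx_new();
void *job_rep   = zmq_socket(ctx, ZMQ_REP);   /* accept "job"-ticket requests    */
void *done_pub  = zmq_socket(ctx, ZMQ_PUB);   /* announce finished "job"-tickets */
void *fetch_rep = zmq_socket(ctx, ZMQ_REP);   /* serve result-retrieval requests */
zmq_bind(job_rep,   "tcp://*:5555");          /* port-A (hypothetical)           */
zmq_bind(done_pub,  "tcp://*:5556");          /* port-B (hypothetical)           */
zmq_bind(fetch_rep, "tcp://*:5557");          /* port-C (hypothetical)           */

zmq_pollitem_t items[] = {
    { job_rep,   0, ZMQ_POLLIN, 0 },
    { fetch_rep, 0, ZMQ_POLLIN, 0 },
};
while (1) {
    zmq_poll(items, 2, 100);                  /* everything stays on one thread  */
    if (items[0].revents & ZMQ_POLLIN) {
        /* recv a job request, reply with job-ticket-ID + AUTH-key, queue the work */
    }
    if (items[1].revents & ZMQ_POLLIN) {
        /* recv a retrieval request, verify the AUTH-key, reply with results or not */
    }
    /* whenever a queued job completes: zmq_send(done_pub, ...) announces the ticket */
}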

Async server does not process requests while a request is stuck

I am new to gRPC, so please let me know if I am doing something wrong here. I am looking at the greeter_async_server.cc example code. This seems to work fine for normal requests, but I wanted to simulate a request getting stuck on the server, so I added a sleep in the processing loop. I added it right before Finish is called on the responder so that it sits in the actual processing logic of the request. While the server thread is sleeping it will not accept any new requests until the thread is free. I attempted to create another client request while the original request on the server was sleeping, but the gRPC server would not process it. The client appeared to be stuck until the server came out of the sleep.
I also broke into the process with a debugger, but the only request I saw was the one that was sleeping. The other threads were waiting on the completion queue.
I am new to gRPC, so if I am doing this wrong please let me know what I need to do to handle requests while another request is stuck.
void Proceed() {
    if (status_ == CREATE) {
        // Make this instance progress to the PROCESS state.
        status_ = PROCESS;

        // As part of the initial CREATE state, we *request* that the system
        // start processing SayHello requests. In this request, "this" acts as
        // the tag uniquely identifying the request (so that different CallData
        // instances can serve different requests concurrently), in this case
        // the memory address of this CallData instance.
        service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
                                  this);
    } else if (status_ == PROCESS) {
        // Spawn a new CallData instance to serve new clients while we process
        // the one for this CallData. The instance will deallocate itself as
        // part of its FINISH state.
        new CallData(service_, cq_);

        // The actual processing.
        std::string prefix("Hello ");
        reply_.set_message(prefix + request_.name());

        Sleep((DWORD)-1);  // simulate a request getting stuck

        // And we are done! Let the gRPC runtime know we've finished, using the
        // memory address of this instance as the uniquely identifying tag for
        // the event.
        status_ = FINISH;
        responder_.Finish(reply_, Status::OK, this);
    } else {
        GPR_ASSERT(status_ == FINISH);
        // Once in the FINISH state, deallocate ourselves (CallData).
        delete this;
    }
}
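For context: in the greeter_async_server example a single thread running HandleRpcs() is the only thing pulling tags off the completion queue, so any call that blocks inside Proceed() stalls every other outstanding request. One common remedy is to drain the same completion queue from several threads; the fragment below is only a sketch of that idea (the thread count and the helper's name are illustrative, not from the sample):

// Sketch: several threads share cq_, so a handler that blocks only ties up one of them.
void HandleRpcsMultiThreaded(int num_threads = 4) {
    new CallData(&service_, cq_.get());          // start serving the first request
    std::vector<std::thread> pollers;
    for (int i = 0; i < num_threads; ++i) {
        pollers.emplace_back([this] {
            void* tag;
            bool ok;
            while (cq_->Next(&tag, &ok)) {       // completion queues may be drained from many threads
                GPR_ASSERT(ok);
                static_cast<CallData*>(tag)->Proceed();
            }
        });
    }
    for (auto& t : pollers) t.join();
}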

Sending TCP data without receiving (boost asio)

I'm working my way through Boost Asio's tutorial and looking into their chat example. More specifically, I'm trying to split their chat client from a combined sender and receiver into just a sender and just a receiver, but I'm seeing some behaviour that I can't explain.
The setup consists of:
boost::asio::io_service io_service;
tcp::resolver::iterator endpoint = resolver.resolve(...);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
boost::asio::async_connect(socket, endpoint, bind(handle_connect, ... ));
The sending portion effectively consists of:
while (std::cin.getline(str))
    io_service.post( do_write, str );
and
void do_write (string str)
{
    boost::asio::async_write(socket, str, bind( handle_write, ... ));
}
The receive section consists of
void handle_connect(...)
{
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}

void handle_read(...)
{
    std::cout << read_msg_;
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
If I comment out the content of handle_connect to isolate the send portion, my other client (compiled using the original code) does not receive anything. If I revert, then comment out the content of handle_read, my other client only receives the first message.
Why is it necessary to call async_read() in order to be able to post() an async_write()?
The full unmodified code is linked above.
The problem here is that your io_service runs out of work and stops processing requests even before you start sending your chat messages.
If you comment out the body of handle_connect, then the only work it had to do was to dispatch the handle_connect handler and then execute it once the connection was made.
std::size_t scheduler::run(asio::error_code& ec)
{
  .....
  mutex::scoped_lock lock(mutex_);

  std::size_t n = 0;
  for (; do_run_one(lock, this_thread, ec); lock.lock())
    if (n != (std::numeric_limits<std::size_t>::max)())
      ++n;
  return n;
}
So, you have to provide it with something in its operation queue. In the original code this was done with the handle_read_header handler, which always remains pending until the client receives something from the server.
You can do what you want by giving the io_service explicit work:
asio::io_context io_context;
asio::io_context::work wrk(io_context); // make `run` run forever
tcp::resolver resolver(io_context);
tcp::resolver::results_type endpoints = resolver.resolve(argv[1], argv[2]);
chat_client c(io_context, endpoints);
asio::thread t(boost::bind(&asio::io_context::run, &io_context));

Windows BLE UWP disconnect

How does one force Windows to disconnect from a BLE device being used in a UWP app? I receive notifications from some characteristics, but at some point I want to stop receiving them and make sure I disconnect from the BLE device to save the device's battery.
Assuming your application is running as a GATT client and these are the instances you are working with in your code:
GattCharacteristic myGattchar; // The GATT characteristic you are reading or writing on your BLE peripheral
GattDeviceService myGattServ;  // The BLE peripheral's GATT service to which you are connecting from your application
BluetoothLEDevice myBleDev;    // The BLE peripheral device you are connecting to from your application
When you are already connected to your BLE peripheral, calling the Dispose() methods like this:
myBleDev.Dispose(); and/or myGattServ.Dispose(); and/or myGattchar.Service.Dispose()
will certainly free resources in your app but will not cleanly close the BLE connection: the application loses access to the resources that control the connection, yet the connection remains established at the lower levels of the stack (on my peripheral device the Bluetooth connection-active LED stays ON after calling any of the Dispose() methods).
Forcing disconnection is done by first disabling notifications and indications on the characteristic concerned (i.e. myGattchar in the example above), writing a 0 (zero) to that characteristic's Client Characteristic Configuration descriptor through a call to WriteClientCharacteristicConfigurationDescriptorAsync with the parameter GattClientCharacteristicConfigurationDescriptorValue.None:
GattCommunicationStatus status =
await myGattchar.WriteClientCharacteristicConfigurationDescriptorAsync(
GattClientCharacteristicConfigurationDescriptorValue.None);
Just dispose all objects related to the device. That will disconnect the device, unless there are other apps connected to it.
For my UWP app, even though I've used Dispose() methods, I still received notifications. What helped me was setting my device and characteristics to null. Example:
device.Dispose();
device = null;
I'm not at all certain how "correct" this programming is, but it's been working fine for me so far.
The UWP Bluetooth BLE sample code from Microsoft (dispose the BLE device) didn't work for me. I had to add code (dispose the service) to disconnect the device.
private async Task<bool> ClearBluetoothLEDeviceAsync()
{
    if (subscribedForNotifications)
    {
        // Need to clear the CCCD from the remote device so we stop receiving notifications
        var result = await registeredCharacteristic.WriteClientCharacteristicConfigurationDescriptorAsync(GattClientCharacteristicConfigurationDescriptorValue.None);
        if (result != GattCommunicationStatus.Success)
        {
            return false;
        }
        else
        {
            selectedCharacteristic.ValueChanged -= Characteristic_ValueChanged;
            subscribedForNotifications = false;
        }
    }

    selectedService?.Dispose();   // added code
    selectedService = null;       // added code
    bluetoothLeDevice?.Dispose();
    bluetoothLeDevice = null;
    return true;
}
Remember you must call -= for every event you subscribed with +=, or the disposed objects will never really be garbage collected. It's a little more code, I know, but that's the way it is.
And not just with the Bluetooth objects, I will remind you: with everything. You can't keep hard-referenced event handlers and expect garbage collection to work as expected.
Doing all of the disposing and null-ing of references suggested above didn't achieve the disconnection (as seen in Windows Settings) that I was looking for.
But dealing with an IOCTL through DeviceIoControl did the job.
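The answer doesn't show the call itself; as a heavily hedged sketch, one way this is commonly done from desktop (Win32) code is to open a handle to the local radio and send IOCTL_BTH_DISCONNECT_DEVICE with the peripheral's 64-bit Bluetooth address. Treat the header names, control code and buffer layout below as assumptions to verify against your SDK/WDK headers:

#include <windows.h>
#include <bluetoothapis.h>   // BluetoothFindFirstRadio / BluetoothFindRadioClose (link Bthprops.lib)
#include <bthioctl.h>        // IOCTL_BTH_DISCONNECT_DEVICE (WDK header)

bool DisconnectByAddress(ULONGLONG bthAddress)   // the peripheral's BTH_ADDR
{
    BLUETOOTH_FIND_RADIO_PARAMS params = { sizeof(params) };
    HANDLE hRadio = NULL;
    HBLUETOOTH_RADIO_FIND hFind = BluetoothFindFirstRadio(&params, &hRadio);
    if (!hFind)
        return false;

    DWORD bytesReturned = 0;
    BOOL ok = DeviceIoControl(hRadio, IOCTL_BTH_DISCONNECT_DEVICE,
                              &bthAddress, sizeof(bthAddress),
                              NULL, 0, &bytesReturned, NULL);

    CloseHandle(hRadio);
    BluetoothFindRadioClose(hFind);
    return ok == TRUE;
}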
I found that after calling GattDeviceService.GetCharacteristicsAsync(), BluetoothLeDevice.Dispose() does not work. So I dispose the Service I don't need.
GattCharacteristicsResult characteristicsResult = await service.GetCharacteristicsAsync();
if (characteristicsResult.Status == GattCommunicationStatus.Success)
{
    foreach (GattCharacteristic characteristic in characteristicsResult.Characteristics)
    {
        if (characteristic.Uuid.Equals(writeGuid))
        {
            write = characteristic;
        }
        if (characteristic.Uuid.Equals(notifyGuid))
        {
            notify = characteristic;
        }
    }

    if (write == null && notify == null)
    {
        service.Dispose();
        Log($"Dispose service: {service.Uuid}");
    }
    else
    {
        break;   // leaves an enclosing loop over the device's services (not shown in this snippet)
    }
}
Finally, when I want to disconnect the Bluetooth connection
write.Service.Dispose();
device.Dispose();
device = null;

Qt Signal and slots not working as expected

When the socket times out while waiting for a read, it occasionally fails. But when it does fail, it fails continuously, and the log message in slotDisconnected never gets reported despite mpSocket's disconnected signal being connected to slotDisconnected(). It's as if the return statement in slotConnected isn't being hit and it's going round in a continuous loop.
void Worker::slotDisconnected()
{
    // Attempt to reconnect
    log("Disconnected from Server. Attempting to reconnect...");
    // fires mpSocket's connect signal (which is connected to slotConnected)
    connectToServer();
}

void Worker::slotConnected()
{
    // Loop forever while connected and receiving messages correctly
    while(1)
    {
        if(mpSocket->bytesAvailable())
        {
            // A message is ready to read
        }
        else if(!mpSocket->waitForReadyRead(mSocketTimeOut))
        {
            // waitForReadyRead returned false - instead of continuing and trying again, we must disconnect as sometimes
            // (for some unknown reason) it gets stuck in an infinite loop without disconnecting itself as it should
            log("Socket timed out while waiting for next message.\nError String: " + mpSocket->errorString());
            msleep(3000);
            mpSocket->disconnect();
            return;
        }
    }
}
The signals/slots are connected like so:
connect(mpSocket, &QAbstractSocket::disconnected, this, &TRNGrabberWorker::slotDisconnected);
connect(mpSocket, &QAbstractSocket::connected, this, &TRNGrabberWorker::slotConnected);
Does anyone have any ideas what's going on? It would be much appreciated.
To disconnect from the server, use mpSocket->disconnectFromHost(); instead of mpSocket->disconnect();.
Actually, mpSocket->disconnect(); is QObject::disconnect(): it disconnects all signals/slots of the mpSocket object.
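A short illustration of the difference (member names follow the question; this is just a sketch):

// QObject::disconnect() only severs signal/slot connections; the TCP link stays up,
// and slotDisconnected() can never run again because nothing is connected to it.
mpSocket->disconnect();

// QAbstractSocket::disconnectFromHost() actually closes the connection and emits
// disconnected() once the socket has disconnected, so slotDisconnected() is called.
mpSocket->disconnectFromHost();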
