I have a test program as below:
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <chrono>
#include <iostream>
#include <thread>

using namespace std;
using namespace boost::asio;

int main() {
    io_service io;
    deadline_timer t1(io);
    deadline_timer t2(io);

    t1.expires_from_now(boost::posix_time::seconds(10));
    t1.async_wait([](const boost::system::error_code &error) {
        if (error == boost::asio::error::operation_aborted) {
            cout << "timer1 canceled" << endl;
        } else {
            cout << "timer1 expired" << endl;
        }
    });

    t2.expires_from_now(boost::posix_time::seconds(2));
    t2.async_wait([&t1](const boost::system::error_code &error) {
        if (error == boost::asio::error::operation_aborted) {
            cout << "timer2 canceled" << endl;
        } else {
            t1.cancel();
            for (int i = 0; i < 5000000; i++) {
                cout << i << endl;
            }
            // usleep(1000000);
            std::this_thread::sleep_for(std::chrono::milliseconds(1000));
            cout << "timer2 expired" << endl;
        }
    });

    io.run();
}
I was wondering: when timer2 expires and cancels timer1, which of "timer1 canceled" and "timer2 expired" will print first?
The result is "timer2 expired". That makes sense, as a single-threaded program will keep executing until some "block" happens.
But after inserting the sleep_for(1000ms) line (which should block execution and put the process into the "Sleep" state), "timer2 expired" still prints before "timer1 canceled".
In my imagination, boost::asio is something written around epoll, which can deal with "events" like incoming messages from the network (timer2 expiring) and "blocks" like writing to disk (the sleep) in a single thread. So why didn't "timer1 canceled" appear right after sleep_for?
Question 2: assume a program handling network requests and timer jobs in one thread.
void fun1() {
    timer1.expires_from_now(boost::posix_time::seconds(3));
    timer1.async_wait([](const boost::system::error_code &error) {
        cout << "timer1 expired";
        heavy_job();
    });
}
fun1's timer expired and its handler printed "timer1 expired", but has not executed heavy_job() yet. At this moment, a request from the network triggered fun2:
int wait_funcs = timer1.expires_from_now(boost::posix_time::seconds(3));
if (wait_funcs == 0) {
    job2();
} else {
    job3();
}
Which of the following will happen?
heavy_job done -> job2:
meaning fun1 will not be interrupted by fun2, and fun2 runs after fun1 completes (or blocks?)
job2 done -> heavy_job:
meaning the wait_funcs check detected the unsafe behavior
Sorry about my demonstration; I am new to boost::asio and confused.
You can think of io_service as a producer-consumer queue: whenever an asynchronous operation completes (or is aborted), its completion handler is pushed into the queue; on the other hand, io_service::run() fetches these handlers from the queue and invokes them.
Also note that as long as your program is single-threaded (and does not use coroutines), all the completion handlers are always executed sequentially, one by one.
So, in your first example the t2 timer expires first, and its completion handler gets invoked. The handler of t1 will not be fetched from the queue until the previous one completes; it does not matter how long that takes.
The same applies to your second example: the completion handler of timer1 runs in the context of io_service::run(), so the latter cannot fetch any subsequent handler until the previous one is done. So, if heavy_job() takes too long to complete, all other handlers will be stuck in the queue.
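Here is a minimal sketch of that model (my own example, not the asker's code): two handlers posted to a single-threaded io_service run strictly one after the other, even when the first one sleeps.

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    boost::asio::io_service io;
    // First handler: simulates "blocking" work inside a completion handler.
    io.post([] {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        std::cout << "first handler done" << std::endl;
    });
    // Second handler: sits in the queue until the first one returns.
    io.post([] {
        std::cout << "second handler" << std::endl;
    });
    io.run(); // single thread: handlers are invoked strictly one by one
}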
Related
I have a simple function that is served with gRPC; the starting method of this service is like this:
void start_server() {
string addr("0.0.0.0:50002");
ImageServiceImpl service;
ServerBuilder builder;
builder.AddListeningPort(addr, grpc::InsecureServerCredentials());
builder.RegisterService(&service);
builder.SetMaxSendMessageSize(1L << 31);
builder.SetMaxReceiveMessageSize(1L << 31);
std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
std::cout << "service started, listening to: " << addr << std::endl;
server->Wait();
}
It is a standard gRPC server. My problem is that I need the server to execute other work when no client request is being handled. When a client request comes in, the server should be 'interrupted' to deal with the request, and after dealing with it, continue its 'leisure time' work. The problem is that the program blocks at server->Wait(), so when no client request comes, the server can do nothing but wait, which is not what I need.
I hope I have expressed myself clearly. How could I do this with gRPC?
Does running a thread before server->Wait() not solve this problem?
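For example, a rough C++ sketch (background_work and the stop flag are placeholder names of mine; the builder calls are as in the question): the synchronous server dispatches incoming RPCs on gRPC's own threads, so Wait() blocking the main thread does not stop request handling.

#include <atomic>
#include <memory>
#include <thread>
#include <grpcpp/grpcpp.h>

std::atomic<bool> stop{false};

// Placeholder for the 'leisure time' work done between requests.
void background_work() {
    while (!stop) {
        // ... idle-time work here ...
    }
}

void start_server() {
    grpc::ServerBuilder builder;
    // ... AddListeningPort / RegisterService exactly as in the question ...
    std::unique_ptr<grpc::Server> server(builder.BuildAndStart());

    std::thread worker(background_work); // start the idle-time work first
    server->Wait(); // blocks, but gRPC serves requests on its own threads
    stop = true;    // reached once Shutdown() is called elsewhere
    worker.join();
}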
On .NET (C#), I solved it like this:
...
// Start other work
MainTask mainTask = new MainTask(configurator, _gRPCManager);
Thread mainThread = new Thread(mainTask.Run);
mainThread.IsBackground = true;
mainThread.Start();
// Start gRPC
CreateHostBuilder(args, configurator).Build().Run();
Here is a small block of code above a read() statement in a synchronous TCP client that I've written.
std::cout << "available? " << socket->bytesAvailable() << std::endl;
socket->waitForReadyRead();
std::cout << "reading..." << std::endl;
bytesRead = socket->read(message + totalBytesRead, messageSize - totalBytesRead);
The following line:
socket->bytesAvailable()
returns 4, so there is obviously data available to be read. The problem that I'm having is that waitForReadyRead() blocks until the default timeout of 30 seconds. read() then proceeds to read 0 bytes on the following line.
So if there are bytes available to be read, why does waitForReadyRead() block?
From the QIODevice::waitForReadyRead documentation:
Blocks until new data is available for reading and the readyRead() signal has been emitted, or until msecs milliseconds have passed. If msecs is -1, this function will not time out.
Returns true if new data is available for reading; otherwise returns false (if the operation timed out or if an error occurred).
Note the emphasis on new data: the bytes already sitting in the buffer do not count, so the call blocks until more data actually arrives or the timeout expires. The best way to handle a network connection is to use the signal/slot mechanism (the asynchronous way).
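For example, a rough sketch (Client, buffer, messageSize and processMessage are placeholder names): instead of blocking in waitForReadyRead(), let the event loop call a slot whenever data arrives.

// In the constructor: let the event loop notify us instead of blocking.
connect(socket, SIGNAL(readyRead()), this, SLOT(onReadyRead()));

void Client::onReadyRead()
{
    // Append whatever has arrived; readyRead() may fire with partial data.
    buffer.append(socket->readAll());
    if (buffer.size() >= messageSize) {
        // A complete message is buffered; handle it here.
        processMessage(buffer);
        buffer.clear();
    }
}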
I've got some class to interact with an HTTP server.
Here are the meaningful code parts:
const QString someClass::BASEURL = QString("http://127.0.0.1:8000/?");
someClass::someClass():
manager(new QNetworkAccessManager(this))
{
}
QNetworkReply *someClass::run(QString request)
{
qDebug() << request;
QEventLoop loop;
QObject::connect(manager, SIGNAL(finished(QNetworkReply*)), &loop, SLOT(quit()));
QNetworkReply *res = manager->get(QNetworkRequest(QUrl(BASEURL + request)));
loop.exec();
return res;
}
When I call the run() method, sometimes (not every time) there are two identical GET requests (I looked with tcpdump), while qDebug() executes only once.
Is there some error in my code? I can't see any possible explanation.
UPDATE:
After some research into the tcpdump output: after the second request, it sends a packet with the RST flag as an answer to FIN. But I still can see no difference between the TCP streams that trigger the problem and those that don't.
For example, here is Wireshark's output: stream 8 went well, while stream 11 was duplicated by stream 12.
I'm stuck with this. Maybe it's some protocol error from the server side, I'm not sure. Or maybe it's a bug in QNetworkAccessManager.
Have you tried rewriting your code to be more asynchronous, without using QEventLoop in a local scope? Your code looks good to me, but there might be some weird Qt bug that you're running into in the way it queues up requests for processing when you use QEventLoop in a local scope. I usually use QNetworkAccessManager in the following manner to send GET and POST requests:
void someClass::run(QString request)
{
qDebug() << request;
QObject::connect(manager, SIGNAL(finished(QNetworkReply*)), this, SLOT(on_request_complete(QNetworkReply*)));
manager->get(QNetworkRequest(QUrl(BASEURL + request))); // the reply arrives via the finished() signal
}
void someClass::on_request_complete(QNetworkReply* response)
{
// Do stuff with your response here
}
I am trying to connect my application to a web service, and here a user suggested sending custom headers back to my application.
I am using this code:
void Coonnec::serviceRequestFinished(QNetworkReply *reply)
{
QByteArray bytes = reply->readAll();
if (reply->error() != QNetworkReply::NoError) {
qDebug() << "Reply error: " + reply->errorString();
}
else
{
qDebug() << "Uploaded: " + QDateTime::currentDateTime().toString();
qDebug() << reply->rawHeaderList();
}
reply->close();
bytes.clear();
reply->deleteLater();
}
From PHP I send this header:
header('XAppRequest-Status: complete');
When running the application I can see that I get this header, but I can't retrieve its value, because
reply->rawHeader(bytes);
returns nothing.
How can I get the value 'complete'?
I suggest connecting a slot to the void QNetworkReply::metaDataChanged() signal of your reply.
The Qt docs say:
This signal is emitted whenever the metadata in this reply changes. Metadata is any information that is not the content (data) itself, including the network headers. In the majority of cases, the metadata will be known fully by the time the first byte of data is received. However, it is possible to receive updates of headers or other metadata during the processing of the data.
I write web-service clients with Qt, and I noticed that some header information is not available when I expected it to be! I had to 'wait' for this signal before checking the header content.
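For example, a rough sketch (onMetaDataChanged is a placeholder slot name of mine):

connect(reply, SIGNAL(metaDataChanged()), this, SLOT(onMetaDataChanged()));

void someClass::onMetaDataChanged()
{
    QNetworkReply *reply = qobject_cast<QNetworkReply *>(sender());
    // The header name must match exactly what the PHP side sends.
    if (reply && reply->hasRawHeader("XAppRequest-Status"))
        qDebug() << reply->rawHeader("XAppRequest-Status"); // "complete"
}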
I'm developing an application that uses IPC between a local server and a client application. There is nothing particular to it, as it's structured like the Qt documentation and examples.
The problem is that the client sends packets frequently and connecting/disconnecting from the server local socket (named pipe on NT) is very slow. So what I'm trying to achieve is a "persistent" connection between the two applications.
The client application connects to the local server (QLocalServer) without any problem:
void IRtsClientImpl::ConnectToServer(const QString& name)
{
connect(_socket, SIGNAL(connected()), this, SIGNAL(connected()));
_blockSize = 0;
_socket->abort();
_socket->connectToServer(name, QIODevice::ReadWrite);
}
And sends requests also in the traditional Qt manner:
void IRtsClientImpl::SendRequest( quint8 cmd, const QVariant* const param_array,
unsigned int cParams )
{
// Send data through socket
QByteArray hdr(PROTO_BLK_HEADER_PROJ);
QByteArray dataBlock;
QDataStream out(&dataBlock, QIODevice::WriteOnly);
out.setVersion(QDataStream::Qt_4_5);
quint8 command = cmd;
out << blocksize_t(0) // block size
<< hdr // header
<< quint32(PROTO_VERSION_PROJ) // protocol version
<< command // command
<< cParams; // number of valid parameters
for (unsigned int i = 0; i < cParams; ++i)
out << param_array[i];
// Write the current block size into the placeholder at the front;
// cast so exactly sizeof(blocksize_t) bytes overwrite the placeholder
out.device()->seek(0);
out << blocksize_t(dataBlock.size() - sizeof(blocksize_t));
_socket->write(dataBlock);
}
No problem. But the trick resides in the readyRead() signal on the server side. Here's the current implementation of the readyRead() handling slot:
void IRtsServerImpl::onReadyRead()
{
QDataStream in(_lsock);
in.setVersion(QDataStream::Qt_4_5);
if (_blocksize == 0)
{
qDebug("Bytes Available on socket: %d", _lsock->bytesAvailable());
if (_lsock->bytesAvailable() < sizeof(blocksize_t))
return;
in >> _blocksize;
}
// We need more data?
if (_lsock->bytesAvailable() < _blocksize)
return;
ReadRequest(in);
// Reset
_blocksize = 0;
}
Without setting _blocksize back to zero I could not receive more data, only the first group of blocks (I would expect an entire block to arrive without segmentation, since this goes through a pipe, but it does not; go figure). I expect that behavior, sure, since _blocksize no longer represents the current position in the stream. All right, resetting _blocksize does the trick, but I can't send another packet from the client without getting an ever-growing array of bytes on the socket. What I want is to process the request in ReadRequest and then receive the next data blocks, without resorting to disconnecting/reconnecting the applications involved.
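For reference, here is the kind of loop I am considering (a sketch reusing the names above, untested): drain every complete block that is already buffered before leaving the slot, so nothing accumulates between readyRead() signals.

void IRtsServerImpl::onReadyRead()
{
    QDataStream in(_lsock);
    in.setVersion(QDataStream::Qt_4_5);
    for (;;)
    {
        if (_blocksize == 0)
        {
            if (_lsock->bytesAvailable() < qint64(sizeof(blocksize_t)))
                return; // wait for the next readyRead()
            in >> _blocksize;
        }
        if (_lsock->bytesAvailable() < _blocksize)
            return; // the current block is still incomplete
        ReadRequest(in);
        _blocksize = 0; // ready for the next block in the same buffer
    }
}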
Maybe I should 'regulate' the rate of the incoming data?
Thank you very much.