Sending TCP data without receiving (boost asio) - tcp

I'm working my way through Boost's Asio tutorial. I'm looking into their chat example. More specifically, I'm trying to split their chat client from a sender+receiver into just a sender and just a receiver, but I'm seeing some behaviour that I can't explain.
The setup consists of:
boost::asio::io_service io_service;
tcp::resolver::iterator endpoint = resolver.resolve(...);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
boost::asio::async_connect(socket, endpoint, bind(handle_connect, ... ));
The sending portion effectively consists of:
while (std::getline(std::cin, str))
    io_service.post(boost::bind(do_write, str));
and
void do_write(string str)
{
    boost::asio::async_write(socket, str, bind(handle_write, ...));
}
The receive section consists of:
void handle_connect(...)
{
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
void handle_read(...)
{
    std::cout << read_msg_;
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
If I comment out the content of handle_connect to isolate the send portion, my other client (compiled using the original code) does not receive anything. If I revert, then comment out the content of handle_read, my other client only receives the first message.
Why is it necessary to call async_read() in order to be able to post() an async_write()?
The full unmodified code is linked above.

The problem here is that your io_service runs out of work and stops processing requests even before you start sending your chat messages.
If you comment out the body of handle_connect, then the only work it had to do was to dispatch the handle_connect handler and then execute it once the connection was done.
std::size_t scheduler::run(asio::error_code& ec)
{
  .....
  mutex::scoped_lock lock(mutex_);

  std::size_t n = 0;
  for (; do_run_one(lock, this_thread, ec); lock.lock())
    if (n != (std::numeric_limits<std::size_t>::max)())
      ++n;
  return n;
}
So, you have to keep something in its operation queue. In the original code this was done with the handle_read_header handler, which is always pending until the client receives something from the server.
You can get the behaviour you want by explicitly providing work to the io_service:
asio::io_context io_context;
asio::io_context::work wrk(io_context); // make `run` run forever
tcp::resolver resolver(io_context);
tcp::resolver::results_type endpoints = resolver.resolve(argv[1], argv[2]);
chat_client c(io_context, endpoints);
asio::thread t(boost::bind(&asio::io_context::run, &io_context));
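As a side note (an assumption about your Boost version, not something the answer above relies on): on Boost 1.66 and later, io_context::work is deprecated in favour of an executor work guard, so the same idea can be written as:
#include <boost/asio.hpp>
#include <thread>

int main()
{
    boost::asio::io_context io_context;

    // Keep run() from returning while the operation queue is empty.
    auto work_guard = boost::asio::make_work_guard(io_context);

    std::thread t([&io_context] { io_context.run(); });

    // ... set up the socket, post writes from std::cin, etc. ...

    // On shutdown, release the guard so run() can return once the remaining
    // handlers have finished, then join the thread.
    work_guard.reset();
    t.join();
}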

How to Perform Concurrent Request-Reply for Asynchronous Tasks with ZeroMQ?

Intention
I want to allow a client to send a task to some server at a fixed address.
The server may take that task and perform it at some arbitrary point in the future, but may still take requests from other clients before then.
After performing the task, the server will reply to the client, which may have been running a blocking wait on the reply.
The work and clients come dynamically, so there can't be a fixed initial number.
The work is done in a non-thread-safe context, so workers can't live on different threads; all work must take place in a single thread.
Implementation
The following example1 is not a complete implementation of the server, only a compilable section of the sequence that should be able to take place (but is in reality hanging).
Two clients send an integer each, and the server takes one request, then the next request, echo replies to the first request, then echo replies to the second request.
The intention isn't to get the responses ordered, only to allow for the holding of multiple requests simultaneously by the server.
What actually happens here is that the second worker hangs waiting on the request - this is what confuses me, as DEALER sockets should route outgoing messages in a round-robin strategy.
#include <unistd.h>
#include <stdio.h>
#include <zmq.h>
#include <sys/wait.h>
int client(int num)
{
    void *context, *client;
    int buf[1];

    context = zmq_ctx_new();
    client = zmq_socket(context, ZMQ_REQ);
    zmq_connect(client, "tcp://localhost:5559");

    *buf = num;
    zmq_send(client, buf, 1, 0);
    *buf = 0;
    zmq_recv(client, buf, 1, 0);
    printf("client %d receiving: %d\n", num, *buf);

    zmq_close(client);
    zmq_ctx_destroy(context);
    return 0;
}

void multipart_proxy(void *from, void *to)
{
    zmq_msg_t message;
    while (1) {
        zmq_msg_init(&message);
        zmq_msg_recv(&message, from, 0);
        int more = zmq_msg_more(&message);
        zmq_msg_send(&message, to, more ? ZMQ_SNDMORE : 0);
        zmq_msg_close(&message);
        if (!more) break;
    }
}

int main(void)
{
    int status;
    if (fork() == 0) {
        client(1);
        return 0;
    }
    if (fork() == 0) {
        client(2);
        return 0;
    }

    /* SERVER */
    void *context, *frontend, *backend, *worker1, *worker2;
    int wbuf1[1], wbuf2[1];

    context = zmq_ctx_new();
    frontend = zmq_socket(context, ZMQ_ROUTER);
    backend = zmq_socket(context, ZMQ_DEALER);
    zmq_bind(frontend, "tcp://*:5559");
    zmq_bind(backend, "inproc://workers");

    worker1 = zmq_socket(context, ZMQ_REP);
    zmq_connect(worker1, "inproc://workers");
    multipart_proxy(frontend, backend);
    *wbuf1 = 0;
    zmq_recv(worker1, wbuf1, 1, 0);
    printf("worker1 receiving: %d\n", *wbuf1);

    worker2 = zmq_socket(context, ZMQ_REP);
    zmq_connect(worker2, "inproc://workers");
    multipart_proxy(frontend, backend);
    *wbuf2 = 0;
    zmq_recv(worker2, wbuf2, 1, 0);
    printf("worker2 receiving: %d\n", *wbuf2);

    zmq_send(worker1, wbuf1, 1, 0);
    multipart_proxy(backend, frontend);
    zmq_send(worker2, wbuf2, 1, 0);
    multipart_proxy(backend, frontend);

    wait(&status);
    zmq_close(frontend);
    zmq_close(backend);
    zmq_close(worker1);
    zmq_close(worker2);
    zmq_ctx_destroy(context);
    return 0;
}
Other Options
I have looked at CLIENT and SERVER sockets, and on paper they appear capable of this, but in practice they're new enough that the system version of ZeroMQ I have doesn't support them yet.
If it is not possible to perform this in ZeroMQ, any alternative suggestions are very welcome.
1 Based on the Shared Queue section of the ZeroMQ guide.
Let me share a view on how ZeroMQ could meet the Intention defined above.
Let's use the ZeroMQ Scalable Formal Communication Pattern archetypes as they exist and work today, rather than newer socket types that may or may not mature at some uncertain point in a future evolution of the library.
There is no reason to hesitate to use several ZeroMQ connections between the server and the herd of coming and leaving client instances.
For example (a minimal sketch of the server side follows this list):
The Client .connect()-s a REQ socket to Server-address:port-A and asks over this connection for a "job"-ticket to be processed.
The Client .connect()-s a SUB socket to Server-address:port-B and listens (when present) for published announcements of already completed "job"-tickets whose results the Server is ready to deliver.
The Client uses another REQ socket to follow up on a completion announcement it has just heard on the SUB socket: it requests final delivery of the "job"-ticket results, proving its right to them by presenting the matching job-ticket-AUTH-key, and it uses this same socket to deliver a POSACK message to the Server once the results have been correctly received "in hand".
The Server exposes a REP socket to respond to each client's ad-hoc "job"-ticket request, notifying the client that the job-ticket was "accepted" and also delivering the job-ticket-AUTH-key for the later pickup of results.
The Server exposes a PUB socket to announce any and all not-yet-picked-up "finished" job-tickets.
The Server exposes another REP socket to receive any attempt to request delivery of "job"-ticket results. Upon verifying the delivered job-ticket-AUTH-key, the Server decides whether the request matched and a proper message with the results should be delivered, or whether the match failed, in which case the reply carries some other payload (the exact logic is left for further thought, so as to prevent brute-forcing, eavesdropping and similar, less primitive attacks on stealing the results).
Clients need not stay waiting for results live/online and can survive a certain amount of LoS, L2/L3 errors or network-storm stress.
Clients only need to keep some kind of job-ticket-ID plus the job-ticket-AUTH-key for later retrieval of the Server-processed, Server-maintained, auth-protected results.
The Server keeps listening for new jobs.
The Server accepts new job-tickets, attaching a privately generated job-ticket-AUTH-key to each.
The Server processes job-tickets whenever it decides to do so.
The Server maintains a circular buffer of completed job-tickets to be announced.
The Server announces, in due time and repeatedly as it sees fit, the job-tickets that are ready for client-initiated retrieval.
The Server accepts new retrieval requests.
The Server verifies whether a client request matches an announced job-ticket-ID and whether the presented job-ticket-AUTH-key matches as well.
The Server responds to both matching and non-matching job-ticket-ID retrieval requests.
The Server removes a job-ticket-ID from the circular buffer only after both a POSACK-ed AUTH match before retrieval and a POSACK message re-confirming delivery to the client.
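A minimal sketch of the server side of this layout, using the same plain ZeroMQ C API as the question (the port numbers, buffer size and the ticket/key payloads are illustrative assumptions; the actual job processing and book-keeping are elided):
#include <stdio.h>
#include <zmq.h>

int main(void)
{
    void *ctx      = zmq_ctx_new();
    void *job_rep  = zmq_socket(ctx, ZMQ_REP);   /* port-A: accept "job"-ticket requests  */
    void *announce = zmq_socket(ctx, ZMQ_PUB);   /* port-B: announce finished job-tickets */
    void *pick_rep = zmq_socket(ctx, ZMQ_REP);   /* port-C: deliver results on AUTH match */
    zmq_bind(job_rep,  "tcp://*:5560");
    zmq_bind(announce, "tcp://*:5561");
    zmq_bind(pick_rep, "tcp://*:5562");

    zmq_pollitem_t items[] = {
        { job_rep,  0, ZMQ_POLLIN, 0 },
        { pick_rep, 0, ZMQ_POLLIN, 0 },
    };

    char buf[256];
    while (1) {
        zmq_poll(items, 2, 1000);                /* single thread, 1 s tick */

        if (items[0].revents & ZMQ_POLLIN) {     /* a new job arrives */
            zmq_recv(job_rep, buf, sizeof buf, 0);
            /* ... enqueue the job, generate a ticket-ID and an AUTH-key ... */
            zmq_send(job_rep, "ticket-1:key-xyz", 16, 0);  /* hand the ticket back */
            /* in this sketch the job is "done" immediately; a real server
               would publish the announcement only once it has finished */
            zmq_send(announce, "done:ticket-1", 13, 0);
        }

        if (items[1].revents & ZMQ_POLLIN) {     /* a client asks for results */
            zmq_recv(pick_rep, buf, sizeof buf, 0);
            /* ... verify ticket-ID + AUTH-key, reply with results or a refusal ... */
            zmq_send(pick_rep, "results...", 10, 0);
        }
    }

    zmq_ctx_destroy(ctx);
    return 0;
}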

gRPC server request_iterator does not finish in loop (C# client, Python server)

We are trying to apply gRPC bidirectional streaming. Proto:
message Request {
  oneof requestTypes {
    ConfigRequest configRequest = 1;
    DataRequest dataRequest = 2;
  }
}
message ConfigRequest {
  request_quantity = 1;
  ...
}
message DataRequest {
  string id = 1;
  bytes data = 2;
  ...
}
service Service {
  rpc FuncService(stream Request) returns (stream Response);
}
The client side is written in C# and uses asynchronous calls.
If necessary, I can clarify the client code.
The server side is in Python. Python code:
def FuncServe(self, request_iterator, context):
    i = 0
    for request in request_iterator:
        ...
        if (i == request_quantity):
            break
        i += 1
The problem is that the server hangs in the loop over requests, which is why I introduced the if/break. So far everything works, but we would also like to handle the case where not all of the requests stated in the config arrive; in that case we cannot avoid the loop blocking, and we cannot catch it. I was also unable to move the loop into a separate thread to control the execution time, because it ends immediately due to an empty iterator.
I would be very grateful for help with this loop problem.

Async server does not process requests while a request is stuck

I am new to GRPC so please let me know if I am doing something wrong here. I am looking at the greeter_async_server.cc example code. This seems to work fine for normal requests but I wanted to simulate a request getting stuck on the server so I added a sleep in the processing loop. I added this right before Finish is called on the responder so that it was in the actual processing logic of the request. While the server thread is sleeping it will not accept any new requests until the thread is free. I attempted to create another client request while the original request on the server is sleeping but the grpc server would not process the request. The client seemed to be stuck until the server came out of the sleep.
I also broke into this process with the debugger, but the only request I saw was the one that was sleeping. The other threads were waiting on the completion queue.
I am new to gRPC, so if I am doing this wrong please let me know what I need to do to handle requests while another request is stuck.
void Proceed() {
  if (status_ == CREATE) {
    // Make this instance progress to the PROCESS state.
    status_ = PROCESS;

    // As part of the initial CREATE state, we *request* that the system
    // start processing SayHello requests. In this request, "this" acts as
    // the tag uniquely identifying the request (so that different CallData
    // instances can serve different requests concurrently), in this case
    // the memory address of this CallData instance.
    service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_,
                              this);
  } else if (status_ == PROCESS) {
    // Spawn a new CallData instance to serve new clients while we process
    // the one for this CallData. The instance will deallocate itself as
    // part of its FINISH state.
    new CallData(service_, cq_);

    // The actual processing.
    std::string prefix("Hello ");
    reply_.set_message(prefix + request_.name());

    Sleep((DWORD)-1);  // simulate the request getting stuck

    // And we are done! Let the gRPC runtime know we've finished, using the
    // memory address of this instance as the uniquely identifying tag for
    // the event.
    status_ = FINISH;
    responder_.Finish(reply_, Status::OK, this);
  } else {
    GPR_ASSERT(status_ == FINISH);
    // Once in the FINISH state, deallocate ourselves (CallData).
    delete this;
  }
}
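For reference, in the greeter_async_server example a single thread drains the completion queue, so a handler that sleeps stalls every other call tagged on that queue. One common way around this, shown only as a sketch (the thread count is arbitrary; cq_ and CallData are the names from the example, and the fragment belongs inside the example's server class), is to drain the queue from several worker threads:
// Sketch: several workers draining the same completion queue, so one blocked
// handler does not stall the remaining calls. Requires <thread> and <vector>.
std::vector<std::thread> workers;
for (int i = 0; i < 4; ++i) {
  workers.emplace_back([this] {
    void* tag;
    bool ok;
    // Next() blocks until *some* call has an event ready; a completion queue
    // is safe to drain from multiple threads.
    while (cq_->Next(&tag, &ok)) {
      GPR_ASSERT(ok);
      static_cast<CallData*>(tag)->Proceed();
    }
  });
}
for (auto& t : workers) t.join();
Another option is to give each worker thread its own completion queue by calling ServerBuilder::AddCompletionQueue() once per thread.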

Qt socket does not read all data

I want to read data through a socket in Qt. I am using a QByteArray to store the data. The server actually sends 4095 bytes in a single stretch, but on the Qt client side I receive it in different chunks because of my application design.
void Dialog::on_pushButton_clicked()
{
    socket = new QTcpSocket(this);
    socket->connectToHost("172.17.0.1", 5000);
    if (socket->waitForConnected(-1))
        qDebug() << "Connected";
    Read_data();
}

void Dialog::Read_data()
{
    QString filename(QString("%1/%2.bin").arg(path, device));
    qDebug() << "filename" << filename;
    QFile fileobj(filename);
    int cmd, file_size, percentage_completed;
    if (!fileobj.open(QFile::WriteOnly | QFile::Text))
    {
        qDebug() << "Cannot open file for writting";
        return;
    }
    QTextStream out(&fileobj);
    while (1)
    {
        socket->waitForReadyRead(-1);
        byteArray = socket->read(4);
        qDebug() << "size of bytearray" << byteArray.size();
        length = 0xffff & ((byteArray[3] << 8) | (0x00ff & byteArray[2]));
        int rem;
        byteArray = socket->read(length);
        while (byteArray.size() != length)
        {
            rem = length - byteArray.size();
            byteArray.append(socket->read(rem));
        }
        fileobj.write(byteArray);
        fileobj.flush();
        byteArray.clear();
    }
}
server code:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <arpa/inet.h>
#include <math.h>
#include <netinet/tcp.h>

static int msb, lsb, size, listenfd = 0, connfd = 0, len;

int main(void)
{
    struct sockaddr_in serv_addr;
    socklen_t serverlen = sizeof(serv_addr);
    char buff[8192];                 /* buffer size assumed; declaration not shown in the original post */
    FILE *fd;
    int n, sent_bytes;

    listenfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&serv_addr, '0', sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    serv_addr.sin_port = htons(5000);
    if (bind(listenfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0)
    {
        perror("\n Error in binding");
        exit(1);
    }
    size = 100000;
    listen(listenfd, 1);
    connfd = accept(listenfd, (struct sockaddr *)&serv_addr, &serverlen);  /* accept the client connection */
    fd = fopen("new.bin", "r");
    len = 4089;
    while (1)
    {
        buff[0] = 25;
        buff[1] = 2;
        buff[2] = 60;
        buff[3] = 47;
        n = fread(buff + 4, 1, len, fd);
        buff[len + 4] = 5;
        buff[len + 5] = '\n';
        if (n > 0)
            sent_bytes = send(connfd, buff, n + 6, 0);
        size = size - len;
        if (size == 0)
            break;
    }
    return 0;
}
If I execute the code on localhost (127.0.0.1) I can receive the data fully. The problem arises only when I connect to a different host IP. Kindly help me in this regard.
EDIT 1:
The problem is that when bytesAvailable() returns exactly the number of bytes I am waiting for, waitForReadyRead() times out. It works fine when bytesAvailable() is less than expected. Does bytesAvailable() allocate any buffer? I am annoyed by this behaviour.
while (1)
{
    while (socket->bytesAvailable() < 4)
    {
        if (!socket->waitForReadyRead())
        {
            qDebug() << "waitForReadyRead() timed out";
            return;
        }
    }
    byteArray = socket->read(4);
    length = 0xffff & ((byteArray[3] << 8) | (0x00ff & byteArray[2]));
    int rem_bytes = length + 2;
    qDebug() << "bytes available" << socket->bytesAvailable();
    while (socket->bytesAvailable() <= rem_bytes)
    {
        qDebug() << "reading";
        if (!socket->waitForReadyRead(10000)) // times out here if bytesAvailable() == rem_bytes but executes well in other cases
        {
            qDebug() << "waitForReadyRead() timed out";
            return;
        }
        qDebug() << "ready";
        byteArray.append(socket->read(rem_bytes));
        qDebug() << "size of bytearray" << byteArray.size();
        if (byteArray.size() == length + 2)
        {
            for (int j = 0; j < length; j++)
                newarray.append(byteArray[j]);
            fileobj.write(newarray);
            fileobj.flush();
            newarray.clear();
            byteArray.clear();
            break;
        }
        else
        {
            rem_bytes -= byteArray.size();
        }
    }
    Send();
}
I have tried sending different data sizes but cannot figure out why this happens. Please point me to where I have gone wrong.
Your problem stems from your misunderstanding of how TCP works.
When data is transmitted from a sender, it is broken into packets and then each packet is transmitted one by one until all the data has finished sending. If packets go missing, they are re-transmitted until either they reach their destination, or a timeout is reached.
As an added complication, each packet might follow various routes before arriving at the destination. The receiver has the task of acknowledging to the sender that packets have been received and then making sure that the packets are joined back together in the correct order.
For this reason, the longer the network route, the greater the chance of getting a delay in getting the data re-assembled. This is what you've been experiencing with your localhost versus networked-computer tests.
The IP stack on your computer does not wait for the complete data to arrive before passing it to your application but it will pause if it's missing a packet in sequence.
e.g. If you have 10 packets and packet 4 arrives last, the IP stack will pass the data to your application in two sets: 1-2-3, [[wait for 4 to arrive]], 4-5-6-7-8-9-10.
For this reason, when waitForReadyRead() returns true, you cannot expect that all your data has arrived, you must always check how many bytes have been actually received.
There are two places in your code where you wait for data. The first thing you wait for is a four-byte number to tell you how much data has been sent. Even though it's highly likely that you will have received all four bytes, it's good practice to check.
while (socket->bytesAvailable() < 4) {
    if (!socket->waitForReadyRead()) { // times out after 30 seconds, by default
        qDebug() << "waitForReadyRead() timed out";
        return;
    }
}
byteArray = socket->read(4);
qDebug() << "size of bytearray" << byteArray.size();
length = 0xffff & ((byteArray[3] << 8) | (0x00ff & byteArray[2]));
The next thing you need to do is keep cycling through a wait-read-wait-read loop until all your data has arrived, each time keeping track of how many bytes you still expect to receive.
int bytesRemaining = length;
while (socket->bytesAvailable() < bytesRemaining) {
    if (!socket->waitForReadyRead()) {
        qDebug() << "waitForReadyRead() timed out";
        return;
    }

    // calling read() with the bytesRemaining argument will not guarantee
    // that you will receive all the data. It only means that you will
    // receive AT MOST bytesRemaining bytes.
    byteArray = socket->read(bytesRemaining);
    bytesRemaining -= byteArray.size();

    fileobj.write(byteArray);
    fileobj.flush();
}
All this said, you should not use the blocking API in your main thread or your GUI could freeze up. I suggest either using the asynchronous API, or create a worker thread to handle the downloading (and use the blocking API in the worker thread).
To see examples of how to use the two different APIs, look in the documentation at the Fortune Client Example and the Blocking Fortune Client Example.
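For instance, the asynchronous approach might look roughly like this sketch (Qt 5 signal/slot syntax; the class name, member names and file handling are assumptions, and only the 4-byte length header follows the question's framing):
#include <QObject>
#include <QTcpSocket>
#include <QByteArray>

// Sketch of the asynchronous approach: no waitFor...() calls, so the GUI
// thread never blocks while data trickles in.
class Downloader : public QObject
{
    Q_OBJECT
public:
    explicit Downloader(QObject *parent = nullptr) : QObject(parent)
    {
        connect(&socket_, &QTcpSocket::readyRead, this, &Downloader::onReadyRead);
        socket_.connectToHost("172.17.0.1", 5000);
    }

private slots:
    void onReadyRead()
    {
        buffer_.append(socket_.readAll());                 // take whatever has arrived so far
        while (true) {
            if (expected_ < 0) {                           // still waiting for the 4-byte header
                if (buffer_.size() < 4)
                    return;
                expected_ = 0xffff & ((buffer_.at(3) << 8) | (0x00ff & buffer_.at(2)));
                buffer_.remove(0, 4);
            }
            if (buffer_.size() < expected_)                // body not complete yet
                return;
            QByteArray frame = buffer_.left(expected_);
            buffer_.remove(0, expected_);
            expected_ = -1;                                // ready for the next header
            // ... write `frame` to the file here ...
        }
    }

private:
    QTcpSocket socket_;
    QByteArray buffer_;
    int expected_ = -1;
};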
EDIT:
My apologies, there's a bug in the blocking wait-read loop earlier in this answer: it doesn't take a number of possibilities into account, most importantly the case where all the data has already been received, and the end game once all the data has finally arrived.
The following one-line change should clear that up:
Change
while(socket->bytesAvailable() < bytesRemaining){
To
while (bytesRemaining > 0) {
So you are saying that waitForReadyRead() returns false regardless of the time given, once your buffer has all 3000 expected bytes. What other behaviour would you want? Perhaps you need to rethink the trigger logic here. Many TCP/IP application protocols have some sort of frame-start detection logic that they combine with the required message size to trigger processing. This lets them cope with the widely different packet sizes that the intermediate networks impose, as well as with truncated or partial messages. Once you have it working, connect to it from your cell phone and you will get a different set of packet-fragmentation examples to test with.

How can I send a simple HTTP request with a lwIP stack?

Please move/close this if the question isn't relevant.
Core: Cortex-M4
Microprocessor: TI TM4C1294NCPDT.
IP Stack: lwIP 1.4.1
I am using this microprocessor to do some data logging, and I want to send some information to a separate web server via a HTTP request in the form of:
http://123.456.789.012:8800/process.php?data1=foo&data2=bar&time=1234568789
and I want the processor to be able to see the response header (i.e. whether it was 200 OK or something went wrong) - it does not have to display/receive the actual content.
lwIP has a http server for the microprocessor, but I'm after the opposite (microprocessor is the client).
I am not sure how packets correlate to request/response headers, so I'm not sure how I'm meant to actually send/receive information.
This ended up being pretty simple to implement, forgot to update this question.
I pretty much followed the instructions given on this site, which is the Raw/TCP 'documentation'.
Basically, the HTTP request is carried inside TCP packets, so to send data to my PHP server I sent an HTTP request using TCP packets (lwIP does all the work).
The HTTP packet I want to send looks like this:
HEAD /process.php?data1=12&data2=5 HTTP/1.0
Host: mywebsite.com
To "translate" this to text which is understood by an HTTP server, you have to add "\r\n" carriage return/newline in your code. So it looks like this:
char *string = "HEAD /process.php?data1=12&data2=5 HTTP/1.0\r\nHost: mywebsite.com\r\n\r\n ";
Note that the end has two lots of "\r\n"
You can use GET or HEAD, but because I didn't care about HTML site my PHP server returned, I used HEAD (it returns a 200 OK on success, or a different code on failure).
The lwIP raw/tcp works on callbacks. You basically set up all the callback functions, then push the data you want to a TCP buffer (in this case, the TCP string specified above), and then you tell lwIP to send the packet.
Function to set up a TCP connection (this function is directly called by my application every time I want to send a TCP packet):
void tcp_setup(void)
{
    uint32_t data = 0xdeadbeef;

    /* create an ip */
    struct ip_addr ip;
    IP4_ADDR(&ip, 110,777,888,999);    //IP of my PHP server

    /* create the control block */
    testpcb = tcp_new();               //testpcb is a global struct tcp_pcb
                                       //as defined by lwIP

    /* dummy data to pass to callbacks */
    tcp_arg(testpcb, &data);

    /* register callbacks with the pcb */
    tcp_err(testpcb, tcpErrorHandler);
    tcp_recv(testpcb, tcpRecvCallback);
    tcp_sent(testpcb, tcpSendCallback);

    /* now connect */
    tcp_connect(testpcb, &ip, 80, connectCallback);
}
Once a connection to my PHP server is established, the 'connectCallback' function is called by lwIP:
/* connection established callback, err is unused and only return 0 */
err_t connectCallback(void *arg, struct tcp_pcb *tpcb, err_t err)
{
    UARTprintf("Connection Established.\n");
    UARTprintf("Now sending a packet\n");
    tcp_send_packet();
    return 0;
}
This function calls the actual function tcp_send_packet() which sends the HTTP request, as follows:
uint32_t tcp_send_packet(void)
{
    char *string = "HEAD /process.php?data1=12&data2=5 HTTP/1.0\r\nHost: mywebsite.com\r\n\r\n ";
    uint32_t len = strlen(string);
    err_t error;

    /* push to buffer */
    error = tcp_write(testpcb, string, strlen(string), TCP_WRITE_FLAG_COPY);
    if (error) {
        UARTprintf("ERROR: Code: %d (tcp_send_packet :: tcp_write)\n", error);
        return 1;
    }

    /* now send */
    error = tcp_output(testpcb);
    if (error) {
        UARTprintf("ERROR: Code: %d (tcp_send_packet :: tcp_output)\n", error);
        return 1;
    }
    return 0;
}
Once the TCP packet has been sent (this is all you need if you want to "hope for the best" and don't care whether the data actually arrived), the PHP server returns a TCP packet (with a 200 OK, etc., and the HTML code if you used GET instead of HEAD). This can be read and verified in the following code:
err_t tcpRecvCallback(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
{
    UARTprintf("Data recieved.\n");
    if (p == NULL) {
        UARTprintf("The remote host closed the connection.\n");
        UARTprintf("Now I'm closing the connection.\n");
        tcp_close_con();
        return ERR_ABRT;
    } else {
        UARTprintf("Number of pbufs %d\n", pbuf_clen(p));
        UARTprintf("Contents of pbuf %s\n", (char *)p->payload);
    }
    return 0;
}
p->payload contains the actual "200 OK", etc. information. Hopefully this helps someone.
I have left out some error checking in my code above to simplify the answer.
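For completeness, the other callbacks registered in tcp_setup() (tcpErrorHandler, tcpSendCallback) and the tcp_close_con() helper are not shown above; minimal sketches might look like this (the names match the code above, the bodies are assumptions):
/* Minimal sketches of the callbacks registered above but not shown.
   The names follow the answer's code; the bodies are assumptions. */
void tcpErrorHandler(void *arg, err_t err)
{
    /* lwIP has already freed the pcb when this is called */
    UARTprintf("TCP error: %d\n", err);
}

err_t tcpSendCallback(void *arg, struct tcp_pcb *tpcb, u16_t len)
{
    UARTprintf("%d bytes ACKed by the remote host\n", len);
    return ERR_OK;
}

void tcp_close_con(void)
{
    /* unregister the callbacks, then close the connection */
    tcp_arg(testpcb, NULL);
    tcp_sent(testpcb, NULL);
    tcp_recv(testpcb, NULL);
    tcp_err(testpcb, NULL);
    tcp_close(testpcb);
}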
Take a look at the HTTP example in Wikipedia. The client will send the GET and HOST lines. The server will respond with many lines for a response. The first line will have the response code.
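Since the question only needs the response code, here is a small sketch of pulling it out of that first status line (e.g. "HTTP/1.0 200 OK"); the helper name is illustrative, and it assumes the whole status line sits at the start of the first pbuf:
#include <string.h>

/* Illustrative helper: return the HTTP status code from a response buffer,
   or -1 if it cannot be parsed. */
int http_status_from_payload(const char *payload, int len)
{
    /* skip "HTTP/x.y ", then read the three digits that follow the first space */
    const char *sp = (const char *)memchr(payload, ' ', (size_t)len);
    if (sp == NULL || (sp - payload) + 4 > len)
        return -1;
    return (sp[1] - '0') * 100 + (sp[2] - '0') * 10 + (sp[3] - '0');
}

/* e.g. inside tcpRecvCallback():
   int status = http_status_from_payload((char *)p->payload, p->len);  */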
I managed to create an HTTP client for the Raspberry Pi Pico W using the example here.
It uses the httpc_get_file or httpc_get_file_dns functions from the SDK.
However, that example is incomplete, since it has a memory leak.
You will need to free the memory taken by the struct pbuf *hdr in the headers callback and by the struct pbuf *p in the body callback, with pbuf_free(hdr); and pbuf_free(p); respectively.
Without those modifications it will stop working after about 20 calls (probably depending on the size of the responses).
