I am using this Adafruit_Protomatter library with an Adafruit MatrixPortal M4 and PlatformIO.
Since I do blocking HTTP requests with WiFiNINA (the Adafruit fork), I thought I could update the matrix in a timer interrupt to get scrollable text.
I am using the SAMD_TimerInterrupt library for the interrupts:
SAMDTimer ITimer0(TIMER_TC3);
Unfortunately, the program execution crashes if I call matrix.show() in the TimerHandler. The setCursor() and print() methods do not lead to a crash and I don't know why.
void TimerHandler0()
{
  static uint32_t curMillis = 0;
  curMillis = millis();

  if (curMillis > TIMER0_INTERVAL_MS)
  {
    matrix.fillScreen(0);
    matrix.setCursor(textX, textY);
    matrix.print(str);
    if ((--textX) < textMin)
      textX = matrix.width();
    matrix.show();
  }
  preMillisTimer0 = curMillis;
}
Maybe there is a better way of doing HTTP requests/blocking operations during a scrolling text? I was not able to find a way of doing e.g. async HTTP requests with the MatrixPortal...
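For reference, this is roughly the millis()-driven loop I would fall back to if the interrupt route turns out to be a dead end (only a sketch, reusing the variables from my handler above; the blocking WiFiNINA request would of course still freeze the scroll while it runs):

void loop()
{
  static uint32_t lastScroll = 0;
  if (millis() - lastScroll >= TIMER0_INTERVAL_MS)  // next scroll step is due
  {
    lastScroll = millis();
    matrix.fillScreen(0);
    matrix.setCursor(textX, textY);
    matrix.print(str);
    if ((--textX) < textMin)
      textX = matrix.width();
    matrix.show();
  }
  // ... the blocking HTTP request elsewhere in loop() still stalls the scroll ...
}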
Thanks in advance and best regards,
Daniel
Intention
I want to allow a client to send a task to some server at a fixed address.
The server may take that task and perform it at some arbitrary point in the future, but may still take requests from other clients before then.
After performing the task, the server will reply to the client, which may have been running a blocking wait on the reply.
The work and clients come dynamically, so there can't be a fixed initial number.
The work is done in a non-thread-safe context, so workers can't live on different threads; all work should take place in a single thread.
Implementation
The following example [1] is not a complete implementation of the server, only a compilable section of the sequence that should be able to take place (but which in reality hangs).
Two clients send an integer each, and the server takes one request, then the next request, echo replies to the first request, then echo replies to the second request.
The intention isn't to get the responses ordered, only to allow for the holding of multiple requests simultaneously by the server.
What actually happens here is that the second worker hangs waiting on the request - this is what confuses me, as DEALER sockets should route outgoing messages in a round-robin strategy.
#include <unistd.h>
#include <stdio.h>
#include <zmq.h>
#include <sys/wait.h>

int client(int num)
{
    void *context, *client;
    int buf[1];

    context = zmq_ctx_new();
    client = zmq_socket(context, ZMQ_REQ);
    zmq_connect(client, "tcp://localhost:5559");

    *buf = num;
    zmq_send(client, buf, 1, 0);
    *buf = 0;
    zmq_recv(client, buf, 1, 0);
    printf("client %d receiving: %d\n", num, *buf);

    zmq_close(client);
    zmq_ctx_destroy(context);
    return 0;
}

void multipart_proxy(void *from, void *to)
{
    zmq_msg_t message;
    while (1) {
        zmq_msg_init(&message);
        zmq_msg_recv(&message, from, 0);
        int more = zmq_msg_more(&message);
        zmq_msg_send(&message, to, more ? ZMQ_SNDMORE : 0);
        zmq_msg_close(&message);
        if (!more) break;
    }
}

int main(void)
{
    int status;

    if (fork() == 0) {
        client(1);
        return 0;
    }
    if (fork() == 0) {
        client(2);
        return 0;
    }

    /* SERVER */
    void *context, *frontend, *backend, *worker1, *worker2;
    int wbuf1[1], wbuf2[1];

    context = zmq_ctx_new();
    frontend = zmq_socket(context, ZMQ_ROUTER);
    backend = zmq_socket(context, ZMQ_DEALER);
    zmq_bind(frontend, "tcp://*:5559");
    zmq_bind(backend, "inproc://workers");

    worker1 = zmq_socket(context, ZMQ_REP);
    zmq_connect(worker1, "inproc://workers");
    multipart_proxy(frontend, backend);
    *wbuf1 = 0;
    zmq_recv(worker1, wbuf1, 1, 0);
    printf("worker1 receiving: %d\n", *wbuf1);

    worker2 = zmq_socket(context, ZMQ_REP);
    zmq_connect(worker2, "inproc://workers");
    multipart_proxy(frontend, backend);
    *wbuf2 = 0;
    zmq_recv(worker2, wbuf2, 1, 0);
    printf("worker2 receiving: %d\n", *wbuf2);

    zmq_send(worker1, wbuf1, 1, 0);
    multipart_proxy(backend, frontend);
    zmq_send(worker2, wbuf2, 1, 0);
    multipart_proxy(backend, frontend);

    wait(&status);

    zmq_close(frontend);
    zmq_close(backend);
    zmq_close(worker1);
    zmq_close(worker2);
    zmq_ctx_destroy(context);
    return 0;
}
Other Options
I have looked at CLIENT and SERVER sockets, and on paper they appear capable; in practice, however, they are new enough that the system version of ZeroMQ I have doesn't yet support them.
If it is not possible to perform this in ZeroMQ, any alternative suggestions are very welcome.
[1] Based on the Shared Queue section of the ZeroMQ guide.
Let me share a view on how ZeroMQ could meet the Intention defined above.
Let's use the ZeroMQ Scalable Formal Communication Pattern archetypes as they exist and are ready to operate right now, not as we might wish them to be at some uncertain point in a possible future evolution of the library.
There is no need to hesitate to use several ZeroMQ connections between the server and a herd of client instances that come and go.
For example:
The client .connect()-s a REQ socket to Server-address:port-A and uses this connection to ask for a "job"-ticket to be processed.
The client .connect()-s a SUB socket to Server-address:port-B and listens (when present) for published announcements of completed "job"-tickets whose results the server is ready to deliver.
The client uses another REQ socket to request, upon hearing a broadcast "job"-ticket completion announcement over the SUB socket, that the results of that "job"-ticket finally be delivered. It proves its right to receive the publicly announced results by providing the proper, matching job-ticket-AUTH-key, and it uses this same socket to send a POSACK message to the server once it has correctly received the "job"-ticket results "in hand".
The server exposes a REP socket to respond to each client ad hoc upon a "job"-ticket request, notifying it that the job-ticket was "accepted" and also delivering a job-ticket-AUTH-key for the later pickup of results.
The server exposes a PUB socket to announce any and all not-yet-picked-up "finished" job-tickets.
The server exposes another REP socket to receive any attempt to request delivery of "job"-ticket results. Upon verifying the delivered job-ticket-AUTH-key, the server decides whether the respective REQ message carried a matching job-ticket-AUTH-key and deserves a message with the actual results, or whether the match did not happen, in which case the reply carries some other payload data (the exact logic is left for further thought, so as to prevent brute-forcing, eavesdropping and similar, less primitive attacks aimed at stealing the results).
Clients need not stay waiting for results live/online, and they can survive a certain amount of LoS, L2/L3 errors or network-storm stress.
Clients only need to keep some kind of job-ticket-ID plus the job-ticket-AUTH-key for later retrieval of the server-processed, server-maintained, auth-gated results.
The server will keep listening for new jobs.
The server will accept new job-tickets, attaching a privately added job-ticket-AUTH-key to each.
The server will process job-tickets as it sees fit.
The server will maintain a circular buffer of completed job-tickets to be announced.
The server will announce in public, in due time and repeatedly as it decides, the job-tickets that are ready for client-initiated retrieval.
The server will accept new retrieval requests.
The server will verify client requests against every announced job-ticket-ID and test whether the job-ticket-AUTH-key matches as well.
The server will respond to both matching and non-matching job-ticket-ID retrieval requests.
The server will remove a job-ticket-ID from the circular buffer only after both a POSACK-ed AUTH match before retrieval and a POSACK message re-confirming delivery to the client (a rough server-side sketch follows below).
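A minimal, single-threaded sketch of the server side of that layout, using the plain zmq.h C API as in the question (the port numbers, payload strings and the job/ticket bookkeeping are placeholders, not a finished protocol):

#include <zmq.h>
#include <string.h>

int main(void)
{
    void *ctx       = zmq_ctx_new();
    void *job_rep   = zmq_socket(ctx, ZMQ_REP);   /* port-A: accept "job"-tickets      */
    void *done_pub  = zmq_socket(ctx, ZMQ_PUB);   /* port-B: announce finished tickets */
    void *fetch_rep = zmq_socket(ctx, ZMQ_REP);   /* port-C: deliver results on AUTH   */

    zmq_bind(job_rep,   "tcp://*:5561");
    zmq_bind(done_pub,  "tcp://*:5562");
    zmq_bind(fetch_rep, "tcp://*:5563");

    zmq_pollitem_t items[] = {
        { job_rep,   0, ZMQ_POLLIN, 0 },
        { fetch_rep, 0, ZMQ_POLLIN, 0 },
    };

    while (1) {
        zmq_poll(items, 2, 100);                  /* 100 ms tick, never blocks forever */

        if (items[0].revents & ZMQ_POLLIN) {      /* a client submits a job            */
            char job[256];
            zmq_recv(job_rep, job, sizeof job, 0);
            /* ...store the job, generate a job-ticket-ID and job-ticket-AUTH-key...   */
            zmq_send(job_rep, "ticket-ID+AUTH-key", 18, 0);
        }

        if (items[1].revents & ZMQ_POLLIN) {      /* a client asks for results         */
            char auth[256];
            zmq_recv(fetch_rep, auth, sizeof auth, 0);
            /* ...verify the AUTH-key; reply with the results or a refusal...          */
            zmq_send(fetch_rep, "results-or-refusal", 18, 0);
        }

        /* ...advance pending jobs here, a slice at a time, in this single thread...   */
        /* ...when one completes, announce it:  zmq_send(done_pub, "ticket-ID", 9, 0); */
    }
}

A client would pair this with a REQ socket to port-A, a SUB socket to port-B (subscribed to its own ticket-ID) and a second REQ socket to port-C, which keeps every socket in a legal send/receive state without any worker threads.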
I'm working my way through boost's asio tutorial. I'm looking into their chat example. More specifically, I'm trying to split their chat client from a sender+receiver into just a sender and just a receiver, but I'm seeing some behaviour that I can't explain.
The setup consists of:
boost::asio::io_service io_service;
tcp::resolver::iterator endpoint = resolver.resolve(...);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
boost::asio::async_connect(socket, endpoint, bind(handle_connect, ... ));
The sending portion effectively consists of:
while (std::cin.getline(str))
io_service.post( do_write, str );
and
void do_write (string str)
{
boost::asio::async_write(socket, str, bind( handle_write, ... ));
}
The receive section consists of
void handle_connect(...)
{
boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
void handle_read(...)
{
std::cout << read_msg_;
boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
If I comment out the content of handle_connect to isolate the send portion, my other client (compiled using the original code) does not receive anything. If I revert, then comment out the content of handle_read, my other client only receives the first message.
Why is it necessary to call async_read() in order to be able to post() an async_write()?
The full unmodified code is linked above.
The problem here is that your io_service runs out of work and stops processing requests even before you start sending your chat messages.
If you comment out the body of handle_connect, then the only work the io_service has to do is dispatch the handle_connect handler and execute it once the connection is made; after that its queue is empty and run() returns.
std::size_t scheduler::run(asio::error_code& ec)
{
  .....
  mutex::scoped_lock lock(mutex_);

  std::size_t n = 0;
  for (; do_run_one(lock, this_thread, ec); lock.lock())
    if (n != (std::numeric_limits<std::size_t>::max)())
      ++n;
  return n;
}
So, you have to provide it with something in its operation queue. In the original code this was done by the handle_read_header handler, which always remained pending until the client received something from the server.
You can do what you want by explicitly providing work to the io_service:
asio::io_context io_context;
asio::io_context::work wrk(io_context); // make `run` run forever
tcp::resolver resolver(io_context);
tcp::resolver::results_type endpoints = resolver.resolve(argv[1], argv[2]);
chat_client c(io_context, endpoints);
asio::thread t(boost::bind(&asio::io_context::run, &io_context));
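On newer Boost/Asio versions, io_context::work is deprecated; as far as I know the equivalent is an executor work guard, e.g.:

asio::io_context io_context;
auto work_guard = asio::make_work_guard(io_context); // keeps run() busy until released
// ... resolve, connect and spawn the run() thread exactly as in the snippet above ...
work_guard.reset(); // later: allow run() to return once the remaining handlers finish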
I want to read data through a socket in Qt. I am using a QByteArray to store the data. The server actually sends 4095 bytes in a single stretch, but on the Qt client side I am receiving it in different chunks because of my application design.
void Dialog::on_pushButton_clicked()
{
    socket = new QTcpSocket(this);
    socket->connectToHost("172.17.0.1", 5000);
    if (socket->waitForConnected(-1))
        qDebug() << "Connected";
    Read_data();
}

void Dialog::Read_data()
{
    QString filename(QString("%1/%2.bin").arg(path, device));
    qDebug() << "filename" << filename;
    QFile fileobj(filename);
    int cmd, file_size, percentage_completed;

    if (!fileobj.open(QFile::WriteOnly | QFile::Text))
    {
        qDebug() << "Cannot open file for writing";
        return;
    }
    QTextStream out(&fileobj);

    while (1)
    {
        socket->waitForReadyRead(-1);
        byteArray = socket->read(4);
        qDebug() << "size of bytearray" << byteArray.size();
        length = 0xffff & ((byteArray[3] << 8) | (0x00ff & byteArray[2]));

        int rem;
        byteArray = socket->read(length);
        while (byteArray.size() != length)
        {
            rem = length - byteArray.size();
            byteArray.append(socket->read(rem));
        }
        fileobj.write(byteArray);
        fileobj.flush();
        byteArray.clear();
    }
}
server code:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <arpa/inet.h>
#include <math.h>
#include <netinet/tcp.h>

static int msb, lsb, size, listenfd = 0, connfd = 0, len;

int main(void)
{
    struct sockaddr_in serv_addr;
    int serverlen = sizeof(serv_addr);
    unsigned char buff[4096];
    FILE *fd;
    int n, sent_bytes;

    listenfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&serv_addr, '0', sizeof(serv_addr));

    serv_addr.sin_family = AF_INET;
    serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    serv_addr.sin_port = htons(5000);

    if (bind(listenfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr)) < 0)
    {
        perror("\n Error in binding");
        exit(1);
    }

    size = 100000;
    listen(listenfd, 1);
    /* wait for the client to connect */
    connfd = accept(listenfd, (struct sockaddr*)NULL, NULL);

    fd = fopen("new.bin", "r");
    len = 4089;

    while (1)
    {
        buff[0] = 25;
        buff[1] = 2;
        buff[2] = 60;
        buff[3] = 47;
        n = fread(buff + 4, 1, len, fd);
        buff[len + 4] = 5;
        buff[len + 5] = '\n';
        if (n > 0)
            sent_bytes = send(connfd, buff, n + 6, 0);
        size = size - len;
        if (size == 0)
            break;
    }
    return 0;
}
If I execute the code on localhost (127.0.0.1) I can receive the data fully. The problem arises only when I connect to a different host IP. Kindly help me in this regard.
EDIT 1:
The problem is that when bytesAvailable() returns exactly the number of bytes I am waiting for, waitForReadyRead() times out. It works fine if bytesAvailable() is less than expected. Does bytesAvailable() allocate any buffer? I am annoyed by this behaviour.
while (1)
{
    while (socket->bytesAvailable() < 4)
    {
        if (!socket->waitForReadyRead())
        {
            qDebug() << "waitForReadyRead() timed out";
            return;
        }
    }
    byteArray = socket->read(4);
    length = 0xffff & ((byteArray[3] << 8) | (0x00ff & byteArray[2]));
    int rem_bytes = length + 2;
    qDebug() << "bytes available" << socket->bytesAvailable();

    while (socket->bytesAvailable() <= rem_bytes)
    {
        qDebug() << "reading";
        if (!socket->waitForReadyRead(10000)) // times out here if bytesAvailable() == rem_bytes but executes well in other cases
        {
            qDebug() << "waitForReadyRead() timed out";
            return;
        }
        qDebug() << "ready";
        byteArray.append(socket->read(rem_bytes));
        qDebug() << "size of bytearray" << byteArray.size();

        if (byteArray.size() == length + 2)
        {
            for (int j = 0; j < length; j++)
                newarray.append(byteArray[j]);
            fileobj.write(newarray);
            fileobj.flush();
            newarray.clear();
            byteArray.clear();
            break;
        }
        else
        {
            rem_bytes -= byteArray.size();
        }
    }
    Send();
}
I have tried sending different data sizes but cannot figure out why this happens. Please point out where I have gone wrong.
Your problem stems from your misunderstanding of how TCP works.
When data is transmitted from a sender, it is broken into packets and then each packet is transmitted one by one until all the data has finished sending. If packets go missing, they are re-transmitted until either they reach their destination, or a timeout is reached.
As an added complication, each packet might follow various routes before arriving at the destination. The receiver has the task of acknowledging to the sender that packets have been received and then making sure that the packets are joined back together in the correct order.
For this reason, the longer the network route, the greater the chance of getting a delay in getting the data re-assembled. This is what you've been experiencing with your localhost versus networked-computer tests.
The IP stack on your computer does not wait for the complete data to arrive before passing it to your application but it will pause if it's missing a packet in sequence.
e.g. If you have 10 packets and packet 4 arrives last, the IP stack will pass the data to your application in two sets: 1-2-3, [[wait for 4 to arrive]], 4-5-6-7-8-9-10.
For this reason, when waitForReadyRead() returns true, you cannot expect that all your data has arrived, you must always check how many bytes have been actually received.
There are two places in your code where you wait for data. The first thing you wait for is a four-byte number to tell you how much data has been sent. Even though it's highly likely that you will have received all four bytes, it's good practice to check.
while (socket->bytesAvailable() < 4) {
    if (!socket->waitForReadyRead()) { // times out after 30 seconds, by default
        qDebug() << "waitForReadyRead() timed out";
        return;
    }
}
byteArray = socket->read(4);
qDebug() << "size of bytearray" << byteArray.size();
length = 0xffff & ((byteArray[3] << 8) | (0x00ff & byteArray[2]));
The next thing you need to do is keep cycling through a wait-read-wait-read loop until all your data has arrived, each time keeping track of how many bytes you still expect to receive.
int bytesRemaining = length;
while (socket->bytesAvailable() < bytesRemaining) {
    if (!socket->waitForReadyRead()) {
        qDebug() << "waitForReadyRead() timed out";
        return;
    }
    // calling read() with the bytesRemaining argument will not guarantee
    // that you will receive all the data. It only means that you will
    // receive AT MOST bytesRemaining bytes.
    byteArray = socket->read(bytesRemaining);
    bytesRemaining -= byteArray.size();
    fileobj.write(byteArray);
    fileobj.flush();
}
All this said, you should not use the blocking API in your main thread or your GUI could freeze up. I suggest either using the asynchronous API, or create a worker thread to handle the downloading (and use the blocking API in the worker thread).
To see examples of how to use the two different APIs, look at the Fortune Client Example and the Blocking Fortune Client Example in the documentation.
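For the asynchronous route, here is a minimal sketch of a signal-based reader, assuming the same framing as your code (4-byte header whose bytes 2 and 3 carry the length, followed by the payload and 2 trailer bytes); buffer, fileobj and onReadyRead are illustrative member names, not part of any existing API:

// Qt 5 connection, e.g. in the Dialog constructor
connect(socket, &QTcpSocket::readyRead, this, &Dialog::onReadyRead);

void Dialog::onReadyRead()
{
    buffer.append(socket->readAll());                // take whatever has arrived so far
    while (buffer.size() >= 4)                       // at least a full header present
    {
        int length = 0xffff & ((buffer[3] << 8) | (0x00ff & buffer[2]));
        if (buffer.size() < 4 + length + 2)          // header + payload + 2 trailer bytes
            return;                                  // wait for the next readyRead signal
        fileobj.write(buffer.mid(4, length));        // one complete frame
        fileobj.flush();
        buffer.remove(0, 4 + length + 2);            // drop the consumed frame
    }
}

The key difference is that nothing ever blocks: each readyRead just appends to the buffer and consumes as many complete frames as happen to be there.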
EDIT:
My apologies, there's a bug in the code above that doesn't take a number of possibilities into account, most importantly that all the data may already have been received, and the end game once all the data has finally arrived.
The following one-line change should clear that up:
Change
while(socket->bytesAvailable() < bytesRemaining){
To
while (bytesRemaining > 0) {
So you are saying that waitForReadyRead() returns false, regardless of the time given, once your buffer has all 3000 expected bytes. What other behavior would you want? Perhaps you need to rethink the trigger logic here. Many TCP/IP application protocols have some sort of frame-start detection logic that they combine with the required message size to trigger processing. This lets them cope with the widely different packet sizes that the intermediate networks impose, as well as with truncated/partial messages. Once you have it working, connect to it by way of your cell phone and you will get a different set of packet fragmentation examples to test with.
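A minimal sketch of that frame-start idea, under the assumption that the two fixed bytes 25 and 2 the server writes at the start of every frame can be treated as a sync marker (i.e. that this pair never occurs inside the payload, which you would have to verify for your protocol):

// Discards any bytes that precede the next frame marker; returns true when the
// buffer now begins at a frame boundary, false if more data is still needed.
bool findFrameStart(QByteArray &buffer)
{
    for (int i = 0; i + 1 < buffer.size(); ++i)
    {
        if ((unsigned char)buffer.at(i) == 25 && (unsigned char)buffer.at(i + 1) == 2)
        {
            buffer.remove(0, i);          // drop anything before the marker
            return true;                  // buffer now starts at a frame
        }
    }
    if (buffer.size() > 1)                // keep the last byte: it may be half a marker
        buffer.remove(0, buffer.size() - 1);
    return false;                         // no frame start yet, wait for more data
}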
I am invoking multiple async calls of thrift from my code. I would like to wait
for all of them to complete before going on with my next stage.
for (...) {
TNonblockingTransport transport = new TNonblockingSocket(host, port);
TAsyncClientManager clientManager = new TAsyncClientManager();
TProtocolFactory protocolFactory = new TBinaryProtocol.Factory();
AsyncClient c = new AsyncClient(protocolFactory, clientManager, transport);
c.function(params, callback);
}
// I would like to wait for all the calls to be complete here.
I can do a countdown in the callback, wait/notify style, and get this done. But does the Thrift system offer a way for me to wait on my async function call, preferably with a timeout?
I didn't see anything like that in TAsyncClientManager or in AsyncClient. Please help.
Given that this was not possible, I used the sync API client and managed the launch and wait using executors and launchAll. I am leaving this as my answer so that people have an alternative.
I'm currently working with an Arduino, trying to build an ad hoc network that a device can connect to and send web requests to. The problem I am currently having is that I can only set up one connection; once that connection is terminated (with client.stop()), all subsequent connections are not picked up by the server, and even a cURL command just sits there spinning. The first connection I start after resetting the server works fine and I am able to talk to the server, but after that the Arduino can no longer find new clients (even though it keeps trying with the given library).
I'm using the SparkFun library for the WiFly shield, cloned from GitHub, along with an Arduino Uno.
My current code is based on their default example 'WiFly_AdHoc_Example', but I had to remove a few things to get the network to start up, which might be the cause of this problem.
Here is the .ino file that I am running.
#include <SPI.h>
#include <WiFly.h>
//#include <SoftwareSerial.h>
//SoftwareSerial mySerial( 5, 4); // Part from example not used (see below)

WiFlyServer server(80); // Use telnet port instead, if debugging with telnet

void setup()
{
  Serial.begin(9600);

  // The code below is from the example, but when I run it the WiFly will hang
  // on WiFly.begin(). Without it, the WiFly starts up fine.
  //mySerial.begin(9600);
  //WiFly.setUart(&mySerial); // Tell the WiFly library that we are not
                              // using the SPIUart

  Serial.println("**************Starting WiFly**************");
  // Enable ad hoc mode
  WiFly.begin(true);
  Serial.println("WiFly started, creating network.");

  if (!WiFly.createAdHocNetwork("wifly"))
  {
    Serial.print("Failed to create ad hoc network.");
    while (1)
    {
      // Hang on failure.
    }
  }

  Serial.println("Network created");
  Serial.print("IP: ");
  Serial.println(WiFly.ip());

  Serial.println("Starting Server...");
  server.begin();
  Serial.print("Server started, waiting for client.");
}

void loop()
{
  delay(200);
  WiFlyClient client = server.available();
  if (client)
  {
    Serial.println("Client Found.");
    // A string to store received commands
    String current_command = "";

    while (client.connected())
    {
      if (client.available())
      {
        // Gets a character from the sent request.
        char c = client.read();

        if (c == '#' || c == '\n') // End of extraneous output
        {
          current_command = "";
        }
        else if (c != '\n')
        {
          current_command += c;
        }

        if (current_command == "get")
        {
          // output the value of each analog input pin
          for (int i = 0; i < 6; i++)
          {
            client.print("analog input ");
            client.print(i);
            client.print(" is ");
            client.print(analogRead(i));
            client.println("<br />");
          }
        }
        else if (current_command == "hello")
        {
          client.println("Hello there, I'm still here.");
        }
        else if (current_command == "quit")
        {
          client.println("Goodbye...");
          client.stop();
          current_command = "";
          break;
        }
        else if (current_command == "*OPEN*")
        {
          current_command = "";
        }
      }
    }
    // Give the web browser time to receive the data
    delay(200);
    // close the connection
    client.stop();
  }
}
This sketch is just a mini protocol I set up for testing. Once connected to the WiFly module you can send text such as "get", "hello" or "quit", and the WiFly module should respond.
Using Telnet I can successfully connect (the first time) and send commands to the Arduino, including "quit" to terminate the connection (which calls the client.stop() method). But when I try to reconnect through Telnet, it says the connection was successful, yet the Arduino is still looping, treating the client as false. What??
I know, right, I'm getting mixed messages from Telnet versus the Arduino. None of the commands work, obviously, since the Arduino is still looping, waiting for a client that evaluates to true. I'm going to take a look at WiFlyServer from the library I imported and see if I can dig up the problem, because somehow that server.available() method isn't finding new clients.
I am noticing a lot of TODOs in the library code...
So I found the reason for the problem. It was in the WiFlyServer.cpp file from the SparkFun library. The code causing the reconnect issue was in fact in the server.available() method. Right at the top of the method, there is a check:
// TODO: Ensure no active non-server client connection.
if (!WiFly.serverConnectionActive) {
  activeClient._port = 0;
}
For some reason, when I comment this out I can connect and reconnect perfectly fine and everything works as it should. I will now dive into the library and see if I can fix this. I'm not exactly sure what this check does, but it gets called when the server connection is not active and is somehow blocking subsequent connections. The problem with this workaround is that the Arduino always thinks it has found a client, since client and client.connected() evaluate to true even when no client exists. Even client.available() evaluates to true right when the connection is terminated and the ghost "client" is found, but after that first pass through the if statement the ghost "client" is no longer available(). Even with this flaw the code still picks up a new client when one comes along, which is why it works.
How might I get to the root of this problem without using this commenting hack?
Are there any risks or future problems I might run into doing it this way?
What is the purpose of the block that I commented out in the first place?
Well, when you call client.stop(), how does the Arduino know whether the client has to start again?
Remember, setup() executes only once.
Have you tried including the following code in your loop to tell the Arduino to create the WiFly ad hoc network again? This may or may not work; I don't have the hardware myself and haven't played with the WiFly shield, but it's worth a try.
Remember to execute the code only once each time you need to reconnect, since it's sitting inside a loop that is always running (see the sketch after the snippet below).
WiFly.begin(true);
Serial.println("WiFly started, creating network.");
if (!WiFly.createAdHocNetwork("wifly"))
{
  Serial.print("Failed to create ad hoc network.");
  while (1)
  {
    // Hang on failure.
  }
}
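For instance, guarded by a flag so that it only runs when a reconnect is actually needed (the flag name and where you set it are up to you; this is just a sketch):

bool needRestart = false; // set this to true wherever you detect the old client is gone

void loop()
{
  if (needRestart)
  {
    needRestart = false;
    WiFly.begin(true);                       // re-enter ad hoc mode
    if (!WiFly.createAdHocNetwork("wifly"))
    {
      Serial.println("Failed to re-create ad hoc network.");
      while (1)
      {
        // Hang on failure.
      }
    }
    server.begin();
  }
  // ... the existing client handling from the question's loop() ...
}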