How can I send a simple HTTP request with a lwIP stack?

Please move/close this if the question isn't relevant.
Core: Cortex-M4
Microprocessor: TI TM4C1294NCPDT.
IP Stack: lwIP 1.4.1
I am using this microprocessor to do some data logging, and I want to send some information to a separate web server via a HTTP request in the form of:
http://123.456.789.012:8800/process.php?data1=foo&data2=bar&time=1234568789
and I want the processor to be able to see the response header (i.e. whether it was 200 OK or something went wrong) - it does not have to display/receive the actual content.
lwIP has a http server for the microprocessor, but I'm after the opposite (microprocessor is the client).
I am not sure how packets correlate to request/response headers, so I'm not sure how I'm actually meant to send/receive information.

This ended up being pretty simple to implement; I forgot to update this question.
I pretty much followed the instructions given on this site, which is the Raw/TCP 'documentation'.
Basically, the HTTP request is encoded in TCP packets, so to send data to my PHP server, I sent an HTTP request using TCP packets (lwIP does all the work).
The HTTP packet I want to send looks like this:
HEAD /process.php?data1=12&data2=5 HTTP/1.0
Host: mywebsite.com
To "translate" this to text which is understood by an HTTP server, you have to add "\r\n" carriage return/newline in your code. So it looks like this:
char *string = "HEAD /process.php?data1=12&data2=5 HTTP/1.0\r\nHost: mywebsite.com\r\n\r\n ";
Note that the end has two lots of "\r\n"
You can use GET or HEAD, but because I didn't care about the HTML page my PHP server returned, I used HEAD (it returns a 200 OK on success, or a different code on failure).
The lwIP raw TCP API works on callbacks. You basically set up all the callback functions, then push the data you want into a TCP buffer (in this case, the string specified above), and then tell lwIP to send the packet.
Function to set up a TCP connection (this function is directly called by my application every time I want to send a TCP packet):
void tcp_setup(void)
{
    uint32_t data = 0xdeadbeef;

    /* create an ip */
    struct ip_addr ip;
    IP4_ADDR(&ip, 110,777,888,999);    /* IP of my PHP server */

    /* create the control block */
    testpcb = tcp_new();    /* testpcb is a global struct tcp_pcb
                             * as defined by lwIP */

    /* dummy data to pass to callbacks
     * (note: this points at a stack variable; use static or global
     * storage if the callbacks will outlive this function) */
    tcp_arg(testpcb, &data);

    /* register callbacks with the pcb */
    tcp_err(testpcb, tcpErrorHandler);
    tcp_recv(testpcb, tcpRecvCallback);
    tcp_sent(testpcb, tcpSendCallback);

    /* now connect */
    tcp_connect(testpcb, &ip, 80, connectCallback);
}
Once a connection to my PHP server is established, the 'connectCallback' function is called by lwIP:
/* connection established callback, err is unused and only return 0 */
err_t connectCallback(void *arg, struct tcp_pcb *tpcb, err_t err)
{
    UARTprintf("Connection Established.\n");
    UARTprintf("Now sending a packet\n");
    tcp_send_packet();
    return 0;
}
This callback calls tcp_send_packet(), which actually sends the HTTP request:
uint32_t tcp_send_packet(void)
{
    err_t error;
    const char *string = "HEAD /process.php?data1=12&data2=5 HTTP/1.0\r\nHost: mywebsite.com\r\n\r\n";
    uint32_t len = strlen(string);

    /* push to buffer */
    error = tcp_write(testpcb, string, len, TCP_WRITE_FLAG_COPY);
    if (error) {
        UARTprintf("ERROR: Code: %d (tcp_send_packet :: tcp_write)\n", error);
        return 1;
    }

    /* now send */
    error = tcp_output(testpcb);
    if (error) {
        UARTprintf("ERROR: Code: %d (tcp_send_packet :: tcp_output)\n", error);
        return 1;
    }
    return 0;
}
Once the TCP packet has been sent (this is all you need if you want to "hope for the best" and don't care whether the data actually arrived), the PHP server returns a TCP packet (with a 200 OK, etc., and the HTML content if you used GET instead of HEAD). The response can be read and verified in the following callback:
err_t tcpRecvCallback(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
{
    UARTprintf("Data received.\n");
    if (p == NULL) {
        UARTprintf("The remote host closed the connection.\n");
        UARTprintf("Now I'm closing the connection.\n");
        tcp_close_con();
        return ERR_ABRT;
    } else {
        UARTprintf("Number of pbufs %d\n", pbuf_clen(p));
        UARTprintf("Contents of pbuf %s\n", (char *)p->payload);
        /* acknowledge the received data and release the pbuf */
        tcp_recved(tpcb, p->tot_len);
        pbuf_free(p);
    }
    return 0;
}
p->payload contains the actual "200 OK", etc. information. Hopefully this helps someone.
I have left out some error checking in my code above to simplify the answer.
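The tcp_close_con() helper used above was one of the things left out; a minimal sketch of what it might look like (an assumption on my part, not the original code) clears the callbacks and closes the pcb:
void tcp_close_con(void)
{
    /* hypothetical close helper: detach callbacks and close the connection */
    tcp_arg(testpcb, NULL);
    tcp_sent(testpcb, NULL);
    tcp_recv(testpcb, NULL);
    tcp_err(testpcb, NULL);
    tcp_close(testpcb);
    testpcb = NULL;
}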

Take a look at the HTTP example in Wikipedia. The client will send the GET request line and a Host header line. The server will respond with many lines; the first line contains the response code.
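For illustration (my example, with a hypothetical host and path, not taken from Wikipedia), a minimal HTTP/1.1 exchange looks like this. The client sends:
GET /index.html HTTP/1.1
Host: www.example.com

and the server replies with something like:
HTTP/1.1 200 OK
Content-Type: text/html

followed by the content. The status code on the first response line (200 here) is what the client needs to check.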

I managed to create an HTTP client for the Raspberry Pi Pico W using the example here.
It uses the httpc_get_file or httpc_get_file_dns functions from the SDK.
However, that example is incomplete since it has a memory leak.
You will need to free the memory taken by the struct pbuf *hdr in the headers function and struct pbuf *p in the body function, with pbuf_free(hdr); and pbuf_free(p); respectively.
Without those modifications, it stops working after about 20 calls (probably depending on the size of the response).
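A minimal sketch of the two callbacks with that fix applied, assuming the lwIP http_client API bundled with the Pico SDK (the names headers_fn and body_fn are mine, not from the example):
#include "lwip/apps/http_client.h"

/* headers-done callback: log, then release the header pbuf */
static err_t headers_fn(httpc_state_t *connection, void *arg,
                        struct pbuf *hdr, u16_t hdr_len, u32_t content_len)
{
    printf("headers received, content length %lu\n", (unsigned long)content_len);
    pbuf_free(hdr);   /* the fix described above */
    return ERR_OK;
}

/* body callback: consume, then release each body pbuf */
static err_t body_fn(void *arg, struct altcp_pcb *conn,
                     struct pbuf *p, err_t err)
{
    if (p != NULL) {
        altcp_recved(conn, p->tot_len);
        pbuf_free(p); /* the fix described above */
    }
    return ERR_OK;
}

and then, from the application's init code:
httpc_connection_t settings = { .headers_done_fn = headers_fn };
httpc_state_t *conn;
httpc_get_file_dns("example.com", 80, "/index.html", &settings,
                   body_fn, NULL, &conn);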

Related

Vue front end to Go server (HTTP) and clients connected to Go server (TCP) error

I'm currently creating a Go TCP server that handles file sharing between multiple Go clients, and that works fine. However, I'm also building a front end using Vue.js that shows some server stats like the number of users, bytes sent, etc.
The problem occurs when I include the http.ListenAndServe(":3000", nil) call that handles the requests from the front end. Is it impossible to have a TCP and an HTTP server in the same Go file?
If so, how can I link the three (front end, Go server, clients)?
Here is the code of server.go:
func main() {
	// Create TCP server
	serverConnection, error := net.Listen("tcp", ":8085")
	// Check if an error occurred
	// Note: because 'go' forces you to use each variable you declare,
	// error checking is not optional, and maybe that's good
	if error != nil {
		fmt.Println(error)
		return
	}
	// Create server Hub
	serverHb := newServerHub()
	// Close the server just before the program ends
	defer serverConnection.Close()
	// Handle front-end requests
	http.HandleFunc("/api/thumbnail", requestHandler)
	fs := http.FileServer(http.Dir("../../tcp-server-frontend/dist"))
	http.Handle("/", fs)
	fmt.Println("Server listening on port 3000")
	http.ListenAndServe(":3000", nil)
	// Each client sends data; that data is received in the server by a client struct.
	// The client struct then sends the data, which is a request, to a 'go' channel, which is similar to a queue.
	// Somehow this for loop runs only when a new connection is detected
	for {
		// Accept a new connection if a request is made
		// serverConnection.Accept() blocks the for loop
		// until a connection is accepted, then it blocks the for loop again!
		connection, connectionError := serverConnection.Accept()
		// Check if an error occurred
		if connectionError != nil {
			fmt.Println("1: Woah, there's a mistake here :/")
			fmt.Println(connectionError)
			fmt.Println("1: Woah, there's a mistake here :/")
			// return
		}
		// Create new user
		var client *Client = newClient(connection, "Unregistered_User", serverHb)
		fmt.Println(client)
		// Add client to serverHub
		serverHb.addClient(client)
		serverHb.listClients()
		// go client.receiveFile()
		go client.handleClientRequest()
	}
}
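Note that http.ListenAndServe blocks, so the accept loop after it is never reached until the HTTP server stops. A common fix (a sketch of mine, not from the original post) is to start the HTTP server in its own goroutine so both servers can run in the same program:
	// start the HTTP server concurrently; the TCP accept loop below keeps running
	go func() {
		if err := http.ListenAndServe(":3000", nil); err != nil {
			fmt.Println(err)
		}
	}()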

Why does a TCP BSD server get stuck in read() even when data arrives?

I've created a TCP server application using BSD sockets on a NUCLEO-H743ZI2 development board with STM32CubeMX 5.6.0 & LwIP 2.0.3 in Keil MDK-ARM.
I noticed that:
If a client connects and sends 11 bytes or more at first, the server receives the data correctly and read() returns, displaying the data.
However, if the client's first message is shorter than 11 bytes, read() blocks, even if later messages are longer than 11 bytes, until the client disconnects. After the disconnection, all the queued data is displayed.
Namely, if the first data sent from a client to my server is shorter than 11 bytes, the event_callback for a recv event is not triggered until disconnection.
My aim is to make the server able to receive as little as a single byte.
I've pasted my server task/thread below. Let me have your kind response at your earliest convenience, and feel free to request other related files/libraries (lwip.h, lwipopts.h, ...).
Kind Regards
void StartTask01(void const * argument)
{
  /* USER CODE BEGIN StartTask01 */
  MX_LWIP_Init();

  /* start a listening tcp server */
  int iServerSocket;
  struct sockaddr_in address;

  if ((iServerSocket = socket(AF_INET, SOCK_STREAM, 0)) < 0)
  {
    printf("Socket could not be created\n");
  }
  else
  {
    address.sin_family = AF_INET;
    address.sin_port = htons(80);
    address.sin_addr.s_addr = INADDR_ANY;
    if (bind(iServerSocket, (struct sockaddr *)&address, sizeof (address)) < 0)
    {
      printf("socket could not be bound\n");
    }
    else
    {
      listen(iServerSocket, MEMP_NUM_NETCONN);
    }
  }
  /* server started listening */

  struct sockaddr_in remoteHost;
  socklen_t addrLen = sizeof(remoteHost); /* accept() needs a pointer to the length, not the length cast to a pointer */
  int newconn;
  char caReadBuffer[1500];
  memset(caReadBuffer, 0, 1500);

  for (;;)
  {
    /* block until accepting an incoming connection */
    newconn = accept(iServerSocket, (struct sockaddr *)&remoteHost, &addrLen);
    if (newconn != -1) /* if accepted well */
    {
      /* block until data arrives */
      read(newconn, caReadBuffer, sizeof(caReadBuffer));
      printf("data read: %s\n", caReadBuffer);
      memset(caReadBuffer, 0, 1500);
    }
  }
  /* USER CODE END StartTask01 */
}
The problem causing this issue is that you only call read once on each connection. If you don't happen to receive all the data from that single call to read (which is entirely unpredictable), you will never call read on that connection again.
When you call read on a blocking TCP connection, it will only block if there is no data available. Otherwise, it will give you whatever data is available, up to the maximum number of bytes you ask for. It will not wait for more data if only some is available; it's up to you to call read again if you didn't receive everything you expected.
On your second iteration of the for loop, you overwrite newconn with a new connection. You don't close the old connection, so you have a socket leak.
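A minimal sketch of a per-connection receive loop addressing both points, reusing the names from the question:
int n;
while ((n = read(newconn, caReadBuffer, sizeof(caReadBuffer) - 1)) > 0)
{
    caReadBuffer[n] = '\0';   /* read() does not NUL-terminate the buffer */
    printf("data read: %s\n", caReadBuffer);
}
close(newconn);               /* close the connection to avoid the socket leak */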
SOLVED:
The problem was that my server was listening on port 80. I changed it to port 7 and thankfully the bug is resolved; read() now works as expected.
This bug makes me think LwIP has problems listening on the web port (80) as opposed to others; there may be some special handling of certain well-known ports, even for unimplemented protocols.

Sending TCP data without receiving (boost asio)

I'm working my way through Boost Asio's tutorial. I'm looking into their chat example. More specifically, I'm trying to split their chat client from a sender+receiver into just a sender and just a receiver, but I'm seeing some behaviour that I can't explain.
The setup consists of:
boost::asio::io_service io_service;
tcp::resolver::iterator endpoint = resolver.resolve(...);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
boost::asio::async_connect(socket, endpoint, bind(handle_connect, ... ));
The sending portion effectively consists of:
while (std::cin.getline(str))
    io_service.post(boost::bind(do_write, str));
and
void do_write (string str)
{
    boost::asio::async_write(socket, boost::asio::buffer(str), bind(handle_write, ...));
}
The receive section consists of
void handle_connect(...)
{
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}

void handle_read(...)
{
    std::cout << read_msg_;
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
If I comment out the content of handle_connect to isolate the send portion, my other client (compiled using the original code) does not receive anything. If I revert, then comment out the content of handle_read, my other client only receives the first message.
Why is it necessary to call async_read() in order to be able to post() an async_write()?
The full unmodified code is linked above.
The problem here is that your io_service is running out of work and stops processing requests even before you start sending your chat messages.
If you comment out the body of handle_connect, then the only work it had to do was to dispatch the handle_connect handler and then execute it once the connection completed.
std::size_t scheduler::run(asio::error_code& ec)
{
    .....
    mutex::scoped_lock lock(mutex_);
    std::size_t n = 0;
    for (; do_run_one(lock, this_thread, ec); lock.lock())
        if (n != (std::numeric_limits<std::size_t>::max)())
            ++n;
    return n;
}
So, you have to provide it with something in its operation queue. In the original code this was done with the handle_read_header handler, which was always pending until the client received something from the server.
You can do what you want by giving the io_service explicit work (shown below with the newer io_context naming; with the Boost version used in the original chat example, the equivalent is boost::asio::io_service::work):
asio::io_context io_context;
asio::io_context::work wrk(io_context); // make `run` run forever
tcp::resolver resolver(io_context);
tcp::resolver::results_type endpoints = resolver.resolve(argv[1], argv[2]);
chat_client c(io_context, endpoints);
asio::thread t(boost::bind(&asio::io_context::run, &io_context));

dev_queue_xmit randomly returns NET_XMIT_CN with tun/tap device

I have a userspace program which constructs its own packet (application payload, UDP, IP) and write()s it to the TUN device. The packet is intercepted by my own Netfilter module, which checks whether the packet it received is one we want to process. My Netfilter module then skb_clone()s the original skb and creates a response packet, which I fill in with some data to be returned to the user-space program. To send the response, I use dev_queue_xmit(). It randomly returns NET_XMIT_CN even though I have just created a fresh TUN device and there is no other traffic passing through. If I keep executing the user-space program (sending new packets to the TUN device), eventually the TUN device responds, but not consistently. I can't seem to track down why it behaves so erratically.
Essentially I am using the TUN device as a mechanism to communicate from user-space to kernel-space, and vice versa.
Here's my user-space app:
tun_fd = tun_alloc(dev_name);
packet = ... /* construct request... */
nwrite = write(tun_fd, packet, packet_len);
...
unsigned char recv_buf[1500];
int received = 0;
while (!received) {
    nread = read(tun_fd, recv_buf, 1500);
    ...
}
...
close(tun_fd);
Here's my Netfilter module:
static struct nf_hook_ops nfho;

static int __init my_hook(void)
{
    nfho.hook     = hook_func;
    nfho.hooknum  = 0;               /* NF_INET_PRE_ROUTING */
    nfho.pf       = PF_INET;
    nfho.priority = NF_IP_PRI_FIRST;
    nf_register_hook(&nfho);
    return 0;
}

unsigned int hook_func(void *priv, struct sk_buff *skb, const struct nf_hook_state *state)
{
    /* GFP_ATOMIC: netfilter hooks may run in softirq context */
    struct sk_buff *clone_skb = skb_clone(skb, GFP_ATOMIC);
    ...
    /*
     * Check if packet is for us.
     * Check IP & UDP header, etc.
     * If so, parse the request and put together a response in clone_skb.
     */
    ...
    if ((err = dev_queue_xmit(clone_skb)) != 0) {
        printk(....);
        /* it either returns 0 (success) or 2, meaning NET_XMIT_CN */
    }
    return NF_STOLEN;
}
I can't seem to figure out this behavior. Am I misusing the TUN device? Is there an easier way than this?
Please let me know if I should provide any extra details or clarify something.

Connecting Qt with SSL to a Jetty server

I have some problems connecting a Qt client to an embedded Jetty server.
First, I'm using the following components:
Qt 4.4.3 (compiled with active OpenSSL support)
Jetty 8.8.1
Java 6
I know these versions are not the most recent, but because of licensing issues and customer wishes I cannot use newer ones.
So, the scenario is that a Qt client has to send HTTP GET and POST requests to the Jetty server. As long as I use plain HTTP with the QHttp object it works fine; the problems start when I switch to SSL.
My first try was to use the QSslSocket object for the GET request:
// Load certs + private key to socket
_pSocket = new QSslSocket(this);
_pSocket->setLocalCertificate(_certificate);
_pSocket->setPrivateKey(_privatekey);
_pSocket->addDefaultCaCertificate(_cacertificate);
connect (_pSocket, SIGNAL(encrypted()), this, SLOT(_encrypted()));
_pSocket->connectToHostEncrypted("localhost", 8000);
with the following slot function for the encrypted state:
void TestClient::_encrypted() {
    QString _path("/testpath/list");
    QByteArray buffer("GET ");
    buffer.append(_path).append(" HTTP/1.1\r\n");
    _pSocket->write(buffer);
}
Here I hit my first problem:
This results in the following string, which as far as I can see is compliant with RFC 2616:
"GET /testpath/list HTTP/1.1\r\n"
For some reason, the Jetty server has a problem with that and stays in a loop until the client closes the connection because of a timeout.
But if I use the following string, it works perfectly:
"GET /testpath/list\r\n"
Here is my first question: do you know an explanation for this behaviour? I can live with it, but I want to know the reason.
My second problem is the POST request; it always fails.
These are the examples I already tried:
"POST /testpath/receive/\r\n{"data":"hello world ?!"}\r\n"
"POST /testpath/receive/ HTTP/1.1\r\n{"data":"hello world ?!"}\r\n"
"POST /testpath/receive/\r\n\r\n{"data":"hello world ?!"}\r\n"
"POST /testpath/receive/ HTTP/1.1\r\n\r\n{"data":"hello world ?!"}\r\n"
I have the feeling that the body is empty every time, so my server crashes because it tries to parse an empty string as JSON.
At least, the following log shows that:
2013-11-19 17:11:51.671, INFO, foo.bar.RepositoryHandler, qtp11155366-16 - /testpath/receive request type : /receive
2013-11-19 17:11:51.811, ERROR, foo.bar.RepositoryHandler, qtp11155366-16 - /testpath/receive missing or unknown elements in JSON request. Check JSON against documentation
2013-11-19 17:11:51.874, WARN, org.eclipse.jetty.server.AbstractHttpConnection, qtp11155366-16 - /testpath/receive /testpath/receive
java.lang.NullPointerException: null
at foo.bar.RepositoryHandler.decodeViewingRequest(RepositoryHandler.java:366) ~[MyServer.jar:na]
at foo.bar.RepositoryHandler.handle(RepositoryHandler.java:182) ~[MyServer.jar:na]
So, all together, I think I have several major errors in my requests. But which?
My second try was to use the QHttp object and replace the socket it uses with a QSslSocket I had already initialized.
Here's the code of the main function:
QSslSocket* _pSocket;
QHttp* _pHttp;
int _id;
QBuffer* _pBuffer;
QByteArray _data;

_pSocket = new QSslSocket(this);
_pSocket->setLocalCertificate(_certificate);
_pSocket->setPrivateKey(_privatekey);
_pSocket->addDefaultCaCertificate(_cacertificate);

_pHttp = new QHttp(this);    /* _pHttp must be constructed before connecting its signals */

QUrl url;
url.setScheme("https");
url.setHost("localhost");
url.setPort(8001);
url.setPath("/testpath/receive");

connect(_pSocket, SIGNAL(encrypted()), this, SLOT(_encrypted()));
connect(_pHttp, SIGNAL(requestFinished(int,bool)), this, SLOT(_requestFinished(int,bool)));
connect(_pHttp, SIGNAL(done(bool)), this, SLOT(_done(bool)));

_pBuffer = new QBuffer(&_data);
_pHttp->setSocket(_pSocket);
_pSocket->connectToHostEncrypted(strHost, strPort.toInt());
_id = _pHttp->get(url.toString(), _pBuffer);
And the callbacks:
void _requestFinished(int id, bool error) {
    if (id == _id)
        qDebug() << "data=" << _data;
}

void _encrypted() {
    qDebug() << "encrypted";
}

void _done(bool error) {
    logInfo() << "_done";
    if (_pHttp) {
        _pHttp->abort();
        delete _pHttp;
        _pHttp = 0;
    }
    if (_pBuffer) {
        delete _pBuffer;
        _pBuffer = 0;
    }
    if (_pSocket) {
        _pSocket->disconnectFromHost();
        delete _pSocket;
        _pSocket = 0;
    }
}
I think I only have to move the _pHttp->get call, perhaps into the _encrypted callback, but I'm not sure.
Any good advice?
Thanks,
Robert
Your HTTP request is incomplete, per RFC2616.
"GET /testpath/list HTTP/1.1\r\n"
That is invalid.
Try this instead.
"GET /testpath/list HTTP/1.1\r\n" + /* request line (required) */
"Host: localhost\r\n" + /* host header (required minimum) */
"\r\n" /* terminating CR + LF (required) */
As outlined in RFC 2616, Section 5.1.2:
The most common form of Request-URI is that used to identify a
resource on an origin server or gateway. In this case the absolute
path of the URI MUST be transmitted (see section 3.2.1, abs_path) as
the Request-URI, and the network location of the URI (authority) MUST
be transmitted in a Host header field. For example, a client wishing
to retrieve the resource above directly from the origin server would
create a TCP connection to port 80 of the host "www.w3.org" and send
the lines:
GET /pub/WWW/TheProject.html HTTP/1.1
Host: www.w3.org
The Request-URI line and the Host header are mandated.
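The POST attempts in the question fail for the same reasons, plus a request body needs entity headers. A minimal well-formed POST for that JSON payload would look something like this (a sketch; the Content-Length must match the body's byte count exactly):
POST /testpath/receive/ HTTP/1.1
Host: localhost
Content-Type: application/json
Content-Length: 25

{"data":"hello world ?!"}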
