Lantronix XPORT - TCP/IP tunnel to send HTTP POST requests

I have an XPORT (a TCP/IP serial tunnel device) connected to my microcontroller (PIC18); this way I can send serial messages which are transformed into TCP/IP packets. I've enabled TCP/IP packaging, so that all characters are put into one TCP/IP packet until nothing is received for a short period of time.
I've successfully sent an HTTP GET request through the TCP/IP tunnel.
However, when I try to send an HTTP POST request, I either get a "400 Bad Request" or my Apache server crashes...
I think this behaviour is caused by the line endings not being "right".
My code:
Delay1KTCYx(160);
xportSendTextNoLine("C192.168.200.18/80\n");//Manual connect to server (xport command)
Delay1KTCYx(160);//Wait for TCP/IP packaging.
xportSendTextNoLine("POST /debug.php HTTP/1.1");
xportSend(0x0D);//Carriage return.
xportSend(0x0A);//New line
xportSendTextNoLine("Host: 192.168.200.18");
xportSend(0x0D);//Carriage return.
xportSend(0x0A);//New line
xportSendTextNoLine("Content-Type: application/x-www-form-urlencoded");
xportSend(0x0D);//Carriage return.
xportSend(0x0A);//New line
xportSend(0x0D);//Carriage return.
xportSend(0x0A);//New line
xportSendTextNoLine("Grower=2&SiteId=99&Time=2015021108291700&Usertag=testuser&Action=0");
xportSend(0x0D);//Carriage return.
xportSend(0x0A);//New line
xportSend(0x0D);//Carriage return.
xportSend(0x0A);//New line
Delay1KTCYx(160);//Wait for TCP/IP packaging.
Wireshark output (simplified):
>POST /debug.php HTTP/1.1[0x0D][0x00][0x0A]
Host: 192.168.200.18[0x0D][0x00][0x0A]
etc.
<HTTP/1.1 400 Bad Request
Wireshark output (complete): (screenshot not reproduced here)
It seems that sending a 0x0D (carriage return) to the Lantronix XPORT causes a 0x00 to be sent after it, which would prevent the web server from interpreting the request.
I've just sent a question to Lantronix tech support, but I'd also like to know if anyone can tell me whether:
The Lantronix XPORT really sends a 0x00 after a 0x0D?
My HTTP POST request is wrong in another way? (Missing Content-Length? See the sketch below.)
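On the second point: a POST that carries a body does need a Content-Length header (or chunked transfer encoding), otherwise the server cannot tell where the body ends. As a sketch (not the original code), the complete request should look roughly like this on the wire, with every line terminated by CR LF and a single blank line between the headers and the body (the body shown above is 66 bytes):
POST /debug.php HTTP/1.1
Host: 192.168.200.18
Content-Type: application/x-www-form-urlencoded
Content-Length: 66

Grower=2&SiteId=99&Time=2015021108291700&Usertag=testuser&Action=0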
Verifying the output of the microcontroller: the serial settings of the XPORT are the same as in the terminal program and should match the MCU's (otherwise it wouldn't receive the same data):
MCU serial setup:
RCSTA2bits.SPEN = 1; //Serial port enable
TRISGbits.TRISG2 = 1; //RG2 input (RX)
TRISGbits.TRISG1 = 0; //RG1 output (TX)
ANCON2bits.ANSEL18 = 0; //Digital I/O (analog function disabled)
ANCON2bits.ANSEL19 = 0; //Digital I/O (analog function disabled)
IPR3bits.RC2IP = 1;     //High-priority RX interrupt
PIE3bits.RC2IE = 1;     //Enable RX interrupt
TXSTA2 = 0b00100000;    //TXEN = 1, BRGH = 0 (asynchronous transmit enabled)
RCSTA2 = 0b10010000;    //SPEN = 1, CREN = 1 (serial port on, continuous receive)
BAUDCON2 = 0b01000000;  //Receive operation active.
SPBRG2 = 12;            //9615 baud (0.16% error) at 8 MHz
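For reference, the xportSend()/xportSendTextNoLine() helpers are not shown in the question; on a PIC18 with this EUSART2 setup they could plausibly be sketched as follows (illustrative only, assuming the standard TXSTA2/TXREG2 register names):
//Sketch of transmit helpers matching the EUSART2 setup above (not the original code).
void xportSend(unsigned char c)
{
    while (!TXSTA2bits.TRMT);   //Wait until the transmit shift register is empty.
    TXREG2 = c;                 //Load the byte; the EUSART shifts it out.
}

void xportSendTextNoLine(const char *s)
{
    while (*s)
        xportSend(*s++);        //Send the string without appending any line ending.
}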

I have finally found the solution!
After some (logical) thinking I concluded that the problem had to be in the XPORT itself.
But what could it be? I disabled all the features that I thought might interfere (even though it seemed illogical that they would cause this kind of error), and after saving the settings, the POST request just showed up nicely on my server...
See the settings in the screenshot of the XPORT configuration (not reproduced here).

Related

SIM5360A - HTTP bad request

I'm developing a device with an ESP32 connected through a level shifter to a SIM5360A.
The system is supposed to make a periodic HTTP POST with its sensor readings.
Even though I have a working setup with a SIM5360E breakout board, when I moved to a custom PCB with a SIM5360A (because of carrier frequency), I'm not able to make an HTTP POST/GET.
Using a server hosted in AWS and running tcpdump, I found that before the GET payload the SIM5360A inserts two spurious characters (0x01 0xF0).
The commands I'm sending to the modem are:
AT+CIPOPEN=0,"TCP","XX.XXX.XXX.XXX",80
AT+CIPSEND=0,39
GET /login HTTP/1.1<CR><LF>
Host: XX.XXX.XXX.XXX:80<CR><LF>
<CR><LF>
<CR><LF>
Using Wireshark to analyze the query on the server side, the data received is:
{0x01} {0xF0} GET ....
Those two characters confuse the Apache server (and Wireshark), which no longer interprets the data as an HTTP message, resulting in a 400 Bad Request.
I verified using Postman that the query is correct. I also use the exact same firmware on my SIM5360E breakout successfully.
Using a scope I verified that the two characters are dumped onto the UART channel by the SIM5360A and not by the level shifter or the ESP32.
I wanted to do a firmware upgrade on the SIM5360A, but SIMCOM only has the 'E' firmware update available on its website (I mention this in case someone has the firmware update for this version).
Any thoughts?
Thanks in advance
Bests

How to get device information from a remote Modbus service?

We need to send a message to a remote Modbus service listening on port 502 and get the device information as the response, the same way Shodan (https://www.shodan.io) does when you search for an IP address running a Modbus service. We have read the Modbus specification and tried to build a message, but when we send it to the server over TCP it never responds.
For example, the following message should do the trick but does not work for us:
002B0E0104
00: address, not used.
2B: function code for get information
0E: additional function code for get device information
01: read device ID code
04: object ID.
How should we build a correct message so that we get the device information as a response?
There's no requirement that a Modbus device actually supports function code 0x2B.
In my experience it's very uncommon.
I have found that the Modbus protocol has two modes of building messages: ASCII and RTU. I was using ASCII, which was wrong, because a Modbus service over TCP uses the RTU (binary) encoding.
Also, when it runs over TCP, the Modbus messages must not contain the address byte or the error-check byte, and I was building the messages with those bytes in them.
The third thing I was doing wrong was that when Modbus runs over TCP, its messages must start with a 7-byte header (the MBAP header), which I was not inserting.
All of this is described in:
https://scadahacker.com/library/Documents/ICS_Protocols/Acromag%20-%20Introduction%20to%20Modbus-TCP.pdf
For example, a well-formed message (represented in hexadecimal) could be:
000000000005002B0E0106
At least the server now gives me a readable response. The message must be converted from hexadecimal to raw bytes and sent as the payload of a TCP connection to port 502 on the server.
The Linux nc command lets you send raw bytes over a TCP connection, so you don't have to deal with the lower layers yourself.
My problem was that the messages I was sending to the server did not follow the Modbus/TCP framing rules.
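For example, a minimal C sketch of sending that frame over TCP (standard BSD sockets; error handling omitted and the server address is a placeholder) could be:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    //MBAP header (7 bytes): transaction ID 0x0000, protocol ID 0x0000,
    //length 0x0005, unit ID 0x00; then the PDU 2B 0E 01 06 (Read Device Identification).
    unsigned char frame[] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00,
                             0x2B, 0x0E, 0x01, 0x06};
    unsigned char resp[260];

    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(502);                       //Modbus/TCP port
    inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);  //placeholder server address

    connect(s, (struct sockaddr *)&srv, sizeof(srv));
    send(s, frame, sizeof(frame), 0);
    recv(s, resp, sizeof(resp), 0);                  //response also starts with an MBAP header
    close(s);
    return 0;
}
The same bytes can be pushed from a shell whose printf supports \x escapes, e.g. printf '\x00\x00\x00\x00\x00\x05\x00\x2b\x0e\x01\x06' | nc <server> 502.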

Building a webserver, client doesn't acknowledge HTTP 200 OK frame

I'm building my own webserver based on a tutorial.
I have found a simple way to initiate a TCP connection and send one segment of HTTP data (the webserver will run on a microcontroller, so it will be very small).
Anyway, the following is the sequence I need to go through:
1. receive SYN
2. send SYN,ACK
3. receive ACK (the connection is now established)
4. receive ACK with HTTP GET command
5. send ACK
6. send FIN,ACK with HTTP data (e.g. 200 OK)
7. receive FIN,ACK <- I don't receive this packet!
8. send ACK
Everything works fine until I send my acknowledgement and the HTTP 200 OK message. The client won't acknowledge those two packets, and thus no webpage is displayed.
I've added a pcap file of the sequence as I recorded it with Wireshark.
Pcap file: http://cl.ly/5f5/httpdump2.pcap
All sequence and acknowledgement numbers are correct, the checksums are OK, and the flags are also right.
I have no idea what is going wrong.
I think that step 6 should be just FIN, without ACK. What packet from the client are you ACKing at that point? Also, I don't see why step 4 should be an ACK instead of just a normal data packet - the client already ACKed the connection at step 3.
This diagram on TCP states might help.
Wireshark says (of the FIN packet):
Broken TCP: The acknowledge field is nonzero while the ACK flag is not set
I don't know for sure that's what's causing your problem, but if Wireshark doesn't like that packet, maybe the client doesn't either. So it should be FIN+ACK, or you should set the acknowledgement field to 0.
If that doesn't solve it, you might also try sending the data first, then a separate FIN packet. It's valid to include data with the FIN, but it's more common to send the FIN by itself (as seen in the other pcap trace you posted earlier).
Also, you should probably be setting the PUSH flag in the packet with the 200 OK
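For reference, the standard TCP flag bit values and the combination suggested above for the final data segment (an illustrative snippet, not code from the question):
//TCP header flag bits (RFC 793)
#define TCP_FIN 0x01
#define TCP_SYN 0x02
#define TCP_RST 0x04
#define TCP_PSH 0x08
#define TCP_ACK 0x10

//Final segment carrying the "200 OK": acknowledge the GET, push the data, and start the close.
unsigned char tcp_flags = TCP_FIN | TCP_ACK | TCP_PSH;   //= 0x19, with the acknowledgement number filled in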
Finally, I don't see any retransmission attempts for the FIN packet - is that because you stopped the capture right away?
The IP length field was counting 8 bits (one byte) too much; I made a mistake in my calculations. Everything works like a charm now!

Disconnect and Reconnect a connected datagram socket

I am trying to create an iterative server based on datagram sockets (UDP).
It calls connect() for the first client it gets from the first recvfrom() call (yes, I know this is not a real connection).
After having served this client, I disconnect the UDP socket (calling connect() with AF_UNSPEC).
Then I call recvfrom() to get the first packet from the next client.
Now the problem is that the recvfrom() call in the second iteration of the loop returns 0. My clients never send empty packets, so what could be going on?
This is what I am doing (pseudocode):
s = socket(PF_INET, SOCK_DGRAM, 0)
bind(s)
for(;;)
{
recvfrom(s, header, &client_address) // get first packet from client
connect(s,client_address) // connect to this client
serve_client(s);
connect(s, AF_UNSPEC); // disconnect, ready to serve next client
}
EDIT: I found the bug: my client was accidentally sending an empty packet.
Now my problem is how to make the client wait to get served instead of sending a request into nowhere (the server is connected to another client and isn't serving anyone else yet).
connect() is really completely unnecessary on SOCK_DGRAM.
Calling connect does not stop you receiving packets from other hosts, nor does it stop you sending them. Just don't bother, it's not really helpful.
CORRECTION: yes, apparently it does stop you receiving packets from other hosts. But doing this in a server is a bit silly, because any other clients would be locked out while you were connect()ed to one. Also, you'll still need to catch "chaff" packets that float around. There are probably some race conditions associated with connect() on a DGRAM socket - what happens if you call connect() and packets from other hosts are already in the buffer?
Also, 0 is a valid return value from recvfrom(), as empty (no data) packets are valid and can exist (indeed, people often use them). So you can't check whether something has succeeded that way.
In all likelihood, a zero byte packet was in the queue already.
Your protocol should be engineered to minimise the chance of an errant datagram being misinterpreted; for this reason I'd suggest you don't use empty datagrams, and use a magic number instead.
UDP applications MUST be capable of recognising "chaff" packets and dropping them; they will turn up sooner or later.
man connect:
...
If the initiating socket is not connection-mode, then connect()
shall set the socket’s peer address, and no connection is made.
For SOCK_DGRAM sockets, the peer address identifies where all
datagrams are sent on subsequent send() functions, and limits
the remote sender for subsequent recv() functions. If address
is a null address for the protocol, the socket’s peer address
shall be reset.
...
Just a correction in case anyone stumbles across this like I did: to disconnect, connect() needs to be called with a sockaddr whose sa_family member is set to AF_UNSPEC, not just passed AF_UNSPEC as an argument.
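In code, that correction looks roughly like this (a POSIX sockets sketch, not from the original post):
#include <string.h>
#include <sys/socket.h>

//Dissolve a connected UDP socket's peer association.
int udp_disconnect(int s)
{
    struct sockaddr addr;
    memset(&addr, 0, sizeof(addr));
    addr.sa_family = AF_UNSPEC;   //the family field, not the address itself, is what matters
    return connect(s, &addr, sizeof(addr));
}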

How does TCP/IP report errors?

How does TCP/IP report errors when packet delivery fails permanently? All Socket.write() APIs I've seen simply pass bytes to the underlying TCP/IP output buffer and transfer the data asynchronously. How then is TCP/IP supposed to notify the developer if packet delivery fails permanently (i.e. the destination host is no longer reachable)?
Any protocol that requires the sender to wait for confirmation from the remote end will get an error message. But what happens for protocols where a sender doesn't have to read any bytes from the destination? Does TCP/IP just fail silently? Perhaps Socket.close() will return an error? Does the TCP/IP specification say anything about this?
TCP/IP is a reliable byte stream protocol. All your bytes will get to the receiver or you'll get an error indication.
The error indication will come in the form of a closed socket. Regardless of the communication pattern (who does the sending), if the bytes can't be delivered, the socket will close.
So the question is, how do you see the socket close? If you're never reading, you'd eventually get an error trying to write to the closed socket (with an ECONNRESET errno, I think).
If you need to sleep or wait for input on another file handle, you might want to do your waiting in a select() call where you include the socket in the list of sources you're waiting on (even if you never expect to receive anything). If the select() indicates that the socket is ready for a read, you may get a -1 return from the read (with ECONNRESET, I think). An EOF (a 0 return) would indicate an orderly close (the other side did a shutdown() or close()).
How to distinguish this error close from a clean close (other program exiting, for example)? The errno values may be enough to distinguish error from orderly close.
If you want an unambiguous indication of a problem, you'll probably need to build some sort of application level protocol above the socket layer. For example, a short "ack" message sent by the receiver back to the sender. Then the violation of that higher level application protocol (sender didn't see an ack) would be a confirmation that it was an error close vs a clean close.
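As a sketch of that waiting pattern (POSIX sockets; assumes a blocking socket, with ECONNRESET as the typical error and the function name made up here):
#include <sys/select.h>
#include <sys/socket.h>

//Returns 1 on orderly close (EOF), -1 on error close (e.g. RST), 0 if data arrived.
int check_peer(int sock)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);

    if (select(sock + 1, &rfds, NULL, NULL, NULL) < 0)
        return -1;                        //select() itself failed

    char buf[512];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    if (n == 0)
        return 1;                         //EOF: peer did shutdown()/close()
    if (n < 0)
        return -1;                        //error close; inspect errno (often ECONNRESET)
    return 0;                             //unexpected data arrived; handle as needed
}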
The sockets API has no way of informing the writer exactly how many bytes have been received as acknowledged by the peer. There are no guarantees made by the presence of a successful shutdown or close either.
The TCP/IP specification says nothing about the application interface (which is nearly always the sockets API).
SCTP is an alternative to TCP which attempts to address these shortcomings, among others.
In C, when you write to a socket with send(), you get back the number of bytes that were actually sent; if this does not match the number of bytes you meant to send, you have a problem. Also, writing to a failed socket raises SIGPIPE, so before you start socket handling you need a signal handler in place that will alert you when SIGPIPE arrives.
If you are reading from a socket, you really should wrap the call with an alarm so you can time out, like "alarm(timeout_val); recv(); alarm(0)". Check the return code of recv(): 0 indicates that the connection has been closed, and a negative return indicates a read failure, in which case you need to check errno.
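A sketch of that alarm()-bounded read (POSIX; the helper name is made up, and SIGALRM is installed without SA_RESTART so that recv() is actually interrupted):
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void on_alarm(int sig) { (void)sig; }   //empty handler: its only job is to interrupt recv()

ssize_t recv_with_timeout(int sock, void *buf, size_t len, unsigned timeout_s)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_alarm;                  //no SA_RESTART, so recv() fails with EINTR on timeout
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    alarm(timeout_s);                          //arm the timeout
    ssize_t n = recv(sock, buf, len, 0);       //0 = peer closed, <0 = error (check errno, e.g. EINTR)
    alarm(0);                                  //disarm
    return n;
}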
TCP is built on top of the IP protocol, the centerpiece of the Internet, which provides much of the interoperability that drives routing, the process that determines how to get packets from their source to their destination. The IP protocol specifies that error messages should be sent back to the sender via the Internet Control Message Protocol (ICMP) when a packet fails to reach its destination. Reasons include the Time To Live (TTL) field being decremented to zero, often meaning that the packet got stuck in a routing loop, or the packet being dropped due to switch contention causing buffer overruns. As others have said, it is the responsibility of the socket API being used to relay these IP-layer errors up to the application interacting with the network at the TCP layer.
TCP/IP packets are either raw IP, UDP, or TCP. TCP requires each byte to be acked, and it will retransmit bytes that are not acked in time. Raw IP and UDP are connectionless (best effort), so any lost packets (barring some ICMP cases, though many of these get filtered for security) are silently dropped. Upper-layer protocols can add reliability, as is done with some raw OSPF packets.
