I am sending the following HTTP request:
POST /input/8dZ8bgapvjfYzmwWno6W.txt HTTP/1.1
Host: data.sparkfun.com
Phant-Private-Key: pz5ga4pkydHgpEb8v608
Connection: close
Content-Type: application/x-www-form-urlencoded
Content-Length: 7

temp=44
In my code, I send it to the XBee module as UART TX requests, which translates to the following byte stream:
POST /input/8dZ8bgapvjfYzmwWno6W.txt HTTP/1.1\r\n
Host: data.sparkfun.com\r\n
Phant-Private-Key: pz5ga4pkydHgpEb8v608\r\n
Connection: close\r\n
Content-Type: application/x-www-form-urlencoded\r\n
Content-Length: 7\r\n
\r\n
temp=44
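For illustration, here is a minimal sketch (Python, over a plain TCP socket rather than the XBee UART; host and keys are the ones from the question) that builds the same request with explicit CRLF line endings and the mandatory blank line between headers and body:

import socket

body = "temp=44"
request = (
    "POST /input/8dZ8bgapvjfYzmwWno6W.txt HTTP/1.1\r\n"
    "Host: data.sparkfun.com\r\n"
    "Phant-Private-Key: pz5ga4pkydHgpEb8v608\r\n"
    "Connection: close\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"    # blank line terminating the headers
    + body    # body carries exactly Content-Length bytes
)

with socket.create_connection(("data.sparkfun.com", 80)) as s:
    s.sendall(request.encode("ascii"))
    print(s.recv(4096).decode(errors="replace"))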
This is meant to communicate with the Phant data server at data.sparkfun.com, which responds with the following:
HTTP/1.0 400 Bad request
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>
I found the answer: the packet itself is correct.
While configuring the XBee Wi-Fi module in XCTU, I had to set the correct port numbers for the server and the client XBee, and mine were wrong.
The server port must be 80; the client port can be any value, as far as I can tell.
I'm trying to test sending HTTP requests from my Arduino. I decided to use a free RESTful web service, http://services.groupkt.com, but something goes wrong and I don't understand what.
GET request:
GET /country/get/all HTTP/1.1
Host: 45.79.172.152
Connection: keep-alive
Serial Monitor:
AT+CIPMUX=0
OK
AT+CIPSTART="TCP","45.79.172.152",80
CONNECT
OK
AT+CIPSEND=74
OK
>
busy s...
Recv 74 bytes
SEND OK
+IPD,493:HTTP/1.1 408 Request Timeout
Date: Thu, 07 Jun 2018 16:10:59 GMT
Server: Apache/2.4.25 (Debian)
Content-Length: 307
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>408 Request Timeout</title>
</head><body>
<h1>Request Timeout</h1>
<p>Server timeout waiting for the HTTP request from the client.</p>
<hr>
<address>Apache/2.4.25 (Debian) Server at services.groupkt.com Port 80</address>
</body></html>
CLOSED
What am I doing wrong?
HTTP is not like Telnet: you can't type an HTTP request line by line into the Serial Monitor.
HTTP requests are meant to be sent by a program, and the timeout for receiving the
complete request on servers is one or two seconds. Write a sketch that sends the request.
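To illustrate the difference, here is a minimal sketch of driving the module from a program instead (Python with pyserial on the host side, purely for demonstration; the serial port name and baud rate are assumptions, and an Arduino sketch would do the equivalent with Serial writes). The whole request, including the terminating blank line, goes out in one write, well within the server's timeout:

import time
import serial  # pyserial

REQUEST = (
    "GET /country/get/all HTTP/1.1\r\n"
    "Host: 45.79.172.152\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"               # terminating blank line: the request is not
).encode("ascii")        # complete without it

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)  # assumed port and baud

def at(cmd):
    """Send one AT command; crude pacing instead of parsing for OK / >."""
    ser.write(cmd + b"\r\n")
    time.sleep(1)
    return ser.read(ser.in_waiting or 1)

at(b"AT+CIPMUX=0")
at(b'AT+CIPSTART="TCP","45.79.172.152",80')
at(b"AT+CIPSEND=%d" % len(REQUEST))  # byte count covers the final \r\n\r\n
ser.write(REQUEST)                   # the complete request is sent at once
time.sleep(3)
print(ser.read(ser.in_waiting or 1).decode(errors="replace"))
ser.close()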
My client is sending:
POST /xxx/yyy HTTP/1.1
Host: localhost:9009
User-Agent: gSOAP/2.8
Content-Type: text/xml; charset=utf-8
Content-Length: 2442
Connection: keep-alive
SOAPAction: ""
But the server replies:
HTTP/1.1 200 OK
Content-Type: text/xml;charset=UTF-8
Content-Length: 11182
Server: Jetty(8.1.14.v20131031)
Isn't the server supposed to return "Connection: keep-alive" too?
I see that afterwards the client closes the connection, although it is configured to keep the connection open.
I assumed this is because the server didn't include keep-alive in the reply (is that what the RFC says?).
In my case, the reason gSOAP closed the connection wasn't related to the HTTP headers returned by the server, but to the fact that you need to set the keep-alive flag in both directions by calling:
soap_set_imode(this, SOAP_IO_KEEPALIVE);
soap_set_omode(this, SOAP_IO_KEEPALIVE);
From what I've read, persistent connections are the default in HTTP/1.1, so if the server didn't return "Connection: close", the connection can be reused for the next request too.
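To illustrate that default, a minimal sketch (Python, with example.com as a stand-in host) sends two requests over one HTTP/1.1 connection; the socket is reused as long as the server doesn't answer with Connection: close:

import http.client

conn = http.client.HTTPConnection("example.com", 80)  # stand-in host

for path in ("/", "/"):
    conn.request("GET", path)
    response = conn.getresponse()
    response.read()  # drain the body before reusing the connection
    print(response.status, response.getheader("Connection"))

conn.close()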
I am trying to call a page in PHP with http_get:
$url = "http://mysite.fr:9090/neolane-webservice/campagnesclient/Coclico=1135446";
http_get($url, $appelOptions, $appelInfos);
My problem is that it does not work every time.
I installed Wireshark to see what I'm really sending, and I found something odd: sometimes the port is not used for the HTTP request.
When it works, I have:
Hypertext Transfer Protocol
GET http://mysite.fr:9090/neolane-webservice/campagnesclient/Coclico=1135446 HTTP/1.1\r\n
Request Method: GET
Request URI: http://mysite.fr:9090/neolane-webservice/campagnesclient/Coclico=1135446
Request Version: HTTP/1.1
User-Agent: PECL::HTTP/1.6.5 (PHP/5.2.4-2ubuntu5.7)\r\n
Host: mysite.fr:9090\r\n
Pragma: no-cache\r\n
Accept: */*\r\n
Proxy-Connection: Keep-Alive\r\n
Keep-Alive: 300\r\n
Connection: keep-alive\r\n
Date: Fri, 15 Jun 2012 16:40:46 +0200\r\n
Accept-Charset: utf-8\r\n
Accept-Encoding: gzip;q=1.0,deflate;q=0.5\r\n
\r\n
And when it doesn't:
Hypertext Transfer Protocol
GET http://mysite.fr:9090/neolane-webservice/campagnesclient/Coclico=1135446 HTTP/1.1\r\n
Request Method: GET
Request URI: http://mysite.fr:9090/neolane-webservice/campagnesclient/Coclico=1135446
Request Version: HTTP/1.1
User-Agent: PECL::HTTP/1.6.5 (PHP/5.2.4-2ubuntu5.7)\r\n
Host: mysite.fr\r\n
Pragma: no-cache\r\n
Accept: */*\r\n
Proxy-Connection: Keep-Alive\r\n
Keep-Alive: 300\r\n
Connection: keep-alive\r\n
Date: Fri, 15 Jun 2012 16:40:34 +0200\r\n
Accept-Charset: utf-8\r\n
Accept-Encoding: gzip;q=1.0,deflate;q=0.5\r\n
\r\n
I tried to call the page with wget, and it always works:
wget http://mysite.fr:9090/neolane-webservice/campagnesclient/Coclico=1135446
So I'm guessing that my problem is due to the Apache config, but I don't know where to look. Could you help me, please?
You will need to set the port in the $appelOptions array.
$appelOptions['port']=9090;
http_get($url, $appelOptions, $appelInfos);
Unfortunately, http_get does not seem to respect the :port syntax in the URL.
I can upload a file to my Apache web server using Curl just fine:
echo "[$(date)] file contents." | curl -T - http://WEB-SERVER/upload/sample.put
However, if I put a Squid proxy server in between, then I am not able to:
echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 -T - http://WEB-SERVER/upload/sample.put
Curl reports the following error:
Note: This error response was in HTML format, but I've removed the tags for ease of reading.
ERROR: The requested URL could not be retrieved
While trying to retrieve the URL:
http://WEB-SERVER/upload/sample.put
The following error was encountered:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols.
For example, you can not POST a Gopher request.
Your cache administrator is root.
My squid.conf doesn't seem to have any ACL or rule that would disallow requests based on the source or destination IP address, the protocol, or the HTTP method; an HTTP POST between the same client and web server, with the same proxy in between, works just fine.
In the failing HTTP PUT case, to see the request and response traffic that was actually occurring, I placed a netcat process between Curl and Squid, and this is what I saw:
Request:
PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Host: WEB-SERVER
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Transfer-Encoding: chunked
Expect: 100-continue
Response:
HTTP/1.0 501 Not Implemented
Server: squid/2.6.STABLE21
Date: Sun, 13 May 2012 02:11:39 GMT
Content-Type: text/html
Content-Length: 1078
Expires: Sun, 13 May 2012 02:11:39 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from SQUID-PROXY-FQDN
X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
Proxy-Connection: close
<SNIPPED the HTML error response already shown earlier above>
Note: I have anonymized the IP addresses and server names throughout for readability reasons.
Thanks to Amos Jeffries for answering this on the squid-users forum. The issue is basically that Squid before version 3.1 does not implement HTTP/1.1 and thus rejects the chunked transfer encoding.
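A client-side workaround, short of upgrading Squid to 3.1 or later, is to buffer the upload so that a Content-Length can be sent instead of a chunked body; with curl this happens automatically when you upload a regular file (curl -T file.txt ...) rather than stdin. A minimal sketch of the same idea in Python, keeping the placeholder names from the question:

import urllib.request

data = b"file contents.\n"  # buffered in memory, so the size is known up front

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": "http://SQUID-PROXY:3128"})
)
request = urllib.request.Request(
    "http://WEB-SERVER/upload/sample.put", data=data, method="PUT"
)
# urllib sends Content-Length for a bytes body, so no chunked encoding is used
with opener.open(request) as response:
    print(response.status, response.reason)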
I am working on a simple download application. While requesting the following file, neither Firefox nor my application receives the Content-Length field, but if I make the request using wget, the server does send it. I changed wget's User-Agent string to "test", and it still received the Content-Length field.
Any ideas why this is happening?
wget request
---request begin---
GET /dc-13/video/2005_Defcon_V2-P_Zimmerman-Unveiling_My_Next_Big_Project.mp4 HTTP/1.0
User-Agent: test
Accept: */*
Host: media.defcon.org
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.0 200 OK
Server: lighttpd
Date: Sun, 05 Apr 2009 04:40:08 GMT
Last-Modified: Tue, 23 May 2006 22:18:19 GMT
Content-Type: video/mp4
Content-Length: 104223909
Connection: keep-alive
Firefox request
GET /dc-13/video/2005_Defcon_V2-P_Zimmerman-Unveiling_My_Next_Big_Project.mp4 HTTP/1.1
Host: media.defcon.org
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.4; en-US; rv:1.9.0.8) Gecko/2009032608 Firefox/3.0.8
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://www.defcon.org/html/links/defcon-media-archives.html
Pragma: no-cache
Cache-Control: no-cache
HTTP/1.x 200 OK
Server: lighttpd
Date: Sun, 05 Apr 2009 05:20:12 GMT
Last-Modified: Tue, 23 May 2006 22:18:19 GMT
Content-Type: video/mp4
Transfer-Encoding: chunked
Update:
Is there a header I can send that will tell lighttpd not to use chunked encoding? My original problem is that I am using URLConnection to grab the file in my Java application, which automatically sends an HTTP/1.1 request.
I would like to know the size of the file so I can update my progress percentage.
GET /dc-13/video/2005_Defcon_V2-P_Zimmerman-Unveiling_My_Next_Big_Project.mp4 HTTP/1.1
Firefox is performing an HTTP/1.1 GET request. lighttpd understands that the client supports chunked transfer encoding and returns the content in chunks, with each chunk reporting its own length.
GET /dc-13/video/2005_Defcon_V2-P_Zimmerman-Unveiling_My_Next_Big_Project.mp4 HTTP/1.0
Wget, on the other hand, performs an HTTP/1.0 GET request. lighttpd, understanding that the client doesn't support HTTP/1.1 (and thus chunked transfer encoding), returns the content in a single piece, with its length reported in the response header.
Looks like it's because of the chunked transfer encoding:
Transfer-Encoding: chunked
This sends the video down in chunks, each with its own size. Chunked encoding is defined in HTTP/1.1, which is what Firefox is using, while wget uses HTTP/1.0, which doesn't support it, so the server has to send the whole file at once.
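For reference, a chunked body looks like this on the wire (an invented payload, shown purely for illustration): each chunk is prefixed with its size in hexadecimal, and a zero-length chunk marks the end, which is why no up-front Content-Length is needed:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

1a
abcdefghijklmnopqrstuvwxyz
10
1234567890abcdef
0
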
I was having the same problem and found a solution that works regardless of the HTTP version:
First send a HEAD request, to which the server responds with just the HTTP headers and no content. These headers include the wanted Content-Length (in bytes) for the file to download.
Then proceed with the GET request to download the file (in this scenario, the headers of the GET response fail to include Content-Length).
An example in Objective-C:
NSString *zipURL = @"http://1.bp.blogspot.com/_6-cw84gcURw/TRNb3PDWneI/AAAAAAAAAYM/YFCZP1foTiM/s1600/paragliding1.jpg";
NSURL *url = [NSURL URLWithString:zipURL];
// Configure the HTTP request for the HEAD header fetch
NSMutableURLRequest *urlRequest = [NSMutableURLRequest requestWithURL:url];
urlRequest.HTTPMethod = @"HEAD"; // Default is "GET"
// Response object, filled in by the synchronous request
__autoreleasing NSHTTPURLResponse *response;
// Send HEAD request to the server
NSData *contentsData = [NSURLConnection sendSynchronousRequest:urlRequest returningResponse:&response error:nil];
// Header response fields
NSDictionary *headerDeserialized = response.allHeaderFields;
// The contents length
int contents_length = [(NSString *)headerDeserialized[@"Content-Length"] intValue];
//printf("HEAD Response header: %s\n", headerDeserialized.description.UTF8String);
printf("HEAD:\ncontentsData.length: %lu\n", (unsigned long)contentsData.length);
printf("contents_length = %d\n\n", contents_length);
urlRequest.HTTPMethod = @"GET";
// Send "GET" to download the file
contentsData = [NSURLConnection sendSynchronousRequest:urlRequest returningResponse:&response error:nil];
// Header response fields
headerDeserialized = response.allHeaderFields;
// The contents length
contents_length = [(NSString *)headerDeserialized[@"Content-Length"] intValue];
printf("GET Response header: %s\n", headerDeserialized.description.UTF8String);
printf("GET:\ncontentsData.length: %lu\n", (unsigned long)contentsData.length);
printf("contents_length = %d\n", contents_length);
return;
And the output:
HEAD:
contentsData.length: 0
contents_length = 146216
GET:
contentsData.length: 146216
contents_length = 146216
(Note: this example URL does correctly provide the Content-Length header in the GET response, but it illustrates the idea for cases where it is missing.)