I am working on a project which sends GPS longitude and latitude to a server using the HTTP POST method. I use the GPRS of the SIM908 module and AT commands to communicate with this module.
Here are the commands related to Http Post:
AT+HTTPPARA="URL","http://'server'/'path':'tcpPort'"
AT+HTTPACTION=1
AT+HTTPDATA= 'size','time'
The first command is used to set http parameters:
'server' = FQDN or IP address
'path' = path of file or directory
'tcpPort' = default is 80
The second command tells the module whether to use the GET or POST method, which is POST here.
The third one is used to receive the server response:
'size' = number of characters to read
'time' = set enough time to input all data with length of 'size'
I know how to send data using the GET method: I put a string like "?var1=value1&var2=value2" at the end of the URL. Here is an example: "http://www.example.com/test/getdata.php?TI=12.1&TO=22.2&TR=33.3"
But how does the POST method work? Could anyone help me, please?
Thanks a lot.
[SOLVED]:
The SIM908 does actually support POST, even though it is poorly documented:
AT+HTTPINIT
AT+HTTPPARA="CID",1
AT+HTTPPARA="URL","http://108.167.133.20/.../index.php"
AT+HTTPDATA=<size>,<time>   (NB: this is the step that lets you supply the POST body)
Wait for the DOWNLOAD response, then send the data (the number of bytes sent needs to match <size> exactly)
AT+HTTPACTION=1
AT+HTTPREAD=1,100000
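For reference, a complete POST session might look like the sketch below. The URL, content type, byte count, and payload are placeholders, and the interleaved module responses (DOWNLOAD, OK, +HTTPACTION) are shown roughly as they appear on SIMCom firmware, so the exact formatting may differ on your module:
AT+HTTPINIT
AT+HTTPPARA="CID",1
AT+HTTPPARA="URL","http://www.example.com/test/postdata.php"
AT+HTTPPARA="CONTENT","application/x-www-form-urlencoded"
AT+HTTPDATA=23,10000
DOWNLOAD                  (module prompt: now send exactly 23 bytes within 10000 ms)
TI=12.1&TO=22.2&TR=33.3
OK
AT+HTTPACTION=1           (1 = POST)
+HTTPACTION: 1,200,5      (method, HTTP status code, length of the response body)
AT+HTTPREAD               (print the server response)
AT+HTTPTERM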
I'm trying to download a firmware.bin file that is produced in a private GitHub repository. I have the code that finds the right asset URL to download the file, and per GitHub's instructions the accept header needs to be set to accept: application/octet-stream in order to get the binary file. I'm only getting JSON in response. If I run the same request through Postman I get a binary file as the body. I've tried downloading it using HTTPClient and I get the same JSON response. It seems the headers aren't being set as requested to tell GitHub to send the binary content, as I'm just getting JSON.
As for the ArduinoOTA abstraction, I can't see how to even try to set headers, and in digging into the esp_https_ota functions and http_client functions there doesn't appear to be a way to set headers for any of these higher-level abstractions, because the http_config object has no place for headers as far as I can tell. I might file a feature request to allow for this, but am new to this programming area and want to check whether I'm missing something first.
The code returns JSON, not binary. The URL is the GitHub REST API URL for the asset (it works in Postman):
HTTPClient http2;
http2.setAuthorization(githubname,githubpass);
http2.addHeader("Authorization","token MYTOKEN");
http2.addHeader("accept","application/octet-stream");
http2.begin( firmwareURL, GHAPI_CERT); //Specify the URL and certificate
With the ESP-IDF HTTP client you can add headers to an initialized HTTP client using the function esp_http_client_set_header():
esp_http_client_handle_t client = esp_http_client_init(&config);
esp_http_client_set_header(client, "HeaderKey", "HeaderValue");
esp_err_t err = esp_http_client_perform(client);
If using the HTTPS OTA API, you can register a callback which gives you a handle to the underlying HTTP client. You can then do exactly the same as in the example above.
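A minimal sketch of that callback approach, assuming a recent ESP-IDF (v4.3 or later, where esp_https_ota_config_t exposes the http_client_init_cb field); the URL, token, and GHAPI_CERT certificate are placeholders carried over from the question:
#include "esp_http_client.h"
#include "esp_https_ota.h"

extern const char *GHAPI_CERT;  // the GitHub API root certificate from the question

// Called once after the OTA HTTP client is initialized; add the extra headers here.
static esp_err_t set_ota_headers(esp_http_client_handle_t client)
{
    esp_http_client_set_header(client, "Accept", "application/octet-stream");
    esp_http_client_set_header(client, "Authorization", "token MYTOKEN");
    return ESP_OK;
}

static void do_firmware_update(const char *firmware_url)
{
    esp_http_client_config_t http_config = {
        .url = firmware_url,
        .cert_pem = GHAPI_CERT,
    };
    esp_https_ota_config_t ota_config = {
        .http_config = &http_config,
        .http_client_init_cb = set_ota_headers,  // headers are set through this callback
    };
    esp_err_t err = esp_https_ota(&ota_config);
    if (err != ESP_OK) {
        // handle the failed update here
    }
}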
I want to do the following using an NGINX module:
Nginx receives a request and checks if it has the key to decode the request in the (custom) cache
if YES, then decode request, obtain an ID from it and check if there is a value against this ID in a key-value store (asynchronously) and return it in the response
if NO, then get the new key from the key-value store (asynchronously) and then store this key in the cache and use it to decode the request. Obtain the ID and check if there is a value against this ID in the key-value store(asynchronously) and send it in the response.
I was able to figure out how to do step 1: I wrote an upstream module by referring to OpenResty's nginx module on GitHub. To achieve the step 2 functionality, I tried creating a new upstream request in the process_header() function of the first upstream call (i.e. getting the key from the store), but this didn't work. How can I achieve this?
Thanks in advance.
I see 2 approaches:
You may do it all in Lua using lua-nginx-module and the lua-resty-redis library; a rough sketch follows after this list. Here you may find some info: Configure-nginx-to-get-url-from-redis-with-key-and-proxy-the-url-to-other-server
Write an nginx C module, use redis2-nginx-module as the upstream module, and send a subrequest. Take a look at my answer to Subrequests are not sent or the request hangs; it shows how to send subrequests.
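For the first approach, here is a rough Lua sketch (for a content_by_lua_file handler), assuming lua-resty-redis, a lua_shared_dict named key_cache declared in nginx.conf, and a hypothetical decode_request() helper standing in for your own decoding logic:
local redis = require "resty.redis"
local cache = ngx.shared.key_cache              -- lua_shared_dict key_cache 10m;

local red = redis:new()
red:set_timeout(1000)                           -- 1 s connect/read timeout
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

-- 1. Is the decode key already in the local cache?
local key = cache:get("decode_key")
if not key then
    -- 2. No: fetch it from the key-value store (non-blocking in OpenResty) and cache it.
    key = red:get("decode_key")
    if key == ngx.null then
        return ngx.exit(ngx.HTTP_FORBIDDEN)     -- no key available in the store
    end
    cache:set("decode_key", key, 60)            -- cache it for 60 seconds
end

-- Decode the request with the key and extract the ID (your own logic).
local id = decode_request(ngx.var.request_uri, key)

-- 3. Look up the value stored against that ID and return it in the response.
local value = red:get("id:" .. id)
ngx.say(value)                                  -- prints "null" if the ID was not found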
Nginx with its Lua extension makes it very easy to examine and manipulate a request prior to having it dispatched as appropriate. In my current project I am trapping all requests that arrive at a specified folder location, examining the request and then having it executed by an entirely different script. An outline of what I do:
/etc/nginx/sites-available/default Configuration
location /myfolder {
    rewrite_by_lua_file "/path/to/rewrite.lua";
    lua_need_request_body "on";
}
In rewrite.lua
ngx.req.set_header('special','my_special-header');
local data = ngx.req.get_body_data();
ngx.req.set_body_data(data);
and finally redirecting to another location:
ngx.req.set_uri("/myother/index.php",true);
With simple GET requests, or POST requests with one or two items of attached POST data, this works well. The issue I have been unable to resolve is this: say, for instance, I am sending multipart form data in my original request.
ngx.req.get_body_data()
actually gets the raw request body. If I forward this to /myother/index.php I can retrieve it with file_get_contents('php://input'). This is OK but not good enough: I don't want to have to deal with the raw input here. I would rather be able to work with the standard PHP $_POST and $_FILES variables. However, those are empty, and their contents are present in the body as a text string.
Is there a way to tell Nginx that when I subject a user request to some treatment by Lua prior to forwarding it to another URL, it should just pass on the POST/PUT request fields as well as the whole of the original $_FILES array?
Maybe you guys can help me with this. I am trying to implement reCAPTCHA in my Node.js application and no matter what I do, I keep getting "invalid-site-private-key" as a response.
Here are the things I double-checked and tried:
Correct Keys
Keys are not swapped
Keys are "global keys" as I am testing on localhost and thought it might be an issue with that
Tested in production environment on the server - same problem
The last thing I can think of is that my POST request to the reCAPTCHA API itself is incorrect, as the concrete format of the body is not explicitly documented (the parameters are documented, I know). So this is the request body I am currently sending (the key and IP are changed, but I checked them on my side):
privatekey=6LcHN8gSAABAAEt_gKsSwfuSfsam9ebhPJa8w_EV&remoteip=10.92.165.132&challenge=03AHJ_Vuu85MroKzagMlXq_trMemw4hKSP648MOf1JCua9W-5R968i2pPjE0jjDGXTYmWNjaqUXTGJOyMO3IKKOGtkeg_Xnn2UVAfoXHVQ-0VCHYPNwrj3PQgGj22EFv7RGSsuNfJCynmwTO8TnwZZMRjHFrsglar2zQ&response=Coleshill areacce
Is there something wrong with this format? Do I have to send special headers? Am I completely wrong? (I have been working for 16 hours straight now, so this might be it...)
Thank you for your help!
As stated in the comments above, I was able to solve the problem myself with the help of broofa and the node-recaptcha module available at https://github.com/mirhampt/node-recaptcha.
But first, to complete the missing details from above:
I didn't use any module; my solution is completely self-written, based on the documentation available on the reCAPTCHA website.
I didn't send any request headers, as nothing was stated about them in the documentation. Everything that is said about the request before they explain the necessary parameters is the following:
"After your page is successfully displaying reCAPTCHA, you need to configure your form to check whether the answers entered by the users are correct. This is achieved by doing a POST request to http://www.google.com/recaptcha/api/verify. Below are the relevant parameters."
-- "How to Check the User's Answer" at http://code.google.com/apis/recaptcha/docs/verify.html
So I built a query string myself (which is a one-liner, though there is a module for that as well, as I have now learned) containing all parameters and sent it to the reCAPTCHA API endpoint. All I received was the error code invalid-site-private-key, which (as we know by now) is really a misleading way of sending a 400 Bad Request. Maybe they should think about implementing that; then people would not wonder what is wrong with their keys.
These are the header parameters which are obviously necessary (they imply you're sending a form):
Content-Length, which has to be the length of the query string
Content-Type, which has to be application/x-www-form-urlencoded
Another thing I learned from the node-recaptcha module is that one should send the query string UTF-8 encoded.
My solution now looks like this; you may use it or build on it, but error handling is not implemented yet. It's written in CoffeeScript.
http = require 'http'

module.exports.check = (remoteip, challenge, response, callback) ->
  privatekey = 'placeyourprivatekeyhere'
  request_body = "privatekey=#{privatekey}&remoteip=#{remoteip}&challenge=#{challenge}&response=#{response}"
  response_body = ''
  options =
    host: 'www.google.com'
    port: 80
    method: 'POST'
    path: '/recaptcha/api/verify'
  req = http.request options, (res) ->
    res.setEncoding 'utf8'
    res.on 'data', (chunk) ->
      response_body += chunk
    res.on 'end', () ->
      callback response_body.substring(0, 4) == 'true'
  req.setHeader 'Content-Length', request_body.length
  req.setHeader 'Content-Type', 'application/x-www-form-urlencoded'
  req.write request_body, 'utf8'
  req.end()
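A hypothetical usage from an Express-style route could look like the following (assuming a body parser is configured; the route, module path, and field names are only illustrative):
recaptcha = require './recaptcha'   # the module above

app.post '/verify', (req, res) ->
  recaptcha.check req.ip, req.body.recaptcha_challenge_field, req.body.recaptcha_response_field, (success) ->
    if success
      res.send 'captcha ok'
    else
      res.send 'captcha failed'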
Thank you :)
+1 to @florian for the very helpful answer. For posterity, I thought I'd provide some information about how to verify what your captcha request looks like, to help you make sure that the appropriate headers and parameters are being specified.
If you are on a Mac or a Linux machine, or have access to one locally, you can use the netcat command to set up a quick server. I guess there are netcat Windows ports, but I have no experience with them.
nc -l 8100
This command creates a TCP socket listening on port 8100 and waits for a connection. You can then change the captcha verify URL from http://www.google.com/recaptcha/... in your server code to http://localhost:8100/. When your code makes the POST to the verify URL, you should see your request printed to the screen by netcat:
POST / HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 277
Host: localhost:8100
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1 (java 1.5)
privatekey=XXX&remoteip=127.0.0.1&challenge=03AHJYYY...&response=some+words
Using this, I was able to see that my private-key was corrupted.
I am setting up a back-end API in a script of mine that contacts one of my sites by sending XML to my web server in the form of POST data. This script will be used by many people, and I want to limit the bandwidth wasted by people who accidentally turn the feature on without a proper access key.
I will be denying requests that do not have the correct access key, perhaps by generating a 403 status code.
Let's say the POST data is ~500 KB. Does the server receive all 500 KB when such an attempt is made, regardless of the status code?
What if I made the URL contain the key (mydomain/api/123456789) and generated a 403 status for all bad access keys?
Does the POST data still get sent/received regardless, or is it negotiated before the data is finally sent?
Thanks in advance!
Generally speaking, the entire request will be sent, including the POST data. There is often no way for the application layer to return a response like a 403 until it has received the entire request.
In reality, it will depend on the language/framework used and how closely it is linked to the HTTP server. Section 8.2.2 of RFC 2616 (the HTTP/1.1 specification) has this to say:
An HTTP/1.1 (or later) client sending a message-body SHOULD monitor the network connection for an error status while it is transmitting the request. If the client sees an error status, it SHOULD immediately cease transmitting the body. If the body is being sent using a "chunked" encoding (section 3.6), a zero length chunk and empty trailer MAY be used to prematurely mark the end of the message. If the body was preceded by a Content-Length header, the client MUST close the connection.
So, if you can find a language environment closely linked with the HTTP server (for example, mod_perl), you could do this in a way which complies with the standard.
An alternative approach you could take is to make an initial, smaller request to obtain a URL to use for the larger POST. The application can then deny providing the URL to clients without an appropriate key.
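To make that alternative concrete, here is a rough client-side sketch using Python's requests library; the endpoint paths, field names, and response shape are invented for illustration only:
import requests

API = "https://mydomain/api"

# Step 1: a small request that carries only the access key.
r = requests.post(API + "/request-upload", data={"key": "123456789"})
if r.status_code == 403:
    raise SystemExit("bad access key -- the large XML body is never sent")

# The server hands back a short-lived URL to use for the real upload.
upload_url = r.json()["upload_url"]

# Step 2: only now is the ~500 KB XML payload transmitted.
with open("payload.xml", "rb") as f:
    requests.post(upload_url, data=f, headers={"Content-Type": "application/xml"})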
Here is a great book about RESTful Web Services, where it's explained how HTTP works: http://oreilly.com/catalog/9780596529260
You can think of any request as an envelope: on the outside are written the address (URL) and some properties (HTTP headers), and inside there is some data (if the request was initiated by the POST method). So, as you might guess, you can't receive an envelope partially.
Oh, I forgot: that is the case when you are using HTTP POST with the standard "application/x-www-form-urlencoded" content type, but if you are uploading files (i.e. using "multipart/form-data"), Django gives you control over the streamed chunks of files using middleware classes: http://docs.djangoproject.com/en/dev/topics/http/middleware/