I have a program that activates a chip used for racing results (it's just a piece of hardware).
Using Fiddler (a traffic-sniffing program), I watched the incoming and outgoing traffic on my PC when I connected the chip to my computer.
The program sends the following HTTP Request:
POST http://example.com/index.php HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Content-Length: 185
Content-Type: application/x-www-form-urlencoded
Host: example.com
Pragma: no-cache
User-Agent: SomeProgram 1.2.3
Data==%0D%0AAjlFNEEw-SOMELONGSECRETKEY-RGAw%3D%3D%0D%0A
I receive the following response:
<?xml version="1.0" encoding="utf-8"?>
<message type="3" result="1" txid="someid" activationdate="" availablecredits="732" firstname="John" lastname="Doe" email="JohnDoe#outlook.com" phonenumber="00123445" notification_email="1" notification_text="1"/>
Is it possible to edit the response so that when the program checks the availablecredits variable, it gets the value 9999 instead of 732?
I'm working on a Windows 8 laptop.
Definitely: Fiddler allows you to modify requests and responses by adding rules in FiddlerScript. Citing the Fiddler documentation:
To make custom changes to web requests and responses, use
FiddlerScript to add rules to Fiddler's OnBeforeRequest or
OnBeforeResponse function. Which function is appropriate depends on
the objects your code uses: OnBeforeRequest is called before each
request, and OnBeforeResponse is called before each response.
So all you have to do is add, inside OnBeforeResponse, the logic that replaces the availablecredits attribute value with any value you desire.
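For example, a minimal sketch of that logic, pasted inside the existing OnBeforeResponse function (Rules > Customize Rules...); the hostname and URL fragment are taken from the request above and should be adjusted to the real values:
// inside OnBeforeResponse(oSession: Session)
if (oSession.HostnameIs("example.com") && oSession.uriContains("index.php")) {
    oSession.utilDecodeResponse();  // undo gzip/deflate so the body is editable as text
    var body = oSession.GetResponseBodyAsString();
    // swap whatever credit count the server sent for 9999
    body = System.Text.RegularExpressions.Regex.Replace(
        body, 'availablecredits="\\d+"', 'availablecredits="9999"');
    oSession.utilSetResponseBody(body);
}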
Related
I found that a GET message header looks like this:
:method: GET
:scheme: https
:authority: server.net
:path: /config
accept: */*
accept-encoding: gzip,deflate
What should a CONNECT message header look like?
This example is from the HTTP/2 RFC (RFC 7540), showing the same request in HTTP/1.1 form and as an HTTP/2 HEADERS frame:
GET /resource HTTP/1.1           HEADERS
Host: example.org          ==>     + END_STREAM
Accept: image/jpeg                 + END_HEADERS
                                     :method = GET
                                     :scheme = https
                                     :path = /resource
                                     host = example.org
                                     accept = image/jpeg
I want to know the equivalent of the CONNECT request in HTTP/2.
In HTTP/1.1 it is:
CONNECT example.org:443 HTTP/1.1
Host: example.org:443
The format of the CONNECT method in HTTP/2 is specified in RFC 7540, section 8.3.
With the formatting you used above, it looks like this:
:method: CONNECT
:authority: proxy.net:8080
As specified, :scheme and :path must be omitted.
The HTTP/2 CONNECT method can also be used to bootstrap other protocols (see, for example, WebSocket over HTTP/2), in which case the :protocol pseudo-header may also be present.
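For reference, RFC 8441's WebSocket bootstrap looks like this in the same notation (server.example.com and /chat are the RFC's example values); note that when :protocol is present, :scheme and :path must be included again:
:method: CONNECT
:protocol: websocket
:scheme: https
:path: /chat
:authority: server.example.com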
Remember, however, that this is only a textual representation of HTTP/2; the bytes that actually travel over the network are different, since you must encode them using HPACK.
Unless you are actually writing an HTTP/2 implementation, it is better to use existing libraries (available in virtually any programming language) to send HTTP/2 requests of any kind: the libraries will take care of converting your CONNECT request into the proper bytes to send over the network.
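For instance, curl will issue the CONNECT for you when asked to tunnel through a proxy (the proxy address below is the placeholder from above); with a plain http:// proxy the CONNECT goes out as HTTP/1.1, while a client speaking HTTP/2 to the proxy would encode the equivalent pseudo-headers shown earlier:
# curl sends CONNECT to the proxy, then tunnels the TLS connection through it
curl --proxytunnel -x http://proxy.net:8080 https://example.org/resource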
I need to interact with a remote HTTP server at the lowest possible level (i.e., at socket level), because my target is a very small embedded system with no support for higher-level libraries (it's a bare-metal microcontroller with no OS at all, talking to a GSM modem over a serial line; the modem has some support for sockets, but nothing above that).
Basic need is to upload a "file" using POST.
I have all the needed headers and body in place, and it "usually works".
The problem is that I randomly get an "HTTP/1.1 502 Bad Gateway" response, and this becomes more likely as the size of the "file" increases.
I understand this means there's some problem between the reverse-proxy frontend (nginx, apparently) and the backends, but I have absolutely no control over those (I don't really know the actual setup beyond what can be gleaned from (light) probing).
My current strategy is to open a plain socket and send the following sequence (dots represent binary data):
POST /path/to/websend.php HTTP/1.0
Host: host.domain.tld
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:33.0) Gecko/20100101 Firefox/33.0
Connection: Keep-Alive
Proxy-Connection: Keep-Alive
Content-Type: multipart/form-data; boundary=AaB03x
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Accept: */*
Content-Length: <full_length>
--AaB03x
Content-Disposition: form-data; name="IV"
Content-Type: application/data
Content-Transfer-Encoding: binary
000102030405060708090A0B0C0D0E0F
--AaB03x
Content-Disposition: form-data; name="S_TXT_FILE"; filename="FILENAME_s.txt"
Content-Type: application/data
Content-Transfer-Encoding: binary
..............................................................
..............................................................
...... several 512byte blocks ................................
..............................................................
..............................................................
--AaB03x--
Is there something I could do to enhance reliability?
I already do multiple retries, and this actually works, but sometimes I need to retry six or more times to get a positive answer (200 OK).
Note that I send exactly the same sequence on each retry, and it succeeds... eventually.
I need to send two parts because the content is encrypted and the first part is the needed initialization vector (IV).
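For reference, a minimal C sketch of one way to rule out framing errors: assemble the whole multipart body in RAM first, so the Content-Length header is exactly the number of bytes sent after the header block. Here send_all() (a loop over the modem's socket-write primitive) and the buffer sizing are assumptions, and note that a blank CRLF line separates each part's headers from its data:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BOUNDARY "AaB03x"

/* Build the multipart body into buf (which must be sized generously;
   error checks omitted). The return value is the exact Content-Length. */
static size_t build_multipart(char *buf, size_t cap,
                              const uint8_t *iv, size_t iv_len,
                              const uint8_t *file, size_t file_len)
{
    size_t n = 0;
    n += snprintf(buf + n, cap - n,
        "--" BOUNDARY "\r\n"
        "Content-Disposition: form-data; name=\"IV\"\r\n"
        "Content-Type: application/data\r\n"
        "\r\n");                      /* blank line before the part data */
    memcpy(buf + n, iv, iv_len);  n += iv_len;
    n += snprintf(buf + n, cap - n,
        "\r\n--" BOUNDARY "\r\n"
        "Content-Disposition: form-data; name=\"S_TXT_FILE\"; "
        "filename=\"FILENAME_s.txt\"\r\n"
        "Content-Type: application/data\r\n"
        "\r\n");
    memcpy(buf + n, file, file_len);  n += file_len;
    n += snprintf(buf + n, cap - n, "\r\n--" BOUNDARY "--\r\n");
    return n;                         /* == Content-Length */
}
If the declared Content-Length does not match the bytes that actually arrive at nginx, sporadic 502s on the proxy-to-backend leg are a plausible symptom, and it would also fit larger uploads failing more often.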
Very strange issue. I am trying to connect to an API (inside my organization) that first requires POSTing a key and code to some URL, and then using the returned cookie to get the desired data (JSON).
Running the POST request returns status 200, which is good, but no cookie is returned.
Running the same request in Firefox using "httprequester" returns a cookie as expected and works fine.
library(httr)

url <- "https://some_url"
login <- list(
  Key = "some_key",
  Code = "some_code"
)
try_temp <- POST(url = url, body = login, encode = "form", verbose())
Result is:
-> POST /api/Service/Login HTTP/1.1
-> Host: **************
-> User-Agent: libcurl/7.53.1 r-curl/2.5 httr/1.2.1
-> Accept-Encoding: gzip, deflate
-> Accept: application/json, text/xml, application/xml, */*
-> Content-Type: application/x-www-form-urlencoded
-> Content-Length: 43
->
>> Key=*****&Code=*******
<- HTTP/1.1 200 OK
<- Content-Type: text/html; charset="utf-8"
<- Content-Length: 6908
<- Connection: Close
<-
The thing is, the same request works when done in a browser.
As for the GET request: once I know the cookie, I use GET in httr, passing the cookie I've got. I get the same log as above.
By the way, when I use BROWSE instead of GET, R opens the default browser and I see the expected data returned.
I suspect that some of R's settings are not the same as Firefox's (or any other browser's). We don't use a fixed proxy but rather an automatic configuration script.
Tnx
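For what it's worth, a small httr sketch for checking whether the cookie was captured at the curl level even though the verbose log doesn't show a Set-Cookie header (the /api/Service/Data path and the User-Agent string are made-up placeholders):
library(httr)

try_temp <- POST(url = url, body = login, encode = "form", verbose())
cookies(try_temp)   # lists any cookies the underlying curl handle stored

# httr keeps one curl handle per host, so cookies set by the POST are
# replayed automatically on a later request to the same host:
res <- GET("https://some_url/api/Service/Data")

# if the server discriminates on User-Agent, mimicking the browser is easy to try:
res <- GET("https://some_url/api/Service/Data", user_agent("Mozilla/5.0"))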
I am trying to send a file over HTTP from a C++ application (no HTML forms). The server keeps answering with code 400 / Bad Request.
To keep it simple, I have manually changed the content of the file to a simple string (later on I will need to upload real binary files).
The POST request is the following:
POST /post.php HTTP/1.0
Host: posttestserver.com
Accept: */*
Content-Type: multipart/form-data; boundary=BOUNDARY
--BOUNDARY
Content-Disposition: form-data; name="userfile"; filename="example.txt"
Content-Type:text/plain
123ABC
--BOUNDARY--
Connection: close
Any idea what is going on?
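For comparison, a well-formed version of that request would keep Connection: close in the header block, declare a Content-Length, and separate headers from body (and part headers from part data) with blank lines; the length value below is a placeholder and must be the exact byte count of everything after the header block, CRLFs included:
POST /post.php HTTP/1.0
Host: posttestserver.com
Accept: */*
Connection: close
Content-Type: multipart/form-data; boundary=BOUNDARY
Content-Length: <exact body length>

--BOUNDARY
Content-Disposition: form-data; name="userfile"; filename="example.txt"
Content-Type: text/plain

123ABC
--BOUNDARY--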
I can upload a file to my Apache web server using Curl just fine:
echo "[$(date)] file contents." | curl -T - http://WEB-SERVER/upload/sample.put
However, if I put a Squid proxy server in between, then I am not able to:
echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 -T - http://WEB-SERVER/upload/sample.put
Curl reports the following error:
Note: This error response was in HTML format, but I've removed the tags for ease of reading.
ERROR: The requested URL could not be retrieved
While trying to retrieve the URL:
http://WEB-SERVER/upload/sample.put
The following error was encountered:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols.
For example, you can not POST a Gopher request.
Your cache administrator is root.
My squid.conf doesn't seem to have any ACL/rule that would disallow requests based on the source or destination IP address, the protocol, or the HTTP method; I can do an HTTP POST just fine between the same client and web server, with the same proxy sitting in between.
For the failing HTTP PUT case, to see the request and response traffic that was actually occurring, I placed a netcat process between curl and Squid, and this is what I saw:
Request:
PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Host: WEB-SERVER
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Transfer-Encoding: chunked
Expect: 100-continue
Response:
HTTP/1.0 501 Not Implemented
Server: squid/2.6.STABLE21
Date: Sun, 13 May 2012 02:11:39 GMT
Content-Type: text/html
Content-Length: 1078
Expires: Sun, 13 May 2012 02:11:39 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from SQUID-PROXY-FQDN
X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
Proxy-Connection: close
<SNIPPED the HTML error response already shown earlier above>
Note: I have anonymized the IP addresses and server names throughout for readability reasons.
Thanks to Amos Jeffries for answering this on the squid-users forum. The issue is basically that Squid before version 3.1 does not implement HTTP/1.1 and thus rejects the chunked transfer encoding.
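A workaround consistent with that answer, if upgrading Squid isn't an option: let curl know the upload size up front, e.g. by writing the stream to a file first, since curl then sends Content-Length instead of Transfer-Encoding: chunked:
# a regular file has a known size, so curl uses Content-Length rather
# than chunked transfer encoding, which a pre-3.1 Squid accepts
echo "[$(date)] file contents." > /tmp/sample.put
curl -x http://SQUID-PROXY:3128 -T /tmp/sample.put http://WEB-SERVER/upload/sample.put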