I need to interact with a remote HTTP server at the lowest possible level (i.e. at socket level) because my target is a very small embedded system with no support for higher-level libraries (it's a bare-metal uController with no O.S. at all, talking to a GSM modem over a serial line; the modem has some support for sockets, but nothing above that).
The basic need is to upload a "file" using POST.
I have all the needed headers and body in place, and it "usually works".
The problem is that I randomly get an "HTTP/1.1 502 Bad Gateway" response, and this becomes more likely as the size of the "file" increases.
I understand this means there is some problem between the reverse-proxy frontend (nginx, apparently) and the backends, but I have absolutely no control over those (actually I don't really know the actual setup beyond what can be gleaned from (light) probing).
My current strategy is to open a plain socket and send the following sequence (dots represent binary data):
POST /path/to/websend.php HTTP/1.0
Host: host.domain.tld
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:33.0) Gecko/20100101 Firefox/33.0
Connection: Keep-Alive
Proxy-Connection: Keep-Alive
Content-Type: multipart/form-data; boundary=AaB03x
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Accept: */*
Content-Length: <full_length>

--AaB03x
Content-Disposition: form-data; name="IV"
Content-Type: application/data
Content-Transfer-Encoding: binary

000102030405060708090A0B0C0D0E0F
--AaB03x
Content-Disposition: form-data; name="S_TXT_FILE"; filename="FILENAME_s.txt"
Content-Type: application/data
Content-Transfer-Encoding: binary

..............................................................
..............................................................
...... several 512-byte blocks ...............................
..............................................................
..............................................................
--AaB03x--
Is there something I could do to enhance reliability?
I already do multiple retries, and this actually works, but sometimes I need to retry six or more times to get a positive answer (200 OK).
Note that I send exactly the same sequence on retry and it succeeds... eventually.
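Since identical retries do eventually succeed, spacing the retries out usually helps more than re-sending immediately. A minimal sketch of a bounded exponential backoff, assuming hypothetical send_post() and delay_ms() hooks for the modem transport (both names are mine):

#include <stdbool.h>

/* Hypothetical hooks (names are mine): send_post() transmits the full,
 * identical request and returns the HTTP status code, or -1 on a
 * socket/modem failure; delay_ms() waits on the bare-metal target. */
extern int send_post(const unsigned char *req, unsigned req_len);
extern void delay_ms(unsigned ms);

/* Bounded retry with exponential backoff: gives the proxy/backend time
 * to recover instead of re-sending immediately. */
bool post_with_retry(const unsigned char *req, unsigned req_len)
{
    unsigned wait_ms = 1000;                 /* start at 1 s */
    for (int attempt = 0; attempt < 8; attempt++) {
        int status = send_post(req, req_len);
        if (status == 200)
            return true;                     /* 200 OK */
        if (status != 502 && status != -1)
            return false;                    /* not a transient error: give up */
        delay_ms(wait_ms);
        if (wait_ms < 32000)
            wait_ms *= 2;                    /* 1 s, 2 s, 4 s, ... capped */
    }
    return false;
}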
I need to send two parts because the content is encrypted and the first part is the needed "Initialization Vector".
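Since the failures correlate with body size, one thing worth double-checking is the <full_length> arithmetic: the value must count every byte after the blank line that ends the headers, including the part headers, every CRLF, and the closing boundary. A minimal C sketch of that accounting, assuming CRLF line endings and the boundary AaB03x (the function name is mine):

#include <string.h>

/* Sketch (not the author's code): computes the <full_length> value for
 * the two-part body above. Every line of the body counts, and each
 * line ends in CRLF (2 bytes). */
static size_t multipart_length(size_t iv_len, size_t file_len,
                               const char *filename)
{
    const char boundary[] = "AaB03x";
    size_t n = 0;

    /* part 1: IV */
    n += 2 + strlen(boundary) + 2;                        /* "--AaB03x" CRLF */
    n += strlen("Content-Disposition: form-data; name=\"IV\"") + 2;
    n += strlen("Content-Type: application/data") + 2;
    n += strlen("Content-Transfer-Encoding: binary") + 2;
    n += 2;                                               /* blank line */
    n += iv_len + 2;                                      /* IV bytes CRLF */

    /* part 2: encrypted file */
    n += 2 + strlen(boundary) + 2;
    n += strlen("Content-Disposition: form-data; name=\"S_TXT_FILE\"; filename=\"\"")
       + strlen(filename) + 2;
    n += strlen("Content-Type: application/data") + 2;
    n += strlen("Content-Transfer-Encoding: binary") + 2;
    n += 2;                                               /* blank line */
    n += file_len + 2;                                    /* file bytes CRLF */

    /* closing boundary: "--AaB03x--" CRLF */
    n += 2 + strlen(boundary) + 2 + 2;
    return n;
}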
Is it a bug in the server if it sends content gzip-compressed to clients that did not specify Accept-Encoding: gzip? Is it breaking the HTTP specs, or is it legal?
I'm curious because https://www.amazon.com always sends content gzip-compressed, regardless of the Accept-Encoding header. As a simple test to confirm:
$ curl https://www.amazon.com
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
$ curl https://www.amazon.com -I
HTTP/2 405
content-type: text/html; charset=UTF-8
server: Server
date: Sat, 03 Nov 2018 11:27:35 GMT
set-cookie: skin=noskin; path=/; domain=.amazon.com
strict-transport-security: max-age=47474747; includeSubDomains; preload
x-amz-id-1: 2M3HZHHA9J21D3MTHH4K
allow: POST, GET
vary: Accept-Encoding,User-Agent,X-Amazon-CDN-Cache
content-encoding: gzip
x-amz-rid: 2M3HZHHA9J21D3MTHH4K
x-frame-options: SAMEORIGIN
x-cache: Error from cloudfront
via: 1.1 1cc4305a3ce000ca199328864ca1c98e.cloudfront.net (CloudFront)
x-amz-cf-id: OKz61IdKmCBfC97pPg-zmDhQnJzK3THXL2iYwegU5EtDaRf6yjBGzw==
curl complains that it's receiving binary data here because the server is not responding with plain HTML but with gzip-compressed HTML, which is binary data. To actually see the HTML, add the --compressed argument, which tells curl to send the header Accept-Encoding: gzip, deflate and automatically decompress the response.
A request without an Accept-Encoding header field implies that the user agent has no preferences regarding content-codings. Although this allows the server to use any content-coding in a response, it does not imply that the user agent will be able to correctly process all encodings.
-- https://greenbytes.de/tech/webdav/rfc7231.html#rfc.section.5.3.4.p.4
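In other words, a client that omits Accept-Encoding has to be prepared to decode whatever coding the server picks (or send Accept-Encoding: identity to state a preference for an uncompressed response). If you need to handle a gzip body by hand, here is a minimal C sketch using zlib, assuming the whole body is already buffered in memory; the 16 + MAX_WBITS windowBits value asks zlib for gzip framing:

#include <string.h>
#include <zlib.h>

/* Sketch: decompress a gzip-encoded response body held in memory.
 * Returns 0 on success, -1 on error (including an output buffer
 * that is too small for the decompressed data). */
int gunzip_body(const unsigned char *in, size_t in_len,
                unsigned char *out, size_t *out_len)
{
    z_stream strm;
    memset(&strm, 0, sizeof strm);
    if (inflateInit2(&strm, 16 + MAX_WBITS) != Z_OK)   /* gzip, not raw zlib */
        return -1;

    strm.next_in   = (Bytef *)in;
    strm.avail_in  = (uInt)in_len;
    strm.next_out  = out;
    strm.avail_out = (uInt)*out_len;

    int rc = inflate(&strm, Z_FINISH);   /* whole buffer in one call */
    *out_len = strm.total_out;
    inflateEnd(&strm);
    return rc == Z_STREAM_END ? 0 : -1;
}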
I am having a terrible time trying to get my server to accept requests from another server (local, but given a domain name in my hosts file) without triggering the dreaded
XMLHttpRequest cannot load https://dev.mydomain.org/api/user?uid=1. Origin http://home.mydomain.org is not allowed by Access-Control-Allow-Origin.
My dev server (internet) is running nginx; my home server (local) is running Apache.
I have tried several solutions found on the internet, to no avail. I have tried modifying the headers in the nginx configs to allow my home.mydomain.org server, and I have also added .htaccess rules locally to allow all origins (*).
My nginx server block has these lines currently:
add_header Access-Control-Allow-Origin http://home.mydomain.org;
add_header Access-Control-Allow-Headers Authorization;
Adding just the first one did change my response slightly (from the simple "Origin not allowed by Access-Control-Allow-Origin" to "Request header field Authorization is not allowed by Access-Control-Allow-Headers"), but adding the second line just reverted the error to the original one, and I am still blocked.
At this point, I am not sure what else to try.
UPDATES:
Launching Chrome with the flag --disable-web-security allows me to test, and my site and code work fine in Chrome.
However, this revealed another strange problem: if I try adding the add_header lines to a location directive, both my no-web-security Chrome and my unmodified Safari fail to load info from my API. So now I am not sure if my add_header directives in the server block are working correctly at all.
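For what it's worth, that behavior matches how nginx treats add_header: a location block inherits the server-level add_header directives only if it defines none of its own, so a location that adds any header has to repeat the whole CORS set. A sketch of what such a location would need (path illustrative):

location /api {
    # an add_header here cancels inheritance from the server block,
    # so every CORS header has to be repeated at this level
    add_header Access-Control-Allow-Origin http://home.mydomain.org;
    add_header Access-Control-Allow-Headers Authorization;
    # ...plus any other add_header lines the server block defines
}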
If it helps any, here is my client code (including things I have tried/commented out):
var xhr = new XMLHttpRequest();
var self = this;
xhr.open('GET', apiURL + self.currentIssue);
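// NB: the Access-Control-Allow-* names below are response headers that only
// the server can set; putting them on the request has no effect other than
// adding them to the preflight's Access-Control-Request-Headers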
xhr.setRequestHeader('Access-Control-Allow-Origin','http://home.mydomain.org');
//xhr.setRequestHeader('Access-Control-Allow-Credentials', 'true');
xhr.withCredentials = true;
//xhr.setRequestHeader('Access-Control-Request-Method','*');
xhr.setRequestHeader('Authorization','Bearer longstringoflettersandnumbers');
xhr.onload = function () {
  self.posts = JSON.parse(xhr.responseText);
};
xhr.send();
ANOTHER UPDATE AFTER TRYING SUGGESTION BELOW:
After a bunch of trial and error on both client and server, I am still stuck. Here is my latest response from the server using curl (although I have toggled various options on and off, client and server, for things like credentials and changing the origin to exactly mine or *, to no avail):
HTTP/1.1 204 No Content
Server: nginx
Date: Sun, 06 Aug 2017 10:11:57 GMT
Connection: keep-alive
Access-Control-Allow-Origin: http://home.mydomain.org
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Authorization,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range
Access-Control-Max-Age: 1728000
Content-Type: text/plain; charset=utf-8
Content-Length: 0
And here are my console errors (Safari):
[Error] Origin http://home.mydomain.org is not allowed by Access-Control-Allow-Origin.
[Error] Failed to load resource: Origin http://home.mydomain.org is not allowed by Access-Control-Allow-Origin. (actions, line 0)
[Error] XMLHttpRequest cannot load https://dev.mydomain.org/api/user?uid=1 due to access control checks.
And here is my console error for Firefox:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://dev.mydomain.org/api/user?uid=1. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
Also in Firefox, here are the results from the network panel for OPTIONS and GET:
Request URL: https://dev.mydomain.org/api/user?uid=1
Request method: OPTIONS
Status code: 204 No Content
Version: HTTP/2.0
Response headers (511 B)
Server "nginx"
Date "Sun, 06 Aug 2017 10:44:22 GMT"
Access-Control-Allow-Origin "http://home.mydomain.org"
access-control-allow-credentials "true"
Access-Control-Allow-Methods "GET, POST, OPTIONS"
Access-Control-Allow-Headers "Authorization,DNT,X-CustomHea…ent-Type,Content-Range,Range"
Access-Control-Max-Age "1728000"
Content-Type "text/plain; charset=utf-8"
Content-Length "0"
X-Firefox-Spdy "h2"
Request headers (501 B)
Host "dev.mydomain.org"
User-Agent "Mozilla/5.0 (Macintosh; Intel… Gecko/20100101 Firefox/54.0"
Accept "text/html,application/xhtml+x…lication/xml;q=0.9,*/*;q=0.8"
Accept-Language "en-US,en;q=0.5"
Accept-Encoding "gzip, deflate, br"
Access-Control-Request-Method "GET"
Access-Control-Request-Headers "authorization"
Origin "http://home.mydomain.org"
Connection "keep-alive"
Cache-Control "max-age=0"
Request URL: https://dev.mydomain.org/api/user?uid=1
Request method: GET
Status code: 404 Not Found
Version: HTTP/2.0
Response headers (170 B)
Server "nginx"
Date "Sun, 06 Aug 2017 10:44:22 GMT"
Content-Type "text/html"
Vary "Accept-Encoding"
Content-Encoding "gzip"
X-Firefox-Spdy "h2"
Request headers (723 B)
Host "dev.mydomain.org"
User-Agent "Mozilla/5.0 (Macintosh; Intel… Gecko/20100101 Firefox/54.0"
Accept "*/*"
Accept-Language "en-US,en;q=0.5"
Accept-Encoding "gzip, deflate, br"
Referer "http://home.mydomain.org/"
Authorization "Bearer eyJ0eXAG…BRHmX9VmtYHQOvH7k-Y32wwyeCdk"
Origin "http://home.mydomain.org"
Connection "keep-alive"
Cache-Control "max-age=0"
UPDATE WITH PARTIAL SUCCESS:
I think I found the problem (partially): changing my location directive in nginx from location /api to location = /api/* gets it working! But only for Safari and Chrome; FF is now not even trying the GET request, there is NO entry for it in the network panel.
UPDATE WITH CRYING AND GNASHING OF TEETH AND PULLING OF HAIR
Safari and Chrome intermittently fail with the original error about Origin not allowed, even though they were working fine and no changes have been made to the server config. I will be drinking heavily tonight...
Wow, was that ever convoluted. Posting the answer here in case some other WP user finds their way here. I kept getting inconsistent results (sometimes working, sometimes mysteriously not) and finally tracked my problem down to headers being set in the PHP code on the server, independently of the nginx settings and sometimes contradicting them (although never in a predictable way that I could see). So the things I needed to do to resolve it were:
Removed all my CORS declarations in my nginx configs
I also have code on my server that validates a token in the auth header, and it was failing on the OPTIONS preflight (which it should never check), so I had to wrap it in an if statement that skips validation for OPTIONS calls ($_SERVER['REQUEST_METHOD'] !== 'OPTIONS')
Since I had cloned this site from another of mine using the UpdraftPlus plugin, I had to go in and delete my migrate keys, since their existence prevented API calls from working too. Once they were deleted, my calls started working again.
Removed and re-added the built-in WP filter rest_pre_serve_request
My filter code is here:
add_action('rest_api_init', function() {
    /* unhook the default CORS handler */
    remove_filter('rest_pre_serve_request', 'rest_send_cors_headers');
    /* then add your own filter */
    add_filter('rest_pre_serve_request', function( $value ) {
        $origin = get_http_origin();
        $my_sites = array( $origin ); // as written this echoes any origin back; list your accepted sites here to restrict
        if ( in_array( $origin, $my_sites ) ) {
            header( 'Access-Control-Allow-Origin: ' . esc_url_raw( $origin ) );
        } else {
            header( 'Access-Control-Allow-Origin: ' . esc_url_raw( site_url() ) );
        }
        header( 'Access-Control-Allow-Methods: OPTIONS, GET, POST, PUT, PATCH, DELETE' );
        header( 'Access-Control-Allow-Credentials: true' );
        header( 'Access-Control-Allow-Headers: Authorization,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Origin,Content-Type,X-Auth-Token,Content-Range,Range' );
        header( 'Access-Control-Expose-Headers: Authorization,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Origin,Content-Type,X-Auth-Token,Content-Range,Range' );
        header( 'Vary: Origin' );
        return $value;
    });
}, 15);
Now finally, everything works everywhere (in every browser and in curl too)!
I have a web service that runs locally, but when I hosted it on AWS it stopped working from my client; I get the error "System.ServiceModel.ProtocolException: There is a problem with the XML ..."
I tried to call it from SoapUI and it's working.
This is the request:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:her="http://HerakiNet.com/">
  <soapenv:Header/>
  <soapenv:Body>
    <her:SayHello>
      <!--Optional:-->
      <her:name>Ahmed</her:name>
    </her:SayHello>
  </soapenv:Body>
</soapenv:Envelope>
And this is the raw response:
HTTP/1.0 200 OK
Cache-Control: private, max-age=0
Content-Type: text/xml; charset=utf-8
Date: Sun, 29 Dec 2013 11:12:39 GMT
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Content-Length: 359
X-Cache: MISS from UB15-WMJ-080811
Via: 1.1 UB15-WMJ-080811:3128 (Lusca)
Connection: keep-alive
<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><SayHelloResponse xmlns="http://HerakiNet.com/"><SayHelloResult>Hello , Ahmed</SayHelloResult></SayHelloResponse></soap:Body></soap:Envelope>
Can anyone help?
This is the result of executing a small test from a .NET app:
System.ServiceModel.ProtocolException: There is a problem with the XML that was received from the network. See inner exception for more details. ---> System.Xml.XmlException: The data at the root level is invalid. Line 1, position 1 ...
The application code is:
var client = new EstimatorWcfService.EstimatorWebServiceSoapClient();
Console.WriteLine(client.SayHello("Ahmed"));
According to the error, you are calling the wrong endpoint URL from your client.
There will be two URLs in the service tag of your WSDL, one for HTTP and another for HTTPS. Maybe you are using the HTTPS one instead of the HTTP one. Cross-check from SoapUI (at the top of your request).
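For reference, the endpoint the client must call is the soap:address inside the service section of the WSDL; for an ASMX service it typically looks like the trimmed sketch below (names and host illustrative), with one port per binding:

<wsdl:service name="EstimatorWebService">
  <wsdl:port name="EstimatorWebServiceSoap" binding="tns:EstimatorWebServiceSoap">
    <soap:address location="http://yourhost/EstimatorWebService.asmx"/>
  </wsdl:port>
  <wsdl:port name="EstimatorWebServiceSoap12" binding="tns:EstimatorWebServiceSoap12">
    <soap12:address location="http://yourhost/EstimatorWebService.asmx"/>
  </wsdl:port>
</wsdl:service>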
tl;dr
Do I need to implement the Microsoft WebDAV Extension Properties for my server to work properly with Word?
Extended Question
I'm building a WebDAV server on top of the pre-existing WebDAV.NET open-source project. I noticed that with Word 2010 (I didn't try other versions) their sample code doesn't correctly handle saving Word documents: it says "Upload failed" even though the document saves correctly and you are the only user of the file. I'm trying to track down the reason why, and one thing that caught my eye was the Microsoft WebDAV Extension Properties. The MS page for these states that "A WebDAV server implementing WebDAV Protocol: Microsoft Extensions SHOULD implement the following extended properties." Since it says SHOULD, I would assume you do not have to support them to work with Word.
I became suspicious of the extensions when I noticed that my PROPFIND request/response looks like the following:
PROPFIND /test123.docx HTTP/1.1
Cache-Control: no-cache
Connection: Keep-Alive
Pragma: no-cache
Content-Type: text/xml; charset="utf-8"
User-Agent: Microsoft Office Core Storage Infrastructure/1.0
Depth: 0
Translate: f
Connection: Keep-Alive
Content-Length: 208
Host: localhost:62954
<?xml version="1.0" encoding="utf-8" ?><D:propfind xmlns:D="DAV:" xmlns:Office="urn:schemas-microsoft-com:office:office"><D:prop><D:creationdate/><D:getlastmodified/><Office:modifiedby/></D:prop></D:propfind>
HTTP/1.1 207 Multi-Status
Server: ASP.NET Development Server/10.0.0.0
Date: Mon, 16 Apr 2012 14:11:55 GMT
X-AspNet-Version: 4.0.30319
MS-Author-Via: DAV
Cache-Control: private
Content-Type: text/xml; charset=utf-8
Content-Length: 576
Connection: Close
<?xml version="1.0" encoding="utf-8"?>
<D:multistatus xmlns:D="DAV:">
<D:response>
<D:href>http://localhost:62954/test123.docx</D:href>
<D:propstat>
<D:prop>
<D:creationdate>2012-04-10T08:00:00Z</D:creationdate>
<D:getlastmodified>2012-04-16T09:09:44Z</D:getlastmodified>
</D:prop>
<D:status>HTTP/1.1 200 OK</D:status>
</D:propstat>
<D:propstat>
<D:status>HTTP/1.1 404 Not Found</D:status>
<D:prop>
<modifiedby xmlns="urn:schemas-microsoft-com:office:office" />
</D:prop>
</D:propstat>
</D:response>
</D:multistatus>
In case anyone is curious, the following is my PUT response.
HTTP/1.1 200 OK
Server: ASP.NET Development Server/10.0.0.0
Date: Mon, 16 Apr 2012 14:18:06 GMT
X-AspNet-Version: 4.0.30319
MS-Author-Via: DAV
Cache-Control: private
Content-Length: 0
Connection: Close
By the way, I found the bug in Sphorium WebDAV that was causing this. The bug was in the method DavLockBase_InternalProcessDavRequest(), and the incorrect line of code was:
string[] _lockTokens = this.RequestLock.GetLockTokens();
which should be:
string[] _lockTokens = this.ResponseLock.GetLockTokens();
No, you don't need to implement the Microsoft WebDAV Extension Properties.
(I figured out the answer to my own question.)
The issue with WebDAV.NET is that a bug in it sends an empty LockToken header, which confuses Word and makes it not send the lock token with the PUT request.
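For context on why that matters: Word takes the token from the Lock-Token header of the LOCK response and hands it back in the If header of the following PUT, so an empty token breaks that chain. A trimmed sketch of the expected exchange (token value illustrative, shape per RFC 4918):

HTTP/1.1 200 OK
Lock-Token: <opaquelocktoken:e71d4fae-5dec-22d6-fea5-00a0c91e6be4>
Content-Type: text/xml; charset=utf-8

(... D:prop/D:lockdiscovery body ...)

PUT /test123.docx HTTP/1.1
If: (<opaquelocktoken:e71d4fae-5dec-22d6-fea5-00a0c91e6be4>)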