I've created a send pipeline with only a custom pipeline component that builds a MIME message to POST to a REST API requiring multipart/form-data. It works, but fails on every second invocation, alternating between success and failure. When it fails, the boundary I've written to the header appears to have been overwritten by the WCF-WebHttp adapter with the boundary of the previously successful message.
I've made sure that I'm writing the correct boundary to the header.
Any streams I've used in the pipeline component have been added to the pipeline resource manager.
If I restart the host instance after the first successful message, the next message will be successful.
Waiting 10 minutes between processing each message makes no difference to the observed behaviour.
If I send a different file through when the failure is expected to occur, the Content-Length header is still the same as for the previous file. This suggests that the header block used is exactly the same as in the previous invocation.
The standard BizTalk MIME component doesn't write the boundary to the header, so it doesn't offer any clue.
Success
POST http://somehost/Record HTTP/1.1
Content-Type: multipart/form-data; boundary="9ccdeb0a-c407-490c-9cce-c5e3be639785"
Host: somehost
Content-Length: 11989
Expect: 100-continue
Accept-Encoding: gzip, deflate
--9ccdeb0a-c407-490c-9cce-c5e3be639785
Content-Type: text/plain; charset=utf-8
Content-Disposition: form-data; name=uri
6442
--9ccdeb0a-c407-490c-9cce-c5e3be639785
Fail: boundary in header not same as in payload
POST http://somehost/Record HTTP/1.1
Content-Type: multipart/form-data; boundary="9ccdeb0a-c407-490c-9cce-c5e3be639785"
Host: somehost
Content-Length: 11989
Expect: 100-continue
Accept-Encoding: gzip, deflate
--3fe3e969-8a41-451c-aae7-8458aee0c9f4
Content-Type: text/plain; charset=utf-8
Content-Disposition: form-data; name=uri
6442
--3fe3e969-8a41-451c-aae7-8458aee0c9f4
Content-Disposition: form-data; name=Files; filename=testdoc.docx; filename*=utf-8''testdoc.docx
My problem will be fixed if I can get the header to use the correct boundary. Any suggestions?
I'm more surprised you had any success with this approach at all. The thing is, the headers aren't officially message properties but port properties, and ports cache their settings. You have to use a dynamic send port for this to work properly. Another option is setting the headers in a custom WCF behavior, but I don't think that suits your scenario.
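To illustrate the invariant that is being broken here (this is a Python sketch of the protocol rule, not BizTalk code): the boundary quoted in the Content-Type header and the boundary delimiting the body parts must be generated together, per request. A cached header paired with a freshly generated body is exactly the mismatch shown in the failing trace above. The field layout mimics the `name=uri` part from the traces.

```python
import uuid

def build_multipart(fields):
    """Build a multipart/form-data body and its matching Content-Type header.

    The boundary must be regenerated together with the body on every
    request; pairing a cached header with a new body (as the WCF-WebHttp
    port appears to do) produces the header/payload mismatch seen above.
    """
    boundary = str(uuid.uuid4())
    lines = []
    for name, value in fields.items():
        lines.append(f"--{boundary}")
        lines.append("Content-Type: text/plain; charset=utf-8")
        lines.append(f"Content-Disposition: form-data; name={name}")
        lines.append("")                      # blank line before part body
        lines.append(str(value))
    lines.append(f"--{boundary}--")           # closing delimiter
    body = "\r\n".join(lines)
    headers = {"Content-Type": f'multipart/form-data; boundary="{boundary}"'}
    return headers, body
```

Two consecutive calls yield two different boundaries, each consistent within its own header/body pair; that is the behaviour the static send port fails to reproduce.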
Related
Our third-party security software is being triggered by an apparent mismatch between a request method of GET and a Content-Type of application/json.
Payload not allowed (Content-Type header not allowed for this method)
/signalr/poll
transport=longPolling&messageId=...&clientProtocol=1.4&etc
application/json; charset=UTF-8
Mozilla/5.0 (Windows NT 6.1;Trident/7.0; rv:11.0) like Gecko
Is this a known issue or have I done something silly?
Thanks,
James
Although largely superfluous, it is the default behaviour of SignalR to send a Content-Type header with GET HTTP requests.
Content-Type: application/json; charset=UTF-8
I have confirmed this with a small SignalR test program and Fiddler.
As far as I can tell, our third-party security software is just being a little overeager.
I am currently working on a DLNA/UPnP media server, and while most of it works fine, I'm having trouble with the following SOAPAction requests:
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 258
Content-Type: text/xml
SOAPAction: "#GetConnectionTypeInfo"
Connection: Close
and
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 250
Content-Type: text/xml
SOAPAction: "#GetStatusInfo"
Connection: Close
and
POST /upnp/connection_manager HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 308
Content-Type: text/xml
SOAPAction: "urn:schemas-upnp-org:service:ConnectionManager:1#GetCommonLinkProperties"
Connection: Close
and
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 257
Content-Type: text/xml
SOAPAction: "#GetExternalIPAddress"
Connection: Close
Last but not least:
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 337
Content-Type: text/xml
SOAPAction: "#GetGenericPortMappingEntry"
Connection: Close
I didn't post the bodies of these requests because the formatting isn't the problem; rather, I don't know how to respond to these requests and can't really find anything helpful. To be precise, it's not the mechanics of responding that puzzle me but the content I should provide.
So it would be really nice if someone could explain what these requests are for, what a response could look like, and/or where I can get more information (including examples) on them.
Ancient but still unanswered question, so let's try the basics:
where i can get some more information (including examples) on these.
Have you read the official UPnP specification? That website has everything you need, especially the full specification PDF and some great tutorials.
To be precise it's not the way on how to respond that makes me wonder but the Content i should provide
Some of those SOAP actions, particularly GetExternalIPAddress and GetGenericPortMappingEntry, are meant for Internet Gateway Devices (i.e. routers and the like), not media servers.
I wonder why you are receiving such requests at all. How are you advertising your device via SSDP? Which services are you listing in your root descriptor XML? Those actions belong to the WANIPConnection service, one I doubt a media server wants to implement.
So, before ignoring such requests, you should really investigate why you're receiving them in the first place. Likely something is wrong in your SSDP reply.
I have successfully used .info/serverTimeOffset to manage clock skew from the JavaScript library.
However when trying to access from REST I get an error.
GET https://my-firebase-name.firebaseio.com/.info/serverTimeOffset/.json HTTP/1.1
Content-Length: 0
Accept-Encoding: identity, deflate, compress, gzip
Accept: */*
HTTP/1.1 400
content-length: 32
content-type: application/json; charset=utf-8
cache-control: no-cache
{
"error" : "Invalid path."
}
Is this or any of the .info values available from REST?
The values of .info/connected and .info/serverTimeOffset don't really make sense from a REST call's perspective and are therefore unavailable. There is no reliable way for the server to know the client's time while it makes a REST call to serverTimeOffset, so the offset cannot be calculated accurately. Similarly, there is no concept of "disconnected", since an HTTP request terminates after completion.
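If a rough skew estimate is still needed over REST, one documented option is to write the `{".sv": "timestamp"}` server value, which Firebase substitutes with the server's epoch milliseconds and echoes back in the PUT response. A sketch (the base URL and the `clockcheck` path are placeholders; unlike `.info/serverTimeOffset`, this estimate includes network latency):

```python
import json
import time
import urllib.request

def timestamp_request(base_url, path="clockcheck"):
    """Build a PUT whose body Firebase replaces with the server time.

    base_url is a placeholder, e.g. "https://my-firebase-name.firebaseio.com".
    The {".sv": "timestamp"} server value is resolved server-side to
    epoch milliseconds, which the PUT response body echoes back.
    """
    body = json.dumps({".sv": "timestamp"}).encode()
    return urllib.request.Request(f"{base_url}/{path}.json",
                                  data=body, method="PUT")

def offset_from_response(server_ms, local_ms=None):
    """Rough skew estimate: server time minus local time, in milliseconds.

    Includes one-way network latency, unlike the SDK's serverTimeOffset.
    """
    if local_ms is None:
        local_ms = time.time() * 1000
    return server_ms - local_ms
```

Sending the request with `urllib.request.urlopen` and feeding the returned number into `offset_from_response` gives a latency-inclusive approximation of the offset the SDK computes more precisely.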
HTTP responses generated by the Pyramid web framework append ; charset=UTF-8 to the Content-Type HTTP header. For example,
Content-Type: application/json; charset=UTF-8
Section 14.17 of RFC 2616 gives an example of this:
Content-Type: text/html; charset=ISO-8859-4
However, there's no description of the role of this charset "property". What scope does this have, and who interprets it?
It defines the character encoding of the entity being transferred, and it is interpreted by the recipient. Pyramid is telling everyone that it only ever talks to people in UTF-8, rather than defaulting to ISO-8859-1.
In HTTP you can specify in a request that your client can accept specific content in responses using the accept header, with values such as application/xml. The content type specification allows you to include parameters in the content type, such as charset=utf-8, indicating that you can accept content with a specified character set.
There is also the accept-charset header, which specifies the character encodings which are accepted by the client.
If both headers are specified and the accept header contains content types with the charset parameter, which should be considered the superior header by the server?
e.g.:
Accept: application/xml; q=1,
text/plain; charset=ISO-8859-1; q=0.8
Accept-Charset: UTF-8
I've sent a few example requests to various servers using Fiddler to test how they respond:
Examples
W3
Request
GET http://www.w3.org/ HTTP/1.1
Host: www.w3.org
Accept: text/html;charset=UTF-8
Accept-Charset: ISO-8859-1
Response
Content-Type: text/html; charset=utf-8
Google
Request
GET http://www.google.co.uk/ HTTP/1.1
Host: www.google.co.uk
Accept: text/html;charset=UTF-8
Accept-Charset: ISO-8859-1
Response
Content-Type: text/html; charset=ISO-8859-1
StackOverflow
Request
GET http://stackoverflow.com/ HTTP/1.1
Host: stackoverflow.com
Accept: text/html;charset=UTF-8
Accept-Charset: ISO-8859-1
Response
Content-Type: text/html; charset=utf-8
Microsoft
Request
GET http://www.microsoft.com/ HTTP/1.1
Host: www.microsoft.com
Accept: text/html;charset=UTF-8
Accept-Charset: ISO-8859-1
Response
Content-Type: text/html
There doesn't seem to be any consensus around what the expected behaviour is. I am trying to look surprised.
Although you can set a media type in the Accept header, the charset parameter for that media type is not defined anywhere in RFC 2616 (though it is not forbidden either).
Therefore, if you are implementing an HTTP 1.1-compliant server, you should first look at the Accept-Charset header, and only then search for your own parameters in the Accept header.
Read RFC 2616 Sections 14.1 and 14.2. The Accept header does not allow you to specify a charset; you have to use the Accept-Charset header instead.
Firstly, Accept headers can take parameters; see RFC 7231 section 5.3.2.
All text/* mime-types can accept a charset parameter.
The Accept-Charset header allows a user-agent to specify the charsets it supports.
If the Accept-Charset header did not exist, a user-agent would have to specify each charset parameter for each text/* media type it accepted, e.g.
Accept: text/html;charset=US-ASCII, text/html;charset=UTF-8, text/plain;charset=US-ASCII, text/plain;charset=UTF-8
RFC 7231 section 5.3.2 (Accept) clearly states:
Each media-range might be followed by zero or more applicable media
type parameters (e.g., charset)
So a charset parameter for each content-type is allowed. In theory a client could accept, for example, text/html only in UTF-8 and text/plain only in US-ASCII.
But it would usually make more sense to state possible charsets in the Accept-Charset header as that applies to all types mentioned in the Accept header.
If those headers’ charsets don’t overlap, the server could send status 406 Not Acceptable.
However, I wouldn’t expect fancy cross-matching from a server for various reasons. It would make the server code more complicated (and therefore more error-prone) while in practice a client would rarely send such requests. Also nowadays I would expect everything server-side is using UTF-8 and sent as-is so there’s nothing to negotiate.
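The simple, non-cross-matching behaviour described above can be sketched server-side in a few lines of Python. This is an illustrative sketch, not any particular server's implementation; the `supported` list is an assumption, and a `None` result maps to a 406 Not Acceptable:

```python
def parse_accept_charset(header):
    """Parse an Accept-Charset header into (charset, q) pairs, best first."""
    prefs = []
    for item in header.split(","):
        parts = [p.strip() for p in item.split(";")]
        q = 1.0
        for p in parts[1:]:
            if p.lower().startswith("q="):
                q = float(p[2:])
        prefs.append((parts[0].lower(), q))
    return sorted(prefs, key=lambda cq: -cq[1])

def negotiate_charset(header, supported=("utf-8", "iso-8859-1")):
    """Pick the best supported charset, or None (send 406 Not Acceptable)."""
    for charset, q in parse_accept_charset(header):
        if q > 0 and (charset in supported or charset == "*"):
            return supported[0] if charset == "*" else charset
    return None
```

For example, `negotiate_charset("ISO-8859-1;q=0.5, UTF-8")` picks UTF-8 because of its higher q-value, regardless of any charset parameters buried in the Accept header.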
According to the Mozilla Developer Network, you should never use the Accept-Charset header; it's obsolete.
I don't think it matters. The client is doing something dumb; there doesn't need to be interoperability for that :-)