DLNA/UPnP: How to respond to SOAP Actions

I am currently working on a DLNA/UPnP media server, and while most of it works fine, I am having trouble with the following SOAPAction requests:
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 258
Content-Type: text/xml
SOAPAction: "#GetConnectionTypeInfo"
Connection: Close
and
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 250
Content-Type: text/xml
SOAPAction: "#GetStatusInfo"
Connection: Close
and
POST /upnp/connection_manager HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 308
Content-Type: text/xml
SOAPAction: "urn:schemas-upnp-org:service:ConnectionManager:1#GetCommonLinkProperties"
Connection: Close
and
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 257
Content-Type: text/xml
SOAPAction: "#GetExternalIPAddress"
Connection: Close
and last but not least:
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 337
Content-Type: text/xml
SOAPAction: "#GetGenericPortMappingEntry"
Connection: Close
I didn't post the bodies of these requests because the formatting isn't the problem; rather, I don't know how to respond to these requests and can't really find anything helpful. To be precise, it's not the mechanics of responding that puzzle me, but the content I should provide.
So it would be really nice if someone could explain what these requests are for, what a response could look like, and/or where I can get more information (including examples) on them.

Ancient but still unanswered question, so let's try the basics:
Where can I get more information (including examples) on these?
Have you read the official UPnP specification? That website has everything you need, especially the full specification PDF and some great tutorials.
To be precise, it's not the way to respond that makes me wonder, but the content I should provide.
Some of those SOAP actions, particularly GetExternalIPAddress and GetGenericPortMappingEntry, are meant for Internet Gateway Devices, i.e. routers and the like, not media servers.
I wonder why you are receiving such requests at all. How are you advertising your device via SSDP? Which services are you listing in your root descriptor XML? Those actions belong to the WANIPConnection service, one I doubt a media server wants to implement.
So, before ignoring such requests, you should really investigate why you're receiving them in the first place. Most likely something is wrong in your SSDP reply.
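For actions your server does implement, a UPnP response is a small SOAP envelope wrapping an <ActionName>Response element in the service-type namespace. Here is a minimal sketch in Node; the action and out-arguments used (GetProtocolInfo from ConnectionManager:1) are only illustrative, since the real argument names come from your service's SCPD:

```javascript
// Hedged sketch of the general shape of a UPnP SOAP action response.
// The service type and out-argument names below are illustrative; a real
// server must use the arguments defined in the service's SCPD.
function soapResponse(serviceType, actionName, outArgs) {
  const args = Object.entries(outArgs)
    .map(([name, value]) => `      <${name}>${value}</${name}>`)
    .join('\n');
  return [
    '<?xml version="1.0" encoding="utf-8"?>',
    '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"',
    '  s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">',
    '  <s:Body>',
    `    <u:${actionName}Response xmlns:u="${serviceType}">`,
    args,
    `    </u:${actionName}Response>`,
    '  </s:Body>',
    '</s:Envelope>',
  ].join('\n');
}

const body = soapResponse(
  'urn:schemas-upnp-org:service:ConnectionManager:1',
  'GetProtocolInfo',
  { Source: 'http-get:*:video/mp4:*', Sink: '' }
);
```

The HTTP reply carrying this body would use 200 OK, Content-Type: text/xml; charset="utf-8", and an EXT: header, per the UPnP Device Architecture.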

Related

BizTalk 2016 WCF-WebHttp Caching Headers

I've created a send pipeline with only a custom pipeline component, which creates a MIME message to POST to a REST API that requires multipart/form-data. It works, but fails on every second invocation: it alternates between success and failure. When it fails, the boundary I've written to the header appears to be overwritten by the WCF-WebHttp adapter with the boundary of the previously successful message.
I've made sure that I'm writing the correct boundary to the header.
Any streams I've used in the pipeline component have been added to the pipeline resource manager.
If I restart the host instance after the first successful message, the next message will be successful.
Waiting 10 minutes between processing each message has no change in the observed behaviour.
If I send a different file through when the failure is expected to occur, the Content-Length header is still the same as for the previous file. This suggests that the headers used are exactly the same as in the previous invocation.
The standard BizTalk mime component doesn't write the boundary to the header, so doesn't offer any clue.
Success
POST http://somehost/Record HTTP/1.1
Content-Type: multipart/form-data; boundary="9ccdeb0a-c407-490c-9cce-c5e3be639785"
Host: somehost
Content-Length: 11989
Expect: 100-continue
Accept-Encoding: gzip, deflate
--9ccdeb0a-c407-490c-9cce-c5e3be639785
Content-Type: text/plain; charset=utf-8
Content-Disposition: form-data; name=uri
6442
--9ccdeb0a-c407-490c-9cce-c5e3be639785
Fail: boundary in header not same as in payload
POST http://somehost/Record HTTP/1.1
Content-Type: multipart/form-data; boundary="9ccdeb0a-c407-490c-9cce-c5e3be639785"
Host: somehost
Content-Length: 11989
Expect: 100-continue
Accept-Encoding: gzip, deflate
--3fe3e969-8a41-451c-aae7-8458aee0c9f4
Content-Type: text/plain; charset=utf-8
Content-Disposition: form-data; name=uri
6442
--3fe3e969-8a41-451c-aae7-8458aee0c9f4
Content-Disposition: form-data; name=Files; filename=testdoc.docx; filename*=utf-8''testdoc.docx
My problem will be fixed if I can get the header to use the correct boundary. Any suggestions?
I'm more surprised you actually had some success with this approach. The thing is, the headers aren't officially message properties but port properties, and ports cache their settings. You have to make it a dynamic send port for this to work properly. Another option is setting the headers in a custom behavior, but I don't think that suits your scenario.

Why does Tomcat reply with an HTTP/1.1 response to an HTTP/1.0 request?

Request:
POST / HTTP/1.0
Content-Type: text/xml; charset=UTF-8
User-Agent: Axis2
Host: localhost:8000
Content-Length: 539
Response from tomcat:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: text/xml;charset=UTF-8
Date: Sat, 19 Oct 2013 00:28:57 GMT
Connection: close
From tomcat website it says:
If the client (typically a browser) supports only HTTP/1.0, the
Connector will gracefully fall back to supporting this protocol as
well. No special configuration is required to enable this support.
How does Tomcat gracefully fall back to HTTP/1.0? In my example it still replies with HTTP/1.1. Can anyone explain?
The protocol version indicates the protocol capability of the sender. It does not specify the version of the response itself. So as long as the response can be understood by the HTTP 1.0 client, Tomcat is doing exactly what it should.
It's all in RFC 2616...
Edit: And it's even in the Tomcat documentation itself, right after the part you quoted:
This Connector supports all of the required features of the HTTP/1.1 protocol, as described in RFC 2616, including persistent connections, pipelining, expectations and chunked encoding. If the client (typically a browser) supports only HTTP/1.0, the Connector will gracefully fall back to supporting this protocol as well. No special configuration is required to enable this support. The Connector also supports HTTP/1.0 keep-alive.
RFC 2616 requires that HTTP servers always begin their responses with the highest HTTP version that they claim to support. Therefore, this Connector will always return HTTP/1.1 at the beginning of its responses.

Can I access .info/serverTimeOffset from REST API?

I have successfully used .info/serverTimeOffset to manage clock skew from the JavaScript library.
However, when trying to access it via REST, I get an error.
GET https://my-firebase-name.firebaseio.com/.info/serverTimeOffset/.json HTTP/1.1
Content-Length: 0
Accept-Encoding: identity, deflate, compress, gzip
Accept: */*
HTTP/1.1 400
content-length: 32
content-type: application/json; charset=utf-8
cache-control: no-cache
{
"error" : "Invalid path."
}
Is this or any of the .info values available from REST?
The values of .info/connected and .info/serverTimeOffset don't really make sense from a REST call's perspective and are therefore unavailable. There isn't a reliable way for the server to know the client's time during a REST call, so serverTimeOffset cannot be calculated accurately. Similarly, there is no concept of "disconnected", since an HTTP request terminates after completion.
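Since the offset isn't exposed over REST, one workaround (a sketch; the scratch path and surrounding REST calls are up to you) is to derive it yourself: write Firebase's server-time placeholder {".sv": "timestamp"} to some path via REST, read the stored value back, and compare it with the local clock at the midpoint of the round trip:

```javascript
// Hedged sketch: approximate the server/client clock skew yourself when
// .info/serverTimeOffset is unavailable. serverMillis is the value Firebase
// stored for {".sv": "timestamp"}; localBefore/localAfter are Date.now()
// readings taken immediately before and after the REST write. The REST
// calls themselves are not shown here.
function estimateOffset(serverMillis, localBefore, localAfter) {
  // Assume the server stamped the value roughly halfway through the request.
  const localMidpoint = (localBefore + localAfter) / 2;
  return serverMillis - localMidpoint;
}

// e.g. server stamped 1000 while the local clock read 0 before and 100 after:
console.log(estimateOffset(1000, 0, 100)); // 950
```

This only gives a round-trip-limited estimate, unlike the library's serverTimeOffset, but it is usually good enough to correct gross clock skew.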

ParseError on Node.js http request

Hello StackOverflow community!
I started to learn Node.js recently and decided to implement a reverse HTTP proxy as an exercise. There were a couple of rough places, which I managed to get through on my own, but now I'm a bit stuck and need your help. I managed to handle redirects and relative URLs, and with the implementation of relative URL support I ran into the problem I'm going to describe.
You can find my code at http://pastebin.com/vZfEfk8r. It's not very big, but it still doesn't fit nicely on this page.
So, on to the problems (there are two of them). I'm using http.request to forward the client's request to the target server, then waiting for the response and sending it back to the client. This works fine for some requests, but not for others. That's the first problem: on the website I'm using to test the proxy (http://ixbt.com, a cool Russian tech site) I can always get the main page /index.html, but when the browser starts fetching other files referenced from that page (CSS, images, etc.), most of the requests end with a ParseError ({"bytesParsed":0}).
While debugging (using Wireshark) I noticed that at least some of the failing requests involve the following HTTP exchange between the proxy and the target server:
Request:
GET articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
Host: www.ixbt.com
Connection: keep-alive
Response:
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
It looks like the server sends no status code and no headers. So the question is: can this be the reason for the failure (ParseError)?
My other concern is that when I request the same file as a standalone request, I have no problems. Just look:
Request:
GET /articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
Host: www.ixbt.com
Connection: keep-alive
Response:
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 25 Jun 2012 17:09:51 GMT
Content-Type: image/jpeg
Content-Length: 3046
Last-Modified: Fri, 22 Jun 2012 00:06:27 GMT
Connection: keep-alive
Expires: Wed, 25 Jul 2012 17:09:51 GMT
Cache-Control: max-age=2592000
Accept-Ranges: bytes
... and here goes the body ...
So at the end of the day there may be some mistake in how I make the proxy requests. Maybe it's because I actually make a lot of them when the main page loads, since it has many images, etc.?
I hope I was clear enough, but please ask for details if I missed something. The full source code is available (again, at http://pastebin.com/vZfEfk8r), so if somebody would try it, that would be just great. :)
Much thanks in advance!
P.S. As I said, I'm just learning, so if you see some bad practices in my code (even unrelated to the question), it would be nice to know about them.
UPDATE: As was mentioned in a comment, I wasn't proxying the original request's headers, which in theory could lead to problems with subsequent requests. I changed that, but unfortunately the behavior remained the same. Here's an example of a new request and response:
Request
GET css/main_fixed.css HTTP/1.1
Host: www.ixbt.com
connection: keep-alive
cache-control: no-cache
pragma: no-cache
user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5
accept: text/css,*/*;q=0.1
accept-encoding: gzip,deflate,sdch
accept-language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4
accept-charset: windows-1251,utf-8;q=0.7,*;q=0.3
referer: http://www.ixbt.com/
Response
<html>
<head><title>400 Bad Request</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
I had to craft the 'referer' header by hand, since the browser sends it with the reverse proxy's URL. Still, the behavior is the same, as you can see. Any other ideas?
Thanks to the valuable comments, I was able to find the answer to this problem. It was nothing related to Node or the target web servers, just a coding error.
The answer is that the path component of the URL was wrong for relative URLs. It's already visible in the logs in the question's body; I'll repeat them here:
Wrong request:
GET articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
Right request:
GET /articles/pics2/201206/coolermaster-computex2012_70x70.jpg HTTP/1.1
See the difference? The leading slash. It turns out I was omitting it in my requests for relative URLs, due to my own awkward handling of the client's URL. With a quick-and-dirty fix it's working now, well enough until I implement proper URL handling on the client side.
Many thanks for the comments, they were insightful!
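This class of bug is easy to avoid by resolving every possibly-relative URL against a base URL instead of concatenating strings. With the WHATWG URL class (built into modern Node), the resulting pathname always carries the leading slash the origin server expects:

```javascript
// Resolving a relative reference against the page URL restores the
// leading slash that naive string concatenation can silently drop.
const base = 'http://www.ixbt.com/index.html';
const resolved = new URL(
  'articles/pics2/201206/coolermaster-computex2012_70x70.jpg',
  base
);
console.log(resolved.pathname);
// "/articles/pics2/201206/coolermaster-computex2012_70x70.jpg"
```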
If the above solutions do not work, try removing the Content-Length header. A Content-Length mismatch causes body parsers to raise these errors.

Is an HTTP/1.1 request implicitly keep-alive by default?

Solved: pasting the bytes here made me realise that I was missing empty lines between chunks...
Does an HTTP/1.1 request need to specify a Connection: keep-alive header, or is it always keep-alive by default?
This guide made me think it would: that when my HTTP server gets a 1.1 request, the connection is keep-alive unless it explicitly receives a Connection: close header.
I ask because the differing client behaviour of ab and httperf is driving me mad enough to question my sanity on this one...
Here's what httperf --hog --port 42042 --print-reply body sends:
GET / HTTP/1.1
User-Agent: httperf/0.9.0
Host: localhost
And here's my server's response:
HTTP/1.1 200 OK
Connection: keep-alive
Transfer-Encoding: chunked
Content-Length: 18
12
Hello World 1
0
httperf promptly prints out the response, but then just sits there, with neither side closing the connection and httperf not exiting.
Where's my bug?
From RFC 2616, section 8.1.2:
A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection. That is, unless otherwise indicated, the client SHOULD assume that the server will maintain a persistent connection, even after error responses from the server.
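For reference, the chunk framing the asker was missing looks like this: each chunk is its size in hex, CRLF, the data, CRLF, and a zero-sized chunk terminates the body (note also that a chunked response must not carry a Content-Length header, as the response above does). A sketch of an encoder:

```javascript
// Minimal chunked transfer-encoding framer: hex size line, CRLF, data,
// CRLF per chunk, terminated by a zero-length chunk.
function chunkedBody(chunks) {
  let out = '';
  for (const chunk of chunks) {
    out += Buffer.byteLength(chunk).toString(16) + '\r\n' + chunk + '\r\n';
  }
  return out + '0\r\n\r\n';
}

// "Hello World 1\n" is 14 (0xe) bytes, so the framed body is
// "e\r\nHello World 1\n\r\n0\r\n\r\n"
const framed = chunkedBody(['Hello World 1\n']);
```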
