I have successfully used the .info/serverTimeOffset value to manage clock skew from the JavaScript library.
However, when trying to access it from REST I get an error.
GET https://my-firebase-name.firebaseio.com/.info/serverTimeOffset/.json HTTP/1.1
Content-Length: 0
Accept-Encoding: identity, deflate, compress, gzip
Accept: */*
HTTP/1.1 400
content-length: 32
content-type: application/json; charset=utf-8
cache-control: no-cache
{
"error" : "Invalid path."
}
Is this or any of the .info values available from REST?
The values for .info/connected and .info/serverTimeOffset don't really make sense from a REST call's perspective and are therefore unavailable. There isn't a reliable way for the server to know the client's time during a REST call to serverTimeOffset, so the number cannot be calculated accurately. Similarly, there is no concept of "disconnected", since an HTTP request terminates after completion.
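Since .info/serverTimeOffset isn't exposed over REST, one workaround is to estimate the offset yourself: write the Realtime Database server-value placeholder `{".sv": "timestamp"}` via a REST PUT, read back the server-assigned timestamp, and compare it to the local clock NTP-style. A hedged sketch follows; the `/clock-probe.json` path is hypothetical (any writable path works), and the halfway-through-the-round-trip assumption limits accuracy to roughly half the request latency.

```python
import json
import time
import urllib.request


def estimate_server_offset(t_send_ms, t_recv_ms, server_ms):
    # NTP-style estimate: assume the server stamped the value roughly
    # halfway through the round trip, then compare that midpoint to
    # the server's reported time. Positive result = server is ahead.
    return server_ms - (t_send_ms + t_recv_ms) / 2.0


def fetch_offset(base_url, path="/clock-probe.json"):
    # Hypothetical helper: PUT the {".sv": "timestamp"} server-value
    # placeholder, which the database replaces with its own
    # epoch-milliseconds timestamp, then read the stored value back
    # from the PUT response body.
    body = json.dumps({".sv": "timestamp"}).encode()
    req = urllib.request.Request(base_url + path, data=body, method="PUT")
    t_send = time.time() * 1000
    with urllib.request.urlopen(req) as resp:
        server_ms = json.loads(resp.read())
    t_recv = time.time() * 1000
    return estimate_server_offset(t_send, t_recv, server_ms)
```

Note the write leaves data behind at the probe path, so a real implementation would want to clean it up or reuse a dedicated location.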
I've been trying to create an app which makes some requests to the Wizzair API, and found there is an endpoint at /Api/search/search. While searching for flights in the browser, this endpoint returns a list of flights matching the provided parameters as a JSON response. When I access the same endpoint from Postman, copying the same headers and body as the request, I get a 428 response. That seems kind of odd, since the headers and body are exactly the same as the ones in the Network tab of the developer tools.
Here's a reference URL: https://wizzair.com/#/booking/select-flight/LTN/VIE/2022-07-23/2022-08-05/1/0/0/null
The added headers are:
Host: be.wizzair.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:101.0) Gecko/20100101 Firefox/101.0
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: https://wizzair.com/
Content-Type: application/json;charset=utf-8
X-RequestVerificationToken: <token>
Content-Length: 254
Origin: https://wizzair.com
Connection: keep-alive
Cookie: <some_cookies>
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-site
TE: trailers
And the body is added as raw json:
{"isFlightChange":false,"flightList":[{"departureStation":"LTN","arrivalStation":"VIE","departureDate":"2022-07-24"},{"departureStation":"VIE","arrivalStation":"LTN","departureDate":"2022-08-05"}],"adultCount":1,"childCount":0,"infantCount":0,"wdc":true}
The response from postman is:
{"sec-cp-challenge": "true","provider":"crypto","branding_url_content":"/_sec/cp_challenge/crypto_message-3-7.htm","chlg_duration":30}
Could anyone explain why there is different behavior in the browser vs Postman for the exact same request, and, if possible, how to replicate the proper response in Postman?
Don't know if it is still relevant, but this one
{"sec-cp-challenge": "true","provider":"crypto","branding_url_content":"/_sec/cp_challenge/crypto_message-3-7.htm","chlg_duration":30}
is the fingerprint of Akamai bot protection. AFAIK it uses JavaScript to tell a real browser from scripted requests. It stores the result in cookies, obfuscating it in every possible way. The good thing is that you can copy the cookies from your browser session, and that way get several requests with meaningful results. After that, Akamai starts changing the cookies again, and you'll have to start all over.
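The cookie-copying approach above can be sketched as follows. This is a minimal illustration, not a bypass: the cookie string and token are hypothetical placeholders you would copy from a real browser session's request in the Network tab, and they stop working once Akamai rotates them.

```python
import urllib.request

# Hypothetical values: copy these from a live browser session's
# request headers in the Network tab. They expire when Akamai
# rotates the session cookies.
COOKIES = "ak_bmsc=<copied>; bm_sv=<copied>"
TOKEN = "<copied X-RequestVerificationToken>"


def build_search_request(url, payload):
    # Replaying the browser's session cookies is what lets a scripted
    # request pass the bot check for a while; the JS challenge result
    # lives in those cookies, not in the request body.
    req = urllib.request.Request(url, data=payload, method="POST")
    req.add_header("Content-Type", "application/json;charset=utf-8")
    req.add_header("Cookie", COOKIES)
    req.add_header("X-RequestVerificationToken", TOKEN)
    return req
```

Once the sec-cp-challenge response reappears, the copied cookies have been invalidated and must be refreshed from the browser again.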
I really want some validation, after reading many websites on CORS, to see if I have this right.
OPTIONS /frog/LOTS/upload/php.php HTTP/1.1
Host: staff.curriculum.local
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:14.0) Gecko/20100101 Firefox/14.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Origin: http://frogserver.curriculum.local
Access-Control-Request-Method: POST
Access-Control-Request-Headers: cache-control,x-requested-with
Pragma: no-cache
Cache-Control: no-cache
Here are the cases where I think I should respond with 403:
1. If the origin configured on the server is not * and not that domain, I respond with 403.
2. If I do not support POST (I may support GET), I respond with 403.
3. If I do not support any one of the request headers, I respond with 403.
For #1, if the domain is not supported, I will NOT send any Access-Control headers in the response. I think that is OK.
For #2 and #3, I would send these headers, assuming the Origin request header was a match:
Access-Control-Allow-Origin: http://frogserver.curriculum.local
Access-Control-Allow-Credentials: {exists and is true IF supported on server}
Access-Control-Allow-Headers: {all request headers we support, regardless of the incoming Access-Control-Request-Headers in the request}
Access-Control-Allow-Methods: {all methods we support, regardless of the incoming POST method that came in}
Access-Control-Expose-Headers: {all headers we allow browser origin website to read}
Is my assumption correct here? Or are some of the response headers tied to the request headers in some way I am not seeing? (The names are similar, but I don't think the behavior is tied together.)
I would find it really odd that the request even needs to contain Access-Control-Request-Method and Access-Control-Request-Headers if we did not send back a 403 in the cases where we don't support all the requested information on that endpoint, right? This is why I 'suspect' we are supposed to return a 403 along with what we do support.
thanks,
Dean
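The rules described in the question can be sketched as a tiny preflight handler. A hedged note on the 403 assumption: in practice, servers commonly return 2xx with their supported lists and let the *browser* compare them against Access-Control-Request-Method/-Headers and fail the request client-side, so responding 403 is a server's choice, not a requirement. The ALLOWED_* values below are illustrative.

```python
# Illustrative configuration, not tied to any real server.
ALLOWED_ORIGINS = {"http://frogserver.curriculum.local"}
ALLOWED_METHODS = {"GET", "POST"}
ALLOWED_HEADERS = {"cache-control", "x-requested-with", "content-type"}


def handle_preflight(origin, req_method, req_headers):
    """Return (status, headers) for an OPTIONS preflight request.

    Implements the three 403 rules from the question; a server could
    equally return 204 with its supported lists in all cases and let
    the browser enforce the mismatch.
    """
    if origin not in ALLOWED_ORIGINS:
        # Rule 1: unknown origin -> no CORS headers at all.
        return 403, {}
    if req_method not in ALLOWED_METHODS:
        # Rule 2: method not supported on this endpoint.
        return 403, {}
    if not set(h.lower() for h in req_headers) <= ALLOWED_HEADERS:
        # Rule 3: at least one requested header is unsupported.
        return 403, {}
    # Advertise everything we support, independent of what was asked.
    return 204, {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ", ".join(sorted(ALLOWED_METHODS)),
        "Access-Control-Allow-Headers": ", ".join(sorted(ALLOWED_HEADERS)),
    }
```

Note that header-name comparison is case-insensitive, which is why the requested headers are lowercased before the subset check.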
I've created a send pipeline with only a custom pipeline component, which creates a MIME message to POST to a REST API that requires multipart/form-data. It works, but fails on every 2nd invocation, alternating between success and failure. When it fails, the boundary I've written to the header appears to be overwritten by the WCF-WebHttp adapter with the boundary of the previously successful message.
I've made sure that I'm writing the correct boundary to the header.
Any streams I've used in the pipeline component have been added to the pipeline resource manager.
If I restart the host instance after the first successful message, the next message will be successful.
Waiting 10 minutes between processing each message has no change in the observed behaviour.
If I send a different file through when the failure is expected to occur, the Content-Length header is still the same as for the previous file. This suggests the header used is exactly the same as in the previous invocation.
The standard BizTalk MIME component doesn't write the boundary to the header, so it doesn't offer any clue.
Success
POST http://somehost/Record HTTP/1.1
Content-Type: multipart/form-data; boundary="9ccdeb0a-c407-490c-9cce-c5e3be639785"
Host: somehost
Content-Length: 11989
Expect: 100-continue
Accept-Encoding: gzip, deflate
--9ccdeb0a-c407-490c-9cce-c5e3be639785
Content-Type: text/plain; charset=utf-8
Content-Disposition: form-data; name=uri
6442
--9ccdeb0a-c407-490c-9cce-c5e3be639785
Fail: boundary in header not same as in payload
POST http://somehost/Record HTTP/1.1
Content-Type: multipart/form-data; boundary="9ccdeb0a-c407-490c-9cce-c5e3be639785"
Host: somehost
Content-Length: 11989
Expect: 100-continue
Accept-Encoding: gzip, deflate
--3fe3e969-8a41-451c-aae7-8458aee0c9f4
Content-Type: text/plain; charset=utf-8
Content-Disposition: form-data; name=uri
6442
--3fe3e969-8a41-451c-aae7-8458aee0c9f4
Content-Disposition: form-data; name=Files; filename=testdoc.docx; filename*=utf-8''testdoc.docx
My problem will be fixed if I can get the header to use the correct boundary. Any suggestions?
I'm more surprised you actually had some success with this approach. The thing is, the headers aren't officially message properties but port properties, and ports cache their settings. You have to make it a dynamic send port for it to work properly. Another way is setting the headers in a custom behavior, but I don't think that suits your scenario.
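Language aside (BizTalk components are .NET), the invariant being violated is easy to state: the boundary in the Content-Type header and the boundary delimiting the body parts must come from the same value, generated fresh per message. The traces above show a cached header boundary paired with a new body boundary. A minimal sketch of keeping them in sync, with illustrative field names:

```python
import uuid


def build_multipart(fields):
    # Generate one fresh boundary per message and use the *same* value
    # in both the Content-Type header and the body. The failure mode
    # above is exactly these two getting out of sync when the send
    # port caches the previous message's header.
    boundary = str(uuid.uuid4())
    lines = []
    for name, value in fields.items():
        lines.append(f"--{boundary}")
        lines.append("Content-Type: text/plain; charset=utf-8")
        lines.append(f"Content-Disposition: form-data; name={name}")
        lines.append("")  # blank line separates part headers from body
        lines.append(value)
    lines.append(f"--{boundary}--")  # closing delimiter
    body = "\r\n".join(lines)
    headers = {"Content-Type": f'multipart/form-data; boundary="{boundary}"'}
    return headers, body
```

In the BizTalk case the body side is correct; it is the header side that has to be forced to refresh, hence the dynamic send port suggestion.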
I am currently working on a DLNA / UPnP media server, and while most of it works fine, I've got some trouble with the following SOAPAction requests:
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 258
Content-Type: text/xml
SOAPAction: "#GetConnectionTypeInfo"
Connection: Close
and
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 250
Content-Type: text/xml
SOAPAction: "#GetStatusInfo"
Connection: Close
and
POST /upnp/connection_manager HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 308
Content-Type: text/xml
SOAPAction: "urn:schemas-upnp-org:service:ConnectionManager:1#GetCommonLinkProperties"
Connection: Close
and
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 257
Content-Type: text/xml
SOAPAction: "#GetExternalIPAddress"
Connection: Close
Last but not least:
POST / HTTP/1.1
HOST: 192.168.1.110:5001
Content-length: 337
Content-Type: text/xml
SOAPAction: "#GetGenericPortMappingEntry"
Connection: Close
I didn't post the bodies of these requests because the formatting isn't the problem; I don't know how to respond to these requests and can't really find anything helpful. To be precise, it's not how to respond that makes me wonder, but the content I should provide.
So it would be really nice if someone could explain what these requests are for, what a response could look like, and/or where I can get more information (including examples) on these.
Ancient but still unanswered question, so let's try the basics:
where i can get some more information (including examples) on these.
Have you read the official UPnP specification? That website has everything you need, especially the full specification PDF and some great tutorials.
To be precise it's not the way on how to respond that makes me wonder but the Content i should provide
Some of those SOAP actions, particularly GetExternalIPAddress and GetGenericPortMappingEntry, are meant for Internet Gateway Devices, i.e. routers and such, not media servers.
I wonder why you are receiving such requests. How are you advertising your device via SSDP? Which services are you listing in your root descriptor XML? Those actions belong to the WANIPConnection service, one I doubt a media server wants to implement.
So, before ignoring such requests, you should really investigate why you're receiving them in the first place. Likely something is wrong in your SSDP reply.
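If, after fixing the SSDP advertisement, some unsupported actions still arrive, the UPnP Device Architecture spec defines a standard way to reject them: an HTTP 500 response carrying a SOAP fault with UPnPError code 401 ("Invalid Action"). A minimal sketch of building that fault body:

```python
def upnp_invalid_action_fault():
    # Minimal SOAP fault a device can return for actions it does not
    # implement (e.g. WANIPConnection actions hitting a media server).
    # errorCode 401 / "Invalid Action" comes from the UPnP Device
    # Architecture spec; serve this body with HTTP status 500.
    return (
        '<?xml version="1.0"?>\n'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"\n'
        '            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">\n'
        '  <s:Body>\n'
        '    <s:Fault>\n'
        '      <faultcode>s:Client</faultcode>\n'
        '      <faultstring>UPnPError</faultstring>\n'
        '      <detail>\n'
        '        <UPnPError xmlns="urn:schemas-upnp-org:control-1-0">\n'
        '          <errorCode>401</errorCode>\n'
        '          <errorDescription>Invalid Action</errorDescription>\n'
        '        </UPnPError>\n'
        '      </detail>\n'
        '    </s:Fault>\n'
        '  </s:Body>\n'
        '</s:Envelope>\n'
    )
```

A successful action response, by contrast, wraps an `<u:ActionNameResponse>` element with the action's out arguments; the full templates are in the specification PDF mentioned above.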
Solved: pasting the bytes here made me realise that I was missing empty lines between chunks...
Does an HTTP/1.1 request need to specify a Connection: keep-alive header, or is it always keep-alive by default?
This guide made me think it would be: that when my HTTP server gets a 1.1 request, the connection is keep-alive unless it explicitly receives a Connection: close header.
I ask since the different client behaviour of ab and httperf is driving me mad enough to question my sanity on this one...
Here's what httperf --hog --port 42042 --print-reply body sends:
GET / HTTP/1.1
User-Agent: httperf/0.9.0
Host: localhost
And here's my server's response:
HTTP/1.1 200 OK
Connection: keep-alive
Transfer-Encoding: chunked
Content-Length: 18
12
Hello World 1
0
httperf promptly prints out the response, but then just sits there, neither side closing the connection and httperf not exiting.
Where's my bug?
From RFC 2616, section 8.1.2:
A significant difference between HTTP/1.1 and earlier versions of HTTP is that persistent connections are the default behavior of any HTTP connection. That is, unless otherwise indicated, the client SHOULD assume that the server will maintain a persistent connection, even after error responses from the server.