When sending an HTTP request with a Range header to Magnolia, I get a response with Content-Length: 0:
curl -I -X GET \
http://localhost:8080/ \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Cache-Control: no-cache' \
-H 'Range: bytes=0-2000'
HTTP/1.1 206
Set-Cookie: SID=C36D961EC92D152724BBCD0C34EC6536; Path=/; HttpOnly
X-Magnolia-Registration: Registered
Accept-Ranges: bytes
Cache-Control: no-cache, no-store, must-revalidate, max-age=0
ETag: 8B4901E7DD862E5E74287A0F538DCDDFEB78DE77
Content-Range: bytes 0-2000/23529
Content-Encoding: gzip
Vary: Accept-Encoding
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Last-Modified: Thu, 19 Dec 2019 08:52:49 GMT
Content-Type: text/html;charset=UTF-8
Content-Length: 0
Date: Thu, 19 Dec 2019 08:52:49 GMT
However, when I disable the Magnolia Cache Module I get the expected response:
/server/filters/cache -> enabled: false
curl -I -X GET \
http://localhost:8080/ \
-H 'Accept-Encoding: gzip, deflate' \
-H 'Cache-Control: no-cache' \
-H 'Range: bytes=0-2000'
HTTP/1.1 206
Set-Cookie: SID=FF557EC1F0653E5CBD81A57D599091AE; Path=/; HttpOnly
X-Magnolia-Registration: Registered
Accept-Ranges: bytes
ETag: 2A9DE4F4B2ACDDE22BAC3C07784CD65693574B67
Content-Range: bytes 0-2000/2147483647
Content-Type: text/html;charset=UTF-8
Content-Length: 2001
Date: Thu, 19 Dec 2019 08:51:49 GMT
I'm also running into the problem that the Facebook crawler isn't able to detect any Open Graph meta tags when it crawls my website. I think the reason is the problem described above, since the Facebook crawler sends exactly this kind of range request.
My Open Graph tags are set correctly (they work with opengraphcheck and the Twitter Card Validator).
I'm using Magnolia 5.7.1.
The simplest workaround is to configure a request header voter that bypasses the cache when a Range header is present.
See RequestHeaderPatternSimpleVoter and/or RequestHeaderPatternRegexVoter for details on how to set it up, but I would still consider this a workaround rather than a final solution.
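For reference, a minimal sketch of such a bypass, placed under the same cache filter the question toggles. This is untested: the node name rangeRequests is arbitrary, and the headerName/pattern property names are assumptions, so verify them against the voter Javadoc for your Magnolia version:
/server/filters/cache/bypasses/rangeRequests
    class: info.magnolia.voting.voters.RequestHeaderPatternRegexVoter
    headerName: Range
    pattern: .*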
It seems odd that such a thing should be happening. Could you replicate it against e.g. https://demo.magnolia-cms.com?
I have Icecast 2.4.4 running on a Windows box at sub.domain.org. My website is on a different server at domain.org.
When I SSH into my Linux host's shell and run curl against the mount point I get a 400 response, but if I use wget I get a 200 response. How can that be?
# wget https://sub.domain.org/live.mp3
--2018-12-19 17:52:58--  https://sub.domain.org/live.mp3
Resolving sub.domain.org... 111.111.111.111
Connecting to sub.domain.org|111.111.111.111|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [audio/mpeg]
Saving to: `live.mp3'
    [ <=> ] 96,600      3.93K/s    ^C
# curl --head https://sub.domain.org/live.mp3
HTTP/1.0 400 Bad Request
Server: Icecast 2.4.4
Connection: Close
Date: Thu, 20 Dec 2018 00:53:32 GMT
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache, no-store
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Access-Control-Allow-Origin: *
Because in the case of cURL you are passing the --head parameter. This tells cURL to make an HTTP HEAD request instead of the HTTP GET request that wget performs.
Icecast does not support HTTP HEAD requests, so the HTTP 400 response is fully justified.
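If you just want to inspect the headers, you can have curl perform a regular GET (which Icecast accepts) and print the response headers while discarding the body. A small sketch against the same URL; --max-time is needed because the mount streams indefinitely:
# GET instead of HEAD: -D - prints the response headers to stdout,
# -o /dev/null discards the audio body, --max-time 3 aborts the endless stream.
curl -s -D - -o /dev/null --max-time 3 https://sub.domain.org/live.mp3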
In particular, I have the following response headers from an nginx server:
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Tue, 20 Mar 2018 10:28:24 GMT
Content-Type: text/html
Last-Modified: Thu, 28 Jan 2016 10:50:21 GMT
Connection: keep-alive
ETag: W/"56a9f26d-2d97"
Content-Encoding: gzip
Followed by some 3352 bytes of compressed data. I'm trying to find out how the client knows where the body of this message ends and a new response begins.
It doesn't. The response requires either a Content-Length response header field, or it has to use "Transfer-Encoding: chunked". Without one of the two, the only way to mark the end of the body is to close the connection, which contradicts the Connection: keep-alive header shown above.
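If you want to see the chunked framing on the wire, curl's --raw option disables transfer decoding, so the chunk-size lines stay visible in the output. A sketch; the httpbin.org endpoint is merely an example of a server that (at the time of writing) responds with Transfer-Encoding: chunked:
# --raw shows the raw transfer: each chunk is preceded by its size in hex,
# and a final zero-length chunk marks the end of the body.
curl -s --raw https://httpbin.org/stream/2 | head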
Options I used:
-I, --head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature
the command HEAD which this uses to get nothing but the header
of a document. When used on an FTP or FILE file, curl displays
the file size and last modification time only.
-L, --location
(HTTP/HTTPS) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages will be shown. When authentication is used, curl only sends its credentials to the initial host. If a redirect takes curl to a different host, it won't be able to intercept the user+password. See also --location-trusted on how to change this. You can limit the amount of redirects to follow by using the --max-redirs option.
When curl follows a redirect and the request is not a plain GET (for example POST or PUT), it will do the following request with a GET if the HTTP response was 301, 302, or 303. If the response code was any other 3xx code, curl will re-send the following request using the same unmodified method.
You can tell curl to not change the non-GET request method to GET after a 30x response by using the dedicated options for that: --post301, --post302 and --post303.
-v, --verbose
Be more verbose/talkative during the operation. Useful for debugging and seeing what's going on
"under the hood". A line starting with '>' means "header data" sent by curl, '<' means "header data"
received by curl that is hidden in normal cases, and a line starting with '*' means additional info
provided by curl.
Note that if you only want HTTP headers in the output, -i, --include might be the option you're
looking for.
If you think this option still doesn't give you enough details, consider using --trace or --trace-ascii instead.
This option overrides previous uses of --trace-ascii or --trace.
Use -s, --silent to make curl quiet.
Below is the output that I'm wondering about. In the response containing the redirect (301), all the headers are displayed twice, but only one copy of each has the < in front of it. How am I supposed to interpret that?
$ curl -ILv http://www.mail.com
* Rebuilt URL to: http://www.mail.com/
* Trying 74.208.122.4...
* Connected to www.mail.com (74.208.122.4) port 80 (#0)
> HEAD / HTTP/1.1
> Host: www.mail.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
< Date: Sun, 28 May 2017 22:02:16 GMT
Date: Sun, 28 May 2017 22:02:16 GMT
< Server: Apache
Server: Apache
< Location: https://www.mail.com/
Location: https://www.mail.com/
< Vary: Accept-Encoding
Vary: Accept-Encoding
< Connection: close
Connection: close
< Content-Type: text/html; charset=iso-8859-1
Content-Type: text/html; charset=iso-8859-1
<
* Closing connection 0
* Issue another request to this URL: 'https://www.mail.com/'
* Trying 74.208.122.4...
* Connected to www.mail.com (74.208.122.4) port 443 (#1)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* Server certificate: *.mail.com
* Server certificate: thawte SSL CA - G2
* Server certificate: thawte Primary Root CA
> HEAD / HTTP/1.1
> Host: www.mail.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Date: Sun, 28 May 2017 22:02:16 GMT
Date: Sun, 28 May 2017 22:02:16 GMT
< Server: Apache
Server: Apache
< Vary: X-Forwarded-Proto,Host,Accept-Encoding
Vary: X-Forwarded-Proto,Host,Accept-Encoding
< Set-Cookie: cookieKID=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
Set-Cookie: cookieKID=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
< Set-Cookie: cookiePartner=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
Set-Cookie: cookiePartner=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
< Cache-Control: no-cache, no-store, must-revalidate
Cache-Control: no-cache, no-store, must-revalidate
< Pragma: no-cache
Pragma: no-cache
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Set-Cookie: JSESSIONID=F0BEF03C92839D69057FFB57C7FAA789; Path=/mailcom-webapp/; HttpOnly
Set-Cookie: JSESSIONID=F0BEF03C92839D69057FFB57C7FAA789; Path=/mailcom-webapp/; HttpOnly
< Content-Language: en-US
Content-Language: en-US
< Content-Length: 85237
Content-Length: 85237
< Connection: close
Connection: close
< Content-Type: text/html;charset=UTF-8
Content-Type: text/html;charset=UTF-8
<
* Closing connection 1
Best guess: with -v you tell curl to be verbose, and that debug output goes to stderr. With -I you tell curl to dump the received headers to stdout. Your terminal, by default, shows stdout and stderr interleaved. Separate stdout and stderr and you'll avoid the confusion:
curl -ILv http://www.mail.com >stdout.log 2>stderr.log ; cat stdout.log
Use:
curl -ILv http://www.mail.com 2>&1 | grep '^[<>\*].*$'
When cURL is called with the verbose command-line flag, it sends the verbose output to stderr instead of stdout. The above command redirects stderr to stdout (2>&1), then pipes the combined output to grep and uses the regex to return only the lines that begin with *, <, or >. All of the other lines in the output (including the duplicates you were concerned about) are removed.
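And if you only want the response headers shown once, drop -v altogether; -I already prints the received headers to stdout, and -L keeps following the redirect:
curl -IL http://www.mail.com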
I have a question regarding the Nexus RUT (Remote User Token) capability. After setting it up, having the HTTP header read, adding the user name to security.xml, and mapping a role to this user, I am able to log in to the Sonatype Nexus GUI.
The question is: how can I authenticate when deploying an artifact to a Nexus repository using, let's say, Maven?
curl -I http://localhost:8080/nexus/service/local/status
returns
HTTP/1.1 401 Unauthorized
Date: Thu, 11 Jun 2015 10:41:59 GMT
Server: Nexus/2.11.3-01
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
WWW-Authenticate: BASIC realm="Sonatype Nexus Repository Manager API"
Content-Length: 0
but
curl -I -H "X-Forwarded-User: admin" http://localhost:8080/nexus/content/
returns
HTTP/1.1 200 OK
Date: Thu, 11 Jun 2015 10:44:35 GMT
Server: Nexus/2.11.3-01
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Accept-Ranges: bytes
Last-Modified: Thu, 11 Jun 2015 10:44:35 GMT
Content-Length: 0
Using credentials in the Maven project and trying to deploy the site following this tutorial, I am getting this error:
Uploading: .//project-summary.html to https://my.site.com/nexus/content/sites/site/
[WARNING] Required credentials not available for BASIC <any realm>#federation-sts.site.com:443
[WARNING] Preemptive authentication requested but no default credentials available
https://federation-sts.site.com/adfs/ls/?SAMLRequest=fZJdT4Mw........... - Status code: 200
Transfer finished. 5671 bytes copied in 0.404 seconds
Transfer error: java.io.IOException: Unable to create collection: https://my.site.com/nexus/; status code = 302
I would really appreciate your help, thanks!
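In case it helps: Maven's HTTP wagon can attach custom headers per server in settings.xml (see the Maven mini-guide on HTTP settings), which would let the deploy send the same X-Forwarded-User header your curl test uses. A sketch only; the server id nexus-site is an assumption and must match the <id> referenced by your distributionManagement:
<settings>
  <servers>
    <server>
      <!-- assumption: must match the site's <id> in distributionManagement -->
      <id>nexus-site</id>
      <configuration>
        <httpHeaders>
          <property>
            <!-- the header your RUT capability is configured to read -->
            <name>X-Forwarded-User</name>
            <value>admin</value>
          </property>
        </httpHeaders>
      </configuration>
    </server>
  </servers>
</settings>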