Nexus Remote User Token authorization when deploying

I have a question regarding the Nexus RUT (Remote User Token) capability. After setting it up, having the HTTP header read, adding the user name to security.xml, and mapping a role to this user, I am able to authenticate in the Sonatype Nexus GUI.
The question is: how can I authorize when trying to deploy an artifact to a Nexus repository using, let's say, Maven?
curl -I http://localhost:8080/nexus/service/local/status
returns
HTTP/1.1 401 Unauthorized
Date: Thu, 11 Jun 2015 10:41:59 GMT
Server: Nexus/2.11.3-01
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
WWW-Authenticate: BASIC realm="Sonatype Nexus Repository Manager API"
Content-Length: 0
but
curl -I -H "X-Forwarded-User: admin" http://localhost:8080/nexus/content/
returns
HTTP/1.1 200 OK
Date: Thu, 11 Jun 2015 10:44:35 GMT
Server: Nexus/2.11.3-01
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Accept-Ranges: bytes
Last-Modified: Thu, 11 Jun 2015 10:44:35 GMT
Content-Length: 0
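For completeness, here is a diagnostic sketch combining the two (the same RUT header against the REST endpoint), to check whether the API realm honors the forwarded user as well:
# Does the REST API accept the remote-user header? (untested sketch)
curl -I -H "X-Forwarded-User: admin" http://localhost:8080/nexus/service/local/status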
Using credentials in the Maven project and trying to deploy the site following this tutorial, I am getting this error:
Uploading: .//project-summary.html to https://my.site.com/nexus/content/sites/site/
[WARNING] Required credentials not available for BASIC <any realm>#federation-sts.site.com:443
[WARNING] Preemptive authentication requested but no default credentials available
https://federation-sts.site.com/adfs/ls/?SAMLRequest=fZJdT4Mw........... - Status code: 200
Transfer finished. 5671 bytes copied in 0.404 seconds
Transfer error: java.io.IOException: Unable to create collection: https://my.site.com/nexus/; status code = 302
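For reference, this is the kind of plain Basic-auth upload I would expect to work (an untested sketch; the credentials are placeholders and the target path is taken from the log above):
# Hypothetical direct upload with Basic auth, bypassing the SAML redirect
curl -u admin:admin123 --upload-file target/site/project-summary.html https://my.site.com/nexus/content/sites/site/project-summary.html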
I would really appreciate your help, thanks!

Related

Does Firebase support HTTP HEAD requests?

My app has files on Firebase Storage that I need to serve up to another service. The files are publicly accessible, but this service likes to make a HEAD request first, which is denied by Firebase (error 400).
Can this be configured somehow? I believe that Google Cloud Storage supports this.
eg: file get is ok:
$ curl https://firebasestorage.googleapis.com/v0/b/test-451f9.appspot.com/o/temp%2Fhello.txt?alt=media -o -
Hello
but the HEAD request:
$ curl --head https://firebasestorage.googleapis.com/v0/b/test-451f9.appspot.com/o/temp%2Fhello.txt?alt=media -o -
HTTP/2 400
x-guploader-uploadid: AEnB2UqWsCbhq_AKpXh29El8_aiJnZqDEUeGsn2i1j0ZPQie0-OB2AQjnKqi_ya50hIw7Yb4WmlKV19ilYQBk9KGdndj4oX9oQ
x-content-type-options: nosniff
content-type: application/json; charset=UTF-8
access-control-expose-headers: Content-Range, X-Firebase-Storage-XSRF
access-control-allow-origin: *
date: Mon, 10 Dec 2018 17:20:52 GMT
expires: Mon, 10 Dec 2018 17:20:52 GMT
cache-control: private, max-age=0
server: UploadServer
alt-svc: quic=":443"; ma=2592000; v="44,43,39,35"
fails.
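If the consuming service is under your control, one client-side workaround is to issue a GET but only read the headers, discarding the body (a sketch; -D - dumps the response headers to stdout):
# GET instead of HEAD: print the response headers, throw the body away
$ curl -sS -D - -o /dev/null https://firebasestorage.googleapis.com/v0/b/test-451f9.appspot.com/o/temp%2Fhello.txt?alt=media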

I can't get a 200 response from the curl command

I have a web application running in my Tomcat server. When I type the URL in the browser, the app works fine.
But when I do it with the curl command:
curl -IL http://localhost:8090/mysite
I get the following :
HTTP/1.1 405 Method Not Allowed
Server: Apache-Coyote/1.1
Allow: GET
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 1047
Date: Sun, 20 Nov 2016 11:41:27 GMT
What am I missing?
Try without "-I" - maybe the server doesn't support HEAD (which would be a severe bug).
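For example, the same request as a GET that still prints the response headers (a sketch of the suggestion above):
# -i includes the response headers in the output; no HEAD request is sent
curl -iL http://localhost:8090/mysite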

RAdwords Error in 1:ncol(data) : argument of length 0

Using RAdwords, I can connect and get a token, list reports, and list metrics. But when I try to pull data, I get the error below the code. Thanks for the help!
body <- statement(select=c('KeywordText','Clicks','Cost','Ctr'),
                  report="KEYWORDS_PERFORMANCE_REPORT",
                  where="Clicks > 100",
                  start="20150101",
                  end="20150301")
data <- getData(clientCustomerId='949-xxx-xxxx', google_auth=google_auth,
                statement=body, transformation = TRUE)
* upload completely sent off: 130 out of 130 bytes
HTTP/1.1 400 Bad Request
Content-Type: text/xml
Date: Fri, 06 Mar 2015 18:28:30 GMT
Expires: Fri, 06 Mar 2015 18:28:30 GMT
Cache-Control: private, max-age=0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
Accept-Ranges: none
Vary: Accept-Encoding
Transfer-Encoding: chunked
Connection #0 to host adwords.google.com left intact
Error in 1:ncol(data) : argument of length 0
Update:
There is an update of the package on my GitHub repository which might solve your problem:
https://github.com/jburkhardt/RAdwords
Could you please reinstall the package from GitHub and see if your bug gets fixed?
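A one-liner from the shell, assuming the devtools package is already installed:
# Reinstall RAdwords straight from the GitHub repository
Rscript -e 'devtools::install_github("jburkhardt/RAdwords")'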
Austin, referring to the R output you sent me by mail, your problem is:
> data <- getData(clientCustomerId='xxx-xxx-xxxx', google_auth=google_auth, statement=body)
* Hostname was NOT found in DNS cache
Please make sure that you use the AdWords account ID. The MCC ID will not work!
This is not an error or bug in the RAdwords package per se. The problem rather has something to do with the curl settings on your system.
See here for similar problems:
Curl Hostname was NOT found in DNS cache error
https://bbs.archlinux.org/viewtopic.php?id=175433

Nagios check_http gives 'HTTP/1.0 503 Service Unavailable' for HAProxy site

Can't figure this one out!
OS: CentOS 6.6 (Up-To-Date)
I get the following 503 error when using my nagios check_http check (or curl) to query an SSL site served via HAProxy 1.5.
[root@nagios ~]# /usr/local/nagios/libexec/check_http -v -H example.com -S1
GET / HTTP/1.1
User-Agent: check_http/v2.0 (nagios-plugins 2.0)
Connection: close
Host: example.com
https://example.com:443/ is 212 characters
STATUS: HTTP/1.0 503 Service Unavailable
**** HEADER ****
Cache-Control: no-cache
Connection: close
Content-Type: text/html
**** CONTENT ****
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
HTTP CRITICAL: HTTP/1.0 503 Service Unavailable - 212 bytes in 1.076 second response time |time=1.075766s;;;0.000000 size=212B;;;0
[root@nagios ~]# curl -I https://example.com
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
However, I can access the site fine via any browser (200 OK), and also with curl -I https://example.com from another server:
root@localhost:~# curl -I https://example.com
HTTP/1.1 200 OK
Date: Wed, 18 Feb 2015 14:36:51 GMT
Server: Apache/2.4.6
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Last-Modified: Wed, 18 Feb 2015 14:36:52 GMT
Content-Type: text/html; charset=UTF-8
Strict-Transport-Security: max-age=31536000;
The HAProxy server is running on pfSense 2.2.
I see that HAProxy returns HTTP/1.0 for Nagios and HTTP/1.1 from elsewhere. So is it my check_http plugin causing this, or is it curl?
Is my server just not sending the Host header? If so, how can I resolve this?
What check_http does is check whether an index.html file exists on the server. This means you might have HTTP running and working while the check still fails.
Regardless of whether creating an index.html file on the server resolves the issue, you might not want to engineer things just so that the check passes.
I suppose setting up a ping check for example.com, plus a check via NRPE to see whether your HTTP service is running, would meet your requirements.
check_http has an option called --sni; you need to use that option.
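For example, a sketch based on the check from the question (--sni makes the plugin send the hostname during the TLS handshake, so HAProxy can match it to the right backend):
/usr/local/nagios/libexec/check_http -v -H example.com -S --sni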

How do I prevent GAE from ungzipping a gzipped xml feed?

I have a script on GAE that requests an XML feed from a partner that's typically 40 MB uncompressed but only 5 MB gzipped. GAE is automatically unzipping this content and throwing an error that the response is too big:
HTTP response was too large: 46677241. The limit is: 33554432.
The script is set up to uncompress the response itself. How do I prevent GAE from getting in the way and breaking this?
Here's the response header from my partner:
HTTP/1.0 200 OK
Expires: Wed, 27 Jun 2012 05:42:07 GMT
Cache-Control: max-age=10368000
Content-Type: application/x-gzip
Accept-Ranges: bytes
Last-Modified: Wed, 22 Feb 2012 11:06:09 GMT
Content-Length: 5263323
Date: Tue, 28 Feb 2012 05:42:07 GMT
Server: lighttpd
X-Cache: MISS from static01
X-Cache-Lookup: MISS from static01:80
Via: 1.0 static01:80 (squid)
Most likely your partner's server responds with plain XML because it thinks that the HTTP client sending the requests (i.e. the GAE URL Fetch service) does not support gzip. Hence the "response was too large" error.
To announce that you actually want to receive gzipped content, you need to set the Accept-Encoding: gzip header when using the URL Fetch service.
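Before changing the script, you can verify from the command line that the partner honors the header (a sketch; the feed URL is a placeholder):
# Ask for gzip explicitly and inspect Content-Encoding / Content-Length
curl -sI -H 'Accept-Encoding: gzip' https://partner.example.com/feed.xml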
