R and SSL/curl on Ubuntu Linux: failed SSL connect in R, but works in curl

I'm a bit at a loss on how to further investigate this, so pointers would be highly appreciated.
I'm running Ubuntu 17.04, and I believe that roughly since my upgrade (I was running 16.10 before) I can no longer update packages (or use anything "from the internet") from within R -- it fails on SSL for everything. All of the "normal" SSL traffic outside of R works fine.
For instance, when running install.packages("curl"), I get these warnings:
Warning in install.packages :
URL 'https://cran.rstudio.com/src/contrib/PACKAGES.rds': status was 'SSL connect error'
Warning in install.packages :
URL 'https://cran.rstudio.com/src/contrib/PACKAGES.gz': status was 'SSL connect error'
Warning in install.packages :
URL 'https://cran.rstudio.com/src/contrib/PACKAGES': status was 'SSL connect error'
Warning in install.packages :
... [etc] ...
However, if I run curl -v "https://cran.rstudio.com/src/contrib/PACKAGES.rds" -o test.curl on the command line, everything works.
* Trying 10.26.0.19...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to (nil) (10.26.0.19) port 3128 (#0)
* Establish HTTP proxy tunnel to cran.rstudio.com:443
* Proxy auth using Basic with user '[redacted]'
> CONNECT cran.rstudio.com:443 HTTP/1.1
> Host: cran.rstudio.com:443
> Proxy-Authorization: Basic [redacted]
> User-Agent: curl/7.52.1
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
<
* Proxy replied OK to CONNECT request
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /home/csafferling/programs/anaconda3/ssl/cacert.pem
CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):} [5 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client hello (1):} [512 bytes data]
* TLSv1.2 (IN), TLS handshake, Server hello (2):{ [76 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):{ [4787 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):{ [333 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):} [70 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):} [16 bytes data]
* TLSv1.2 (IN), TLS change cipher, Client hello (1):{ [1 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: OU=Domain Control Validated; CN=cran.rstudio.com
* start date: Jun 30 19:59:41 2015 GMT
* expire date: Jun 30 19:59:41 2018 GMT
* subjectAltName: host "cran.rstudio.com" matched cert's "cran.rstudio.com"
* issuer: C=US; ST=Arizona; L=Scottsdale; O=GoDaddy.com, Inc.; OU=http://certs.godaddy.com/repository/; CN=Go Daddy Secure Certificate Authority - G2
* SSL certificate verify ok.} [5 bytes data]
> GET /src/contrib/PACKAGES.rds HTTP/1.1
> Host: cran.rstudio.com
> User-Agent: curl/7.52.1
> Accept: */*
> { [5 bytes data]
< HTTP/1.1 200 OK
< Content-Length: 251020
< Connection: keep-alive
< Date: Wed, 12 Jul 2017 14:11:48 GMT
< Server: Apache/2.2.22 (Ubuntu)
< Last-Modified: Wed, 12 Jul 2017 13:02:43 GMT
< ETag: "d78fc54-3d48c-5541e6e7d22c0"
< Accept-Ranges: bytes
< Cache-Control: max-age=1800
< Expires: Wed, 12 Jul 2017 14:41:48 GMT
< Age: 1045
< X-Cache: Hit from cloudfront
< Via: 1.1 67284fcf464f6f1529cc1e521669622c.cloudfront.net (CloudFront)
< X-Amz-Cf-Id: CqpfjeemEcxkxFYJueqzwUEu8Yh-qSenHJJiR2BdmqmAYLpu2_54dA==
< { [15891 bytes data]
* Curl_http_done: called premature == 0 100 245k 100 245k 0 0 583k 0 --:--:-- --:--:-- --:--:-- 589k
* Connection #0 to host (nil) left intact
One thing I notice is that command-line curl uses the CA bundle of my Anaconda install, which is very weird indeed. Perhaps R uses the default CAs, and those don't work? Like I said, only R fails to work with SSL; everything else works.
Any help is highly appreciated!

Dear Christoph Safferling,
My sense is that you have hit the CRAN https-by-default issue with RStudio / R.
Solution
Add the following to your target machine's .Rprofile:
options(download.file.method = "wget")
local({
  r <- getOption("repos")
  r["CRAN"] <- "https://cran.rstudio.com/"
  options(repos = r)
})
Explanation
Secure Download Methods
When R transfers files over HTTP (e.g. using the install.packages or download.file function) a download method is chosen based on the download.file.method option. There are several methods available and the default behavior if no option is explicitly specified is to use R’s internal HTTP implementation. In many circumstances this internal method will not support HTTPS connections so you’ll need to override the default.
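To confirm it is the download method rather than CRAN that is failing, you can reproduce the error outside of install.packages (a minimal sketch to run in a fresh R session):
# Show which method (if any) is currently configured; NULL means R's built-in default
getOption("download.file.method")
# Try the same URL with an explicit download; this should surface the same SSL error
download.file("https://cran.rstudio.com/src/contrib/PACKAGES", destfile = tempfile())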
R 3.2
R 3.2 includes two new download methods (“libcurl” and “wininet”) that both support HTTPS connections. We recommend that you use these new methods when running under R 3.2. The requisite code to add to .Rprofile or Rprofile.site is as follows:
Windows
options(download.file.method = "wininet")
Note that in the upcoming R 3.2.2 release this will no longer be necessary, as the default method is equivalent to “wininet”.
OS X and Linux
options(download.file.method = "libcurl")
Note that if you built R from source the “libcurl” method may or may not have been compiled in. In the case that it wasn’t (i.e. capabilities("libcurl") == FALSE), you can follow the directions for earlier versions of R below to configure an alternate secure method.
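If you want .Rprofile to cover both cases, a small sketch of that check (assuming R >= 3.2 and that wget is installed as a fallback):
# Use libcurl when the build supports it, otherwise fall back to the external wget binary
if (capabilities("libcurl")) {
  options(download.file.method = "libcurl")
} else {
  options(download.file.method = "wget")
}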
R 3.1 and Earlier
Windows
utils::setInternet2(TRUE)
options(download.file.method = "internal")
Note that setInternet2(TRUE) is the default in RStudio, but not in the R GUI. If you don't want to use setInternet2(TRUE) on Windows, then the only other way to configure secure downloads is to have the “wget” or “curl” utility on your PATH, as described for OS X and Linux below.
OS X
options(download.file.method = "curl")
Linux
options(download.file.method = "wget")
Note that the “curl” and “wget” methods will work on any platform so long as the requisite binary is in the system PATH. The recommendations above are based on the fact that “curl” is included in OS X and “wget” is included in most Linux distributions.
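You can verify from within R that the binary is actually on the PATH before relying on it:
# An empty string in the result means the corresponding binary was not found
Sys.which(c("curl", "wget"))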
ref: https://support.rstudio.com/hc/en-us/articles/206827897-Secure-Package-Downloads-for-R

Related

Converting lftp command to curl

I have a very old shell script running on a production machine I just got access to from my customer, and my job is to convert it to a curl equivalent. The script is pretty simple and all it does is download a file from a remote FTP server to the local filesystem:
lftp -u Username,'pass' xxx.xxx.xx.xx << !
echo 'Connected'
get dir/file.csv
exit
!
First of all - is it even possible to replace that with curl? It looks like a simple FTP fetching script to me, but I might not be aware of some nuances of downloading FTP files using curl.
Second of all, here is what I've tried so far, based on a dozen threads I've found on the internet; none of these worked:
curl ftp://Username:pass#xxx.xxx.xx.xx/dir/file.csv --ftp-ssl
#=> curl: (67) Access denied: 550
curl ftps://Username:pass#xxx.xxx.xx.xx/dir/file.csv --ftp-ssl
#=> curl: (67) Access denied: 550
curl -P - --insecure "ftp://xxx.xxx.xx.xx/dir/file.csv" --user "Username:pass" --ftp-ssl
#=> curl: (67) Access denied: 550
Edit: after adding -v, I realized that there was some issue with the certificate, so I added the --insecure flag; now it tells me the login is incorrect, while I'm 100% sure both the login and password are correct. Output:
* Trying xxx.xxx.xx.xx...
* TCP_NODELAY set
* Connected to xxx.xxx.xx.xx (xxx.xxx.xx.xx) port 21 (#0)
< 220 NASFTPD Turbo station 1.3.5a Server (ProFTPD)
> AUTH SSL
< 234 AUTH SSL successful
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
* subject: C=TW; ST=xxx; L=xxx; O=xxx, Inc.; OU=xxx; CN=xxx; emailAddress=xxx
* start date: Mar 11 10:45:27 2016 GMT
* expire date: Mar 9 10:45:27 2026 GMT
* issuer: C=TW; ST=xxx; L=Taipei; O=xxx, Inc.; OU=QTS; CN=xxx; emailAddress=xxx
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> USER Username
< 331 Password required for Username
> PASS s
< 530 Login incorrect.
* Access denied: 530
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, Client hello (1):
curl: (67) Access denied: 530
The issue was caused by the fact that the password contained a $ character, which made the shell mangle it and pass only the first letter of the password (the next character was the dollar sign). Wrapping the password in single quotation marks solved the issue for me.
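For example, a working invocation would look something like this (the password is made up, shown only to illustrate the quoting; the flags are the ones from the attempts above):
# Single quotes stop the shell from expanding the $ inside the password
curl --ftp-ssl --insecure --user 'Username:pa$sword' -o file.csv 'ftp://xxx.xxx.xx.xx/dir/file.csv'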

Not able to curl an application running on AWS EC2 via https, but http works

I have an EC2 instance whose public DNS (IPv4) is ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com. I run a Java application on it, and I can fetch data via curl over http from my local laptop:
$ curl http://ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com:8080/users/1/items/2759 -verbose
* Trying xx.xxx.xx.xx:8080...
* TCP_NODELAY set
* Connected to ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com (xx.xxx.xx.xx) port 8080 (#0)
> GET /users/1/items/2759 HTTP/1.1
> Host: ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com:8080
> User-Agent: curl/7.65.3
> Accept: */*
> Referer: rbose
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
< Content-Type: application/json
< Transfer-Encoding: chunked
< Date: Thu, 13 Feb 2020 06:34:50 GMT
<
< ...expected data...
However, https does not work
curl https://ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com:8080/users/1/items/2759 -verbose
* Trying xx.xxx.xx.xx:8080...
* TCP_NODELAY set
* Connected to ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com (xx.xxx.xx.xx) port 8080 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /usr/local/anaconda3/ssl/cacert.pem
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
My ec2 security group inbound rule:
Why can't I access my java application endpoint via https? And how can I do so?

Empty response with Apollo graphql and Nginx

Our application exposes a graphql API through Apollo, Express and Nest.js. In the prod/staging environments, the application is deployed behind an Nginx ingress (kubernetes). The API works fine in the production environment and with kubectl port-forward. But when we try to use the API behind Nginx, the HTTP response hangs (TCP timeout) when the response payload is larger than approximately 1 KB. Some conclusions we have:
The application is working properly (we see by the logs that it is returning the correct response);
We tried to increase Nginx proxy-buffer-size, without success (see the sketch after this list);
We tried to return the same payload from a regular REST endpoint (no graphql, just a Nest.js controller) and it worked!
So, it seems to be a problem related to the integration between Nginx as a proxy and Apollo server. Does anybody have any thoughts on that?
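For reference, on the kubernetes ingress-nginx controller that buffer is typically raised with an annotation on the Ingress resource; a minimal sketch, using a hypothetical ingress name:
# Hypothetical ingress name; the annotation key is the standard ingress-nginx one
kubectl annotate ingress my-app-ingress nginx.ingress.kubernetes.io/proxy-buffer-size="16k" --overwrite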
Here is an example of a curl request which causes the problem:
$ curl -v --location --request POST 'https://my-company.com/graphql' --header 'Content-Type: application/json' --data-raw '{"query":"query {\n myQueryHere \n}","variables":{}}'
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying <my-ip-here>...
* TCP_NODELAY set
* Connected to my-company (my-ip-here) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=*.mycompany.com
* start date: Dec 4 00:00:00 2019 GMT
* expire date: Jan 4 12:00:00 2021 GMT
* subjectAltName: host "my-company" matched cert's "*.my-company"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> POST /graphql HTTP/1.1
> Host: my-company
> User-Agent: curl/7.58.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 159
>
* upload completely sent off: 159 out of 159 bytes
* TLSv1.2 (IN), TLS alert, Client hello (1):
* Empty reply from server
* Connection #0 to host my-company left intact
curl: (52) Empty reply from server

Docker private registry /v2/_catalog not pulling repos but /v2/repo/tags/ works?

I'm trying to list the repositories in my private registry, but for an unknown reason polling /v2/_catalog doesn't return any results; however, specifying a repo does return results.
I can also push and pull to this repo without issue, so I'm fairly stumped.
The registry uses an S3 bucket for storage and htpasswd for access control.
// Doesn't work =>
curl -kv -X GET https://registry.example.com/v2/_catalog --user user:pass
// Does Work =>
curl -kv -X GET https://registry.example.com/v2/nginx/tags/list --user user:pass
The output of the failing curl request is:
* Trying 000.000.000.000...
* TCP_NODELAY set
* Connected to registry.example.com (000.000.000.000) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Feb 13 16:27:49 2019 GMT
* expire date: Feb 13 16:27:49 2020 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Server auth using Basic with user 'user'
* Using Stream ID: 1 (easy handle 0x7f83b1800400)
> GET /v2/_catalog HTTP/2
> Host: registry.example.com
> Authorization: Basic asdfasdfasdfasdfasdfasdfa
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 502
< server: nginx/1.15.8
< date: Wed, 13 Feb 2019 17:27:17 GMT
< content-type: text/html
< content-length: 157
< strict-transport-security: max-age=15724800; includeSubDomains
<
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.15.8</center>
</body>
</html>
* Connection #0 to host registry.example.com left intact
I am also looking for an answer to this issue.
I am trying to get the list of repositories in a JFrog Docker local repository through v2/_catalog.
Let's say the repository name is devops and the image is mongo:3.6.
JFrog registry URL: http://192.168.19.99:8081
GET API: curl -k -u test:123test123 -X GET http://192.168.19.99:8081/v2/_catalog
The response returns empty.
However, if I put the repository name in the path, v2/reponame/_catalog, then I get the correct list of images belonging to the given repository.
e.g: curl -k -u test:123test123 -X GET http://192.168.19.99:8081/v2/devops/_catalog
{
"repositories" : [ "mongo" ]
}
But I am looking to get it through v2/_catalog.
Is there any way to make it work using v2/_catalog?
I also tried with an Nginx configuration, but v2/_catalog still returns empty.
--
Thanks

Connectivity testing results in 401 HTTP error

I'm currently testing connectivity between two separate systems. When I attempt to make a web service call through our software, I've noticed that a 401 error repeatedly shows up in the logs:
Caused by: org.apache.cxf.transport.http.HTTPException: HTTP response '401: Unauthorized' when communicating with https://testsystem.endpoint/webservice
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1530)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1490)
From what I have read on SO, a 401 tells you that you haven't successfully logged in or authenticated to whatever server you are trying to connect to. I've notified those who control the target endpoint of this, and I've been told that I shouldn't need to log in and that "everything is authenticated by the certificates." As I understand it, SSL/TLS certificates can certainly act as a way to authenticate in the sense that if you claim to be X in your certificate and your certificate has been signed by a trusted CA, then you are most likely X. However, I believe this is distinct from entering valid or invalid login credentials (which I believe is closer to the cause of the 401 error).
I tried curl'ing the endpoint using my public keypair as well as the root certificate for the target endpoint's certificate. I can see there appear to be two separate SSL handshakes, which I believe to be SSL renegotiation. I can clearly see a 401 error occurring again:
$ curl -vv --cert cert.pem --cacert root.pem https://testsystem.endpoint/webservice
* About to connect() to testsystem.endpoint/webserivce port 443 (#0)
* Trying TEST.IP.ADDRESS.FAKE... connected
* Connected to testsystem.endpoint/webserivce (TEST.IP.ADDRESS.FAKE) port 443 (#0)
Enter PEM pass phrase:
* successfully set certificate verify locations:
* CAfile: root.pem
CApath: none
* SSLv2, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES128-SHA
* Server certificate:
* subject: /OU=Domain Control Validated/CN=TEST.CN.FAKE
* start date: 2015-07-07 22:44:38 GMT
* expire date: 2016-07-19 16:55:05 GMT
* subjectAltName: testsystem.endpoint/webserivce matched
* issuer: /C=US/ST=fake/L=fake/O=Company.com, Inc./OU=http://certs.company.com/repository//CN=Company Secure Certificate Authority
* SSL certificate verify ok.
> GET /webservice HTTP/1.1
> User-Agent: curl/7.16.2 (x86_64-unknown-linux-gnu) libcurl/7.16.2 OpenSSL/0.9.8b zlib/1.2.3
> Host: testsystem.endpoint/webserivce
> Accept: */*
>
* SSLv3, TLS handshake, Hello request (0):
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Request CERT (13):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS handshake, CERT verify (15):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
< HTTP/1.1 401 Unauthorized
< Content-Type: text/html
< Server: Microsoft-IIS/7.5
< X-Powered-By: ASP.NET
< Date: Sat, 05 Sep 2015 00:16:21 GMT
I don't believe my problem is necessarily related to SSL/TLS-handshaking: I have built up the full certificate chain for the other end (which is trusted on my end), and I can see that the first SSL/TLS handshake seems to work. I suppose my question is: Why is the other end 401'ing me when I'm using what I believe is a valid keypair?
Probably you are using too old a version of OpenSSL.
TLS 1.2 support was introduced in 2012 with version 1.0.1, while you are using 0.9.8b.
Try upgrading curl and OpenSSL.
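A quick way to see what is actually in use and, after upgrading, to require the newer protocol explicitly (--tlsv1.2 has been available since curl 7.34.0; the endpoint is the placeholder from the question):
# Show the curl and OpenSSL versions currently on the PATH
curl --version
openssl version
# After upgrading, force TLS 1.2 for the same request
curl -v --tlsv1.2 --cert cert.pem --cacert root.pem https://testsystem.endpoint/webservice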
