I'd like to get a SOAP response from https://ec.europa.eu/taxation_customs/vies/checkVatTestService.wsdl, using the example XML request below:
library(RCurl)
xml.request = r'[<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" >
<soapenv:Header/>
<soapenv:Body>
<urn:checkVat xmlns:urn="urn:ec.europa.eu:taxud:vies:services:checkVat:types">
<urn:countryCode>NL</urn:countryCode>
<urn:vatNumber>800938495B01</urn:vatNumber>
</urn:checkVat>
</soapenv:Body>
</soapenv:Envelope>]'
myheader=c(Connection="close",
'Content-Type' = "application/xml",
'Content-length' =nchar(xml.request),
Accept = "multipart/*",
Accept = "text/xml")
data = getURL(url = "https://ec.europa.eu/taxation_customs/vies/checkVatTestService.wsdl",
postfields=xml.request,
httpheader=myheader,
verbose=TRUE)
I am getting the error below; thanks for any help:
* Trying 2a01:7080:14:100::666:30:443...
* Connected to ec.europa.eu (2a01:7080:14:100::666:30) port 443 (#0)
* schannel: disabled automatic use of client certificate
* schannel: added 155 certificate(s) from CA file 'C:/Users/XX/AppData/Local/R/win-library/4.2/RCurl/etc/ca-bundle.crt'
* schannel: connection hostname (ec.europa.eu) did not match against certificate name (*.ec.europa.eu)
* schannel: connection hostname (ec.europa.eu) validated against certificate name (ec.europa.eu)
> POST /taxation_customs/vies/checkVatTestService.wsdl HTTP/1.1
Host: ec.europa.eu
Connection: close
Content-Type: application/xml
Content-length: 379
Accept: multipart/*
Accept: text/xml
* Mark bundle as not supporting multiuse
< HTTP/1.1 405 Method Not Allowed
< Cache-Control: no-store
< Date: Mon, 29 Aug 2022 16:38:23 GMT
< Content-Length: 43
< Content-Type: text/html; charset=UTF-8
< Allow: GET, HEAD
< X-Content-Type-Options: nosniff
< X-Frame-Options: DENY
< Server: Europa
< Connection: close
<
* Closing connection 0
* schannel: shutting down SSL/TLS connection with ec.europa.eu port 443
EDIT: Second method:
r <- POST("https://ec.europa.eu/taxation_customs/vies/checkVatTestService.wsdl", body = body)
stop_for_status(r)
Error: Method Not Allowed (HTTP 405).
content(r)
[1] <body><p>Request method 'POST' not supported</p></body>
The WSDL gives the definition of the SOAP endpoint. That's not where you should be posting your request. In the content of the WSDL there is a <wsdlsoap:address location=""> attribute which appears to give the correct URL where you should send your POST request. This version of the request seems to work:
library(RCurl)
xml.request = r'[<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" >
<soapenv:Header/>
<soapenv:Body>
<urn:checkVat xmlns:urn="urn:ec.europa.eu:taxud:vies:services:checkVat:types">
<urn:countryCode>NL</urn:countryCode>
<urn:vatNumber>800938495B01</urn:vatNumber>
</urn:checkVat>
</soapenv:Body>
</soapenv:Envelope>]'
myheader=c(Connection="close",
'Content-Type' = "text/xml",
Accept = "text/xml")
getURL(url = "http://ec.europa.eu/taxation_customs/vies/services/checkVatTestService",
postfields=xml.request,
httpheader=myheader,
verbose=TRUE)
# [1] "<env:Envelope xmlns:env=\"http://schemas.xmlsoap.org/soap/envelope/\">
# <env:Header/><env:Body><ns2:checkVatResponse
# xmlns:ns2=\"urn:ec.europa.eu:taxud:vies:services:checkVat:types\">
# <ns2:countryCode>NL</ns2:countryCode><ns2:vatNumber>800938495B01</ns2:vatNumber>
# <ns2:requestDate>2022-08-29+02:00</ns2:requestDate><ns2:valid>false</ns2:valid>
# <ns2:name></ns2:name><ns2:address></ns2:address>
# </ns2:checkVatResponse></env:Body></env:Envelope>"
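For completeness, the same fix applies to the httr attempt from the EDIT: the request has to be POSTed to the service endpoint named in the WSDL's <wsdlsoap:address location=""> attribute, not to the .wsdl URL itself. A minimal sketch, assuming the httr and xml2 packages are installed (it reuses xml.request from above):

library(httr)
library(xml2)

# Read the WSDL and pull the endpoint out of the wsdlsoap:address 'location'
# attribute instead of hard-coding it; local-name() sidesteps namespace prefixes.
wsdl <- read_xml("https://ec.europa.eu/taxation_customs/vies/checkVatTestService.wsdl")
endpoint <- xml_attr(xml_find_first(wsdl, "//*[local-name() = 'address']"), "location")

# POST the same SOAP envelope to the service endpoint, not to the WSDL.
r <- POST(endpoint,
          body = xml.request,
          content_type("text/xml"),
          accept("text/xml"))
stop_for_status(r)
content(r, as = "text", encoding = "UTF-8")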
I have a Java client that is making a POST call to the v1/graphql endpoint of a Hasura server (v1.3.3).
I'm making the HTTP call using the Square OkHttp3 library (v4.9.1). The data transfer happens over HTTP/1.1, using chunked transfer encoding.
The client is failing with the following error:
Caused by: java.net.ProtocolException: unexpected end of stream
at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:415) ~[okhttp-4.9.1.jar:?]
at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276) ~[okhttp-4.9.1.jar:?]
at okio.RealBufferedSource.read(RealBufferedSource.kt:189) ~[okio-jvm-2.8.0.jar:?]
at okio.RealBufferedSource.exhausted(RealBufferedSource.kt:197) ~[okio-jvm-2.8.0.jar:?]
at okio.InflaterSource.refill(InflaterSource.kt:112) ~[okio-jvm-2.8.0.jar:?]
at okio.InflaterSource.readOrInflate(InflaterSource.kt:76) ~[okio-jvm-2.8.0.jar:?]
at okio.InflaterSource.read(InflaterSource.kt:49) ~[okio-jvm-2.8.0.jar:?]
at okio.GzipSource.read(GzipSource.kt:69) ~[okio-jvm-2.8.0.jar:?]
at okio.Buffer.writeAll(Buffer.kt:1642) ~[okio-jvm-2.8.0.jar:?]
at okio.RealBufferedSource.readString(RealBufferedSource.kt:95) ~[okio-jvm-2.8.0.jar:?]
at okhttp3.ResponseBody.string(ResponseBody.kt:187) ~[okhttp-4.9.1.jar:?]
Request Headers:
INFO: Content-Type: application/json; charset=utf-8
INFO: Content-Length: 1928
INFO: Host: localhost:10191
INFO: Connection: Keep-Alive
INFO: Accept-Encoding: gzip
INFO: User-Agent: okhttp/4.9.1
Response headers:
INFO: Transfer-Encoding: chunked
INFO: Date: Tue, 27 Apr 2021 12:06:39 GMT
INFO: Server: Warp/3.3.10
INFO: x-request-id: d019408e-e2e3-4583-bcd6-050d4a496b11
INFO: Content-Type: application/json; charset=utf-8
INFO: Content-Encoding: gzip
This is the client code used for making the POST call:
private static final MediaType MEDIA_TYPE_JSON = MediaType.parse("application/json; charset=utf-8");

private static OkHttpClient okHttpClient = new OkHttpClient.Builder()
        .connectTimeout(30, TimeUnit.SECONDS)
        .writeTimeout(5, TimeUnit.MINUTES)
        .readTimeout(5, TimeUnit.MINUTES)
        .addNetworkInterceptor(loggingInterceptor)
        .build();

public GenericHttpResponse httpPost(String url, String textBody, GenericHttpMediaType genericMediaType) throws HttpClientException {
    RequestBody body = RequestBody.create(MEDIA_TYPE_JSON, textBody);
    Request postRequest = new Request.Builder().url(url).post(body).build();
    Call postCall = okHttpClient.newCall(postRequest);
    Response postResponse = postCall.execute();
    return GenericHttpResponse
            .builder()
            .body(postResponse.body().string())
            .headers(postResponse.headers().toMultimap())
            .code(postResponse.code())
            .build();
}
This failure only happens for large response sizes. As per the server logs, the response size (after gzip encoding) is around 52 MB, but the call is still failing. This same code has been working fine for response sizes of around 10-15 MB.
I tried replicating the same issue through a simple cURL call, but that ran successfully:
curl -v -s --request POST 'http://<hasura_endpoint>/v1/graphql' \
--header 'Content-Type: application/json' \
--header 'Accept-Encoding: gzip, deflate, br' \
--data-raw '...'
* Trying ::1...
* TCP_NODELAY set
* Connected to <host> (::1) port <port> (#0)
> POST /v1/graphql HTTP/1.1
> Host: <host>:<port>
> User-Agent: curl/7.64.1
> Accept: */*
> Content-Type: application/json
> Accept-Encoding: gzip, deflate, br
> Content-Length: 1840
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
} [1840 bytes data]
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
< Date: Tue, 27 Apr 2021 11:59:24 GMT
< Server: Warp/3.3.10
< x-request-id: 27e3ff3f-8b95-4328-a1bc-a5492e68f995
< Content-Type: application/json; charset=utf-8
< Content-Encoding: gzip
<
{ [6 bytes data]
* Connection #0 to host <host> left intact
* Closing connection 0
So I'm assuming that this error is specific to the Java client.
Based on suggestions provided in similar posts, I tried the following other approaches:
Adding a Connection: close header to the request
Sending Transfer-Encoding: gzip header in the request
Setting the retryOnConnectionFailure for the OkHttp client to true
But none of these approaches were able to resolve the issue.
So, my questions are:
What could be the underlying cause for this issue? Since I'm using chunked transfer encoding here, I suppose it's not due to an incorrect content-length header passed in the response.
What are the approaches I can try for debugging this further?
Would really appreciate any insights on this. Thank you.
I created a simple server in the terminal:
#!/usr/bin/env python3
import sys, os, socket, ssl
import requests
import string
import time
from socketserver import ThreadingMixIn
from http.server import HTTPServer,BaseHTTPRequestHandler
from io import BytesIO
import json
import cgi
class ThreadingServer(ThreadingMixIn, HTTPServer):
    pass

class RequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_length = int(self.headers['Content-Length'])
        body = self.rfile.read(content_length)
        #self.send_header('Content-type', 'Application/json')
        self.send_response(200)
        self.end_headers()
        response = BytesIO()
        self.allow_reuse_address = True
        self.wfile.write(b"""{"signingResponse": {"compactidentity": "..SdOwnT70ZZDAjgSmQVP-_0keB_pu4FjkBg5DZDyFf_V5k0EUAY0KCHr2g2a6wOSs-JhsehdYUnrYCfkYItzxLg;info=<http://52.23.250.93:8080/certs/shaken.crt>;alg=ES256;ppt=shaken\n", "TEST": "Nitish","identity": "eyJhbGciOiJFUzI1NiIsInBwdCI6InNoYWtlbiIsInR5cCI6InBhc3Nwb3J0IiwieDV1IjoiaHR0cDovLzUyLjIzLjI1MC45Mzo4MDgwL2NlcnRzL3NoYWtlbi5jcnQifQ.eyJhdHRlc3QiOiJBIiwiZGVzdCI6eyJ0biI6WyIxMjM1NTU1MTIxMiJdfSwiaWF0IjoxNDgzMjI4ODAwLCJvcmlnIjp7InRuIjoiMTIzNTU1NTEyMTIifSwib3JpZ2lkIjoiOGE4ZWM2MTgtYzZiOS0zMGFlLWI0MjctYWY0MTA0YjFjMDJjIn0.SdOwnT70ZZDAjgSmQVP-_0keB_pu4FjkBg5DZDyFf_V5k0EUAY0KCHr2g2a6wOSs-JhsehdYUnrYCfkYItzxLg;info=<http://52.23.250.93:8080/certs/shaken.crt>;alg=ES256;ppt=shaken\n", "requestid": "0"}} """)

httpd = ThreadingServer(('192.168.1.2', 8003), RequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket, keyfile='/home/nakumar/key.pem', certfile='/home/nakumar/certificate.pem', server_side=True)
httpd.serve_forever()
Using the above code I am trying to simulate the server.
When the server receives a request from a client, it sends back the response and closes the connection, as shown below.
Request
> POST /stir/v1/signing HTTP/1.1
Host: 192.168.1.2:8003
Accept: application/json
Content-Type: application/json
Content-Length: 331
Response
upload completely sent off: 331 out of 331 bytes
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: BaseHTTP/0.6 Python/3.5.2
< Date: Tue, 09 Oct 2018 12:43:21 GMT
<
* Closing connection 0
So we can see that the connection close is coming from the server after the response is served.
Is there a way to keep the connection open after the response is served?
curl was closing the connection because no Content-Length header was present in the response; after adding it (and, as the trace below shows, serving the response as HTTP/1.1 rather than HTTP/1.0), the connection is left intact:
> POST /stir/v1/signing HTTP/1.1
Host: [FD00:10:6B50:4510:0:0:0:53]:8101
Accept: application/json
Content-Type: application/json
Content-Length: 325
* upload completely sent off: 325 out of 325 bytes
< HTTP/1.1 200 OK
< Server: HTTP/1.1 Python/3.5.2
< Date: Thu, 18 Oct 2018 09:13:11 GMT
< Content-type: Application/json
< Content-length: 150
<
* Connection #1 to host FD00:10:6B50:4510:0:0:0:53 left intact
I've been working on an R interface to an HTTP API with digest authentication, and I've been running into a problem: the request works absolutely fine on my non-Windows OSs, but I always get a 401 status when running exactly the same code on Windows.
I'm currently trying to do it with RCurl, but the same thing was happening with httr when I tried that.
Also, the API is unfortunately proprietary, so I've had to change all the URLs, sorry.
On my non-Windows OSs I get the following behaviour:
rprompt> getURL('http://demo.someapi.net/some/url', userpwd="demo:demo", httpauth=1L, verbose=TRUE)
* Trying 195.224.16.34...
* Connected to demo.someapi.net (195.224.16.34) port 443 (#0)
* TLS 1.0 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
* Server certificate: *.someapi.net
* Server certificate: RapidSSL SHA256 CA - G3
* Server certificate: GeoTrust Global CA
> GET /some/url HTTP/1.1
Host: demo.someapi.net
Accept: */*
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache
< Content-Type: text/html
< Server: Microsoft-IIS/7.5
< WWW-Authenticate: Digest realm="company.product", nonce="MTEvMDcvMjAxNiAwODo0NTozMw", opaque="0000000000000000", stale=false, algorithm=MD5, qop="auth"
< X-Powered-By: ASP.NET
< Date: Mon, 11 Jul 2016 08:44:33 GMT
< Content-Length: 1293
<
* Ignoring the response-body
* Connection #0 to host demo.someapi.net left intact
* Issue another request to this URL: 'https://demo.someapi.net/some/url'
* Found bundle for host demo.someapi.net: 0x7f96c8d55af0
* Re-using existing connection! (#0) with host demo.someapi.net
* Connected to demo.someapi.net (195.224.16.34) port 443 (#0)
* Server auth using Digest with user 'demo'
> GET /some/url HTTP/1.1
Host: demo.someapi.net
Authorization: Digest username="demo", realm="company.product", nonce="MTEvMDcvMjAxNiAwODo0NTozMw", uri="/some/url", cnonce="YjRkMDQxYmM4MDFkYTMxOWZhNTViNGNmYTM5YzQyNGI=", nc=00000001, qop=auth, response="5d9643d083b2380f12d71855a98ceac3", opaque="0000000000000000", algorithm="MD5"
Accept: */*
< HTTP/1.1 200 OK
< Cache-Control: private
< Content-Length: 981
< Content-Type: application/json; charset=utf-8
< Server: Microsoft-IIS/7.5
< X-AspNet-Version: 4.0.30319
< X-Powered-By: ASP.NET
< Date: Mon, 11 Jul 2016 08:44:33 GMT
<
* Connection #0 to host demo.someapi.net left intact
and everything works exactly as we expect it to. On Windows, however, we get this:
rprompt> getURL('http://demo.someapi.net/some/url', userpwd="demo:demo", httpauth=1L, verbose=TRUE)
* Trying 195.224.16.34...
* Connected to demo.someapi.net (195.224.16.34) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: C:/Users/username/Documents/R/win-library/3.3/RCurl/etc/ca-bundle.crt
CApath: none
* SSL connection using TLSv1.0 / ECDHE-RSA-AES256-SHA
* Server certificate:
* subject: OU=GT56411961; OU=See www.rapidssl.com/resources/cps (c)15; OU=Domain Control Validated - RapidSSL(R); CN=*.someapi.net
* start date: 2015-01-26 09:31:11 GMT
* expire date: 2018-03-28 16:30:51 GMT
* subjectAltName: demo.someapi.net matched
* issuer: C=US; O=GeoTrust Inc.; CN=RapidSSL SHA256 CA - G3
* SSL certificate verify ok.
> GET /some/url HTTP/1.1
Host: demo.someapi.net
Accept: */*
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache
< Content-Type: text/html
< Server: Microsoft-IIS/7.5
< WWW-Authenticate: Digest realm="company.product", nonce="MTEvMDcvMjAxNiAwODo1MjowOA", opaque="0000000000000000", stale=false, algorithm=MD5, qop="auth"
< X-Powered-By: ASP.NET
< Date: Mon, 11 Jul 2016 08:51:07 GMT
< Content-Length: 1293
<
* Ignoring the response-body
* Connection #0 to host demo.someapi.net left intact
* Issue another request to this URL: 'https://demo.someapi.net/some/url'
* Found bundle for host demo.someapi.net: 0xaa60b80
* Re-using existing connection! (#0) with host demo.someapi.net
* Connected to demo.someapi.net (195.224.16.34) port 443 (#0)
* Server auth using Digest with user 'demo'
> GET /some/url HTTP/1.1
Authorization: Digest username="demo",realm="",nonce="MTEvMDcvMjAxNiAwODo1MjowOA",uri="/some/url",cnonce="553f542ddef0e3c265e50539297bad81",nc=00000001,algorithm=MD5,response="1ec58793bb1d8142f09af112b905fa36",qop="auth",opaque="0000000000000000"
Host: demo.someapi.net
Accept: */*
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache
< Content-Type: text/html
< Server: Microsoft-IIS/7.5
* Authentication problem. Ignoring this.
< WWW-Authenticate: Digest realm="company.product", nonce="MTEvMDcvMjAxNiAwODo1MjowOA", opaque="0000000000000000", stale=false, algorithm=MD5, qop="auth"
< X-Powered-By: ASP.NET
< Date: Mon, 11 Jul 2016 08:51:07 GMT
< Content-Length: 1293
<
* Connection #0 to host demo.someapi.net left intact
which just returns a 401 landing page HTML.
The issue seems to be that the realm field is empty, but I have no idea how to fix this or even how to work around it.
It should be noted that both .NET's WebClient and Python's requests library handle things fine, but unfortunately this has to be done in R.
I'm happy to use any R packages that are needed to help solve this.
Thanks.
For anyone else who ends up with a similar problem, you can get around it by using httr with your own handle:
make.request <- function(url, user, pass) {
  # Create an explicit handle so the digest challenge and the authenticated
  # request go through the same curl handle.
  handle <- httr::handle(url)
  # With url = NULL, GET() reuses the URL stored in the handle.
  response <- httr::GET(url = NULL, httr::authenticate(user, pass, type = "digest"), handle = handle)
  # error checking and stuff...
  response
}
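For example, called with the same placeholder demo credentials as in the question:

# Example call; URL and credentials are the placeholders from the question.
r <- make.request("http://demo.someapi.net/some/url", user = "demo", pass = "demo")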
I'm not sure what the issue is with RCurl though.
When executing, with a key that has worked in the past but that I haven't used for a few weeks, the following cURL command
curl -v -k -s -H "Content-Type: application/json" https://vision.googleapis.com/v1/images:annotate?key=MyKey --data-binary #a.json
where a.json is
{"requests": [{"image": {"content": "SUkqADwmAAD////8gYEoGct1VHdGU..."}, "features": [{"type": "TEXT_DETECTION", "maxResults": 1}]}]}
returns
* Trying 173.194.205.239...
* Connected to vision.googleapis.com (173.194.205.239) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.googleapis.com
* Server certificate: Google Internet Authority G2
* Server certificate: GeoTrust Global CA
> POST /v1/images:annotate?key=MyKey HTTP/1.1
> Host: vision.googleapis.com
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 13454
> Expect: 100-continue
>
* Done waiting for 100-continue
* We are completely uploaded and fine
< HTTP/1.1 400 Bad Request
< Vary: X-Origin
< Vary: Referer
< Content-Type: application/json; charset=UTF-8
< Date: Wed, 10 Feb 2016 18:02:12 GMT
< Server: ESF
< Cache-Control: private
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: SAMEORIGIN
< X-Content-Type-Options: nosniff
< Alternate-Protocol: 443:quic,p=1
< Alt-Svc: quic=":443"; ma=604800; v="30,29,28,27,26,25"
< Accept-Ranges: none
< Vary: Origin,Accept-Encoding
< Transfer-Encoding: chunked
<
{
"error": {
"code": 400,
"message": "Request Issue Failed.",
"status": "INVALID_ARGUMENT"
}
}
* Connection #0 to host vision.googleapis.com left intact
Version 1 of the Google Cloud Vision API (beta) does not permit TIFF [1]. Here is a list of the currently supported formats:
JPEG
PNG8
PNG24
GIF
Animated GIF (first frame only)
BMP
WEBP
RAW
ICO
[1] https://cloud.google.com/vision/docs/image-best-practices#image_types
Turns out this is because I was sending base64-encoded TIFF images. It works fine for PNGs. Pretty sure I was told this in the docs.
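In case it helps anyone else hitting the same 400, a minimal sketch of the conversion step in R, assuming the magick and base64enc packages are installed (the file names are placeholders):

# Minimal sketch: convert the TIFF to PNG before base64-encoding it.
# Assumes the magick and base64enc packages are installed; "input.tif"
# and "input.png" are placeholder file names.
library(magick)
library(base64enc)

img <- image_read("input.tif")
image_write(img, path = "input.png", format = "png")

# base64encode() accepts a file name; the result goes into the
# "content" field of the request body.
content_b64 <- base64encode("input.png")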
I'm trying to scrape a web page using a proxy but something isn't working.
Here is an httr attempt to set the proxy options; below I try with RCurl.
I've read several answers regarding the subject but they don't seem to work.
Any suggestions?
### httr attempt
set_config(
use_proxy(url="proxy.xxx.com.ar", port=8080,
username = "xxxx\\xxxx", password = "xxxxx"),
override = TRUE
)
a <- GET("http://google.com/", verbose())
-> GET http://google.com/ HTTP/1.1
-> Proxy-Authorization: Basic dG1vdmlsZXNcbWFyYmVsOkFyYWNhbGFjYW5hMjM=
-> User-Agent: curl/7.19.7 Rcurl/1.95.4.1 httr/0.4.0.99
-> Host: google.com
-> Accept: */*
-> Accept-Encoding: gzip
-> Proxy-Connection: Keep-Alive
->
<- HTTP/1.1 407 Proxy Authentication Required
<- Server: pxsip02-srv.xxxxx.com.ar
<- Date: Mon, 11 Aug 2014 15:11:14 GMT
<- Content-Length: 309
<- Content-Type: text/html
<- Connection: Keep-Alive
<- Keep-Alive: timeout=60, max=8
<- Proxy-Authenticate: NTLM
<-
content(a)
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html>
<head><title>Authentication Error</title></head>
<body>
<h1>Authentication Error</h1>There has been an error validating your user credentials. If the error persists,contact your network administrator.<br>Proxy authentication required<br><hr>
<br>Details: 407 Proxy Authentication Required</body>
</html>
### RCurl attempt
library("RCurl")
opts <- list(
proxy = "proxy.xxxxx.com.ar",
proxyusername = "xxxxxx\\xxxxx",
proxypassword = "xxxxxx",
proxyport = 8080,
capath = system.file("CurlSSL", "cacert.pem", package = "RCurl"),
verbose=TRUE, proxyauth=TRUE, useragent= "", header = TRUE
)
options( RCurlOptions = opts)
getURL("http://stackoverflow.com")
* About to connect() to proxy proxy.xxxxx.com.ar port 8080 (#0)
* Trying 10.167.195.11... * connected
* Connected to proxy.xxxxxx.com.ar (10.167.195.11) port 8080 (#0)
* Proxy auth using Basic with user 'xxxxxxx\xxxxx'
> GET http://stackoverflow.com HTTP/1.1
Proxy-Authorization: Basic VE1PVklMRVNcTUFSQkVMOkFyYWNhbGFjYW5hMjM=
Host: stackoverflow.com
Accept: */*
Proxy-Connection: Keep-Alive
[1] "HTTP/1.1 407 Proxy Authentication Required\r\nServer: pxsip02-srv.xxxx.com.ar\r\nDate: Mon, 11 Aug 2014 15:15:29 GMT\r\nContent-Length: 309\r\nContent-Type: text/html\r\nConnection: Keep-Alive\r\nKeep-Alive: timeout=60, max=8\r\nProxy-Authenticate: NTLM\r\n\r\n<html><head><title>Authentication Error</title></head><body><h1>Authentication Error</h1>There has been an error validating your user credentials. If the error persists,contact your network administrator.<br/>Proxy authentication required<br/><hr/><br/>Details: 407 Proxy Authentication Required</body></html>"
< HTTP/1.1 407 Proxy Authentication Required
< Server: pxsip02-srv.xxxxxxx.com.ar
< Date: Mon, 11 Aug 2014 15:15:29 GMT
< Content-Length: 309
< Content-Type: text/html
< Connection: Keep-Alive
< Keep-Alive: timeout=60, max=8
< Proxy-Authenticate: NTLM
<
* Connection #0 to host proxy.xxxxxx.com.ar left intact
Here is the update to the previous question. I'm adding it as another answer so it's easier to follow.
GET("http://google.com/",
config = list(
use_proxy(url="proxy.xxx.com.ar", port=8080,
username = "xxxx\\xxxx", password = "xxxxx",
proxyauth = 1)
)
)
The error message:
Error in use_proxy(url = "proxy.xxxx.com.ar", port = 8080, username = "xxxxxx\\xxxx", :
unused argument (proxyauth = 1)
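For what it's worth, the error is telling you that use_proxy() has no proxyauth argument; it does, however, take an auth argument, and the proxy above replies with Proxy-Authenticate: NTLM. An untested sketch along these lines might be worth trying, passing use_proxy() directly to GET() (proxy host, port, username and password are the same placeholders as above):

library(httr)

# Untested sketch: use_proxy() takes an 'auth' argument (not 'proxyauth'),
# and the proxy advertises NTLM, so request NTLM proxy authentication.
# Proxy host, port, username and password are placeholders.
a <- GET(
  "http://google.com/",
  use_proxy(
    url = "proxy.xxx.com.ar", port = 8080,
    username = "xxxx\\xxxx", password = "xxxxx",
    auth = "ntlm"
  ),
  verbose()
)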