cURL authorization on ASP.NET website
Can anyone help me figure out what I am doing wrong?
What do we have:
A website where I need to log in via cURL.
Credentials of two accounts that have to be logged in.
What did I get
I managed to log in with only one of the accounts via cURL; the second one fails. However, I can log in with both of them via a browser.
What did I try
This one works fine:
curl --verbose --location -b ~/cookie.txt -c ~/cookie.txt \
  --data "tbLogin=login1&tbPassword=password1&btSubmit=Войти" \
  http://online.tmtr.ru/login.aspx
This one doesn't:
curl --verbose --location -b ~/cookie.txt -c ~/cookie.txt \
  --data "tbLogin=login2&tbPassword=password2&btSubmit=Войти" \
  http://online.tmtr.ru/login.aspx
The only difference is in the logins and passwords.
I also tried to use separate cookie files for each account.
Here are the logs
Working account:
* About to connect() to online.tmtr.ru port 80 (#0)
* Trying 109.73.3.134...
* connected
* Connected to online.tmtr.ru (109.73.3.134) port 80 (#0)
> POST /login.aspx HTTP/1.1
> User-Agent: curl/7.26.0
> Host: online.tmtr.ru
> Accept: */*
> Cookie: _some_cookie_data_
> Content-Length: 59
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 59 out of 59 bytes
* additional stuff not fine transfer.c:1037: 0 0
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 302 Found
< Cache-Control: private
< Content-Type: text/html; charset=utf-8
< Location: /main.aspx
< Server: Microsoft-IIS/8.5
< X-AspNet-Version: 2.0.50727
* Replaced cookie _some_cookie_data_
< Set-Cookie: _some_cookie_data_
< X-Powered-By: ASP.NET
< Date: Wed, 02 Dec 2015 13:49:39 GMT
< Content-Length: 129
<
* Ignoring the response-body
* Connection #0 to host online.tmtr.ru left intact
* Issue another request to this URL: 'http://online.tmtr.ru/main.aspx'
* Violate RFC 2616/10.3.3 and switch from POST to GET
* Re-using existing connection! (#0) with host (nil)
* Connected to (nil) (109.73.3.134) port 80 (#0)
> GET /main.aspx HTTP/1.1
> User-Agent: curl/7.26.0
> Host: online.tmtr.ru
> Accept: */*
> Cookie: _some_cookie_data_
>
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
< Cache-Control: private
< Content-Type: text/html; charset=utf-8
< Server: Microsoft-IIS/8.5
< X-AspNet-Version: 2.0.50727
< X-Powered-By: ASP.NET
< Date: Wed, 02 Dec 2015 13:49:39 GMT
< Content-Length: 150950
<
<!-- Here goes the html of the page with user's account -->
Problem account:
* About to connect() to online.tmtr.ru port 80 (#0)
* Trying 109.73.3.134...
* connected
* Connected to online.tmtr.ru (109.73.3.134) port 80 (#0)
> POST /login.aspx HTTP/1.1
> User-Agent: curl/7.26.0
> Host: online.tmtr.ru
> Accept: */*
> Cookie: _some_cookie_data_
> Content-Length: 56
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 56 out of 56 bytes
* additional stuff not fine transfer.c:1037: 0 0
* additional stuff not fine transfer.c:1037: 0 0
* additional stuff not fine transfer.c:1037: 0 0
* additional stuff not fine transfer.c:1037: 0 0
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 302 Found
< Cache-Control: private
< Content-Type: text/html; charset=utf-8
< Location: /main.aspx
< Server: Microsoft-IIS/8.5
< X-AspNet-Version: 2.0.50727
* Replaced cookie _some_cookie_data_
< Set-Cookie: _some_cookie_data_
< X-Powered-By: ASP.NET
< Date: Wed, 02 Dec 2015 13:45:10 GMT
< Content-Length: 129
<
* Ignoring the response-body
* Connection #0 to host online.tmtr.ru left intact
* Issue another request to this URL: 'http://online.tmtr.ru/main.aspx'
* Violate RFC 2616/10.3.3 and switch from POST to GET
* Re-using existing connection! (#0) with host (nil)
* Connected to (nil) (109.73.3.134) port 80 (#0)
> GET /main.aspx HTTP/1.1
> User-Agent: curl/7.26.0
> Host: online.tmtr.ru
> Accept: */*
> Cookie: _some_cookie_data_
>
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 302 Found
< Cache-Control: private
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=utf-8
< Location: /error.aspx?aspxerrorpath=/main.aspx
< Server: Microsoft-IIS/8.5
< X-AspNet-Version: 2.0.50727
< X-Powered-By: ASP.NET
< Date: Wed, 02 Dec 2015 13:45:10 GMT
<
* Ignoring the response-body
* Connection #0 to host (nil) left intact
* Issue another request to this URL: 'http://online.tmtr.ru/error.aspx?aspxerrorpath=/main.aspx'
* Re-using existing connection! (#0) with host (nil)
* Connected to (nil) (109.73.3.134) port 80 (#0)
> GET /error.aspx?aspxerrorpath=/main.aspx HTTP/1.1
> User-Agent: curl/7.26.0
> Host: online.tmtr.ru
> Accept: */*
> Cookie: _some_cookie_data_
>
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
< Cache-Control: private
< Content-Type: text/html; charset=utf-8
< Server: Microsoft-IIS/8.5
< X-AspNet-Version: 2.0.50727
< X-Powered-By: ASP.NET
< Date: Wed, 02 Dec 2015 13:45:10 GMT
< Content-Length: 405
<
<!-- Here goes the html of the page with error -->
As I mentioned before, I can successfully log in with both of the accounts via the browser.
How can I figure out why the server rejects one of the accounts via curl but not via the browser?
I also tried sending the request via the Postman extension for Chrome, and it works just fine too.
This happens because when you log in with the second account, curl sends the session cookie saved from the first login. When the server sees a valid cookie, it simply redirects you to a different location (or does whatever else the server decides).
To work around this, flush the cookie file before the second curl:
echo ''> ~/cookie.txt
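A minimal sketch of both options, assuming the cookie jars live in the home directory; cookie1.txt and cookie2.txt are hypothetical file names, and --max-time just keeps the commands from hanging if the host is unreachable:

```shell
# Option 1: empty the shared cookie jar so the second login starts a fresh session
: > ~/cookie.txt

# Option 2: give each account its own cookie jar so the sessions never collide
curl --location --max-time 15 -b ~/cookie1.txt -c ~/cookie1.txt \
  --data "tbLogin=login1&tbPassword=password1&btSubmit=Войти" \
  http://online.tmtr.ru/login.aspx
curl --location --max-time 15 -b ~/cookie2.txt -c ~/cookie2.txt \
  --data "tbLogin=login2&tbPassword=password2&btSubmit=Войти" \
  http://online.tmtr.ru/login.aspx
```

Separate jars are the cleaner fix, since the two sessions can then run in any order without stepping on each other's cookies.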
I contacted the owner of the site, and it turned out there is custom validation of the Accept-Language header; I wasn't sending that header at all. I don't know why some requests passed the validation and some didn't, but after I added the header to my requests everything worked :)
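Based on that explanation, the fix is presumably to send an Accept-Language header explicitly with -H. The header value below is an assumption; the site's actual validation rules are unknown:

```shell
# Same login request as before, now with an explicit Accept-Language header.
# The language list is a guess at what the site's custom validation accepts.
curl --verbose --location --max-time 15 -b ~/cookie.txt -c ~/cookie.txt \
  -H "Accept-Language: ru-RU,ru;q=0.9,en;q=0.5" \
  --data "tbLogin=login2&tbPassword=password2&btSubmit=Войти" \
  http://online.tmtr.ru/login.aspx
```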
Related
Curl doesn't return anything
I have a web app hosted at A.B.C.D:5601. When I try curl A.B.C.D:5601, it doesn't print anything to the screen, yet when I open the same link in a browser it opens the web app (the browser gets forwarded to A.B.C.D:5601/foo/bar and it opens fine). Here is the output of curl -v:
* Trying A.B.C.D:5601...
* TCP_NODELAY set
* Connected to A.B.C.D port 5601 (#0)
> GET / HTTP/1.1
> Host: A.B.C.D:5601
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Found
< location: /spaces/enter
< x-content-type-options: nosniff
< referrer-policy: no-referrer-when-downgrade
< cache-control: private, no-cache, no-store, must-revalidate
< content-length: 0
< Date: Tue, 08 Jun 2021 03:01:11 GMT
< Connection: keep-alive
< Keep-Alive: timeout=120
<
* Connection #0 to host A.B.C.D left intact
Why is curl not giving me the response back?
curl is telling you that you are being redirected to /spaces/enter. You can tell curl to follow redirects automatically: curl -vL [url]
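Spelled out, with a cap on the redirect chain (A.B.C.D:5601 is the question's placeholder address, so substitute your own host):

```shell
# -L follows the 302 to /spaces/enter; --max-redirs caps the chain so a
# redirect loop can't run forever. --max-time is just a safety timeout.
curl -vL --max-redirs 5 --max-time 15 "http://A.B.C.D:5601"
```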
RCurl (with digest authentication) not setting realm correctly on Windows
I've been working on an R interface to an HTTP API with digest authentication, and I've run into a problem where the request works absolutely fine on my non-Windows OSs, but I always get a 401 status when running exactly the same code on Windows. I'm currently trying to do it with RCurl, but the same thing was happening with httr when I tried that. Also, the API is unfortunately proprietary, so I've had to change all the URLs, sorry.
On my non-Windows OSs I get the following behaviour:
rprompt> getURL('http://demo.someapi.net/some/url', userpwd="demo:demo", httpauth=1L, verbose=TRUE)
* Trying 195.224.16.34...
* Connected to demo.someapi.net (195.224.16.34) port 443 (#0)
* TLS 1.0 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
* Server certificate: *.someapi.net
* Server certificate: RapidSSL SHA256 CA - G3
* Server certificate: GeoTrust Global CA
> GET /some/url HTTP/1.1
> Host: demo.someapi.net
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache
< Content-Type: text/html
< Server: Microsoft-IIS/7.5
< WWW-Authenticate: Digest realm="company.product", nonce="MTEvMDcvMjAxNiAwODo0NTozMw", opaque="0000000000000000", stale=false, algorithm=MD5, qop="auth"
< X-Powered-By: ASP.NET
< Date: Mon, 11 Jul 2016 08:44:33 GMT
< Content-Length: 1293
<
* Ignoring the response-body
* Connection #0 to host demo.someapi.net left intact
* Issue another request to this URL: 'https://demo.someapi.net/some/url'
* Found bundle for host demo.someapi.net: 0x7f96c8d55af0
* Re-using existing connection! (#0) with host demo.someapi.net
* Connected to demo.someapi.net (195.224.16.34) port 443 (#0)
* Server auth using Digest with user 'demo'
> GET /some/url HTTP/1.1
> Host: demo.someapi.net
> Authorization: Digest username="demo", realm="company.product", nonce="MTEvMDcvMjAxNiAwODo0NTozMw", uri="/some/url", cnonce="YjRkMDQxYmM4MDFkYTMxOWZhNTViNGNmYTM5YzQyNGI=", nc=00000001, qop=auth, response="5d9643d083b2380f12d71855a98ceac3", opaque="0000000000000000", algorithm="MD5"
> Accept: */*
>
< HTTP/1.1 200 OK
< Cache-Control: private
< Content-Length: 981
< Content-Type: application/json; charset=utf-8
< Server: Microsoft-IIS/7.5
< X-AspNet-Version: 4.0.30319
< X-Powered-By: ASP.NET
< Date: Mon, 11 Jul 2016 08:44:33 GMT
<
* Connection #0 to host demo.someapi.net left intact
and everything works exactly as we expect it to. On Windows, however, we get this:
rprompt> getURL('http://demo.someapi.net/some/url', userpwd="demo:demo", httpauth=1L, verbose=TRUE)
* Trying 195.224.16.34...
* Connected to demo.someapi.net (195.224.16.34) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: C:/Users/username/Documents/R/win-library/3.3/RCurl/etc/ca-bundle.crt
    CApath: none
* SSL connection using TLSv1.0 / ECDHE-RSA-AES256-SHA
* Server certificate:
*   subject: OU=GT56411961; OU=See www.rapidssl.com/resources/cps (c)15; OU=Domain Control Validated - RapidSSL(R); CN=*.someapi.net
*   start date: 2015-01-26 09:31:11 GMT
*   expire date: 2018-03-28 16:30:51 GMT
*   subjectAltName: demo.someapi.net matched
*   issuer: C=US; O=GeoTrust Inc.; CN=RapidSSL SHA256 CA - G3
*   SSL certificate verify ok.
> GET /some/url HTTP/1.1
> Host: demo.someapi.net
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache
< Content-Type: text/html
< Server: Microsoft-IIS/7.5
< WWW-Authenticate: Digest realm="company.product", nonce="MTEvMDcvMjAxNiAwODo1MjowOA", opaque="0000000000000000", stale=false, algorithm=MD5, qop="auth"
< X-Powered-By: ASP.NET
< Date: Mon, 11 Jul 2016 08:51:07 GMT
< Content-Length: 1293
<
* Ignoring the response-body
* Connection #0 to host demo.someapi.net left intact
* Issue another request to this URL: 'https://demo.someapi.net/some/url'
* Found bundle for host demo.someapi.net: 0xaa60b80
* Re-using existing connection! (#0) with host demo.someapi.net
* Connected to demo.someapi.net (195.224.16.34) port 443 (#0)
* Server auth using Digest with user 'demo'
> GET /some/url HTTP/1.1
> Authorization: Digest username="demo",realm="",nonce="MTEvMDcvMjAxNiAwODo1MjowOA",uri="/some/url",cnonce="553f542ddef0e3c265e50539297bad81",nc=00000001,algorithm=MD5,response="1ec58793bb1d8142f09af112b905fa36",qop="auth",opaque="0000000000000000"
> Host: demo.someapi.net
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache
< Content-Type: text/html
< Server: Microsoft-IIS/7.5
* Authentication problem. Ignoring this.
< WWW-Authenticate: Digest realm="company.product", nonce="MTEvMDcvMjAxNiAwODo1MjowOA", opaque="0000000000000000", stale=false, algorithm=MD5, qop="auth"
< X-Powered-By: ASP.NET
< Date: Mon, 11 Jul 2016 08:51:07 GMT
< Content-Length: 1293
<
* Connection #0 to host demo.someapi.net left intact
which just returns the 401 landing-page HTML. The issue seems to be that the realm field is empty in the Authorization header curl sends, but I have no idea how to fix this or even how to work around it. It should be noted that both .NET's WebClient and Python's requests library handle things fine, but unfortunately this has to be done in R. I'm happy to use any R packages that are needed to help solve this. Thanks.
For anyone else who ends up with a similar problem, you can get around it by using httr with your own handle:
make.request <- function (url, user, pass) {
  handle <- httr::handle(url)
  response <- GET(url=NULL, authenticate(user, pass, type="digest"), handle=handle)
  # error checking and stuff...
}
I'm not sure what the issue is with RCurl, though.
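A quick way to check whether the server or the R HTTP stack is at fault is to make the same request with command-line curl, which negotiates digest auth on its own (demo.someapi.net is the question's placeholder URL, not a real endpoint):

```shell
# --digest makes curl perform the 401 challenge / Authorization round trip
# itself; -u supplies the demo credentials from the question.
curl --verbose --digest -u demo:demo --max-time 15 https://demo.someapi.net/some/url
```

If plain curl gets a 200 on the same machine where RCurl fails, the empty-realm bug is in the client library rather than the server.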
Why Twitter's t.co shows up in Referer, but no URL shorteners ever do?
I've noticed a whole bunch of Referer links like t.co/oPQO7Xdz in my access_log files, but no other URL shorteners ever show up. Why?
The URL shorteners never show up because HTTP redirects (301 Moved Permanently et al.) are not designed to influence the Referer request header (apparently not even when the header is blank, likely because anything else would cause inconsistent behaviour). However, Twitter's t.co service does not issue a 301 Moved Permanently redirect if it sees what it deems a popular desktop or mobile User-Agent; it serves an HTML/JavaScript redirect instead. Because the redirect then happens outside the HTTP stack, the Referer field of the subsequent brand-new HTTP request is composed to include the prior HTML page responsible for the redirection, causing a t.co entry to appear in access_log.
% curl -v -A"iPhone;" t.co/oPQO7Xdz
* About to connect() to t.co port 80 (#0)
* Trying 104.244.42.5...
* connected
* Connected to t.co (104.244.42.5) port 80 (#0)
> GET /oPQO7Xdz HTTP/1.1
> User-Agent: iPhone;
> Host: t.co
> Accept: */*
>
< HTTP/1.1 200 OK
< cache-control: private,max-age=300
< content-length: 258
< content-security-policy: referrer always;
< content-type: text/html; charset=utf-8
< date: Thu, 07 Jul 2016 05:24:16 GMT
< expires: Thu, 07 Jul 2016 05:29:16 GMT
< server: tsa_o
< set-cookie: muc=1f43e292-e319-4818-ba81-f12d16e5b629; Expires=Tue, 19 Jun 2018 05:24:16 UTC; Domain=t.co
< x-connection-hash: 0dc5a2a6a7e83ac2d7fb207eb0cedf84
< x-response-time: 115
< x-xss-protection: 1; mode=block
<
* Connection #0 to host t.co left intact
<head><meta name="referrer" content="always"><noscript><META http-equiv="refresh" content="0;URL=http://mdoc.su/n/curl"></noscript><title>http://mdoc.su/n/curl</title></head><script>window.opener = null; location.replace("http:\/\/mdoc.su\/n\/curl")</script>
* Closing connection #0
Compare this to what happens otherwise, which is the only way most other URL shorteners redirect, and which preserves whichever Referer the browser's HTTP stack had when the request was first made:
% curl -v t.co/oPQO7Xdz
* About to connect() to t.co port 80 (#0)
* Trying 104.244.42.69...
* connected
* Connected to t.co (104.244.42.69) port 80 (#0)
> GET /oPQO7Xdz HTTP/1.1
> User-Agent: curl/7.26.0
> Host: t.co
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< cache-control: private,max-age=300
< content-length: 0
< date: Thu, 07 Jul 2016 05:24:40 GMT
< expires: Thu, 07 Jul 2016 05:29:40 GMT
< location: http://mdoc.su/n/curl
< server: tsa_o
< set-cookie: muc=2c727b50-311f-4043-9861-9f703996a8a8; Expires=Tue, 19 Jun 2018 05:24:40 UTC; Domain=t.co
< x-connection-hash: 5583cc49ddbcefe8fac9ba392ca868fd
< x-response-time: 103
<
* Connection #0 to host t.co left intact
* Closing connection #0
Opengraph HTTP 301 redirect wrong location
Open the Open Graph debugger page and try it with this URL: http://www.jetradar.com/?marker=12345
I get this as a result: http://screencloud.net/v/rHlW
For some reason it tries the obviously wrong redirect URL "http://www.jetradar.com/,%20http://www.jetradar.com/", though if you ask the server via curl, you get nothing suspicious.
$ curl -v http://www.jetradar.com/?marker=12345
* Adding handle: conn: 0x20e9b20
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x20e9b20) send_pipe: 1, recv_pipe: 0
* About to connect() to www.jetradar.com port 80 (#0)
* Trying 5.10.84.53...
* Connected to www.jetradar.com (5.10.84.53) port 80 (#0)
> GET /?marker=12345 HTTP/1.1
> User-Agent: curl/7.32.0
> Host: www.jetradar.com
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
* Server nginx/1.5.12 is not blacklisted
< Server: nginx/1.5.12
< Date: Fri, 07 Nov 2014 12:43:03 GMT
< Content-Type: text/html; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Status: 301 Moved Permanently
< Location: http://www.jetradar.com/
< X-UA-Compatible: IE=Edge,chrome=1
< Set-Cookie: marker=12345; path=/; expires=Sun, 07-Dec-2014 12:43:03 GMT
< X-Request-Id: ac8d046ea03191f637cbdf8d9129a1a0
< X-Runtime: 0.011112
< X-Rack-Cache: miss
< X-Powered-By: Phusion Passenger 4.0.19
< Location: http://www.jetradar.com/
< X-Page-Speed: 1.8.31.4-4009
< Cache-Control: max-age=0, no-cache
<
* Connection #0 to host www.jetradar.com left intact
<html><head/><body>You are being redirected.</body></html>
Any ideas? Hello from the jetradar team to the stackoverflow community :)
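Worth noting: the curl transcript in the question actually contains the Location header twice, which is exactly the kind of thing a client might join into "url, url". One way to confirm from the command line is to dump only the response headers and count them (this assumes the server answers a HEAD request the same way as a GET):

```shell
# Print the number of Location headers in the response; a count of 2 would
# explain the debugger combining them into one comma-separated URL.
curl -sI --max-time 15 "http://www.jetradar.com/?marker=12345" | grep -ci '^location:'
```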
HTTP GET Request Structure
Consider the following hyperlink: <a href="http://www.cs.rutgers.edu/~shklar/">
What HTTP/1.0 request will be submitted by the browser? What HTTP/1.1 request will be submitted by the browser? Will these requests change if the browser is configured to contact an HTTP proxy? If yes, how?
While you could use tcpdump to dump the actual network traffic, curl is surely handier for testing the HTTP conversation from the command line.
An HTTP/1.0 request:
curl -v -0 http://www.cs.rutgers.edu/~shklar/
* About to connect() to www.cs.rutgers.edu port 80 (#0)
* Trying 128.6.4.24...
* connected
* Connected to www.cs.rutgers.edu (128.6.4.24) port 80 (#0)
> GET /~shklar/ HTTP/1.0
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5
> Host: www.cs.rutgers.edu
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Wed, 31 Oct 2012 17:57:31 GMT
< Server: Apache/1.3.26 (Unix)
< Content-Type: text/html; charset=iso-8859-1
< Connection: close
An HTTP/1.1 request:
curl -v http://www.cs.rutgers.edu/~shklar/
* About to connect() to www.cs.rutgers.edu port 80 (#0)
* Trying 128.6.4.24...
* connected
* Connected to www.cs.rutgers.edu (128.6.4.24) port 80 (#0)
> GET /~shklar/ HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5
> Host: www.cs.rutgers.edu
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Wed, 31 Oct 2012 17:59:47 GMT
< Server: Apache/1.3.26 (Unix)
< Content-Type: text/html; charset=iso-8859-1
< Transfer-Encoding: chunked
Use the -x (or --proxy) [protocol://][user:password@]proxyhost[:port] switch to use a proxy and see the results. More about curl here: http://curl.haxx.se/docs/manpage.html
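To answer the proxy part concretely: when curl (or a browser) talks through an HTTP proxy, the request line carries the absolute URI rather than just the path, so the proxy knows which origin server to contact. A sketch, where 127.0.0.1:3128 is a hypothetical proxy address:

```shell
# With -x, curl sends "GET http://www.cs.rutgers.edu/~shklar/ HTTP/1.1" to the
# proxy instead of "GET /~shklar/ HTTP/1.1" to the origin server.
curl -v -x http://127.0.0.1:3128 --max-time 15 http://www.cs.rutgers.edu/~shklar/
```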