I'm trying to access an endpoint that requires a client cert.
I'm starting from a .p12, which I was able to quickly import into Google Chrome, and I can successfully access the endpoint that way. So the client certificate and the endpoint are compatible.
However, I'm struggling to get the Python Requests module (with Python 2.7) to successfully access the same endpoint.
My steps have been:
openssl pkcs12 -in my.p12 -out certificate.pem -nodes prompts me for a password, then creates certificate.pem
print(requests.get("<https://endpoint>", cert="certificate.pem").content) returns You don't have permission to access "http" on this server. (and an HTTP response code of 403)
My PEM file contains three -----BEGIN CERTIFICATE----- blocks, followed by a -----BEGIN PRIVATE KEY----- block.
All four BEGIN blocks are preceded by Bag Attributes lines; removing these lines doesn't make a difference.
I'm doing the key creation on an Ubuntu VM but running the Python from a Windows machine; not sure if this makes a difference.
I’d welcome any ideas; particularly to understand if the issue is around the conversion to PEM, or if it’s with the request call.
The error is not indicative of a problem with the client certificate.
If your client certificate were the problem the documentation suggests your error would have been prefixed with "SSLError": http://docs.python-requests.org/en/master/user/advanced/#client-side-certificates
The relevant error is likely in the part you are censoring for privacy reasons. Having achieved authentication, the web server is rejecting your request for some other reason.
Possibly you are calling requests.get('https://website.com', ...
You may need to call requests.get('https://website.com/', ...
or to directly request a file resource within the website. When you tested with Chrome, a trailing '/' that isn't displayed may have been added when Chrome made the request to the web server. Try adding a '/' to the end of the address.
Certainly you shouldn't be using the "<" ">" tags shown in your example.
I found https://gist.github.com/erikbern/756b1d8df2d1487497d29b90e81f8068; with the delete=False param suggested in its comments, plus pyOpenSSL, it now works.
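For reference, a minimal sketch of that approach, assuming pyOpenSSL and Requests are installed (the helper name and file handling are adapted from the gist, not verbatim):

import contextlib
import tempfile
import OpenSSL.crypto
import requests

@contextlib.contextmanager
def pfx_to_pem(pfx_path, pfx_password):
    # Decrypt the .p12 and write the key plus certificate chain to one PEM file.
    with open(pfx_path, 'rb') as f:
        p12 = OpenSSL.crypto.load_pkcs12(f.read(), pfx_password)
    # delete=False matters on Windows: requests reopens the file by name,
    # which can fail while a delete-on-close handle is still open.
    pem = tempfile.NamedTemporaryFile(suffix='.pem', delete=False)
    pem.write(OpenSSL.crypto.dump_privatekey(
        OpenSSL.crypto.FILETYPE_PEM, p12.get_privatekey()))
    pem.write(OpenSSL.crypto.dump_certificate(
        OpenSSL.crypto.FILETYPE_PEM, p12.get_certificate()))
    for ca in p12.get_ca_certificates() or []:
        pem.write(OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_PEM, ca))
    pem.close()
    yield pem.name

with pfx_to_pem('my.p12', 'password') as cert_path:
    print(requests.get('https://endpoint/', cert=cert_path).content)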
FOREWORD
This may well be the weirdest problem I have ever witnessed in 15 years. It is 100% reproducible on a specific machine that sends a specific request when authenticated as a specific user, if the request is sent from Chrome (it doesn't happen from Edge, it doesn't happen from cURL or Postman). I can't expect an exact solution to my disturbingly specific issue, but any pointers about what could theoretically cause it are more than welcome.
WHAT HAPPENS
We have several PCs in our factory that communicate with a central HTTP server (hosted on premises, if that even matters: they're on the same LAN). Of course, we have users who could work on any of these machines.
When a certain user does a specific action on one particular machine, she gets a message about an "HTTP error". The server responds with a 400, specifying that the JSON in the request is malformed. Fine, let's look at the JSON: it's an 80-character string, and it looks perfectly well-formed. I check its length: it is in fact 80 characters, and the request has a Content-Length of 80. Everything checks out, yet the server responds with the 400.
The same user on a different machine, or a different user on the same machine, or any other user on any other machine can do the very same action and the very same corresponding HTTP request. The same user, on that machine, can do the action fine using Edge instead of Chrome (despite both being Chromium-based). If I "export" the request from the browser's Dev Tools into any format (cURL bash, cURL cmd, JS fetch...), the request in Chrome and the one in Edge look the same.
Our UI sends the request using Axios. If I send it with fetch, I still get the error. If I serialize the JSON myself and send the string (instead of letting Axios/fetch handle the serialization), I still get the error. If I send that same request using any other client (cURL from command line, Postman...) I don't get the error - same as in Edge.
WHAT I FINALLY NOTICED (and how I hacked the issue into submission)
The server is ASP.NET Core (using .Net 5), so I added a middleware to record the received request. Apparently, in the specified conditions, the server receives a request body that is different from what was sent by the client. Say the client sends:
{"key1":"value1","key2":"value2"}
Well, the server receives:
{"key1":"value1","key2":"value2"
Notice the extra character at the beginning (a newline, not visible above) and the missing closing brace at the end. The body apparently gains an extra character at the start, and the final character is lost - either because it is never actually sent/received or because the Content-Length dictated that it be truncated.
This clearly explains the failed deserialization (the string is in fact invalid JSON) and the resulting 400 response.
Since this bug had been blocking or hindering production for several days, I wrote a "healer" middleware that tries to deserialize the received JSON string (if the Content-Type indicates JSON, of course); if deserialization fails, it looks for a single non-opening-brace character at the start of the string, and if it finds one it rewrites the body by removing that character and appending a closing brace. It then lets the healed request go down the pipeline and notifies me via e-mail.
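For illustration, the core of that healing logic looks roughly like this (sketched in Python for brevity; the real middleware is ASP.NET Core, where the body stream and Content-Length also have to be rewritten):

import json

def heal_body(body):
    # If the body already parses as JSON, leave it alone.
    try:
        json.loads(body)
        return body
    except ValueError:
        pass
    # Observed corruption: one junk character prepended, closing brace lost.
    if body and not body.startswith('{'):
        candidate = body[1:] + '}'
        try:
            json.loads(candidate)
            return candidate
        except ValueError:
            pass
    return body  # can't heal; let the normal 400 happen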
THE AFTERMATH
All has been working fine since I released the fix, and we even asked our system managers to replace the PC that was causing problems, since the only explanation we could think of was some vicious issue with the OS/browser setup or configuration.
However, when they replaced it, I started getting the notification e-mail again... this time from two other users, always on that same machine, each of them having the same issue (which is being healed, btw), and each of them on a different request (but always the same request for each user). The requests point to different URLs and their bodies have different lengths and complexity (JSON-wise). I haven't rerun all the tests I did before (different browser, cURL, fetch...), but the diagnosis of the problem is the same, and it is being handled by the healer middleware.
A colleague reported that they had a similar problem several months ago, which they didn't investigate at the time. They're not sure it was the very same workstation, but they replaced the PC and the error didn't happen any more. It seems to be pretty much random, and I still have no idea what could cause such a behaviour.
Here is some more info about the platform, if any of this is relevant:
clients: Windows 10 PCs, using Chrome in kiosk mode, launched by a batch that is located on a network share;
UI: React, sending HTTP requests with Axios;
server: .Net 5 ASP.NET Core service.
UPDATE
I've recorded the network traffic using Wireshark on the client PC. Here is what I got:
[Wireshark capture: the outgoing request body already shows the extra leading character and the truncated closing brace]
So apparently the request is already modified when it leaves the client host.
How do I get my custom header all the way to my Rails application when running behind nginx and Phusion Passenger? It is possible (see details below), but when I simply define the header in Paw's headers pane it is not passed through.
I am using Paw to test and develop some API endpoints in a Rails application. Everything works as expected in my development environment, which is a Rails 6 application running on macOS using Puma. For security, I use a custom header that contains a personal auth token. When I examine the Rails request object, specifically request.headers, I am able to see all the headers including my custom header and I can authenticate based on its value.
The problem comes when running on my staging system, where I have very little control of the environment. Here, the same Rails application is running under Phusion Passenger behind nginx. When I hit the same endpoint with the same request, just changing the host in the request, the custom header is not present. I verified this by writing all headers to a file for every request in staging.
Where has the header gone? Because the environment is different in staging, I suspect that nginx or Phusion Passenger is receiving the header but not passing it through to my Rails application. I can't verify this since I have no access to any logs other than my Rails application's own. The application is designed to receive requests from an external service, and when I send requests through that service the header is present. So some headers are being passed through and some are not, which is very strange.
Paw (header defined under headers pane).
I checked with cURL using:
curl -X POST 'https://example.com/ivr/main_menu' -H 'X_JSW_AUTH_TOKEN: my_tkn'
With Ruby's Net::HTTP:
uri = URI('https://example.com/ivr/main_menu')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
req = Net::HTTP::Post.new(uri)
req.add_field "X_JSW_AUTH_TOKEN", "my_tkn"
res = http.request(req)
With HTTPie:
http POST 'https://example.com/ivr/main_menu' X_JSW_AUTH_TOKEN:my_tkn
With the http gem from httprb:
resp = HTTP.headers(X_JSW_AUTH_TOKEN: "my_tkn").post("https://example.com/ivr/main_menu")
It seems the answer has nothing to do with Paw; cURL, HTTPie, and Net::HTTP all produce the same result as Paw, which is pretty solid evidence.
The problem is that nginx, by default, silently drops headers whose names contain underscores (this behaviour is controlled by the underscores_in_headers directive). See "Why do HTTP servers forbid underscores in HTTP header names" for more information.
The http gem (from httprb), which was the one client that did deliver the header to my application, did so because it replaces underscores ("_") with hyphens ("-") in header names, so the header made it past nginx to the Rails application. Rails then replaces hyphens in header names with underscores. So both the sender (http) and the receiver (Rails) were making substitutions behind the scenes, which made this much harder to troubleshoot.
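If you can change the callers, the simplest client-side workaround is to use hyphens in the header name yourself, so nginx's default handling doesn't drop it. A quick sketch with Python's Requests (any client works the same way):

import requests

# A hyphenated name passes nginx's default header filtering; Rails will
# expose it with underscores (HTTP_X_JSW_AUTH_TOKEN in the Rack env).
resp = requests.post('https://example.com/ivr/main_menu',
                     headers={'X-JSW-AUTH-TOKEN': 'my_tkn'})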
Many thanks to Chris Oliver for the answer.
I am an amateur historian trying to access newspaper archives. The server where the scans are located "works" using an outdated tif viewer that doesn't seem to actually work at all anymore. I can access the files individually in chrome without logging in, but when I try to use wget or curl, I'm told that viewing the file is unauthorized, even when I use my login info, and even when using my cookies from chrome.
Here is an example of one of the files: https://ulib.aub.edu.lb/nahar/images2/7810W2/78101001.TIF
When I put this into chrome, it automatically downloads the file even though I cannot access the directory itself, but when I use wget, I get the following response: "401 unauthorized Username/Password Authentication Failed."
This is the basic wget command I'm using (if I can get it to work at all, then I'll input a list of the other files):
wget --no-check-certificate https://ulib.aub.edu.lb/nahar/images2/7810W2/78101001.TIF
I've tried variations with and without cookies, with a blank user, and with and without login credentials. As I'm sure you can tell, I'm new to this sort of thing, but eager to learn.
From what I can see, authentication on your website is done with HTTP Basic auth. This kind of authentication does not use HTTP cookies; it uses the HTTP Authorization header. You can pass HTTP Basic credentials to wget with the following arguments.
wget --http-user=YourUsername --http-password=YourPassword https://ulib.aub.edu.lb/nahar/images2/7810W2/78101001.TIF
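Once you move on to your list of files, the same credentials also work from a script; for example with Python's Requests (the module must be installed, and the username/password are of course placeholders):

import requests

url = 'https://ulib.aub.edu.lb/nahar/images2/7810W2/78101001.TIF'
resp = requests.get(url, auth=('YourUsername', 'YourPassword'))
resp.raise_for_status()
with open('78101001.TIF', 'wb') as f:
    f.write(resp.content)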
I am working with a historic API which grants access via a key/secret combo, which the original API designer specified should be passed as the user name & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to be growing, we're looking to use 3scale to handle authentication, rate limiting, and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control systems.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. We need our existing app to continue to receive the key & secret in the existing manner, as some of the returned data is client-specific. However, I need to find a way of converting that HTTP basic auth request, which 3scale doesn't natively support as an authentication method, into rewritten custom headers which they do.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale configures for you. This allows the -u key:secret to be passed through to our server, and correctly processed. At the moment, though, I need to additionally add the same authentication information either as query params or custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
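For illustration, the logic I have in mind is roughly this (shown here in Python; the credentials are made up, and the Lua equivalent inside the proxy is exactly the part I don't know how to write):

import base64

auth_header = 'Basic YXBpX2tleTphcGlfc2VjcmV0'  # the HTTP_AUTHORIZATION value
encoded = auth_header.split(' ', 1)[1]
api_key, api_secret = base64.b64decode(encoded).decode().split(':', 1)
# api_key == 'api_key', api_secret == 'api_secret'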
Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak in your Nginx configuration files. As you were rightly pointing out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import to 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). Then what you would import to 3scale for each application would be the base64 string of api_key+api_secret combined.
This way the changes you will need to do to the configuration files will be fewer and simpler.
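For example, the user_key to import for a client with credentials api_key / api_secret would be the same Base64 string that arrives in the Basic header, i.e. the encoding of the colon-joined pair (a Python sketch with placeholder credentials):

import base64

user_key = base64.b64encode(b'api_key:api_secret').decode()
# user_key == 'YXBpX2tleTphcGlfc2VjcmV0'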
The steps you will need to follow are:
in your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns)
go to the proxy configuration screen (where you set your API backend, mappings and where you download the Nginx files).
under "Authentication Settings", set the location of the credentials to HTTP headers.
download the Nginx config files and open the Lua script
find the following line (should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
finally, within the same file, replace the entire function named "get_auth_params" with the one in this gist: https://gist.github.com/vdel26/9050170
I hope this approach suits your needs. You can also contact support@3scale.net if you need more help.
When a server allows access via Basic HTTP Authentication, what is the experience expected to be in a web browser?
Ignoring the web browser for a moment, here's how to create a Basic Auth request with curl:
curl -u myusername:mypassword http://somesite.example
But what about in a web browser? What I've seen on some websites is: I visit the URL, the server returns response code 401, and the browser then displays a username/password prompt.
However, on somesite.example, I'm not getting an authorization prompt at all, just a page that says I'm not authorized. Did somesite not implement the Basic Auth workflow correctly, or is there something else I need to do?
To help everyone avoid confusion, I will reformulate the question in two parts.
First: "how can make an authenticated HTTP request with a browser, using BASIC auth?".
In the browser you can do a HTTP basic auth first by waiting the prompt to come, or by editing the URL if you follow this format: http://myusername:mypassword#somesite.example
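Either way, the browser ends up sending the same thing curl sends: an Authorization header containing "Basic" plus the Base64 encoding of username:password. A quick illustration in Python:

import base64

token = base64.b64encode(b'myusername:mypassword').decode()
print('Authorization: Basic ' + token)
# prints: Authorization: Basic bXl1c2VybmFtZTpteXBhc3N3b3Jk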
NB: the curl command mentioned in the question is perfectly fine, if you have a command line and curl installed. ;)
References:
https://en.wikipedia.org/wiki/Basic_access_authentication#URL_encoding
https://en.wikipedia.org/wiki/Uniform_Resource_Locator#Syntax
https://www.rfc-editor.org/rfc/rfc3986#page-18
Also according to the CURL manual page https://curl.haxx.se/docs/manual.html
HTTP
Curl also supports user and password in HTTP URLs, thus you can pick a file
like:
curl http://name:passwd@machine.domain/full/path/to/file
or specify user and password separately like in
curl -u name:passwd http://machine.domain/full/path/to/file
HTTP offers many different methods of authentication and curl supports
several: Basic, Digest, NTLM and Negotiate (SPNEGO). Without telling which
method to use, curl defaults to Basic. You can also ask curl to pick the
most secure ones out of the ones that the server accepts for the given URL,
by using --anyauth.
NOTE! According to the URL specification, HTTP URLs can not contain a user
and password, so that style will not work when using curl via a proxy, even
though curl allows it at other times. When using a proxy, you _must_ use
the -u style for user and password.
The second and real question is "However, on somesite.example, I'm not getting an authorization prompt at all, just a page that says I'm not authorized. Did somesite not implement the Basic Auth workflow correctly, or is there something else I need to do?"
The curl documentation says the -u option supports many methods of authentication, Basic being the default.
Have you tried?
curl somesite.example --user username:password
You might have old invalid username/password cached in your browser. Try clearing them and check again.
If you are using IE and somesite.example is in your Intranet security zone, IE may be sending your Windows credentials automatically.
WWW-Authenticate header
You may also get this if the server is sending a 401 response code but not setting the WWW-Authenticate header correctly - I should know, I've just fixed that in our own code, because VB apps weren't popping up the authentication prompt.
If there are no credentials provided in the request headers, the following is the minimum response required for IE to prompt the user for credentials and resubmit the request.
Response.Clear();
Response.StatusCode = (Int32)HttpStatusCode.Unauthorized;
Response.AddHeader("WWW-Authenticate", "Basic");
You can use Postman, a plugin for Chrome.
It gives you the ability to choose the authentication type you need for each request.
In that menu you can configure the user and password.
Postman will automatically translate the config into an authentication header that will be sent with your request.